* [PATCH 0/2] nvme-tcp: Set SO_PRIORITY for all sockets
@ 2020-01-16  0:46 Wunderlich, Mark
  2020-01-17  1:13 ` Sagi Grimberg
  0 siblings, 1 reply; 6+ messages in thread
From: Wunderlich, Mark @ 2020-01-16  0:46 UTC (permalink / raw)
  To: linux-nvme; +Cc: Sagi Grimberg

Host and target patch series to allow setting socket priority.
It enables associating all sockets related to NVMe/TCP traffic
with a priority group that will perform optimized network
processing for this traffic class. The initial default priority
of zero is maintained; the socket priority is only set when the
module option is set to a non-zero value, i.e. when optimized
network processing conditions exist.
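
On the host side, the approach boils down to roughly the
following (a sketch assuming the 5.5-era kernel_setsockopt()
interface; the error path is abbreviated, not the exact diff):

    /* Module option; the default of 0 leaves the kernel's default
     * socket priority in place. */
    static int so_priority;
    module_param(so_priority, int, 0644);
    MODULE_PARM_DESC(so_priority, "nvme tcp socket optimize priority");

    /* ... during queue setup, after the socket is created ... */
    if (so_priority > 0) {
            ret = kernel_setsockopt(queue->sock, SOL_SOCKET,
                            SO_PRIORITY, (char *)&so_priority,
                            sizeof(so_priority));
            if (ret)
                    return ret; /* caller tears down the socket */
    }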

Basic performance measurements for io_uring, pvsync2, and libaio
show the benefit of lower latency when enhanced processing
support is triggered via the socket priority.

The data was gathered using the infradead branch nvme-5.5-rc. It
shows the existing OS baseline performance without the patch
applied to host or target, the performance when the patch is
applied and a priority value is set but no support for enhanced
traffic processing exists, and finally the performance when the
patch is applied and some form of advanced traffic processing,
such as symmetric queueing, exists.

The patches were defined to allow any priority value, for the
future flexibility to specify a unique value for optional
advanced network processing. If this flexibility is judged not
beneficial to the community at this time, the preferred option
would be to set the default module priority to a non-zero value,
so that advanced network traffic processing is always enabled
when possible. Note, however, that the data shows a slight
performance dip when the priority is set but no advanced network
support is available.

All data was gathered using 48 I/O queues and 48 poll queues
established between host and target. The host busy_poll value
was set to 5000 and the target busy_read value to 60000. A
single Optane device was used.

IO_URING:
Baseline: no patches applied
QueueDepth/Batch   IOPS(K)   Avg. Latency(usec)   99.99(usec)
1/1                24.2      40.90                70.14
32/8               194       156.88               441

Patch added, priority=1, but enhanced processing not available:
1/1                22        43.16                77.31
32/8               222       137.89               277

Patch added, priority=1, enhanced NIC processing added:
1/1                30.9      32.01                59.64
32/8               259       119.26               174

PVSYNC2:
Baseline: no patches applied
1/1                24.3      40.85                80.38
32/8               24.3      40.84                73.21

Patch added, priority=1:
1/1                23.1      42.87                79.36
32/8               23.1      43                   79.36

Patch added, priority=1, enhanced NIC processing added:
1/1                31.3      31.69                62.20
32/8               31.3      31.63                62.20

LIBAIO:
Baseline: no patches applied
1/1                26.2      37.67                77.31
32/8               139       220.10               807

Patch added, priority=1:
1/1                24.6      40                   78.33
32/8               138       220.91               791

Patch added, priority=1, enhanced NIC processing added:
1/1                28        34.03                69.12
32/8               278       90.77                139


* Re: [PATCH 0/2] nvme-tcp: Set SO_PRIORITY for all sockets
  2020-01-16  0:46 [PATCH 0/2] nvme-tcp: Set SO_PRIORITY for all sockets Wunderlich, Mark
@ 2020-01-17  1:13 ` Sagi Grimberg
  2020-01-30 22:51   ` Sagi Grimberg
  0 siblings, 1 reply; 6+ messages in thread
From: Sagi Grimberg @ 2020-01-17  1:13 UTC (permalink / raw)
  To: Wunderlich, Mark, linux-nvme

Mark,

> Host and target patch series to allow setting socket priority.
> It enables associating all sockets related to NVMe/TCP traffic
> with a priority group that will perform optimized network
> processing for this traffic class. The initial default priority
> of zero is maintained; the socket priority is only set when the
> module option is set to a non-zero value, i.e. when optimized
> network processing conditions exist.
>
> Basic performance measurements for io_uring, pvsync2, and libaio
> show the benefit of lower latency when enhanced processing
> support is triggered via the socket priority.
>
> The data was gathered using the infradead branch nvme-5.5-rc. It
> shows the existing OS baseline performance without the patch
> applied to host or target, the performance when the patch is
> applied and a priority value is set but no support for enhanced
> traffic processing exists, and finally the performance when the
> patch is applied and some form of advanced traffic processing,
> such as symmetric queueing, exists.
>
> The patches were defined to allow any priority value, for the
> future flexibility to specify a unique value for optional
> advanced network processing. If this flexibility is judged not
> beneficial to the community at this time, the preferred option
> would be to set the default module priority to a non-zero value,
> so that advanced network traffic processing is always enabled
> when possible. Note, however, that the data shows a slight
> performance dip when the priority is set but no advanced network
> support is available.

I think we can keep the special mode in a modparam for the time being,
especially given that other configuration is needed as well. The fact
that there is a small delta if ADQ is not enabled but so_priority is
set is acceptable, and if this option is used we can assume the user
relies on ADQ anyhow.

> All data was gathered using 48 I/O queues and 48 poll queues
> established between host and target.

I'm assuming that overall IOPS/throughput on a multithreaded workload
also improved (when not capped by the single Optane device).

> The host busy_poll value was set to 5000 and the target
> busy_read value to 60000.

This shouldn't make a difference for nvme-tcp, which does its own
busy polling from its blk_poll hook.
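
For reference, the poll hook in question looks roughly like this
(a simplified sketch of the 5.5-era driver, not the exact
upstream code):

    /* blk_poll hook: busy-loop on the queue's own socket, so the
     * generic busy_read/busy_poll sysctls are not what drives
     * polling for these queues. */
    static int nvme_tcp_poll(struct blk_mq_hw_ctx *hctx)
    {
            struct nvme_tcp_queue *queue = hctx->driver_data;
            struct sock *sk = queue->sock->sk;

            if (sk_can_busy_loop(sk) &&
                skb_queue_empty_lockless(&sk->sk_receive_queue))
                    sk_busy_loop(sk, true); /* poll the NIC directly */
            nvme_tcp_try_recv(queue);
            return queue->nr_cqe;
    }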

> A single Optane device was used.
> 
> IO_URING:
> Baseline: no patches applied
> QueueDepth/Batch   IOPS(K)   Avg. Latency(usec)   99.99(usec)
> 1/1                24.2      40.90                70.14
> 32/8               194       156.88               441
>
> Patch added, priority=1, but enhanced processing not available:
> 1/1                22        43.16                77.31
> 32/8               222       137.89               277
>
> Patch added, priority=1, enhanced NIC processing added:
> 1/1                30.9      32.01                59.64
> 32/8               259       119.26               174
>
> PVSYNC2:
> Baseline: no patches applied
> 1/1                24.3      40.85                80.38
> 32/8               24.3      40.84                73.21
>
> Patch added, priority=1:
> 1/1                23.1      42.87                79.36
> 32/8               23.1      43                   79.36
>
> Patch added, priority=1, enhanced NIC processing added:
> 1/1                31.3      31.69                62.20
> 32/8               31.3      31.63                62.20

No need to report 32/8 for pvsync2

> 
> LIBAIO:
> Baseline: no patches applied
> 1/1                26.2      37.67                77.31
> 32/8               139       220.10               807
>
> Patch added, priority=1:
> 1/1                24.6      40                   78.33
> 32/8               138       220.91               791
>
> Patch added, priority=1, enhanced NIC processing added:
> 1/1                28        34.03                69.12
> 32/8               278       90.77                139

Given how trivial the patch set is, and the obvious improvement,
I say we should take it as is.

For the series:
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

* Re: [PATCH 0/2] nvme-tcp: Set SO_PRIORITY for all sockets
  2020-01-17  1:13 ` Sagi Grimberg
@ 2020-01-30 22:51   ` Sagi Grimberg
  2020-01-31  0:48     ` Keith Busch
  0 siblings, 1 reply; 6+ messages in thread
From: Sagi Grimberg @ 2020-01-30 22:51 UTC (permalink / raw)
  Cc: Keith Busch, Wunderlich, Mark, linux-nvme

Keith,

Unless there is any other feedback, I suggest we queue it for
5.6 perhaps?

* Re: [PATCH 0/2] nvme-tcp: Set SO_PRIORITY for all sockets
  2020-01-30 22:51   ` Sagi Grimberg
@ 2020-01-31  0:48     ` Keith Busch
  2020-02-11 20:31       ` Keith Busch
  0 siblings, 1 reply; 6+ messages in thread
From: Keith Busch @ 2020-01-31  0:48 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Wunderlich, Mark, linux-nvme

On Thu, Jan 30, 2020 at 02:51:13PM -0800, Sagi Grimberg wrote:
> Keith,
> 
> Unless there is any other feedback, I suggest we queue it for
> 5.6 perhaps?

Sounds good, queued up for-5.6.

* Re: [PATCH 0/2] nvme-tcp: Set SO_PRIORITY for all sockets
  2020-01-31  0:48     ` Keith Busch
@ 2020-02-11 20:31       ` Keith Busch
  2020-02-12 20:02         ` Sagi Grimberg
  0 siblings, 1 reply; 6+ messages in thread
From: Keith Busch @ 2020-02-11 20:31 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Wunderlich, Mark, linux-nvme

On Fri, Jan 31, 2020 at 09:48:18AM +0900, Keith Busch wrote:
> On Thu, Jan 30, 2020 at 02:51:13PM -0800, Sagi Grimberg wrote:
> > Keith,
> > 
> > Unless there is any other feedback, I suggest we queue it for
> > 5.6 perhaps?
> 
> Sounds good, queued up for-5.6.

Sorry all, I did this a little too late for the 5.6 merge window. I've
started a 5.7 branch and am collecting patches there.

* Re: [PATCH 0/2] nvme-tcp: Set SO_PRIORITY for all sockets
  2020-02-11 20:31       ` Keith Busch
@ 2020-02-12 20:02         ` Sagi Grimberg
  0 siblings, 0 replies; 6+ messages in thread
From: Sagi Grimberg @ 2020-02-12 20:02 UTC (permalink / raw)
  To: Keith Busch; +Cc: Wunderlich, Mark, linux-nvme


>>> Keith,
>>>
>>> Unless there is any other feedback, I suggest we queue it for
>>> 5.6 perhaps?
>>
>> Sounds good, queued up for-5.6.
> 
> Sorry all, I did this a little too late for the 5.6 merge window. I've
> started a 5.7 branch and am collecting patches there.

That is fine.
