linux-nvme.lists.infradead.org archive mirror
* [PATCH 0/2] nvme-tcp: Set SO_PRIORITY for all sockets
@ 2020-01-16  0:46 Wunderlich, Mark
From: Wunderlich, Mark @ 2020-01-16  0:46 UTC (permalink / raw)
  To: linux-nvme; +Cc: Sagi Grimberg

Host and target patch series to allow setting socket priority.
This enables all sockets carrying NVMe/TCP traffic to be associated
with a priority group that performs optimized network processing for
this traffic class. The initial default priority of zero is
maintained; the socket priority is set only when the module option is
set to a non-zero value, i.e. when optimized network processing
conditions exist.

Basic performance measurements for io_uring, pvsync2, and libaio
show the benefit of lower latency when enhanced processing support
is triggered via socket priority.

The data was gathered using the infradead branch nvme-5.5-rc. It
shows existing OS baseline performance without the patch applied to
host or target, performance with the patch applied and a priority
value set but no enhanced traffic processing support available, and
finally performance with the patch applied and some form of advanced
traffic processing present, such as symmetric queueing.

The patches allow any priority value to be used, to provide future
flexibility in specifying a unique value for optional advanced
network processing. If the community does not consider this
flexibility beneficial at this time, the preferred alternative would
be to set the default module priority to a non-zero value so that
advanced network traffic processing is always enabled when possible.
Note, however, that the data shows a slight performance dip when the
priority is set but no advanced network support is available.

All data was gathered using 48 I/O queues and 48 poll queues
established between host and target. The host busy_poll value was
set to 5000 and the target busy_read value to 60000. A single Optane
device was used.
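Assuming the busy_poll and busy_read values above refer to the
standard net.core sysctls (the letter does not name the exact knobs),
the test configuration would look like:

```shell
# Host: busy-poll sockets for up to 5000 usec in poll()/epoll_wait()
sysctl -w net.core.busy_poll=5000

# Target: busy-read budget of 60000 usec on blocking socket reads
sysctl -w net.core.busy_read=60000
```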

IO_URING:
Baseline: No patches applied
QueueDepth/Batch   IOPS(K)   Avg. Latency(usec)   99.99(usec)
1/1                24.2      40.90                70.14
32/8               194       156.88               441

Patch added, priority=1, but enhanced processing not available:
1/1                22        43.16                77.31
32/8               222       137.89               277

Patch added, priority=1, enhanced NIC processing added:
1/1                30.9      32.01                59.64
32/8               259       119.26               174

PVSYNC2:
Baseline: No patches applied
QueueDepth/Batch   IOPS(K)   Avg. Latency(usec)   99.99(usec)
1/1                24.3      40.85                80.38
32/8               24.3      40.84                73.21

Patch added, priority=1:
1/1                23.1      42.87                79.36
32/8               23.1      43                   79.36

Patch added, priority=1, enhanced NIC processing added:
1/1                31.3      31.69                62.20
32/8               31.3      31.63                62.20

LIBAIO:
Baseline: No patches applied
QueueDepth/Batch   IOPS(K)   Avg. Latency(usec)   99.99(usec)
1/1                26.2      37.67                77.31
32/8               139       220.10               807

Patch added, priority=1:
1/1                24.6      40                   78.33
32/8               138       220.91               791

Patch added, priority=1, enhanced NIC processing added:
1/1                28        34.03                69.12
32/8               278       90.77                139
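As a reproduction aid, the queue-depth/batch pairs above map naturally
onto fio job parameters. A hypothetical sketch for the 32/8 io_uring
case; device path, block size, and runtime are assumptions, not taken
from this letter:

```ini
; hypothetical fio job approximating the 32/8 io_uring data point
[global]
ioengine=io_uring
direct=1
rw=randread
bs=4k
time_based=1
runtime=60

[nvme-tcp-test]
filename=/dev/nvme1n1
iodepth=32
iodepth_batch_submit=8
```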


_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


Thread overview:
2020-01-16  0:46 [PATCH 0/2] nvme-tcp: Set SO_PRIORITY for all sockets Wunderlich, Mark
2020-01-17  1:13 ` Sagi Grimberg
2020-01-30 22:51   ` Sagi Grimberg
2020-01-31  0:48     ` Keith Busch
2020-02-11 20:31       ` Keith Busch
2020-02-12 20:02         ` Sagi Grimberg
