* A second case of XPS considerably reducing single-stream performance
From: Rick Jones @ 2016-08-24 15:52 UTC (permalink / raw)
  To: netdev; +Cc: sathya.perla, ajit.khaparde, sriharsha.basavapatna, somnath.kotur

Back in February of this year, I reported a performance issue in which 
the ixgbe driver's enabling of XPS by default hurt instance network 
performance in OpenStack:

http://www.spinics.net/lists/netdev/msg362915.html

I've now seen the same thing with be2net and Skyhawk.  In this case, the 
magnitude of the delta is even greater: disabling XPS increased the 
single-stream netperf performance out of the instance from an average of 
4108 Mbit/s to 8888 Mbit/s, an increase of 116%.

Should drivers really be enabling XPS by default?

		      Instance To Outside World
			Single-stream netperf
		    ~30 Samples for Each Statistic
                               Mbit/s

                  Skyhawk            BE3 #1            BE3 #2
              XPS On   XPS Off  XPS On   XPS Off  XPS On   XPS Off
Median        4192     8883     8930     8853     8917     8695
Average       4108     8888     8940     8859     8885     8671

happy benchmarking,

rick jones

The sample counts below may not fully support the additional statistics, 
but for the curious:

raj@tardy:/tmp$ ~/netperf2_trunk/doc/examples/parse_single_stream.py -r 6 waxon_performance.log -f 2
Field2,Min,P10,Median,Average,P90,P99,Max,Count
be3-1,8758.850,8811.600,8930.900,8940.555,9096.470,9175.839,9183.690,31
be3-2,8588.450,8736.967,8917.075,8885.322,9017.914,9075.735,9094.620,32
skyhawk,3326.760,3536.008,4192.780,4108.513,4651.164,4723.322,4724.320,27
0 too-short lines ignored.
raj@tardy:/tmp$ ~/netperf2_trunk/doc/examples/parse_single_stream.py -r 6 waxoff_performance.log -f 2
Field2,Min,P10,Median,Average,P90,P99,Max,Count
be3-1,8461.080,8634.690,8853.260,8859.870,9064.480,9247.770,9253.050,31
be3-2,7519.130,8368.564,8695.140,8671.241,9068.588,9200.719,9241.500,27
skyhawk,8071.180,8651.587,8883.340,8888.411,9135.603,9141.229,9142.010,32
0 too-short lines ignored.

"waxon" is with XPS enabled, "waxoff" is with XPS disabled.  The servers 
are the same models/config as in February.

stack@np-cp1-comp0013-mgmt:~$ sudo ethtool -i hed3
driver: be2net
version: 10.6.0.3
firmware-version: 10.7.110.45


* Re: A second case of XPS considerably reducing single-stream performance
From: Rick Jones @ 2016-08-24 23:46 UTC (permalink / raw)
  To: netdev; +Cc: sathya.perla, ajit.khaparde, sriharsha.basavapatna, somnath.kotur

Also, while it doesn't seem to have the same massive effect on 
throughput, I can also see out of order behaviour happening when the 
sending VM is on a node with a ConnectX-3 Pro NIC.  Its driver is also 
enabling XPS it would seem.  I'm not *certain* but looking at the traces 
it appears that with the ConnectX-3 Pro there is more interleaving of 
the out-of-order traffic than there is with the Skyhawk.  The ConnectX-3 
Pro happens to be in a newer generation server with a newer processor 
than the other systems where I've seen this.

I do not see the out-of-order behaviour when the NIC at the sending end 
is a BCM57840.  It does not appear that the bnx2x driver in the 4.4 
kernel is enabling XPS.

So, it would seem that there are three cases of enabling XPS resulting 
in out-of-order traffic, two of which result in a non-trivial loss of 
performance.

happy benchmarking,

rick jones


* Re: A second case of XPS considerably reducing single-stream performance
From: Alexander Duyck @ 2016-08-25 19:19 UTC (permalink / raw)
  To: Rick Jones
  Cc: Netdev, sathya.perla, ajit.khaparde, sriharsha.basavapatna,
	somnath.kotur

On Wed, Aug 24, 2016 at 4:46 PM, Rick Jones <rick.jones2@hpe.com> wrote:
> Also, while it doesn't seem to have the same massive effect on throughput, I
> can also see out of order behaviour happening when the sending VM is on a
> node with a ConnectX-3 Pro NIC.  Its driver is also enabling XPS it would
> seem.  I'm not *certain* but looking at the traces it appears that with the
> ConnectX-3 Pro there is more interleaving of the out-of-order traffic than
> there is with the Skyhawk.  The ConnectX-3 Pro happens to be in a newer
> generation server with a newer processor than the other systems where I've
> seen this.
>
> I do not see the out-of-order behaviour when the NIC at the sending end is a
> BCM57840.  It does not appear that the bnx2x driver in the 4.4 kernel is
> enabling XPS.
>
> So, it would seem that there are three cases of enabling XPS resulting in
> out-of-order traffic, two of which result in a non-trivial loss of
> performance.
>
> happy benchmarking,
>
> rick jones

The problem is that there is no socket associated with the guest from
the host's perspective.  This is resulting in the traffic bouncing
between queues because there is no saved socket  to lock the interface
onto.

I was looking into this recently as well and had considered a couple
of options.  The first is to fall back to just using skb_tx_hash()
when skb->sk is null for a given buffer.  I have a patch I have been
toying around with but I haven't submitted it yet.  If you would like
I can submit it as an RFC to get your thoughts.  The second option is
to enforce the use of RPS for any interfaces that do not perform Rx in
NAPI context.  The correct solution for this is probably some
combination of the two as you have to have all queueing done in order
at every stage of the packet processing.
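
For illustration only (this is not the actual patch referred to above),
the first option could look roughly like the sketch below, written as if
it lived alongside the static get_xps_queue() helper in net/core/dev.c;
the function name pick_tx_queue_sketch is invented here:

/*
 * Illustrative sketch only: skip the XPS CPU lookup when there is no
 * socket to anchor the flow, and fall back to the flow-hash based
 * skb_tx_hash() so the queue choice does not follow the sending CPU.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static u16 pick_tx_queue_sketch(struct net_device *dev, struct sk_buff *skb)
{
	int queue_index = -1;

	/*
	 * Without skb->sk there is nowhere to cache the chosen queue,
	 * so an XPS lookup keyed on the current CPU can change whenever
	 * the vhost/VM thread migrates, reordering the flow.
	 */
	if (skb->sk)
		queue_index = get_xps_queue(dev, skb);

	if (queue_index < 0)
		queue_index = skb_tx_hash(dev, skb); /* stable: derived from the flow hash */

	return queue_index;
}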

- Alex


* Re: A second case of XPS considerably reducing single-stream performance
From: Rick Jones @ 2016-08-25 20:18 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: Netdev, sathya.perla, ajit.khaparde, sriharsha.basavapatna,
	somnath.kotur

On 08/25/2016 12:19 PM, Alexander Duyck wrote:
> The problem is that there is no socket associated with the guest from
> the host's perspective.  This is resulting in the traffic bouncing
> between queues because there is no saved socket  to lock the interface
> onto.
>
> I was looking into this recently as well and had considered a couple
> of options.  The first is to fall back to just using skb_tx_hash()
> when skb->sk is null for a given buffer.  I have a patch I have been
> toying around with but I haven't submitted it yet.  If you would like
> I can submit it as an RFC to get your thoughts.  The second option is
> to enforce the use of RPS for any interfaces that do not perform Rx in
> NAPI context.  The correct solution for this is probably some
> combination of the two as you have to have all queueing done in order
> at every stage of the packet processing.

I don't know which interfaces would be hit, but just in general, I'm not 
sure that requiring RPS to be enabled is a good solution - picking where 
traffic is processed based on its addressing is fine in a benchmarking 
situation, but I think it is better to have the process/thread scheduler 
decide where something should run rather than the addressing of the 
connections that thread/process is servicing.

I would be interested in seeing the RFC patch you propose.

Apart from that, given the prevalence of VMs these days I wonder if 
perhaps simply not enabling XPS by default isn't a viable alternative. 
I've not played with containers to know if they would exhibit this too.

Drifting ever so slightly, if drivers are going to continue to enable 
XPS by default, Documentation/networking/scaling.txt might use a tweak:

diff --git a/Documentation/networking/scaling.txt b/Documentation/networking/scaling.txt
index 59f4db2..8b5537c 100644
--- a/Documentation/networking/scaling.txt
+++ b/Documentation/networking/scaling.txt
@@ -402,10 +402,12 @@ acknowledged.

  ==== XPS Configuration

-XPS is only available if the kconfig symbol CONFIG_XPS is enabled (on by
-default for SMP). The functionality remains disabled until explicitly
-configured. To enable XPS, the bitmap of CPUs that may use a transmit
-queue is configured using the sysfs file entry:
+XPS is available only when the kconfig symbol CONFIG_XPS is enabled
+(on by default for SMP). The drivers for some NICs will enable the
+functionality by default.  For others the functionality remains
+disabled until explicitly configured. To enable XPS, the bitmap of
+CPUs that may use a transmit queue is configured using the sysfs file
+entry:

  /sys/class/net/<dev>/queues/tx-<n>/xps_cpus


The original wording leaves the impression that XPS is not enabled by 
default.

rick


* Re: A second case of XPS considerably reducing single-stream performance
From: Alexander Duyck @ 2016-08-25 20:44 UTC (permalink / raw)
  To: Rick Jones
  Cc: Netdev, sathya.perla, ajit.khaparde, sriharsha.basavapatna,
	somnath.kotur

On Thu, Aug 25, 2016 at 1:18 PM, Rick Jones <rick.jones2@hpe.com> wrote:
> On 08/25/2016 12:19 PM, Alexander Duyck wrote:
>>
>> The problem is that there is no socket associated with the guest from
>> the host's perspective.  This is resulting in the traffic bouncing
>> between queues because there is no saved socket  to lock the interface
>> onto.
>>
>> I was looking into this recently as well and had considered a couple
>> of options.  The first is to fall back to just using skb_tx_hash()
>> when skb->sk is null for a given buffer.  I have a patch I have been
>> toying around with but I haven't submitted it yet.  If you would like
>> I can submit it as an RFC to get your thoughts.  The second option is
>> to enforce the use of RPS for any interfaces that do not perform Rx in
>> NAPI context.  The correct solution for this is probably some
>> combination of the two as you have to have all queueing done in order
>> at every stage of the packet processing.
>
>
> I don't know which interfaces would be hit, but just in general, I'm not sure
> that requiring RPS to be enabled is a good solution - picking where traffic is
> processed based on its addressing is fine in a benchmarking situation, but I
> think it is better to have the process/thread scheduler decide where
> something should run rather than the addressing of the connections that
> thread/process is servicing.
>
> I would be interested in seeing the RFC patch you propose.
>
> Apart from that, given the prevalence of VMs these days I wonder if perhaps
> simply not enabling XPS by default isn't a viable alternative. I've not
> played with containers to know if they would exhibit this too.
>
> Drifting ever so slightly, if drivers are going to continue to enable XPS by
> default, Documentation/networking/scaling.txt might use a tweak:
>
> diff --git a/Documentation/networking/scaling.txt b/Documentation/networking/scaling.txt
> index 59f4db2..8b5537c 100644
> --- a/Documentation/networking/scaling.txt
> +++ b/Documentation/networking/scaling.txt
> @@ -402,10 +402,12 @@ acknowledged.
>
>  ==== XPS Configuration
>
> -XPS is only available if the kconfig symbol CONFIG_XPS is enabled (on by
> -default for SMP). The functionality remains disabled until explicitly
> -configured. To enable XPS, the bitmap of CPUs that may use a transmit
> -queue is configured using the sysfs file entry:
> +XPS is available only when the kconfig symbol CONFIG_XPS is enabled
> +(on by default for SMP). The drivers for some NICs will enable the
> +functionality by default.  For others the functionality remains
> +disabled until explicitly configured. To enable XPS, the bitmap of
> +CPUs that may use a transmit queue is configured using the sysfs file
> +entry:
>
>  /sys/class/net/<dev>/queues/tx-<n>/xps_cpus
>
>
> The original wording leaves the impression that XPS is not enabled by
> default.
>
> rick

That's true. The original documentation probably wasn't updated after
I added netif_set_xps_queue giving drivers the ability to specify the
XPS map themselves.  That was a workaround to get the drivers to stop
bypassing all of this entirely and using ndo_select_queue.

We might want to tweak the documentation to state that XPS is disabled
unless either the user enables it via sysfs or the device's driver
enables it via netif_set_xps_queue.  If you want to submit something
like that as an official patch I would have no problem with it.  Then,
if nothing else, it becomes much easier to identify which drivers
enable this by default, as all you have to do is plug the function into
LXR and you have your list.
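
As an illustration of the kind of call site such a search would turn up,
a hypothetical driver (not any specific one; the function name and the
one-CPU-per-queue policy are invented) might opt its TX queues into XPS
like this:

/*
 * Hypothetical example of a driver enabling XPS from its setup path.
 * netif_set_xps_queue() is the real kernel API; everything else here
 * is illustrative.
 */
#include <linux/netdevice.h>
#include <linux/cpumask.h>

static void example_driver_setup_xps(struct net_device *netdev)
{
	unsigned int qid;

	for (qid = 0; qid < netdev->real_num_tx_queues; qid++) {
		unsigned int cpu = qid % num_online_cpus();

		/* This call turns XPS on for the queue without any
		 * sysfs configuration by the administrator. */
		netif_set_xps_queue(netdev, cpumask_of(cpu), qid);
	}
}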

- Alex


* Re: A second case of XPS considerably reducing single-stream performance
From: Tom Herbert @ 2016-08-25 21:02 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: Rick Jones, Netdev, sathya.perla, ajit.khaparde,
	sriharsha.basavapatna, somnath.kotur

On Thu, Aug 25, 2016 at 12:19 PM, Alexander Duyck
<alexander.duyck@gmail.com> wrote:
> On Wed, Aug 24, 2016 at 4:46 PM, Rick Jones <rick.jones2@hpe.com> wrote:
>> Also, while it doesn't seem to have the same massive effect on throughput, I
>> can also see out of order behaviour happening when the sending VM is on a
>> node with a ConnectX-3 Pro NIC.  Its driver is also enabling XPS it would
>> seem.  I'm not *certain* but looking at the traces it appears that with the
>> ConnectX-3 Pro there is more interleaving of the out-of-order traffic than
>> there is with the Skyhawk.  The ConnectX-3 Pro happens to be in a newer
>> generation server with a newer processor than the other systems where I've
>> seen this.
>>
>> I do not see the out-of-order behaviour when the NIC at the sending end is a
>> BCM57840.  It does not appear that the bnx2x driver in the 4.4 kernel is
>> enabling XPS.
>>
>> So, it would seem that there are three cases of enabling XPS resulting in
>> out-of-order traffic, two of which result in a non-trivial loss of
>> performance.
>>
>> happy benchmarking,
>>
>> rick jones
>
> The problem is that there is no socket associated with the guest from
> the host's perspective.  This is resulting in the traffic bouncing
> between queues because there is no saved socket  to lock the interface
> onto.
>
> I was looking into this recently as well and had considered a couple
> of options.  The first is to fall back to just using skb_tx_hash()
> when skb->sk is null for a given buffer.  I have a patch I have been
> toying around with but I haven't submitted it yet.  If you would like
> I can submit it as an RFC to get your thoughts.  The second option is
> to enforce the use of RPS for any interfaces that do not perform Rx in
> NAPI context.  The correct solution for this is probably some
> combination of the two as you have to have all queueing done in order
> at every stage of the packet processing.
>
I have thought several times about creating flow states for packets
coming from VMs.  This could be done similarly to how we do RFS: call
the flow dissector to get a hash of the flow and then use that to
index into a table that contains the last queue, and only change the
queue when criteria are met to prevent OOO.  This would mean running
the flow dissector on such packets, which seems a bit expensive; it
would be nice if the VM could just give us the hash in a TX
descriptor.  There are other benefits to a more advanced mechanism,
for instance we might be able to cache routes or iptables results
(stuff we might keep if there were a transport socket).
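
As a very rough sketch of the idea (not an existing mechanism; the
names, table size and the idle-time criterion for avoiding OOO are all
invented, loosely patterned on the RFS flow table, and locking is
ignored), such a table might look like:

#include <linux/jiffies.h>
#include <linux/skbuff.h>

#define VM_FLOW_TABLE_SIZE 4096		/* arbitrary, power of two */

struct vm_flow_entry {
	u16		queue;		/* last TX queue used by this flow */
	unsigned long	last_used;	/* jiffies at last transmission */
};

static struct vm_flow_entry vm_flow_table[VM_FLOW_TABLE_SIZE];

static u16 vm_flow_pick_queue(struct sk_buff *skb, u16 desired_queue)
{
	u32 hash = skb_get_hash(skb);	/* runs the flow dissector if no hash yet */
	struct vm_flow_entry *e = &vm_flow_table[hash & (VM_FLOW_TABLE_SIZE - 1)];

	/* Only re-steer the flow once it has been quiet long enough
	 * that packets queued on the old queue cannot be overtaken. */
	if (e->queue != desired_queue &&
	    time_after(jiffies, e->last_used + msecs_to_jiffies(10)))
		e->queue = desired_queue;

	e->last_used = jiffies;
	return e->queue;
}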

Tom

