[net-next] virtio-net: enable multiqueue by default

Message ID 1480048646-17536-1-git-send-email-jasowang@redhat.com
State New, archived
Series
  • [net-next] virtio-net: enable multiqueue by default

Commit Message

Jason Wang Nov. 25, 2016, 4:37 a.m. UTC
We use a single queue even if multiqueue is enabled, and let the admin
enable more queues through ethtool later. This was done to avoid a
possible regression (small packet TCP stream transmission), but it looks
like overkill since:

- single queue users can disable multiqueue when launching qemu
- it brings extra trouble for management, since an extra admin tool is
  needed in the guest to enable multiqueue
- multiqueue performs much better than single queue in most cases

So this patch enables multiqueue by default: if #queues is less than or
equal to #vcpus, enable all of the queue pairs; if #queues is greater
than #vcpus, enable #vcpus queue pairs.

Cc: Hannes Frederic Sowa <hannes@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Neil Horman <nhorman@redhat.com>
Cc: Jeremy Eder <jeder@redhat.com>
Cc: Marko Myllynen <myllynen@redhat.com>
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

Comments

Michael S. Tsirkin Nov. 25, 2016, 4:43 a.m. UTC | #1
On Fri, Nov 25, 2016 at 12:37:26PM +0800, Jason Wang wrote:
> We use single queue even if multiqueue is enabled and let admin to
> enable it through ethtool later. This is used to avoid possible
> regression (small packet TCP stream transmission). But looks like an
> overkill since:
> 
> - single queue user can disable multiqueue when launching qemu
> - brings extra troubles for the management since it needs extra admin
>   tool in guest to enable multiqueue
> - multiqueue performs much better than single queue in most of the
>   cases
> 
> So this patch enables multiqueue by default: if #queues is less than or
> equal to #vcpu, enable as much as queue pairs; if #queues is greater
> than #vcpu, enable #vcpu queue pairs.
> 
> Cc: Hannes Frederic Sowa <hannes@redhat.com>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Neil Horman <nhorman@redhat.com>
> Cc: Jeremy Eder <jeder@redhat.com>
> Cc: Marko Myllynen <myllynen@redhat.com>
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>

OK at some level but all uses of num_online_cpus()
like this are racy versus hotplug.
I know we already have this bug but shouldn't we fix it
before we add more?
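
To make that concrete, a minimal sketch of what serializing against hotplug
could look like in probe (illustrative only, not part of this patch;
min_t() is just a compact way of writing the same min(#cpus, #queues)
policy):

	get_online_cpus();
	vi->curr_queue_pairs = min_t(u16, num_online_cpus(), max_queue_pairs);
	vi->max_queue_pairs = max_queue_pairs;
	/* ... the rest of probe that consumes the value ... */
	put_online_cpus();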


> ---
>  drivers/net/virtio_net.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index d4ac7a6..a21d93a 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1886,8 +1886,11 @@ static int virtnet_probe(struct virtio_device *vdev)
>  	if (vi->any_header_sg)
>  		dev->needed_headroom = vi->hdr_len;
>  
> -	/* Use single tx/rx queue pair as default */
> -	vi->curr_queue_pairs = 1;
> +	/* Enable multiqueue by default */
> +	if (num_online_cpus() >= max_queue_pairs)
> +		vi->curr_queue_pairs = max_queue_pairs;
> +	else
> +		vi->curr_queue_pairs = num_online_cpus();
>  	vi->max_queue_pairs = max_queue_pairs;
>  
>  	/* Allocate/initialize the rx/tx queues, and invoke find_vqs */
> @@ -1918,6 +1921,8 @@ static int virtnet_probe(struct virtio_device *vdev)
>  		goto free_unregister_netdev;
>  	}
>  
> +	virtnet_set_affinity(vi);
> +
>  	/* Assume link up if device can't report link status,
>  	   otherwise get link status from config. */
>  	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
> -- 
> 2.7.4
Jason Wang Nov. 25, 2016, 5:36 a.m. UTC | #2
On 11/25/2016 12:43, Michael S. Tsirkin wrote:
> On Fri, Nov 25, 2016 at 12:37:26PM +0800, Jason Wang wrote:
>> We use single queue even if multiqueue is enabled and let admin to
>> enable it through ethtool later. This is used to avoid possible
>> regression (small packet TCP stream transmission). But looks like an
>> overkill since:
>>
>> - single queue user can disable multiqueue when launching qemu
>> - brings extra troubles for the management since it needs extra admin
>>   tool in guest to enable multiqueue
>> - multiqueue performs much better than single queue in most of the
>>   cases
>>
>> So this patch enables multiqueue by default: if #queues is less than or
>> equal to #vcpu, enable as much as queue pairs; if #queues is greater
>> than #vcpu, enable #vcpu queue pairs.
>>
>> Cc: Hannes Frederic Sowa <hannes@redhat.com>
>> Cc: Michael S. Tsirkin <mst@redhat.com>
>> Cc: Neil Horman <nhorman@redhat.com>
>> Cc: Jeremy Eder <jeder@redhat.com>
>> Cc: Marko Myllynen <myllynen@redhat.com>
>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
> OK at some level but all uses of num_online_cpus()
> like this are racy versus hotplug.
> I know we already have this bug but shouldn't we fix it
> before we add more?

Not sure I get the point; do you mean adding get/put_online_cpus()? But
is it a real bug? We don't do anything CPU specific, so I believe it's
not necessary (unless we want to keep #queues == #vcpus magically, but I
don't think so). Admins need to re-configure #queues after CPU hotplug if
they wish.
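
(The reconfiguration in question is just the usual ethtool channels
interface from inside the guest, for example

    ethtool -L eth0 combined 4

with eth0 standing in for whatever the virtio-net interface is named.)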

Thanks
Neil Horman Nov. 28, 2016, 3:25 p.m. UTC | #3
On Fri, Nov 25, 2016 at 06:43:08AM +0200, Michael S. Tsirkin wrote:
> On Fri, Nov 25, 2016 at 12:37:26PM +0800, Jason Wang wrote:
> > We use single queue even if multiqueue is enabled and let admin to
> > enable it through ethtool later. This is used to avoid possible
> > regression (small packet TCP stream transmission). But looks like an
> > overkill since:
> > 
> > - single queue user can disable multiqueue when launching qemu
> > - brings extra troubles for the management since it needs extra admin
> >   tool in guest to enable multiqueue
> > - multiqueue performs much better than single queue in most of the
> >   cases
> > 
> > So this patch enables multiqueue by default: if #queues is less than or
> > equal to #vcpu, enable as much as queue pairs; if #queues is greater
> > than #vcpu, enable #vcpu queue pairs.
> > 
> > Cc: Hannes Frederic Sowa <hannes@redhat.com>
> > Cc: Michael S. Tsirkin <mst@redhat.com>
> > Cc: Neil Horman <nhorman@redhat.com>
> > Cc: Jeremy Eder <jeder@redhat.com>
> > Cc: Marko Myllynen <myllynen@redhat.com>
> > Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> > Signed-off-by: Jason Wang <jasowang@redhat.com>
> 
> OK at some level but all uses of num_online_cpus()
> like this are racy versus hotplug.
> I know we already have this bug but shouldn't we fix it
> before we add more?
> 
Isn't the fix orthogonal to this use though?  That is to say, you should
register a hotplug notifier first, and use the handler to adjust the number of
queues on hotplug add/remove?
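
As a rough sketch of the shape that could take (illustrative only, not
tested; the callback names and the cpuhp state machine API from
<linux/cpuhotplug.h> are just one way to write it, not a claim about what
the driver must use):

static int virtnet_cpu_online(unsigned int cpu)
{
	/* e.g. grow curr_queue_pairs (capped at max_queue_pairs) and
	 * refresh the per-queue affinity hints */
	return 0;
}

static int virtnet_cpu_offline(unsigned int cpu)
{
	/* e.g. shrink curr_queue_pairs and re-spread the remaining
	 * queues over the CPUs that are still online */
	return 0;
}

/* somewhere in probe: */
ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "virtio/net:online",
			virtnet_cpu_online, virtnet_cpu_offline);
if (ret < 0)
	goto free_unregister_netdev;

A driver managing several device instances would presumably want the
_multi variant (cpuhp_setup_state_multi() plus a per-device
cpuhp_state_add_instance()) so each virtnet_info gets its own callback.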

Neil

> 
> > ---
> >  drivers/net/virtio_net.c | 9 +++++++--
> >  1 file changed, 7 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index d4ac7a6..a21d93a 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -1886,8 +1886,11 @@ static int virtnet_probe(struct virtio_device *vdev)
> >  	if (vi->any_header_sg)
> >  		dev->needed_headroom = vi->hdr_len;
> >  
> > -	/* Use single tx/rx queue pair as default */
> > -	vi->curr_queue_pairs = 1;
> > +	/* Enable multiqueue by default */
> > +	if (num_online_cpus() >= max_queue_pairs)
> > +		vi->curr_queue_pairs = max_queue_pairs;
> > +	else
> > +		vi->curr_queue_pairs = num_online_cpus();
> >  	vi->max_queue_pairs = max_queue_pairs;
> >  
> >  	/* Allocate/initialize the rx/tx queues, and invoke find_vqs */
> > @@ -1918,6 +1921,8 @@ static int virtnet_probe(struct virtio_device *vdev)
> >  		goto free_unregister_netdev;
> >  	}
> >  
> > +	virtnet_set_affinity(vi);
> > +
> >  	/* Assume link up if device can't report link status,
> >  	   otherwise get link status from config. */
> >  	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
> > -- 
> > 2.7.4
Neil Horman Nov. 28, 2016, 3:26 p.m. UTC | #4
On Fri, Nov 25, 2016 at 12:37:26PM +0800, Jason Wang wrote:
> We use single queue even if multiqueue is enabled and let admin to
> enable it through ethtool later. This is used to avoid possible
> regression (small packet TCP stream transmission). But looks like an
> overkill since:
> 
> - single queue user can disable multiqueue when launching qemu
> - brings extra troubles for the management since it needs extra admin
>   tool in guest to enable multiqueue
> - multiqueue performs much better than single queue in most of the
>   cases
> 
> So this patch enables multiqueue by default: if #queues is less than or
> equal to #vcpu, enable as much as queue pairs; if #queues is greater
> than #vcpu, enable #vcpu queue pairs.
> 
> Cc: Hannes Frederic Sowa <hannes@redhat.com>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Neil Horman <nhorman@redhat.com>
> Cc: Jeremy Eder <jeder@redhat.com>
> Cc: Marko Myllynen <myllynen@redhat.com>
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
>  drivers/net/virtio_net.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index d4ac7a6..a21d93a 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1886,8 +1886,11 @@ static int virtnet_probe(struct virtio_device *vdev)
>  	if (vi->any_header_sg)
>  		dev->needed_headroom = vi->hdr_len;
>  
> -	/* Use single tx/rx queue pair as default */
> -	vi->curr_queue_pairs = 1;
> +	/* Enable multiqueue by default */
> +	if (num_online_cpus() >= max_queue_pairs)
> +		vi->curr_queue_pairs = max_queue_pairs;
> +	else
> +		vi->curr_queue_pairs = num_online_cpus();
>  	vi->max_queue_pairs = max_queue_pairs;
>  
>  	/* Allocate/initialize the rx/tx queues, and invoke find_vqs */
> @@ -1918,6 +1921,8 @@ static int virtnet_probe(struct virtio_device *vdev)
>  		goto free_unregister_netdev;
>  	}
>  
> +	virtnet_set_affinity(vi);
> +
>  	/* Assume link up if device can't report link status,
>  	   otherwise get link status from config. */
>  	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
> -- 
> 2.7.4
> 
Acked-by: Neil Horman <nhorman@tuxdriver.com>
David Miller Nov. 28, 2016, 4:28 p.m. UTC | #5
From: "Michael S. Tsirkin" <mst@redhat.com>
Date: Fri, 25 Nov 2016 06:43:08 +0200

> On Fri, Nov 25, 2016 at 12:37:26PM +0800, Jason Wang wrote:
>> We use single queue even if multiqueue is enabled and let admin to
>> enable it through ethtool later. This is used to avoid possible
>> regression (small packet TCP stream transmission). But looks like an
>> overkill since:
>> 
>> - single queue user can disable multiqueue when launching qemu
>> - brings extra troubles for the management since it needs extra admin
>>   tool in guest to enable multiqueue
>> - multiqueue performs much better than single queue in most of the
>>   cases
>> 
>> So this patch enables multiqueue by default: if #queues is less than or
>> equal to #vcpu, enable as much as queue pairs; if #queues is greater
>> than #vcpu, enable #vcpu queue pairs.
>> 
>> Cc: Hannes Frederic Sowa <hannes@redhat.com>
>> Cc: Michael S. Tsirkin <mst@redhat.com>
>> Cc: Neil Horman <nhorman@redhat.com>
>> Cc: Jeremy Eder <jeder@redhat.com>
>> Cc: Marko Myllynen <myllynen@redhat.com>
>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
> 
> OK at some level but all uses of num_online_cpus()
> like this are racy versus hotplug.
> I know we already have this bug but shouldn't we fix it
> before we add more?

This is being used more like a heuristic in this scenario, and in
fact I would say one would keep the code this way even once proper
hotplug handlers are installed to adjust the queues dynamically, if
that is even desired (which is not necessarily the case).

I really don't think this change should be held up on this issue.
So can we please make some forward progress here?

Thanks.
John Fastabend Nov. 28, 2016, 4:38 p.m. UTC | #6
On 16-11-28 08:28 AM, David Miller wrote:
> From: "Michael S. Tsirkin" <mst@redhat.com>
> Date: Fri, 25 Nov 2016 06:43:08 +0200
> 
>> On Fri, Nov 25, 2016 at 12:37:26PM +0800, Jason Wang wrote:
>>> We use single queue even if multiqueue is enabled and let admin to
>>> enable it through ethtool later. This is used to avoid possible
>>> regression (small packet TCP stream transmission). But looks like an
>>> overkill since:
>>>
>>> - single queue user can disable multiqueue when launching qemu
>>> - brings extra troubles for the management since it needs extra admin
>>>   tool in guest to enable multiqueue
>>> - multiqueue performs much better than single queue in most of the
>>>   cases
>>>
>>> So this patch enables multiqueue by default: if #queues is less than or
>>> equal to #vcpu, enable as much as queue pairs; if #queues is greater
>>> than #vcpu, enable #vcpu queue pairs.
>>>
>>> Cc: Hannes Frederic Sowa <hannes@redhat.com>
>>> Cc: Michael S. Tsirkin <mst@redhat.com>
>>> Cc: Neil Horman <nhorman@redhat.com>
>>> Cc: Jeremy Eder <jeder@redhat.com>
>>> Cc: Marko Myllynen <myllynen@redhat.com>
>>> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
>>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>>
>> OK at some level but all uses of num_online_cpus()
>> like this are racy versus hotplug.
>> I know we already have this bug but shouldn't we fix it
>> before we add more?
> 
> This is more being used like a heuristic in this scenerio, and in
> fact I would say one would keep the code this way even once proper
> hotplug handlers are installed to adjust the queued dynamically if
> there is a desired (which is also not necessarily the case).
> 
> I really don't think this change should be held on up on this issue.
> So can we please make some forward progress here?
> 
> Thanks.
> 

Also it might be worth noting that all the other multiqueue-capable
ethernet devices I checked use some variation of this heuristic,
typically tied to which other features are enabled (RSS, DCB, etc.).
So it should at least be familiar to folks who hotplug CPUs on
bare metal boxes.

.John
Michael S. Tsirkin Nov. 28, 2016, 4:52 p.m. UTC | #7
On Fri, Nov 25, 2016 at 12:37:26PM +0800, Jason Wang wrote:
> We use single queue even if multiqueue is enabled and let admin to
> enable it through ethtool later. This is used to avoid possible
> regression (small packet TCP stream transmission). But looks like an
> overkill since:
> 
> - single queue user can disable multiqueue when launching qemu
> - brings extra troubles for the management since it needs extra admin
>   tool in guest to enable multiqueue
> - multiqueue performs much better than single queue in most of the
>   cases
> 
> So this patch enables multiqueue by default: if #queues is less than or
> equal to #vcpu, enable as much as queue pairs; if #queues is greater
> than #vcpu, enable #vcpu queue pairs.
> 
> Cc: Hannes Frederic Sowa <hannes@redhat.com>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Neil Horman <nhorman@redhat.com>
> Cc: Jeremy Eder <jeder@redhat.com>
> Cc: Marko Myllynen <myllynen@redhat.com>
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>

OK, I still think we should handle CPU hotplug better,
but this can be done separately.

Acked-by: Michael S. Tsirkin <mst@redhat.com>

> ---
>  drivers/net/virtio_net.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index d4ac7a6..a21d93a 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1886,8 +1886,11 @@ static int virtnet_probe(struct virtio_device *vdev)
>  	if (vi->any_header_sg)
>  		dev->needed_headroom = vi->hdr_len;
>  
> -	/* Use single tx/rx queue pair as default */
> -	vi->curr_queue_pairs = 1;
> +	/* Enable multiqueue by default */
> +	if (num_online_cpus() >= max_queue_pairs)
> +		vi->curr_queue_pairs = max_queue_pairs;
> +	else
> +		vi->curr_queue_pairs = num_online_cpus();
>  	vi->max_queue_pairs = max_queue_pairs;
>  
>  	/* Allocate/initialize the rx/tx queues, and invoke find_vqs */
> @@ -1918,6 +1921,8 @@ static int virtnet_probe(struct virtio_device *vdev)
>  		goto free_unregister_netdev;
>  	}
>  
> +	virtnet_set_affinity(vi);
> +
>  	/* Assume link up if device can't report link status,
>  	   otherwise get link status from config. */
>  	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
> -- 
> 2.7.4
David Miller Nov. 28, 2016, 6:18 p.m. UTC | #8
From: Jason Wang <jasowang@redhat.com>
Date: Fri, 25 Nov 2016 12:37:26 +0800

> We use single queue even if multiqueue is enabled and let admin to
> enable it through ethtool later. This is used to avoid possible
> regression (small packet TCP stream transmission). But looks like an
> overkill since:
> 
> - single queue user can disable multiqueue when launching qemu
> - brings extra troubles for the management since it needs extra admin
>   tool in guest to enable multiqueue
> - multiqueue performs much better than single queue in most of the
>   cases
> 
> So this patch enables multiqueue by default: if #queues is less than or
> equal to #vcpu, enable as much as queue pairs; if #queues is greater
> than #vcpu, enable #vcpu queue pairs.
> 
> Cc: Hannes Frederic Sowa <hannes@redhat.com>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Neil Horman <nhorman@redhat.com>
> Cc: Jeremy Eder <jeder@redhat.com>
> Cc: Marko Myllynen <myllynen@redhat.com>
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>

Applied, thanks Jason.

Patch

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d4ac7a6..a21d93a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1886,8 +1886,11 @@  static int virtnet_probe(struct virtio_device *vdev)
 	if (vi->any_header_sg)
 		dev->needed_headroom = vi->hdr_len;
 
-	/* Use single tx/rx queue pair as default */
-	vi->curr_queue_pairs = 1;
+	/* Enable multiqueue by default */
+	if (num_online_cpus() >= max_queue_pairs)
+		vi->curr_queue_pairs = max_queue_pairs;
+	else
+		vi->curr_queue_pairs = num_online_cpus();
 	vi->max_queue_pairs = max_queue_pairs;
 
 	/* Allocate/initialize the rx/tx queues, and invoke find_vqs */
@@ -1918,6 +1921,8 @@  static int virtnet_probe(struct virtio_device *vdev)
 		goto free_unregister_netdev;
 	}
 
+	virtnet_set_affinity(vi);
+
 	/* Assume link up if device can't report link status,
 	   otherwise get link status from config. */
 	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {