* [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
@ 2019-02-13 14:51 Yuri Benditovich
  2019-02-18  3:49 ` Jason Wang
  0 siblings, 1 reply; 20+ messages in thread
From: Yuri Benditovich @ 2019-02-13 14:51 UTC (permalink / raw)
  To: qemu-devel, Jason Wang, Michael S . Tsirkin, yan

https://bugzilla.redhat.com/show_bug.cgi?id=1608226
On startup/link-up in a multiqueue configuration, virtio-net
tries to start all the queues, including those that the guest
will not enable via VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
If the guest driver does not allocate queues that it will not
use (the Windows driver, for example, does not) and the number
of actually used queues is less than the maximal number supported
by the device, this causes vhost_net_start to fail and effectively
disables vhost for all the queues, reducing the performance.
This commit fixes that: initially only the first queue is started;
upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, all the queues requested by
the guest are started.

Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
---
 hw/net/virtio-net.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 3f319ef723..d3b1ac6d3a 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     NetClientState *nc = qemu_get_queue(n->nic);
-    int queues = n->multiqueue ? n->max_queues : 1;
+    int queues = n->multiqueue ? n->curr_queues : 1;
 
     if (!get_vhost_net(nc->peer)) {
         return;
@@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
         return VIRTIO_NET_ERR;
     }
 
-    n->curr_queues = queues;
     /* stop the backend before changing the number of queues to avoid handling a
      * disabled queue */
+    virtio_net_set_status(vdev, 0);
+
+    n->curr_queues = queues;
+
     virtio_net_set_status(vdev, vdev->status);
     virtio_net_set_queues(n);
 
-- 
2.17.1


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-13 14:51 [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest Yuri Benditovich
@ 2019-02-18  3:49 ` Jason Wang
  2019-02-18  9:58   ` Yuri Benditovich
  0 siblings, 1 reply; 20+ messages in thread
From: Jason Wang @ 2019-02-18  3:49 UTC (permalink / raw)
  To: Yuri Benditovich, qemu-devel, Michael S . Tsirkin, yan


On 2019/2/13 10:51 PM, Yuri Benditovich wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> On startup/link-up in multiqueue configuration the virtio-net
> tries to starts all the queues, including those that the guest
> will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> If the guest driver does not allocate queues that it will not
> use (for example, Windows driver does not) and number of actually
> used queues is less that maximal number supported by the device,


Is this a requirement of e.g. NDIS? If not, could we simply allocate all
queues in this case? This is usually what the normal Linux driver does.
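
For illustration, a minimal sketch of the "allocate everything up front"
approach described here; the struct and helper names are hypothetical, not
the real Linux virtio-net API, and the point is only that every advertised
queue pair gets a ring before DRIVER_OK even if fewer pairs are used later:

#define MAX_QUEUE_PAIRS 8

struct sketch_vq;                       /* opaque virtqueue handle */
struct sketch_dev {
    struct sketch_vq *rx_vq[MAX_QUEUE_PAIRS];
    struct sketch_vq *tx_vq[MAX_QUEUE_PAIRS];
};

/* hypothetical helper: allocate and register virtqueue number 'index' */
struct sketch_vq *sketch_setup_vq(struct sketch_dev *dev, int index);

static int probe_all_queue_pairs(struct sketch_dev *dev, int max_pairs)
{
    int i;

    for (i = 0; i < max_pairs; i++) {
        dev->rx_vq[i] = sketch_setup_vq(dev, 2 * i);     /* receiveq(i)  */
        dev->tx_vq[i] = sketch_setup_vq(dev, 2 * i + 1); /* transmitq(i) */
        if (!dev->rx_vq[i] || !dev->tx_vq[i]) {
            return -1;  /* a real driver would unwind the earlier queues */
        }
    }
    return 0;
}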


> this causes vhost_net_start to fail and actually disables vhost
> for all the queues, reducing the performance.
> Current commit fixes this: initially only first queue is started,
> upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET started all the queues
> requested by the guest.
>
> Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> ---
>   hw/net/virtio-net.c | 7 +++++--
>   1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 3f319ef723..d3b1ac6d3a 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
>   {
>       VirtIODevice *vdev = VIRTIO_DEVICE(n);
>       NetClientState *nc = qemu_get_queue(n->nic);
> -    int queues = n->multiqueue ? n->max_queues : 1;
> +    int queues = n->multiqueue ? n->curr_queues : 1;
>   
>       if (!get_vhost_net(nc->peer)) {
>           return;
> @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
>           return VIRTIO_NET_ERR;
>       }
>   
> -    n->curr_queues = queues;
>       /* stop the backend before changing the number of queues to avoid handling a
>        * disabled queue */
> +    virtio_net_set_status(vdev, 0);


Any reason for doing this?

Thanks


> +
> +    n->curr_queues = queues;
> +
>       virtio_net_set_status(vdev, vdev->status);
>       virtio_net_set_queues(n);
>   


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-18  3:49 ` Jason Wang
@ 2019-02-18  9:58   ` Yuri Benditovich
  2019-02-18 16:39     ` Michael S. Tsirkin
  0 siblings, 1 reply; 20+ messages in thread
From: Yuri Benditovich @ 2019-02-18  9:58 UTC (permalink / raw)
  To: Jason Wang; +Cc: qemu-devel, Michael S . Tsirkin, Yan Vugenfirer

On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2019/2/13 下午10:51, Yuri Benditovich wrote:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> > On startup/link-up in multiqueue configuration the virtio-net
> > tries to starts all the queues, including those that the guest
> > will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> > If the guest driver does not allocate queues that it will not
> > use (for example, Windows driver does not) and number of actually
> > used queues is less that maximal number supported by the device,
>
>
> Is this a requirement of e.g NDIS? If not, could we simply allocate all
> queues in this case. This is usually what normal Linux driver did.
>
>
> > this causes vhost_net_start to fail and actually disables vhost
> > for all the queues, reducing the performance.
> > Current commit fixes this: initially only first queue is started,
> > upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET started all the queues
> > requested by the guest.
> >
> > Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > ---
> >   hw/net/virtio-net.c | 7 +++++--
> >   1 file changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index 3f319ef723..d3b1ac6d3a 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> >   {
> >       VirtIODevice *vdev = VIRTIO_DEVICE(n);
> >       NetClientState *nc = qemu_get_queue(n->nic);
> > -    int queues = n->multiqueue ? n->max_queues : 1;
> > +    int queues = n->multiqueue ? n->curr_queues : 1;
> >
> >       if (!get_vhost_net(nc->peer)) {
> >           return;
> > @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> >           return VIRTIO_NET_ERR;
> >       }
> >
> > -    n->curr_queues = queues;
> >       /* stop the backend before changing the number of queues to avoid handling a
> >        * disabled queue */
> > +    virtio_net_set_status(vdev, 0);
>
>
> Any reason for doing this?

I think there are 2 reasons:
1. The spec does not require guest SW to allocate unused queues.
2. We spend the guest's physical memory just to make vhost happy when it
touches queues that it should not use.

Thanks,
Yuri Benditovich

>
> Thanks
>
>
> > +
> > +    n->curr_queues = queues;
> > +
> >       virtio_net_set_status(vdev, vdev->status);
> >       virtio_net_set_queues(n);
> >


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-18  9:58   ` Yuri Benditovich
@ 2019-02-18 16:39     ` Michael S. Tsirkin
  2019-02-18 20:49       ` Yuri Benditovich
  0 siblings, 1 reply; 20+ messages in thread
From: Michael S. Tsirkin @ 2019-02-18 16:39 UTC (permalink / raw)
  To: Yuri Benditovich; +Cc: Jason Wang, qemu-devel, Yan Vugenfirer

On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
> On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> >
> > On 2019/2/13 下午10:51, Yuri Benditovich wrote:
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> > > On startup/link-up in multiqueue configuration the virtio-net
> > > tries to starts all the queues, including those that the guest
> > > will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> > > If the guest driver does not allocate queues that it will not
> > > use (for example, Windows driver does not) and number of actually
> > > used queues is less that maximal number supported by the device,
> >
> >
> > Is this a requirement of e.g NDIS? If not, could we simply allocate all
> > queues in this case. This is usually what normal Linux driver did.
> >
> >
> > > this causes vhost_net_start to fail and actually disables vhost
> > > for all the queues, reducing the performance.
> > > Current commit fixes this: initially only first queue is started,
> > > upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET started all the queues
> > > requested by the guest.
> > >
> > > Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > > ---
> > >   hw/net/virtio-net.c | 7 +++++--
> > >   1 file changed, 5 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > index 3f319ef723..d3b1ac6d3a 100644
> > > --- a/hw/net/virtio-net.c
> > > +++ b/hw/net/virtio-net.c
> > > @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > >   {
> > >       VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > >       NetClientState *nc = qemu_get_queue(n->nic);
> > > -    int queues = n->multiqueue ? n->max_queues : 1;
> > > +    int queues = n->multiqueue ? n->curr_queues : 1;
> > >
> > >       if (!get_vhost_net(nc->peer)) {
> > >           return;
> > > @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > >           return VIRTIO_NET_ERR;
> > >       }
> > >
> > > -    n->curr_queues = queues;
> > >       /* stop the backend before changing the number of queues to avoid handling a
> > >        * disabled queue */
> > > +    virtio_net_set_status(vdev, 0);
> >
> >
> > Any reason for doing this?
> 
> I think there are 2 reasons:
> 1. The spec does not require guest SW to allocate unused queues.
> 2. We spend guest's physical memory to just make vhost happy when it
> touches queues that it should not use.
> 
> Thanks,
> Yuri Benditovich

The spec also says:
	queue_enable The driver uses this to selectively prevent the device from executing requests from this
	virtqueue. 1 - enabled; 0 - disabled.

While this is not a conformance clause, it strongly implies that
queues which are not enabled are never accessed by the device.

Yuri, I am guessing you are not enabling these unused queues, right?
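
For reference, queue_enable lives in the per-virtqueue part of the 1.0 PCI
common configuration structure; abridged from the spec, with the
little-endian field types simplified to plain integers:

#include <stdint.h>

struct virtio_pci_common_cfg_queue_part {
    uint16_t queue_select;      /* RW: selects which virtqueue the fields below refer to */
    uint16_t queue_size;        /* RW: queue size, 0 means the queue is unavailable */
    uint16_t queue_msix_vector; /* RW */
    uint16_t queue_enable;      /* RW: 1 - enabled, 0 - disabled */
    uint16_t queue_notify_off;  /* RO for the driver */
    uint64_t queue_desc;        /* RW: physical address of the descriptor area */
    uint64_t queue_avail;       /* RW: physical address of the driver (avail) area */
    uint64_t queue_used;        /* RW: physical address of the device (used) area */
};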



> >
> > Thanks
> >
> >
> > > +
> > > +    n->curr_queues = queues;
> > > +
> > >       virtio_net_set_status(vdev, vdev->status);
> > >       virtio_net_set_queues(n);
> > >


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-18 16:39     ` Michael S. Tsirkin
@ 2019-02-18 20:49       ` Yuri Benditovich
  2019-02-18 23:34         ` Michael S. Tsirkin
  0 siblings, 1 reply; 20+ messages in thread
From: Yuri Benditovich @ 2019-02-18 20:49 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Jason Wang, qemu-devel, Yan Vugenfirer

On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
> > On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > >
> > > On 2019/2/13 下午10:51, Yuri Benditovich wrote:
> > > > https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> > > > On startup/link-up in multiqueue configuration the virtio-net
> > > > tries to starts all the queues, including those that the guest
> > > > will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> > > > If the guest driver does not allocate queues that it will not
> > > > use (for example, Windows driver does not) and number of actually
> > > > used queues is less that maximal number supported by the device,
> > >
> > >
> > > Is this a requirement of e.g NDIS? If not, could we simply allocate all
> > > queues in this case. This is usually what normal Linux driver did.
> > >
> > >
> > > > this causes vhost_net_start to fail and actually disables vhost
> > > > for all the queues, reducing the performance.
> > > > Current commit fixes this: initially only first queue is started,
> > > > upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET started all the queues
> > > > requested by the guest.
> > > >
> > > > Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > > > ---
> > > >   hw/net/virtio-net.c | 7 +++++--
> > > >   1 file changed, 5 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > index 3f319ef723..d3b1ac6d3a 100644
> > > > --- a/hw/net/virtio-net.c
> > > > +++ b/hw/net/virtio-net.c
> > > > @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > > >   {
> > > >       VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > >       NetClientState *nc = qemu_get_queue(n->nic);
> > > > -    int queues = n->multiqueue ? n->max_queues : 1;
> > > > +    int queues = n->multiqueue ? n->curr_queues : 1;
> > > >
> > > >       if (!get_vhost_net(nc->peer)) {
> > > >           return;
> > > > @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > > >           return VIRTIO_NET_ERR;
> > > >       }
> > > >
> > > > -    n->curr_queues = queues;
> > > >       /* stop the backend before changing the number of queues to avoid handling a
> > > >        * disabled queue */
> > > > +    virtio_net_set_status(vdev, 0);
> > >
> > >
> > > Any reason for doing this?
> >
> > I think there are 2 reasons:
> > 1. The spec does not require guest SW to allocate unused queues.
> > 2. We spend guest's physical memory to just make vhost happy when it
> > touches queues that it should not use.
> >
> > Thanks,
> > Yuri Benditovich
>
> The spec also says:
>         queue_enable The driver uses this to selectively prevent the device from executing requests from this
>         virtqueue. 1 - enabled; 0 - disabled.
>
> While this is not a conformance clause this strongly implies that
> queues which are not enabled are never accessed by device.
>
> Yuri I am guessing you are not enabling these unused queues right?

Of course, we (the Windows driver) do not.
The virtio-net code passes max_queues to vhost, and this causes
vhost to try accessing all the queues, fail on the unused ones and finally
leave vhost disabled altogether.

>
>
>
> > >
> > > Thanks
> > >
> > >
> > > > +
> > > > +    n->curr_queues = queues;
> > > > +
> > > >       virtio_net_set_status(vdev, vdev->status);
> > > >       virtio_net_set_queues(n);
> > > >


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-18 20:49       ` Yuri Benditovich
@ 2019-02-18 23:34         ` Michael S. Tsirkin
  2019-02-19  6:27           ` Jason Wang
  0 siblings, 1 reply; 20+ messages in thread
From: Michael S. Tsirkin @ 2019-02-18 23:34 UTC (permalink / raw)
  To: Yuri Benditovich; +Cc: Jason Wang, qemu-devel, Yan Vugenfirer

On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
> On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
> > > On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <jasowang@redhat.com> wrote:
> > > >
> > > >
> > > > On 2019/2/13 下午10:51, Yuri Benditovich wrote:
> > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> > > > > On startup/link-up in multiqueue configuration the virtio-net
> > > > > tries to starts all the queues, including those that the guest
> > > > > will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> > > > > If the guest driver does not allocate queues that it will not
> > > > > use (for example, Windows driver does not) and number of actually
> > > > > used queues is less that maximal number supported by the device,
> > > >
> > > >
> > > > Is this a requirement of e.g NDIS? If not, could we simply allocate all
> > > > queues in this case. This is usually what normal Linux driver did.
> > > >
> > > >
> > > > > this causes vhost_net_start to fail and actually disables vhost
> > > > > for all the queues, reducing the performance.
> > > > > Current commit fixes this: initially only first queue is started,
> > > > > upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET started all the queues
> > > > > requested by the guest.
> > > > >
> > > > > Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > > > > ---
> > > > >   hw/net/virtio-net.c | 7 +++++--
> > > > >   1 file changed, 5 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > index 3f319ef723..d3b1ac6d3a 100644
> > > > > --- a/hw/net/virtio-net.c
> > > > > +++ b/hw/net/virtio-net.c
> > > > > @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > > > >   {
> > > > >       VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > > >       NetClientState *nc = qemu_get_queue(n->nic);
> > > > > -    int queues = n->multiqueue ? n->max_queues : 1;
> > > > > +    int queues = n->multiqueue ? n->curr_queues : 1;
> > > > >
> > > > >       if (!get_vhost_net(nc->peer)) {
> > > > >           return;
> > > > > @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > > > >           return VIRTIO_NET_ERR;
> > > > >       }
> > > > >
> > > > > -    n->curr_queues = queues;
> > > > >       /* stop the backend before changing the number of queues to avoid handling a
> > > > >        * disabled queue */
> > > > > +    virtio_net_set_status(vdev, 0);
> > > >
> > > >
> > > > Any reason for doing this?
> > >
> > > I think there are 2 reasons:
> > > 1. The spec does not require guest SW to allocate unused queues.
> > > 2. We spend guest's physical memory to just make vhost happy when it
> > > touches queues that it should not use.
> > >
> > > Thanks,
> > > Yuri Benditovich
> >
> > The spec also says:
> >         queue_enable The driver uses this to selectively prevent the device from executing requests from this
> >         virtqueue. 1 - enabled; 0 - disabled.
> >
> > While this is not a conformance clause this strongly implies that
> > queues which are not enabled are never accessed by device.
> >
> > Yuri I am guessing you are not enabling these unused queues right?
> 
> Of course, we (Windows driver) do not.
> The code of virtio-net passes max_queues to vhost and this causes
> vhost to try accessing all the queues, fail on unused ones and finally
> leave vhost disabled at all.


Jason, at least for 1.0 accessing disabled queues looks like a spec
violation. What do you think?

> >
> >
> >
> > > >
> > > > Thanks
> > > >
> > > >
> > > > > +
> > > > > +    n->curr_queues = queues;
> > > > > +
> > > > >       virtio_net_set_status(vdev, vdev->status);
> > > > >       virtio_net_set_queues(n);
> > > > >


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-18 23:34         ` Michael S. Tsirkin
@ 2019-02-19  6:27           ` Jason Wang
  2019-02-19 14:19             ` Michael S. Tsirkin
  2019-02-21  6:00             ` Yuri Benditovich
  0 siblings, 2 replies; 20+ messages in thread
From: Jason Wang @ 2019-02-19  6:27 UTC (permalink / raw)
  To: Michael S. Tsirkin, Yuri Benditovich; +Cc: Yan Vugenfirer, qemu-devel


On 2019/2/19 7:34 AM, Michael S. Tsirkin wrote:
> On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
>> On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>>> On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
>>>> On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <jasowang@redhat.com> wrote:
>>>>>
>>>>> On 2019/2/13 下午10:51, Yuri Benditovich wrote:
>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1608226
>>>>>> On startup/link-up in multiqueue configuration the virtio-net
>>>>>> tries to starts all the queues, including those that the guest
>>>>>> will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
>>>>>> If the guest driver does not allocate queues that it will not
>>>>>> use (for example, Windows driver does not) and number of actually
>>>>>> used queues is less that maximal number supported by the device,
>>>>>
>>>>> Is this a requirement of e.g NDIS? If not, could we simply allocate all
>>>>> queues in this case. This is usually what normal Linux driver did.
>>>>>
>>>>>
>>>>>> this causes vhost_net_start to fail and actually disables vhost
>>>>>> for all the queues, reducing the performance.
>>>>>> Current commit fixes this: initially only first queue is started,
>>>>>> upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET started all the queues
>>>>>> requested by the guest.
>>>>>>
>>>>>> Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
>>>>>> ---
>>>>>>    hw/net/virtio-net.c | 7 +++++--
>>>>>>    1 file changed, 5 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>>>>> index 3f319ef723..d3b1ac6d3a 100644
>>>>>> --- a/hw/net/virtio-net.c
>>>>>> +++ b/hw/net/virtio-net.c
>>>>>> @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
>>>>>>    {
>>>>>>        VirtIODevice *vdev = VIRTIO_DEVICE(n);
>>>>>>        NetClientState *nc = qemu_get_queue(n->nic);
>>>>>> -    int queues = n->multiqueue ? n->max_queues : 1;
>>>>>> +    int queues = n->multiqueue ? n->curr_queues : 1;
>>>>>>
>>>>>>        if (!get_vhost_net(nc->peer)) {
>>>>>>            return;
>>>>>> @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
>>>>>>            return VIRTIO_NET_ERR;
>>>>>>        }
>>>>>>
>>>>>> -    n->curr_queues = queues;
>>>>>>        /* stop the backend before changing the number of queues to avoid handling a
>>>>>>         * disabled queue */
>>>>>> +    virtio_net_set_status(vdev, 0);
>>>>>
>>>>> Any reason for doing this?
>>>> I think there are 2 reasons:
>>>> 1. The spec does not require guest SW to allocate unused queues.
>>>> 2. We spend guest's physical memory to just make vhost happy when it
>>>> touches queues that it should not use.
>>>>
>>>> Thanks,
>>>> Yuri Benditovich
>>> The spec also says:
>>>          queue_enable The driver uses this to selectively prevent the device from executing requests from this
>>>          virtqueue. 1 - enabled; 0 - disabled.
>>>
>>> While this is not a conformance clause this strongly implies that
>>> queues which are not enabled are never accessed by device.
>>>
>>> Yuri I am guessing you are not enabling these unused queues right?
>> Of course, we (Windows driver) do not.
>> The code of virtio-net passes max_queues to vhost and this causes
>> vhost to try accessing all the queues, fail on unused ones and finally
>> leave vhost disabled at all.
>
> Jason, at least for 1.0 accessing disabled queues looks like a spec
> violation. What do you think?


Yes, but there are some issues:

- How to detect a disabled queue for a 0.9x device? It looks like there is
no way according to the spec, so the device must assume all queues were
enabled.

- For 1.0, if we depend on queue_enable, we should implement the
callback for vhost, I think. Otherwise it's still buggy.

So it looks tricky to enable and disable queues through setting the status.
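
To make the missing piece concrete, a purely hypothetical sketch of such a
per-queue hook; neither the callback type nor the function below is an
existing QEMU interface, the names are invented:

#include <stdbool.h>

/* report whether the guest enabled virtqueue 'queue_index':
 * queue_enable for a 1.0 transport, a non-zero ring address for legacy */
typedef bool (*sketch_queue_enabled_fn)(void *transport, int queue_index);

static int sketch_vhost_start_queues(void *transport, int total_queues,
                                     sketch_queue_enabled_fn queue_enabled)
{
    int i, started = 0;

    for (i = 0; i < total_queues; i++) {
        if (!queue_enabled(transport, i)) {
            continue;           /* skip queues the guest never enabled */
        }
        /* set up and start the vhost virtqueue for index i here */
        started++;
    }
    return started;
}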

Thanks


>
>>>
>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>> +
>>>>>> +    n->curr_queues = queues;
>>>>>> +
>>>>>>        virtio_net_set_status(vdev, vdev->status);
>>>>>>        virtio_net_set_queues(n);
>>>>>>


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-19  6:27           ` Jason Wang
@ 2019-02-19 14:19             ` Michael S. Tsirkin
  2019-02-20 10:13               ` Jason Wang
  2019-02-21  6:00             ` Yuri Benditovich
  1 sibling, 1 reply; 20+ messages in thread
From: Michael S. Tsirkin @ 2019-02-19 14:19 UTC (permalink / raw)
  To: Jason Wang; +Cc: Yuri Benditovich, Yan Vugenfirer, qemu-devel

On Tue, Feb 19, 2019 at 02:27:35PM +0800, Jason Wang wrote:
> 
> On 2019/2/19 上午7:34, Michael S. Tsirkin wrote:
> > On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
> > > On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > > > On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
> > > > > On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <jasowang@redhat.com> wrote:
> > > > > > 
> > > > > > On 2019/2/13 下午10:51, Yuri Benditovich wrote:
> > > > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> > > > > > > On startup/link-up in multiqueue configuration the virtio-net
> > > > > > > tries to starts all the queues, including those that the guest
> > > > > > > will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> > > > > > > If the guest driver does not allocate queues that it will not
> > > > > > > use (for example, Windows driver does not) and number of actually
> > > > > > > used queues is less that maximal number supported by the device,
> > > > > > 
> > > > > > Is this a requirement of e.g NDIS? If not, could we simply allocate all
> > > > > > queues in this case. This is usually what normal Linux driver did.
> > > > > > 
> > > > > > 
> > > > > > > this causes vhost_net_start to fail and actually disables vhost
> > > > > > > for all the queues, reducing the performance.
> > > > > > > Current commit fixes this: initially only first queue is started,
> > > > > > > upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET started all the queues
> > > > > > > requested by the guest.
> > > > > > > 
> > > > > > > Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > > > > > > ---
> > > > > > >    hw/net/virtio-net.c | 7 +++++--
> > > > > > >    1 file changed, 5 insertions(+), 2 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > > > index 3f319ef723..d3b1ac6d3a 100644
> > > > > > > --- a/hw/net/virtio-net.c
> > > > > > > +++ b/hw/net/virtio-net.c
> > > > > > > @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > > > > > >    {
> > > > > > >        VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > > > > >        NetClientState *nc = qemu_get_queue(n->nic);
> > > > > > > -    int queues = n->multiqueue ? n->max_queues : 1;
> > > > > > > +    int queues = n->multiqueue ? n->curr_queues : 1;
> > > > > > > 
> > > > > > >        if (!get_vhost_net(nc->peer)) {
> > > > > > >            return;
> > > > > > > @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > > > > > >            return VIRTIO_NET_ERR;
> > > > > > >        }
> > > > > > > 
> > > > > > > -    n->curr_queues = queues;
> > > > > > >        /* stop the backend before changing the number of queues to avoid handling a
> > > > > > >         * disabled queue */
> > > > > > > +    virtio_net_set_status(vdev, 0);
> > > > > > 
> > > > > > Any reason for doing this?
> > > > > I think there are 2 reasons:
> > > > > 1. The spec does not require guest SW to allocate unused queues.
> > > > > 2. We spend guest's physical memory to just make vhost happy when it
> > > > > touches queues that it should not use.
> > > > > 
> > > > > Thanks,
> > > > > Yuri Benditovich
> > > > The spec also says:
> > > >          queue_enable The driver uses this to selectively prevent the device from executing requests from this
> > > >          virtqueue. 1 - enabled; 0 - disabled.
> > > > 
> > > > While this is not a conformance clause this strongly implies that
> > > > queues which are not enabled are never accessed by device.
> > > > 
> > > > Yuri I am guessing you are not enabling these unused queues right?
> > > Of course, we (Windows driver) do not.
> > > The code of virtio-net passes max_queues to vhost and this causes
> > > vhost to try accessing all the queues, fail on unused ones and finally
> > > leave vhost disabled at all.
> > 
> > Jason, at least for 1.0 accessing disabled queues looks like a spec
> > violation. What do you think?
> 
> 
> Yes, but there's some issues:
> 
> - How to detect a disabled queue for 0.9x device? Looks like there's no way
> according to the spec, so device must assume all queues was enabled.

Traditionally devices assumed that queue address 0 implies not enabled.
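
As an illustration of that heuristic (a sketch against the QEMU internals
discussed in this thread, not a tested patch), the device could count only
the queue pairs whose rx ring was given a non-zero descriptor address and
start vhost for just those:

/* sketch only: count the pairs a legacy (0.9x) guest actually set up */
static int virtio_net_ready_queue_pairs(VirtIONet *n)
{
    VirtIODevice *vdev = VIRTIO_DEVICE(n);
    int i, ready = 0;

    for (i = 0; i < n->max_queues; i++) {
        /* the receiveq of pair i is virtqueue 2 * i */
        if (virtio_queue_get_desc_addr(vdev, 2 * i) != 0) {
            ready++;
        }
    }
    return ready;
}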

> - For 1.0, if we depends on queue_enable, we should implement the callback
> for vhost I think. Otherwise it's still buggy.
> 
> So it looks tricky to enable and disable queues through set status
> 
> Thanks


Do you agree it's a compliance issue though?

> 
> > 
> > > > 
> > > > 
> > > > > > Thanks
> > > > > > 
> > > > > > 
> > > > > > > +
> > > > > > > +    n->curr_queues = queues;
> > > > > > > +
> > > > > > >        virtio_net_set_status(vdev, vdev->status);
> > > > > > >        virtio_net_set_queues(n);
> > > > > > > 


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-19 14:19             ` Michael S. Tsirkin
@ 2019-02-20 10:13               ` Jason Wang
  0 siblings, 0 replies; 20+ messages in thread
From: Jason Wang @ 2019-02-20 10:13 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Yan Vugenfirer, Yuri Benditovich, qemu-devel


On 2019/2/19 10:19 PM, Michael S. Tsirkin wrote:
> On Tue, Feb 19, 2019 at 02:27:35PM +0800, Jason Wang wrote:
>> On 2019/2/19 上午7:34, Michael S. Tsirkin wrote:
>>> On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
>>>> On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin<mst@redhat.com>  wrote:
>>>>> On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
>>>>>> On Mon, Feb 18, 2019 at 5:49 AM Jason Wang<jasowang@redhat.com>  wrote:
>>>>>>> On 2019/2/13 下午10:51, Yuri Benditovich wrote:
>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1608226
>>>>>>>> On startup/link-up in multiqueue configuration the virtio-net
>>>>>>>> tries to starts all the queues, including those that the guest
>>>>>>>> will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
>>>>>>>> If the guest driver does not allocate queues that it will not
>>>>>>>> use (for example, Windows driver does not) and number of actually
>>>>>>>> used queues is less that maximal number supported by the device,
>>>>>>> Is this a requirement of e.g NDIS? If not, could we simply allocate all
>>>>>>> queues in this case. This is usually what normal Linux driver did.
>>>>>>>
>>>>>>>
>>>>>>>> this causes vhost_net_start to fail and actually disables vhost
>>>>>>>> for all the queues, reducing the performance.
>>>>>>>> Current commit fixes this: initially only first queue is started,
>>>>>>>> upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET started all the queues
>>>>>>>> requested by the guest.
>>>>>>>>
>>>>>>>> Signed-off-by: Yuri Benditovich<yuri.benditovich@daynix.com>
>>>>>>>> ---
>>>>>>>>     hw/net/virtio-net.c | 7 +++++--
>>>>>>>>     1 file changed, 5 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>>>>>>> index 3f319ef723..d3b1ac6d3a 100644
>>>>>>>> --- a/hw/net/virtio-net.c
>>>>>>>> +++ b/hw/net/virtio-net.c
>>>>>>>> @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
>>>>>>>>     {
>>>>>>>>         VirtIODevice *vdev = VIRTIO_DEVICE(n);
>>>>>>>>         NetClientState *nc = qemu_get_queue(n->nic);
>>>>>>>> -    int queues = n->multiqueue ? n->max_queues : 1;
>>>>>>>> +    int queues = n->multiqueue ? n->curr_queues : 1;
>>>>>>>>
>>>>>>>>         if (!get_vhost_net(nc->peer)) {
>>>>>>>>             return;
>>>>>>>> @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
>>>>>>>>             return VIRTIO_NET_ERR;
>>>>>>>>         }
>>>>>>>>
>>>>>>>> -    n->curr_queues = queues;
>>>>>>>>         /* stop the backend before changing the number of queues to avoid handling a
>>>>>>>>          * disabled queue */
>>>>>>>> +    virtio_net_set_status(vdev, 0);
>>>>>>> Any reason for doing this?
>>>>>> I think there are 2 reasons:
>>>>>> 1. The spec does not require guest SW to allocate unused queues.
>>>>>> 2. We spend guest's physical memory to just make vhost happy when it
>>>>>> touches queues that it should not use.
>>>>>>
>>>>>> Thanks,
>>>>>> Yuri Benditovich
>>>>> The spec also says:
>>>>>           queue_enable The driver uses this to selectively prevent the device from executing requests from this
>>>>>           virtqueue. 1 - enabled; 0 - disabled.
>>>>>
>>>>> While this is not a conformance clause this strongly implies that
>>>>> queues which are not enabled are never accessed by device.
>>>>>
>>>>> Yuri I am guessing you are not enabling these unused queues right?
>>>> Of course, we (Windows driver) do not.
>>>> The code of virtio-net passes max_queues to vhost and this causes
>>>> vhost to try accessing all the queues, fail on unused ones and finally
>>>> leave vhost disabled at all.
>>> Jason, at least for 1.0 accessing disabled queues looks like a spec
>>> violation. What do you think?
>> Yes, but there's some issues:
>>
>> - How to detect a disabled queue for 0.9x device? Looks like there's no way
>> according to the spec, so device must assume all queues was enabled.
> Traditionally devices assumed that queue address 0 implies not enabled.


So the device enables the queue when the driver writes a non-zero value for
the queue address? Unfortunately this is missing from the spec.


>
>> - For 1.0, if we depends on queue_enable, we should implement the callback
>> for vhost I think. Otherwise it's still buggy.
>>
>> So it looks tricky to enable and disable queues through set status
>>
>> Thanks
> Do you agree it's a compliance issue though?


Yes, but it needs more work than what this patch did.

Thanks


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-19  6:27           ` Jason Wang
  2019-02-19 14:19             ` Michael S. Tsirkin
@ 2019-02-21  6:00             ` Yuri Benditovich
  2019-02-21  6:49               ` Jason Wang
  1 sibling, 1 reply; 20+ messages in thread
From: Yuri Benditovich @ 2019-02-21  6:00 UTC (permalink / raw)
  To: Jason Wang; +Cc: Michael S. Tsirkin, Yan Vugenfirer, qemu-devel

On Tue, Feb 19, 2019 at 8:27 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2019/2/19 上午7:34, Michael S. Tsirkin wrote:
> > On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
> >> On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >>> On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
> >>>> On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <jasowang@redhat.com> wrote:
> >>>>>
> >>>>> On 2019/2/13 下午10:51, Yuri Benditovich wrote:
> >>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> >>>>>> On startup/link-up in multiqueue configuration the virtio-net
> >>>>>> tries to starts all the queues, including those that the guest
> >>>>>> will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> >>>>>> If the guest driver does not allocate queues that it will not
> >>>>>> use (for example, Windows driver does not) and number of actually
> >>>>>> used queues is less that maximal number supported by the device,
> >>>>>
> >>>>> Is this a requirement of e.g NDIS? If not, could we simply allocate all
> >>>>> queues in this case. This is usually what normal Linux driver did.
> >>>>>
> >>>>>
> >>>>>> this causes vhost_net_start to fail and actually disables vhost
> >>>>>> for all the queues, reducing the performance.
> >>>>>> Current commit fixes this: initially only first queue is started,
> >>>>>> upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET started all the queues
> >>>>>> requested by the guest.
> >>>>>>
> >>>>>> Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> >>>>>> ---
> >>>>>>    hw/net/virtio-net.c | 7 +++++--
> >>>>>>    1 file changed, 5 insertions(+), 2 deletions(-)
> >>>>>>
> >>>>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> >>>>>> index 3f319ef723..d3b1ac6d3a 100644
> >>>>>> --- a/hw/net/virtio-net.c
> >>>>>> +++ b/hw/net/virtio-net.c
> >>>>>> @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> >>>>>>    {
> >>>>>>        VirtIODevice *vdev = VIRTIO_DEVICE(n);
> >>>>>>        NetClientState *nc = qemu_get_queue(n->nic);
> >>>>>> -    int queues = n->multiqueue ? n->max_queues : 1;
> >>>>>> +    int queues = n->multiqueue ? n->curr_queues : 1;
> >>>>>>
> >>>>>>        if (!get_vhost_net(nc->peer)) {
> >>>>>>            return;
> >>>>>> @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> >>>>>>            return VIRTIO_NET_ERR;
> >>>>>>        }
> >>>>>>
> >>>>>> -    n->curr_queues = queues;
> >>>>>>        /* stop the backend before changing the number of queues to avoid handling a
> >>>>>>         * disabled queue */
> >>>>>> +    virtio_net_set_status(vdev, 0);
> >>>>>
> >>>>> Any reason for doing this?
> >>>> I think there are 2 reasons:
> >>>> 1. The spec does not require guest SW to allocate unused queues.
> >>>> 2. We spend guest's physical memory to just make vhost happy when it
> >>>> touches queues that it should not use.
> >>>>
> >>>> Thanks,
> >>>> Yuri Benditovich
> >>> The spec also says:
> >>>          queue_enable The driver uses this to selectively prevent the device from executing requests from this
> >>>          virtqueue. 1 - enabled; 0 - disabled.
> >>>
> >>> While this is not a conformance clause this strongly implies that
> >>> queues which are not enabled are never accessed by device.
> >>>
> >>> Yuri I am guessing you are not enabling these unused queues right?
> >> Of course, we (Windows driver) do not.
> >> The code of virtio-net passes max_queues to vhost and this causes
> >> vhost to try accessing all the queues, fail on unused ones and finally
> >> leave vhost disabled at all.
> >
> > Jason, at least for 1.0 accessing disabled queues looks like a spec
> > violation. What do you think?
>
>
> Yes, but there's some issues:
>
> - How to detect a disabled queue for 0.9x device? Looks like there's no
> way according to the spec, so device must assume all queues was enabled.

Can you please add a few words: what is a 0.9 device (probably this
is more about the driver), and what is the problem with it?

>
> - For 1.0, if we depends on queue_enable, we should implement the
> callback for vhost I think. Otherwise it's still buggy.
>
> So it looks tricky to enable and disable queues through set status

If I manage to modify the patch in such a way that it acts only in the
'target' case, i.e. only if some of the queues are not initialized (at the
time of driver_ok), will it be safer?

>
> Thanks
>
>
> >
> >>>
> >>>
> >>>>> Thanks
> >>>>>
> >>>>>
> >>>>>> +
> >>>>>> +    n->curr_queues = queues;
> >>>>>> +
> >>>>>>        virtio_net_set_status(vdev, vdev->status);
> >>>>>>        virtio_net_set_queues(n);
> >>>>>>


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-21  6:00             ` Yuri Benditovich
@ 2019-02-21  6:49               ` Jason Wang
  2019-02-21  8:18                 ` Yuri Benditovich
  0 siblings, 1 reply; 20+ messages in thread
From: Jason Wang @ 2019-02-21  6:49 UTC (permalink / raw)
  To: Yuri Benditovich; +Cc: Michael S. Tsirkin, Yan Vugenfirer, qemu-devel


On 2019/2/21 2:00 PM, Yuri Benditovich wrote:
> On Tue, Feb 19, 2019 at 8:27 AM Jason Wang <jasowang@redhat.com> wrote:
>>
>> On 2019/2/19 上午7:34, Michael S. Tsirkin wrote:
>>> On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
>>>> On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>>>>> On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
>>>>>> On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <jasowang@redhat.com> wrote:
>>>>>>> On 2019/2/13 下午10:51, Yuri Benditovich wrote:
>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1608226
>>>>>>>> On startup/link-up in multiqueue configuration the virtio-net
>>>>>>>> tries to starts all the queues, including those that the guest
>>>>>>>> will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
>>>>>>>> If the guest driver does not allocate queues that it will not
>>>>>>>> use (for example, Windows driver does not) and number of actually
>>>>>>>> used queues is less that maximal number supported by the device,
>>>>>>> Is this a requirement of e.g NDIS? If not, could we simply allocate all
>>>>>>> queues in this case. This is usually what normal Linux driver did.
>>>>>>>
>>>>>>>
>>>>>>>> this causes vhost_net_start to fail and actually disables vhost
>>>>>>>> for all the queues, reducing the performance.
>>>>>>>> Current commit fixes this: initially only first queue is started,
>>>>>>>> upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET started all the queues
>>>>>>>> requested by the guest.
>>>>>>>>
>>>>>>>> Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
>>>>>>>> ---
>>>>>>>>     hw/net/virtio-net.c | 7 +++++--
>>>>>>>>     1 file changed, 5 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>>>>>>> index 3f319ef723..d3b1ac6d3a 100644
>>>>>>>> --- a/hw/net/virtio-net.c
>>>>>>>> +++ b/hw/net/virtio-net.c
>>>>>>>> @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
>>>>>>>>     {
>>>>>>>>         VirtIODevice *vdev = VIRTIO_DEVICE(n);
>>>>>>>>         NetClientState *nc = qemu_get_queue(n->nic);
>>>>>>>> -    int queues = n->multiqueue ? n->max_queues : 1;
>>>>>>>> +    int queues = n->multiqueue ? n->curr_queues : 1;
>>>>>>>>
>>>>>>>>         if (!get_vhost_net(nc->peer)) {
>>>>>>>>             return;
>>>>>>>> @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
>>>>>>>>             return VIRTIO_NET_ERR;
>>>>>>>>         }
>>>>>>>>
>>>>>>>> -    n->curr_queues = queues;
>>>>>>>>         /* stop the backend before changing the number of queues to avoid handling a
>>>>>>>>          * disabled queue */
>>>>>>>> +    virtio_net_set_status(vdev, 0);
>>>>>>> Any reason for doing this?
>>>>>> I think there are 2 reasons:
>>>>>> 1. The spec does not require guest SW to allocate unused queues.
>>>>>> 2. We spend guest's physical memory to just make vhost happy when it
>>>>>> touches queues that it should not use.
>>>>>>
>>>>>> Thanks,
>>>>>> Yuri Benditovich
>>>>> The spec also says:
>>>>>           queue_enable The driver uses this to selectively prevent the device from executing requests from this
>>>>>           virtqueue. 1 - enabled; 0 - disabled.
>>>>>
>>>>> While this is not a conformance clause this strongly implies that
>>>>> queues which are not enabled are never accessed by device.
>>>>>
>>>>> Yuri I am guessing you are not enabling these unused queues right?
>>>> Of course, we (Windows driver) do not.
>>>> The code of virtio-net passes max_queues to vhost and this causes
>>>> vhost to try accessing all the queues, fail on unused ones and finally
>>>> leave vhost disabled at all.
>>> Jason, at least for 1.0 accessing disabled queues looks like a spec
>>> violation. What do you think?
>>
>> Yes, but there's some issues:
>>
>> - How to detect a disabled queue for 0.9x device? Looks like there's no
>> way according to the spec, so device must assume all queues was enabled.
> Can you please add several words - what is 0.9 device (probably this
> is more about driver) and
> what is the problem with it?


It's not a net-specific issue. A 0.9x device is the legacy device defined in
the spec. We don't have a method to disable and enable a specific queue
there. Michael said we can treat queue address 0 as disabled,
but there's still the question of how to enable it. The spec is unclear, and
it was too late to add things for the legacy device. For a 1.0 device we have
queue_enable, but its implementation is incomplete: since it can't work
with vhost correctly, we probably need to add things to make it work.


>
>> - For 1.0, if we depends on queue_enable, we should implement the
>> callback for vhost I think. Otherwise it's still buggy.
>>
>> So it looks tricky to enable and disable queues through set status
> If I succeed to modify the patch such a way that it will act only in
> 'target' case,
> i.e. only if some of queueus are not initialized (at time of
> driver_ok), will it be more safe?


For a 1.0 device, we can fix queue_enable, but for a 0.9x device how do
you enable one specific queue in this case? (by setting the status?)

A fundamental question is what prevents you from just initializing all
queues during driver start? It looks to me this saves a lot of effort
compared to allocating queues dynamically.

Thanks


>
>> Thanks
>>
>>
>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>>> +
>>>>>>>> +    n->curr_queues = queues;
>>>>>>>> +
>>>>>>>>         virtio_net_set_status(vdev, vdev->status);
>>>>>>>>         virtio_net_set_queues(n);
>>>>>>>>


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-21  6:49               ` Jason Wang
@ 2019-02-21  8:18                 ` Yuri Benditovich
  2019-02-21  9:40                   ` Jason Wang
  2019-02-28 14:08                   ` Michael S. Tsirkin
  0 siblings, 2 replies; 20+ messages in thread
From: Yuri Benditovich @ 2019-02-21  8:18 UTC (permalink / raw)
  To: Jason Wang; +Cc: Michael S. Tsirkin, Yan Vugenfirer, qemu-devel

On Thu, Feb 21, 2019 at 8:49 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2019/2/21 下午2:00, Yuri Benditovich wrote:
> > On Tue, Feb 19, 2019 at 8:27 AM Jason Wang <jasowang@redhat.com> wrote:
> >>
> >> On 2019/2/19 上午7:34, Michael S. Tsirkin wrote:
> >>> On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
> >>>> On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >>>>> On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
> >>>>>> On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <jasowang@redhat.com> wrote:
> >>>>>>> On 2019/2/13 下午10:51, Yuri Benditovich wrote:
> >>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> >>>>>>>> On startup/link-up in multiqueue configuration the virtio-net
> >>>>>>>> tries to starts all the queues, including those that the guest
> >>>>>>>> will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> >>>>>>>> If the guest driver does not allocate queues that it will not
> >>>>>>>> use (for example, Windows driver does not) and number of actually
> >>>>>>>> used queues is less that maximal number supported by the device,
> >>>>>>> Is this a requirement of e.g NDIS? If not, could we simply allocate all
> >>>>>>> queues in this case. This is usually what normal Linux driver did.
> >>>>>>>
> >>>>>>>
> >>>>>>>> this causes vhost_net_start to fail and actually disables vhost
> >>>>>>>> for all the queues, reducing the performance.
> >>>>>>>> Current commit fixes this: initially only first queue is started,
> >>>>>>>> upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET started all the queues
> >>>>>>>> requested by the guest.
> >>>>>>>>
> >>>>>>>> Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> >>>>>>>> ---
> >>>>>>>>     hw/net/virtio-net.c | 7 +++++--
> >>>>>>>>     1 file changed, 5 insertions(+), 2 deletions(-)
> >>>>>>>>
> >>>>>>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> >>>>>>>> index 3f319ef723..d3b1ac6d3a 100644
> >>>>>>>> --- a/hw/net/virtio-net.c
> >>>>>>>> +++ b/hw/net/virtio-net.c
> >>>>>>>> @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> >>>>>>>>     {
> >>>>>>>>         VirtIODevice *vdev = VIRTIO_DEVICE(n);
> >>>>>>>>         NetClientState *nc = qemu_get_queue(n->nic);
> >>>>>>>> -    int queues = n->multiqueue ? n->max_queues : 1;
> >>>>>>>> +    int queues = n->multiqueue ? n->curr_queues : 1;
> >>>>>>>>
> >>>>>>>>         if (!get_vhost_net(nc->peer)) {
> >>>>>>>>             return;
> >>>>>>>> @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> >>>>>>>>             return VIRTIO_NET_ERR;
> >>>>>>>>         }
> >>>>>>>>
> >>>>>>>> -    n->curr_queues = queues;
> >>>>>>>>         /* stop the backend before changing the number of queues to avoid handling a
> >>>>>>>>          * disabled queue */
> >>>>>>>> +    virtio_net_set_status(vdev, 0);
> >>>>>>> Any reason for doing this?
> >>>>>> I think there are 2 reasons:
> >>>>>> 1. The spec does not require guest SW to allocate unused queues.
> >>>>>> 2. We spend guest's physical memory to just make vhost happy when it
> >>>>>> touches queues that it should not use.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Yuri Benditovich
> >>>>> The spec also says:
> >>>>>           queue_enable The driver uses this to selectively prevent the device from executing requests from this
> >>>>>           virtqueue. 1 - enabled; 0 - disabled.
> >>>>>
> >>>>> While this is not a conformance clause this strongly implies that
> >>>>> queues which are not enabled are never accessed by device.
> >>>>>
> >>>>> Yuri I am guessing you are not enabling these unused queues right?
> >>>> Of course, we (Windows driver) do not.
> >>>> The code of virtio-net passes max_queues to vhost and this causes
> >>>> vhost to try accessing all the queues, fail on unused ones and finally
> >>>> leave vhost disabled at all.
> >>> Jason, at least for 1.0 accessing disabled queues looks like a spec
> >>> violation. What do you think?
> >>
> >> Yes, but there's some issues:
> >>
> >> - How to detect a disabled queue for 0.9x device? Looks like there's no
> >> way according to the spec, so device must assume all queues was enabled.
> > Can you please add several words - what is 0.9 device (probably this
> > is more about driver) and
> > what is the problem with it?
>
>
> It's not a net specific issue. 0.9x device is legacy device defined in
> the spec. We don't have a method to disable and enable a specific queue
> at that time. Michael said we can assume queue address 0 as disabled,
> but there's still a question of how to enable it. Spec is unclear and it
> was too late to add thing for legacy device. For 1.0 device we have
> queue_enable, but its implementation is incomplete, since it can work
> with vhost correctly, we probably need to add thing to make it work.
>
>
> >
> >> - For 1.0, if we depends on queue_enable, we should implement the
> >> callback for vhost I think. Otherwise it's still buggy.
> >>
> >> So it looks tricky to enable and disable queues through set status
> > If I succeed to modify the patch such a way that it will act only in
> > 'target' case,
> > i.e. only if some of queueus are not initialized (at time of
> > driver_ok), will it be more safe?
>
>
> For 1.0 device, we can fix the queue_enable, but for 0.9x device how do
> you enable one specific queue in this case? (setting status?)
>

Do I understand correctly that for a 0.9 device, in some cases the device
will have the _MQ feature negotiated but will not receive
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET? Or is the problem different?

> A fundamental question is what prevents you from just initialization all
> queues during driver start? It looks to me this save lots of efforts
> than allocating queue dynamically.
>

This is not so trivial in the Windows driver, as it does not have objects for
queues that it does not use. The Linux driver first allocates all the queues
and then adds Rx/Tx to those it will use. The Windows driver first decides
how many queues it will use, then allocates objects for them and initializes
them from zero to a fully functional state.

> Thanks
>
>
> >
> >> Thanks
> >>
> >>
> >>>>>
> >>>>>>> Thanks
> >>>>>>>
> >>>>>>>
> >>>>>>>> +
> >>>>>>>> +    n->curr_queues = queues;
> >>>>>>>> +
> >>>>>>>>         virtio_net_set_status(vdev, vdev->status);
> >>>>>>>>         virtio_net_set_queues(n);
> >>>>>>>>


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-21  8:18                 ` Yuri Benditovich
@ 2019-02-21  9:40                   ` Jason Wang
  2019-02-22  1:35                     ` Michael S. Tsirkin
  2019-02-28 14:08                   ` Michael S. Tsirkin
  1 sibling, 1 reply; 20+ messages in thread
From: Jason Wang @ 2019-02-21  9:40 UTC (permalink / raw)
  To: Yuri Benditovich; +Cc: Yan Vugenfirer, qemu-devel, Michael S. Tsirkin


On 2019/2/21 4:18 PM, Yuri Benditovich wrote:
>> For 1.0 device, we can fix the queue_enable, but for 0.9x device how do
>> you enable one specific queue in this case? (setting status?)
>>
> Do I understand correctly that for 0.9 device in some cases the device will
> receive feature _MQ set, but will not receive VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET?
> Or the problem is different?


Let me clarify: VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET is used to control the
number of queue pairs used by the device for transmission and
reception. It is not used to enable or disable a virtqueue.

For a 1.0 device, we should use queue_enable in the PCI cfg to enable and
disable a queue:


We could do:

1) allocate memory and set queue_enable for vq0

2) allocate memory and set queue_enable for vq1

3) Set vq pairs to 1

4) allocate memory and set queue_enable for vq2

5) allocate memory and set queue_enable for vq3

6) set vq pairs to 2


But this requires a proper implementation of queue_enable for vhost,
which is missing in qemu and is probably what you really want to do.
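
A hypothetical driver-side sketch of steps 1)-6), written against the
per-queue common configuration fields of the 1.0 spec (struct layout
simplified, helper assumptions noted in the comments). As discussed further
down in the thread, the spec expects virtqueue setup to be finished before
DRIVER_OK, so take this only as a map of the registers involved, not as a
conforming sequence:

#include <stdint.h>

/* simplified mirror of the per-queue part of virtio_pci_common_cfg */
struct common_cfg_queue_part {
    uint16_t queue_select, queue_size, queue_msix_vector;
    uint16_t queue_enable, queue_notify_off;
    uint64_t queue_desc, queue_avail, queue_used;
};

/* 'cc' stands for the mapped common configuration region (assumption) */
static void enable_one_vq(volatile struct common_cfg_queue_part *cc,
                          uint16_t vq, uint64_t desc, uint64_t avail,
                          uint64_t used)
{
    cc->queue_select = vq;    /* receiveq(i) is 2*i, transmitq(i) is 2*i+1  */
    cc->queue_desc   = desc;  /* ring addresses allocated by the driver     */
    cc->queue_avail  = avail;
    cc->queue_used   = used;
    cc->queue_enable = 1;     /* the "set queue_enable for vqN" steps above */
}

After calling this for vq0 and vq1 the driver would send
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET with 1, then repeat for vq2/vq3 and set the
number of pairs to 2, matching the sequence above.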

But for a 0.9x device, there's no such way to do this. That's the issue.
So the driver must allocate all queues before starting the device, otherwise
there's no way to enable them afterwards. There are tricks to make it work,
like what is done in your patch, but they depend on a specific
implementation like qemu, which is sub-optimal.


>
>> A fundamental question is what prevents you from just initialization all
>> queues during driver start? It looks to me this save lots of efforts
>> than allocating queue dynamically.
>>
> This is not so trivial in Windows driver, as it does not have objects for queues
> that it does not use. Linux driver first of all allocates all the
> queues and then
> adds Rx/Tx to those it will use. Windows driver first decides how many queues
> it will use then allocates objects for them and initializes them from zero to
> fully functional state.


Well, you just need to allocate some memory for the virtqueue; there's
no need to make it visible to the rest until it is enabled.

Thanks


>


* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-21  9:40                   ` Jason Wang
@ 2019-02-22  1:35                     ` Michael S. Tsirkin
  2019-02-22  3:04                       ` Jason Wang
  0 siblings, 1 reply; 20+ messages in thread
From: Michael S. Tsirkin @ 2019-02-22  1:35 UTC (permalink / raw)
  To: Jason Wang; +Cc: Yuri Benditovich, Yan Vugenfirer, qemu-devel

On Thu, Feb 21, 2019 at 05:40:22PM +0800, Jason Wang wrote:
> 
> On 2019/2/21 下午4:18, Yuri Benditovich wrote:
> 
>         For 1.0 device, we can fix the queue_enable, but for 0.9x device how do
>         you enable one specific queue in this case? (setting status?)
> 
> 
>     Do I understand correctly that for 0.9 device in some cases the device will
>     receive feature _MQ set, but will not receive VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET?
>     Or the problem is different?
> 
> 
> Let me clarify, VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET is used to control the the
> number of queue pairs used by device for doing transmission and reception. It
> was not used to enable or disable a virtqueue.
> 
> For 1.0 device, we should use queue_enable in pci cfg to enable and disable
> queue:
> 
> 
> We could do:
> 
> 1) allocate memory and set queue_enable for vq0
> 
> 2) allocate memory and set queue_enable for vq1
> 
> 3) Set vq paris to 1
> 
> 4) allocate memory and set queue_enable for vq2
> 
> 5) allocate memory and set queue_enable for vq3
> 
> 6) set vq pairs to 2


I do not think the spec allows this.


The driver MUST follow this sequence to initialize a device:
1. Reset the device.
2. Set the ACKNOWLEDGE status bit: the guest OS has noticed the device.
3. Set the DRIVER status bit: the guest OS knows how to drive the device.
4. Read device feature bits, and write the subset of feature bits understood by the OS and driver to the
device. During this step the driver MAY read (but MUST NOT write) the device-specific configuration
fields to check that it can support the device before accepting it.
5. Set the FEATURES_OK status bit. The driver MUST NOT accept new feature bits after this step.
6. Re-read device status to ensure the FEATURES_OK bit is still set: otherwise, the device does not
support our subset of features and the device is unusable.
7. Perform device-specific setup, including discovery of virtqueues for the device, optional per-bus setup,
reading and possibly writing the device’s virtio configuration space, and population of virtqueues.
8. Set the DRIVER_OK status bit. At this point the device is “live”.


Thus vqs are set up at step 7.

# of vq pairs are set up through a command which is a special
buffer, and spec says:

The driver MUST NOT send any buffer available notifications to the device before setting DRIVER_OK.
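
Put as a rough driver-side sketch (status bits are the standard
VIRTIO_CONFIG_S_* constants from linux/virtio_config.h; the helper
functions are placeholders):

#include <linux/virtio_config.h>   /* VIRTIO_CONFIG_S_* status bits */

write_status(0);                                      /* 1. reset the device      */
add_status(VIRTIO_CONFIG_S_ACKNOWLEDGE);              /* 2.                       */
add_status(VIRTIO_CONFIG_S_DRIVER);                   /* 3.                       */
negotiate_features();                                 /* 4. feature read/write    */
add_status(VIRTIO_CONFIG_S_FEATURES_OK);              /* 5.                       */
if (!(read_status() & VIRTIO_CONFIG_S_FEATURES_OK))   /* 6. re-check              */
        fail();
setup_all_virtqueues();                               /* 7. every vq, here        */
add_status(VIRTIO_CONFIG_S_DRIVER_OK);                /* 8. device is live        */
send_mq_vq_pairs_set(curr_pairs);                     /* ctrl-vq buffer, only now */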


> 
> But this requires a proper implementation for queue_enable for vhost which is
> missed in qemu and probably what you really want to do.
> 
> but for 0.9x device, there's no such way to do this. That's the issue.

For 0.9x there's no queue enable; the assumption is that PA != 0 means the VQ has
been enabled.
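
For comparison, the legacy register interface gives the driver only this much
(offsets are the 0.9x virtio-pci definitions from linux/virtio_pci.h; iobase,
index, num, ring_pa and the pio_*() accessors are placeholders):

#include <linux/virtio_pci.h>   /* VIRTIO_PCI_QUEUE_SEL/NUM/PFN, VIRTIO_PCI_QUEUE_ADDR_SHIFT */

/* Legacy (0.9x): there is no queue_enable register. Writing a non-zero page
 * frame number to QUEUE_PFN is what makes the queue usable; 0 means "not set up". */
pio_write16(iobase + VIRTIO_PCI_QUEUE_SEL, index);
num = pio_read16(iobase + VIRTIO_PCI_QUEUE_NUM);        /* ring size (read-only) */
pio_write32(iobase + VIRTIO_PCI_QUEUE_PFN,
            ring_pa >> VIRTIO_PCI_QUEUE_ADDR_SHIFT);    /* PA != 0 => enabled    */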


> So
> driver must allocate all queues before starting the device, otherwise there's
> no way to enable it afterwards.


As per spec queues must be allocated before DRIVER_OK.

That is universal.


> There're tricks to make it work like what is
> done in your patch, but it depends on a specific implementation like qemu which
> is sub-optimal.
> 
> 
> 
> 
>         A fundamental question is what prevents you from just initialization all
>         queues during driver start? It looks to me this save lots of efforts
>         than allocating queue dynamically.
> 
> 
>     This is not so trivial in Windows driver, as it does not have objects for queues
>     that it does not use. Linux driver first of all allocates all the
>     queues and then
>     adds Rx/Tx to those it will use. Windows driver first decides how many queues
>     it will use then allocates objects for them and initializes them from zero to
>     fully functional state.
> 
> 
> Well, you just need to allocate some memory for the virtqueue, there's no need
> to make it visible to the rest until it was enabled.
> 
> Thanks
> 
> 
> 
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-22  1:35                     ` Michael S. Tsirkin
@ 2019-02-22  3:04                       ` Jason Wang
  2019-02-22  3:10                         ` Michael S. Tsirkin
  0 siblings, 1 reply; 20+ messages in thread
From: Jason Wang @ 2019-02-22  3:04 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Yuri Benditovich, Yan Vugenfirer, qemu-devel


On 2019/2/22 上午9:35, Michael S. Tsirkin wrote:
> On Thu, Feb 21, 2019 at 05:40:22PM +0800, Jason Wang wrote:
>> On 2019/2/21 下午4:18, Yuri Benditovich wrote:
>>
>>          For 1.0 device, we can fix the queue_enable, but for 0.9x device how do
>>          you enable one specific queue in this case? (setting status?)
>>
>>
>>      Do I understand correctly that for 0.9 device in some cases the device will
>>      receive feature _MQ set, but will not receive VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET?
>>      Or the problem is different?
>>
>>
>> Let me clarify, VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET is used to control the
>> number of queue pairs used by device for doing transmission and reception. It
>> was not used to enable or disable a virtqueue.
>>
>> For 1.0 device, we should use queue_enable in pci cfg to enable and disable
>> queue:
>>
>>
>> We could do:
>>
>> 1) allocate memory and set queue_enable for vq0
>>
>> 2) allocate memory and set queue_enable for vq1
>>
>> 3) Set vq pairs to 1
>>
>> 4) allocate memory and set queue_enable for vq2
>>
>> 5) allocate memory and set queue_enable for vq3
>>
>> 6) set vq pairs to 2
>
> I do not think spec allows this.
>
>
> The driver MUST follow this sequence to initialize a device:
> 1. Reset the device.
> 2. Set the ACKNOWLEDGE status bit: the guest OS has noticed the device.
> 3. Set the DRIVER status bit: the guest OS knows how to drive the device.
> 4. Read device feature bits, and write the subset of feature bits understood by the OS and driver to the
> device. During this step the driver MAY read (but MUST NOT write) the device-specific configuration
> fields to check that it can support the device before accepting it.
> 5. Set the FEATURES_OK status bit. The driver MUST NOT accept new feature bits after this step.
> 6. Re-read device status to ensure the FEATURES_OK bit is still set: otherwise, the device does not
> support our subset of features and the device is unusable.
> 7. Perform device-specific setup, including discovery of virtqueues for the device, optional per-bus setup,
> reading and possibly writing the device’s virtio configuration space, and population of virtqueues.
> 8. Set the DRIVER_OK status bit. At this point the device is “live”.
>
>
> Thus vqs are setup at step 7.
>
> # of vq pairs are set up through a command which is a special
> buffer, and spec says:
>
> The driver MUST NOT send any buffer available notifications to the device before setting DRIVER_OK.


So you meant that writing to queue_enable is forbidden after DRIVER_OK (though
it's not very clear to me from the spec). And if a driver wants to
enable new queues, it must reset the device?


>
>
>> But this requires a proper implementation for queue_enable for vhost which is
>> missed in qemu and probably what you really want to do.
>>
>> but for 0.9x device, there's no such way to do this. That's the issue.
> 0.9x there's no queue enable, assumption is PA!=0 means VQ has
> been enabled.
>
>
>> So
>> driver must allocate all queues before starting the device, otherwise there's
>> no way to enable it afterwards.
>
> As per spec queues must be allocated before DRIVER_OK.
>
> That is universal.


If I understand correctly, this is not what is done by current Windows
drivers.

Thanks


>
>> There're tricks to make it work like what is
>> done in your patch, but it depends on a specific implementation like qemu which
>> is sub-optimal.
>>
>>
>>
>>
>>          A fundamental question is what prevents you from just initialization all
>>          queues during driver start? It looks to me this save lots of efforts
>>          than allocating queue dynamically.
>>
>>
>>      This is not so trivial in Windows driver, as it does not have objects for queues
>>      that it does not use. Linux driver first of all allocates all the
>>      queues and then
>>      adds Rx/Tx to those it will use. Windows driver first decides how many queues
>>      it will use then allocates objects for them and initializes them from zero to
>>      fully functional state.
>>
>>
>> Well, you just need to allocate some memory for the virtqueue, there's no need
>> to make it visible to the rest until it was enabled.
>>
>> Thanks
>>
>>
>>
>>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-22  3:04                       ` Jason Wang
@ 2019-02-22  3:10                         ` Michael S. Tsirkin
  2019-02-22  4:22                           ` Michael S. Tsirkin
  0 siblings, 1 reply; 20+ messages in thread
From: Michael S. Tsirkin @ 2019-02-22  3:10 UTC (permalink / raw)
  To: Jason Wang; +Cc: Yuri Benditovich, Yan Vugenfirer, qemu-devel

On Fri, Feb 22, 2019 at 11:04:05AM +0800, Jason Wang wrote:
> 
> On 2019/2/22 上午9:35, Michael S. Tsirkin wrote:
> > On Thu, Feb 21, 2019 at 05:40:22PM +0800, Jason Wang wrote:
> > > On 2019/2/21 下午4:18, Yuri Benditovich wrote:
> > > 
> > >          For 1.0 device, we can fix the queue_enable, but for 0.9x device how do
> > >          you enable one specific queue in this case? (setting status?)
> > > 
> > > 
> > >      Do I understand correctly that for 0.9 device in some cases the device will
> > >      receive feature _MQ set, but will not receive VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET?
> > >      Or the problem is different?
> > > 
> > > 
> > > Let me clarify, VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET is used to control the
> > > number of queue pairs used by device for doing transmission and reception. It
> > > was not used to enable or disable a virtqueue.
> > > 
> > > For 1.0 device, we should use queue_enable in pci cfg to enable and disable
> > > queue:
> > > 
> > > 
> > > We could do:
> > > 
> > > 1) allocate memory and set queue_enable for vq0
> > > 
> > > 2) allocate memory and set queue_enable for vq1
> > > 
> > > 3) Set vq pairs to 1
> > > 
> > > 4) allocate memory and set queue_enable for vq2
> > > 
> > > 5) allocate memory and set queue_enable for vq3
> > > 
> > > 6) set vq pairs to 2
> > 
> > I do not think spec allows this.
> > 
> > 
> > The driver MUST follow this sequence to initialize a device:
> > 1. Reset the device.
> > 2. Set the ACKNOWLEDGE status bit: the guest OS has noticed the device.
> > 3. Set the DRIVER status bit: the guest OS knows how to drive the device.
> > 4. Read device feature bits, and write the subset of feature bits understood by the OS and driver to the
> > device. During this step the driver MAY read (but MUST NOT write) the device-specific configuration
> > fields to check that it can support the device before accepting it.
> > 5. Set the FEATURES_OK status bit. The driver MUST NOT accept new feature bits after this step.
> > 6. Re-read device status to ensure the FEATURES_OK bit is still set: otherwise, the device does not
> > support our subset of features and the device is unusable.
> > 7. Perform device-specific setup, including discovery of virtqueues for the device, optional per-bus setup,
> > reading and possibly writing the device’s virtio configuration space, and population of virtqueues.
> > 8. Set the DRIVER_OK status bit. At this point the device is “live”.
> > 
> > 
> > Thus vqs are setup at step 7.
> > 
> > # of vq pairs are set up through a command which is a special
> > buffer, and spec says:
> > 
> > The driver MUST NOT send any buffer available notifications to the device before setting DRIVER_OK.
> 
> 
> So you meant write to queue_enable is forbidden after DRIVER_OK (though it's
> not very clear to me from the  spec). And if a driver want to enable new
> queues, it must reset the device?


That's my reading.  What do you think?


> 
> > 
> > 
> > > But this requires a proper implementation for queue_enable for vhost which is
> > > missed in qemu and probably what you really want to do.
> > > 
> > > but for 0.9x device, there's no such way to do this. That's the issue.
> > 0.9x there's no queue enable, assumption is PA!=0 means VQ has
> > been enabled.
> > 
> > 
> > > So
> > > driver must allocate all queues before starting the device, otherwise there's
> > > no way to enable it afterwards.
> > 
> > As per spec queues must be allocated before DRIVER_OK.
> > 
> > That is universal.
> 
> 
> If I understand correctly, this is not what is done by current windows
> drivers.
> 
> Thanks
> 
> 
> > 
> > > There're tricks to make it work like what is
> > > done in your patch, but it depends on a specific implementation like qemu which
> > > is sub-optimal.
> > > 
> > > 
> > > 
> > > 
> > >          A fundamental question is what prevents you from just initialization all
> > >          queues during driver start? It looks to me this save lots of efforts
> > >          than allocating queue dynamically.
> > > 
> > > 
> > >      This is not so trivial in Windows driver, as it does not have objects for queues
> > >      that it does not use. Linux driver first of all allocates all the
> > >      queues and then
> > >      adds Rx/Tx to those it will use. Windows driver first decides how many queues
> > >      it will use then allocates objects for them and initializes them from zero to
> > >      fully functional state.
> > > 
> > > 
> > > Well, you just need to allocate some memory for the virtqueue, there's no need
> > > to make it visible to the rest until it was enabled.
> > > 
> > > Thanks
> > > 
> > > 
> > > 
> > > 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-22  3:10                         ` Michael S. Tsirkin
@ 2019-02-22  4:22                           ` Michael S. Tsirkin
  2019-02-25  7:47                             ` Jason Wang
  0 siblings, 1 reply; 20+ messages in thread
From: Michael S. Tsirkin @ 2019-02-22  4:22 UTC (permalink / raw)
  To: Jason Wang; +Cc: Yuri Benditovich, Yan Vugenfirer, qemu-devel

On Thu, Feb 21, 2019 at 10:10:08PM -0500, Michael S. Tsirkin wrote:
> On Fri, Feb 22, 2019 at 11:04:05AM +0800, Jason Wang wrote:
> > 
> > On 2019/2/22 上午9:35, Michael S. Tsirkin wrote:
> > > On Thu, Feb 21, 2019 at 05:40:22PM +0800, Jason Wang wrote:
> > > > On 2019/2/21 下午4:18, Yuri Benditovich wrote:
> > > > 
> > > >          For 1.0 device, we can fix the queue_enable, but for 0.9x device how do
> > > >          you enable one specific queue in this case? (setting status?)
> > > > 
> > > > 
> > > >      Do I understand correctly that for 0.9 device in some cases the device will
> > > >      receive feature _MQ set, but will not receive VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET?
> > > >      Or the problem is different?
> > > > 
> > > > 
> > > > Let me clarify, VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET is used to control the
> > > > number of queue pairs used by device for doing transmission and reception. It
> > > > was not used to enable or disable a virtqueue.
> > > > 
> > > > For 1.0 device, we should use queue_enable in pci cfg to enable and disable
> > > > queue:
> > > > 
> > > > 
> > > > We could do:
> > > > 
> > > > 1) allocate memory and set queue_enable for vq0
> > > > 
> > > > 2) allocate memory and set queue_enable for vq1
> > > > 
> > > > 3) Set vq pairs to 1
> > > > 
> > > > 4) allocate memory and set queue_enable for vq2
> > > > 
> > > > 5) allocate memory and set queue_enable for vq3
> > > > 
> > > > 6) set vq pairs to 2
> > > 
> > > I do not think spec allows this.
> > > 
> > > 
> > > The driver MUST follow this sequence to initialize a device:
> > > 1. Reset the device.
> > > 2. Set the ACKNOWLEDGE status bit: the guest OS has noticed the device.
> > > 3. Set the DRIVER status bit: the guest OS knows how to drive the device.
> > > 4. Read device feature bits, and write the subset of feature bits understood by the OS and driver to the
> > > device. During this step the driver MAY read (but MUST NOT write) the device-specific configuration
> > > fields to check that it can support the device before accepting it.
> > > 5. Set the FEATURES_OK status bit. The driver MUST NOT accept new feature bits after this step.
> > > 6. Re-read device status to ensure the FEATURES_OK bit is still set: otherwise, the device does not
> > > support our subset of features and the device is unusable.
> > > 7. Perform device-specific setup, including discovery of virtqueues for the device, optional per-bus setup,
> > > reading and possibly writing the device’s virtio configuration space, and population of virtqueues.
> > > 8. Set the DRIVER_OK status bit. At this point the device is “live”.
> > > 
> > > 
> > > Thus vqs are setup at step 7.
> > > 
> > > # of vq pairs are set up through a command which is a special
> > > buffer, and spec says:
> > > 
> > > The driver MUST NOT send any buffer available notifications to the device before setting DRIVER_OK.
> > 
> > 
> > So you meant write to queue_enable is forbidden after DRIVER_OK (though it's
> > not very clear to me from the  spec). And if a driver want to enable new
> > queues, it must reset the device?
> 
> 
> That's my reading.  What do you think?

Btw some legacy drivers might violate this by adding buffers
before DRIVER_OK.

> 
> > 
> > > 
> > > 
> > > > But this requires a proper implementation for queue_enable for vhost which is
> > > > missed in qemu and probably what you really want to do.
> > > > 
> > > > but for 0.9x device, there's no such way to do this. That's the issue.
> > > 0.9x there's no queue enable, assumption is PA!=0 means VQ has
> > > been enabled.
> > > 
> > > 
> > > > So
> > > > driver must allocate all queues before starting the device, otherwise there's
> > > > no way to enable it afterwards.
> > > 
> > > As per spec queues must be allocated before DRIVER_OK.
> > > 
> > > That is universal.
> > 
> > 
> > If I understand correctly, this is not what is done by current windows
> > drivers.
> > 
> > Thanks
> > 
> > 
> > > 
> > > > There're tricks to make it work like what is
> > > > done in your patch, but it depends on a specific implementation like qemu which
> > > > is sub-optimal.
> > > > 
> > > > 
> > > > 
> > > > 
> > > >          A fundamental question is what prevents you from just initialization all
> > > >          queues during driver start? It looks to me this save lots of efforts
> > > >          than allocating queue dynamically.
> > > > 
> > > > 
> > > >      This is not so trivial in Windows driver, as it does not have objects for queues
> > > >      that it does not use. Linux driver first of all allocates all the
> > > >      queues and then
> > > >      adds Rx/Tx to those it will use. Windows driver first decides how many queues
> > > >      it will use then allocates objects for them and initializes them from zero to
> > > >      fully functional state.
> > > > 
> > > > 
> > > > Well, you just need to allocate some memory for the virtqueue, there's no need
> > > > to make it visible to the rest until it was enabled.
> > > > 
> > > > Thanks
> > > > 
> > > > 
> > > > 
> > > > 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-22  4:22                           ` Michael S. Tsirkin
@ 2019-02-25  7:47                             ` Jason Wang
  2019-02-25 12:33                               ` Michael S. Tsirkin
  0 siblings, 1 reply; 20+ messages in thread
From: Jason Wang @ 2019-02-25  7:47 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Yan Vugenfirer, Yuri Benditovich, qemu-devel


On 2019/2/22 下午12:22, Michael S. Tsirkin wrote:
> On Thu, Feb 21, 2019 at 10:10:08PM -0500, Michael S. Tsirkin wrote:
>> On Fri, Feb 22, 2019 at 11:04:05AM +0800, Jason Wang wrote:
>>> On 2019/2/22 上午9:35, Michael S. Tsirkin wrote:
>>>> On Thu, Feb 21, 2019 at 05:40:22PM +0800, Jason Wang wrote:
>>>>> On 2019/2/21 下午4:18, Yuri Benditovich wrote:
>>>>>
>>>>>           For 1.0 device, we can fix the queue_enable, but for 0.9x device how do
>>>>>           you enable one specific queue in this case? (setting status?)
>>>>>
>>>>>
>>>>>       Do I understand correctly that for 0.9 device in some cases the device will
>>>>>       receive feature _MQ set, but will not receive VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET?
>>>>>       Or the problem is different?
>>>>>
>>>>>
>>>>> Let me clarify, VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET is used to control the
>>>>> number of queue pairs used by device for doing transmission and reception. It
>>>>> was not used to enable or disable a virtqueue.
>>>>>
>>>>> For 1.0 device, we should use queue_enable in pci cfg to enable and disable
>>>>> queue:
>>>>>
>>>>>
>>>>> We could do:
>>>>>
>>>>> 1) allocate memory and set queue_enable for vq0
>>>>>
>>>>> 2) allocate memory and set queue_enable for vq1
>>>>>
>>>>> 3) Set vq pairs to 1
>>>>>
>>>>> 4) allocate memory and set queue_enable for vq2
>>>>>
>>>>> 5) allocate memory and set queue_enable for vq3
>>>>>
>>>>> 6) set vq pairs to 2
>>>> I do not think spec allows this.
>>>>
>>>>
>>>> The driver MUST follow this sequence to initialize a device:
>>>> 1. Reset the device.
>>>> 2. Set the ACKNOWLEDGE status bit: the guest OS has noticed the device.
>>>> 3. Set the DRIVER status bit: the guest OS knows how to drive the device.
>>>> 4. Read device feature bits, and write the subset of feature bits understood by the OS and driver to the
>>>> device. During this step the driver MAY read (but MUST NOT write) the device-specific configuration
>>>> fields to check that it can support the device before accepting it.
>>>> 5. Set the FEATURES_OK status bit. The driver MUST NOT accept new feature bits after this step.
>>>> 6. Re-read device status to ensure the FEATURES_OK bit is still set: otherwise, the device does not
>>>> support our subset of features and the device is unusable.
>>>> 7. Perform device-specific setup, including discovery of virtqueues for the device, optional per-bus setup,
>>>> reading and possibly writing the device’s virtio configuration space, and population of virtqueues.
>>>> 8. Set the DRIVER_OK status bit. At this point the device is “live”.
>>>>
>>>>
>>>> Thus vqs are setup at step 7.
>>>>
>>>> # of vq pairs are set up through a command which is a special
>>>> buffer, and spec says:
>>>>
>>>> The driver MUST NOT send any buffer available notifications to the device before setting DRIVER_OK.
>>>
>>> So you meant write to queue_enable is forbidden after DRIVER_OK (though it's
>>> not very clear to me from the  spec). And if a driver want to enable new
>>> queues, it must reset the device?
>>
>> That's my reading.  What do you think?


Looks like I can infer this from the spec; maybe it's better to clarify it there.


> Btw some legacy drivers might violate this by adding buffers
> before driver_ok.


Yes, but it's probably too late to fix them.

Thanks.


>>>>
>>>>> But this requires a proper implementation for queue_enable for vhost which is
>>>>> missed in qemu and probably what you really want to do.
>>>>>
>>>>> but for 0.9x device, there's no such way to do this. That's the issue.
>>>> 0.9x there's no queue enable, assumption is PA!=0 means VQ has
>>>> been enabled.
>>>>
>>>>
>>>>> So
>>>>> driver must allocate all queues before starting the device, otherwise there's
>>>>> no way to enable it afterwards.
>>>> As per spec queues must be allocated before DRIVER_OK.
>>>>
>>>> That is universal.
>>>
>>> If I understand correctly, this is not what is done by current windows
>>> drivers.
>>>
>>> Thanks
>>>
>>>
>>>>> There're tricks to make it work like what is
>>>>> done in your patch, but it depends on a specific implementation like qemu which
>>>>> is sub-optimal.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>           A fundamental question is what prevents you from just initialization all
>>>>>           queues during driver start? It looks to me this save lots of efforts
>>>>>           than allocating queue dynamically.
>>>>>
>>>>>
>>>>>       This is not so trivial in Windows driver, as it does not have objects for queues
>>>>>       that it does not use. Linux driver first of all allocates all the
>>>>>       queues and then
>>>>>       adds Rx/Tx to those it will use. Windows driver first decides how many queues
>>>>>       it will use then allocates objects for them and initializes them from zero to
>>>>>       fully functional state.
>>>>>
>>>>>
>>>>> Well, you just need to allocate some memory for the virtqueue, there's no need
>>>>> to make it visible to the rest until it was enabled.
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>
>>>>>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-25  7:47                             ` Jason Wang
@ 2019-02-25 12:33                               ` Michael S. Tsirkin
  0 siblings, 0 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2019-02-25 12:33 UTC (permalink / raw)
  To: Jason Wang; +Cc: Yan Vugenfirer, Yuri Benditovich, qemu-devel

On Mon, Feb 25, 2019 at 03:47:48PM +0800, Jason Wang wrote:
> 
> On 2019/2/22 下午12:22, Michael S. Tsirkin wrote:
> > On Thu, Feb 21, 2019 at 10:10:08PM -0500, Michael S. Tsirkin wrote:
> > > On Fri, Feb 22, 2019 at 11:04:05AM +0800, Jason Wang wrote:
> > > > On 2019/2/22 上午9:35, Michael S. Tsirkin wrote:
> > > > > On Thu, Feb 21, 2019 at 05:40:22PM +0800, Jason Wang wrote:
> > > > > > On 2019/2/21 下午4:18, Yuri Benditovich wrote:
> > > > > > 
> > > > > >           For 1.0 device, we can fix the queue_enable, but for 0.9x device how do
> > > > > >           you enable one specific queue in this case? (setting status?)
> > > > > > 
> > > > > > 
> > > > > >       Do I understand correctly that for 0.9 device in some cases the device will
> > > > > >       receive feature _MQ set, but will not receive VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET?
> > > > > >       Or the problem is different?
> > > > > > 
> > > > > > 
> > > > > > Let me clarify, VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET is used to control the
> > > > > > number of queue pairs used by device for doing transmission and reception. It
> > > > > > was not used to enable or disable a virtqueue.
> > > > > > 
> > > > > > For 1.0 device, we should use queue_enable in pci cfg to enable and disable
> > > > > > queue:
> > > > > > 
> > > > > > 
> > > > > > We could do:
> > > > > > 
> > > > > > 1) allocate memory and set queue_enable for vq0
> > > > > > 
> > > > > > 2) allocate memory and set queue_enable for vq1
> > > > > > 
> > > > > > 3) Set vq pairs to 1
> > > > > > 
> > > > > > 4) allocate memory and set queue_enable for vq2
> > > > > > 
> > > > > > 5) allocate memory and set queue_enable for vq3
> > > > > > 
> > > > > > 6) set vq pairs to 2
> > > > > I do not think spec allows this.
> > > > > 
> > > > > 
> > > > > The driver MUST follow this sequence to initialize a device:
> > > > > 1. Reset the device.
> > > > > 2. Set the ACKNOWLEDGE status bit: the guest OS has noticed the device.
> > > > > 3. Set the DRIVER status bit: the guest OS knows how to drive the device.
> > > > > 4. Read device feature bits, and write the subset of feature bits understood by the OS and driver to the
> > > > > device. During this step the driver MAY read (but MUST NOT write) the device-specific configuration
> > > > > fields to check that it can support the device before accepting it.
> > > > > 5. Set the FEATURES_OK status bit. The driver MUST NOT accept new feature bits after this step.
> > > > > 6. Re-read device status to ensure the FEATURES_OK bit is still set: otherwise, the device does not
> > > > > support our subset of features and the device is unusable.
> > > > > 7. Perform device-specific setup, including discovery of virtqueues for the device, optional per-bus setup,
> > > > > reading and possibly writing the device’s virtio configuration space, and population of virtqueues.
> > > > > 8. Set the DRIVER_OK status bit. At this point the device is “live”.
> > > > > 
> > > > > 
> > > > > Thus vqs are setup at step 7.
> > > > > 
> > > > > # of vq pairs are set up through a command which is a special
> > > > > buffer, and spec says:
> > > > > 
> > > > > The driver MUST NOT send any buffer available notifications to the device before setting DRIVER_OK.
> > > > 
> > > > So you meant write to queue_enable is forbidden after DRIVER_OK (though it's
> > > > not very clear to me from the  spec). And if a driver want to enable new
> > > > queues, it must reset the device?
> > > 
> > > That's my reading.  What do you think?
> 
> 
> Looks like I can infer this from the spec, maybe it's better to clarify.
> 
> 
> > Btw some legacy drivers might violate this by adding buffers
> > before driver_ok.
> 
> 
> Yes, but it's probably too late to fix them.
> 
> Thanks.

Right. As a workaround, virtio-net also checks the rings
when it detects DRIVER_OK. We can disable that when VIRTIO_1
has been negotiated.
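
A sketch of what that gating could look like in qemu terms
(virtio_queue_get_desc_addr() and virtio_vdev_has_feature() are existing
helpers; virtio_queue_enabled() is assumed here, and the whole thing is
illustrative rather than the actual code):

/* Treat a queue as started only if the guest made it visible: for a legacy
 * (0.9x) guest that means a ring address was written; once VIRTIO_F_VERSION_1
 * is negotiated, the transport's queue_enable state should be used instead. */
static bool vq_is_ready(VirtIODevice *vdev, int idx)
{
    if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
        return virtio_queue_enabled(vdev, idx);          /* assumed helper  */
    }
    return virtio_queue_get_desc_addr(vdev, idx) != 0;   /* legacy: PA != 0 */
}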

> 
> > > > > 
> > > > > > But this requires a proper implementation for queue_enable for vhost which is
> > > > > > missed in qemu and probably what you really want to do.
> > > > > > 
> > > > > > but for 0.9x device, there's no such way to do this. That's the issue.
> > > > > 0.9x there's no queue enable, assumption is PA!=0 means VQ has
> > > > > been enabled.
> > > > > 
> > > > > 
> > > > > > So
> > > > > > driver must allocate all queues before starting the device, otherwise there's
> > > > > > no way to enable it afterwards.
> > > > > As per spec queues must be allocated before DRIVER_OK.
> > > > > 
> > > > > That is universal.
> > > > 
> > > > If I understand correctly, this is not what is done by current windows
> > > > drivers.
> > > > 
> > > > Thanks
> > > > 
> > > > 
> > > > > > There're tricks to make it work like what is
> > > > > > done in your patch, but it depends on a specific implementation like qemu which
> > > > > > is sub-optimal.
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > >           A fundamental question is what prevents you from just initialization all
> > > > > >           queues during driver start? It looks to me this save lots of efforts
> > > > > >           than allocating queue dynamically.
> > > > > > 
> > > > > > 
> > > > > >       This is not so trivial in Windows driver, as it does not have objects for queues
> > > > > >       that it does not use. Linux driver first of all allocates all the
> > > > > >       queues and then
> > > > > >       adds Rx/Tx to those it will use. Windows driver first decides how many queues
> > > > > >       it will use then allocates objects for them and initializes them from zero to
> > > > > >       fully functional state.
> > > > > > 
> > > > > > 
> > > > > > Well, you just need to allocate some memory for the virtqueue, there's no need
> > > > > > to make it visible to the rest until it was enabled.
> > > > > > 
> > > > > > Thanks
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
  2019-02-21  8:18                 ` Yuri Benditovich
  2019-02-21  9:40                   ` Jason Wang
@ 2019-02-28 14:08                   ` Michael S. Tsirkin
  1 sibling, 0 replies; 20+ messages in thread
From: Michael S. Tsirkin @ 2019-02-28 14:08 UTC (permalink / raw)
  To: Yuri Benditovich; +Cc: Jason Wang, Yan Vugenfirer, qemu-devel

On Thu, Feb 21, 2019 at 10:18:52AM +0200, Yuri Benditovich wrote:
> On Thu, Feb 21, 2019 at 8:49 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> >
> > On 2019/2/21 下午2:00, Yuri Benditovich wrote:
> > > On Tue, Feb 19, 2019 at 8:27 AM Jason Wang <jasowang@redhat.com> wrote:
> > >>
> > >> On 2019/2/19 上午7:34, Michael S. Tsirkin wrote:
> > >>> On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
> > >>>> On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > >>>>> On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
> > >>>>>> On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <jasowang@redhat.com> wrote:
> > >>>>>>> On 2019/2/13 下午10:51, Yuri Benditovich wrote:
> > >>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> > >>>>>>>> On startup/link-up in multiqueue configuration the virtio-net
> > >>>>>>>> tries to start all the queues, including those that the guest
> > >>>>>>>> will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> > >>>>>>>> If the guest driver does not allocate queues that it will not
> > >>>>>>>> use (for example, Windows driver does not) and number of actually
> > >>>>>>>> used queues is less than the maximal number supported by the device,
> > >>>>>>> Is this a requirement of e.g NDIS? If not, could we simply allocate all
> > >>>>>>> queues in this case. This is usually what normal Linux driver did.
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> this causes vhost_net_start to fail and actually disables vhost
> > >>>>>>>> for all the queues, reducing the performance.
> > >>>>>>>> Current commit fixes this: initially only first queue is started,
> > >>>>>>>> upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET started all the queues
> > >>>>>>>> requested by the guest.
> > >>>>>>>>
> > >>>>>>>> Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > >>>>>>>> ---
> > >>>>>>>>     hw/net/virtio-net.c | 7 +++++--
> > >>>>>>>>     1 file changed, 5 insertions(+), 2 deletions(-)
> > >>>>>>>>
> > >>>>>>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > >>>>>>>> index 3f319ef723..d3b1ac6d3a 100644
> > >>>>>>>> --- a/hw/net/virtio-net.c
> > >>>>>>>> +++ b/hw/net/virtio-net.c
> > >>>>>>>> @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > >>>>>>>>     {
> > >>>>>>>>         VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > >>>>>>>>         NetClientState *nc = qemu_get_queue(n->nic);
> > >>>>>>>> -    int queues = n->multiqueue ? n->max_queues : 1;
> > >>>>>>>> +    int queues = n->multiqueue ? n->curr_queues : 1;
> > >>>>>>>>
> > >>>>>>>>         if (!get_vhost_net(nc->peer)) {
> > >>>>>>>>             return;
> > >>>>>>>> @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > >>>>>>>>             return VIRTIO_NET_ERR;
> > >>>>>>>>         }
> > >>>>>>>>
> > >>>>>>>> -    n->curr_queues = queues;
> > >>>>>>>>         /* stop the backend before changing the number of queues to avoid handling a
> > >>>>>>>>          * disabled queue */
> > >>>>>>>> +    virtio_net_set_status(vdev, 0);
> > >>>>>>> Any reason for doing this?
> > >>>>>> I think there are 2 reasons:
> > >>>>>> 1. The spec does not require guest SW to allocate unused queues.
> > >>>>>> 2. We spend guest's physical memory to just make vhost happy when it
> > >>>>>> touches queues that it should not use.
> > >>>>>>
> > >>>>>> Thanks,
> > >>>>>> Yuri Benditovich
> > >>>>> The spec also says:
> > >>>>>           queue_enable The driver uses this to selectively prevent the device from executing requests from this
> > >>>>>           virtqueue. 1 - enabled; 0 - disabled.
> > >>>>>
> > >>>>> While this is not a conformance clause this strongly implies that
> > >>>>> queues which are not enabled are never accessed by device.
> > >>>>>
> > >>>>> Yuri I am guessing you are not enabling these unused queues right?
> > >>>> Of course, we (Windows driver) do not.
> > >>>> The code of virtio-net passes max_queues to vhost and this causes
> > >>>> vhost to try accessing all the queues, fail on unused ones and finally
> > >>>> leave vhost disabled at all.
> > >>> Jason, at least for 1.0 accessing disabled queues looks like a spec
> > >>> violation. What do you think?
> > >>
> > >> Yes, but there's some issues:
> > >>
> > >> - How to detect a disabled queue for 0.9x device? Looks like there's no
> > >> way according to the spec, so device must assume all queues were enabled.
> > > Can you please add several words - what is 0.9 device (probably this
> > > is more about driver) and
> > > what is the problem with it?
> >
> >
> > It's not a net specific issue. 0.9x device is legacy device defined in
> > the spec. We don't have a method to disable and enable a specific queue
> > at that time. Michael said we can assume queue address 0 as disabled,
> > but there's still a question of how to enable it. Spec is unclear and it
> > was too late to add thing for legacy device. For 1.0 device we have
> > queue_enable, but its implementation is incomplete, since it can work
> > with vhost correctly, we probably need to add thing to make it work.
> >
> >
> > >
> > >> - For 1.0, if we depends on queue_enable, we should implement the
> > >> callback for vhost I think. Otherwise it's still buggy.
> > >>
> > >> So it looks tricky to enable and disable queues through set status
> > > If I succeed to modify the patch such a way that it will act only in
> > > 'target' case,
> > > i.e. only if some of queues are not initialized (at time of
> > > driver_ok), will it be more safe?
> >
> >
> > For 1.0 device, we can fix the queue_enable, but for 0.9x device how do
> > you enable one specific queue in this case? (setting status?)
> >
> 
> Do I understand correctly that for 0.9 device in some cases the device will
> receive feature _MQ set, but will not receive VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET?
> Or the problem is different?

No. For 0.9 it is possible that the driver adds buffers before DRIVER_OK.


> > A fundamental question is what prevents you from just initialization all
> > queues during driver start? It looks to me this save lots of efforts
> > than allocating queue dynamically.
> >
> 
> This is not so trivial in Windows driver, as it does not have objects for queues
> that it does not use. Linux driver first of all allocates all the
> queues and then
> adds Rx/Tx to those it will use. Windows driver first decides how many queues
> it will use then allocates objects for them and initializes them from zero to
> fully functional state.
> 
> > Thanks
> >
> >
> > >
> > >> Thanks
> > >>
> > >>
> > >>>>>
> > >>>>>>> Thanks
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> +
> > >>>>>>>> +    n->curr_queues = queues;
> > >>>>>>>> +
> > >>>>>>>>         virtio_net_set_status(vdev, vdev->status);
> > >>>>>>>>         virtio_net_set_queues(n);
> > >>>>>>>>

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2019-02-28 14:09 UTC | newest]

Thread overview: 20+ messages
2019-02-13 14:51 [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest Yuri Benditovich
2019-02-18  3:49 ` Jason Wang
2019-02-18  9:58   ` Yuri Benditovich
2019-02-18 16:39     ` Michael S. Tsirkin
2019-02-18 20:49       ` Yuri Benditovich
2019-02-18 23:34         ` Michael S. Tsirkin
2019-02-19  6:27           ` Jason Wang
2019-02-19 14:19             ` Michael S. Tsirkin
2019-02-20 10:13               ` Jason Wang
2019-02-21  6:00             ` Yuri Benditovich
2019-02-21  6:49               ` Jason Wang
2019-02-21  8:18                 ` Yuri Benditovich
2019-02-21  9:40                   ` Jason Wang
2019-02-22  1:35                     ` Michael S. Tsirkin
2019-02-22  3:04                       ` Jason Wang
2019-02-22  3:10                         ` Michael S. Tsirkin
2019-02-22  4:22                           ` Michael S. Tsirkin
2019-02-25  7:47                             ` Jason Wang
2019-02-25 12:33                               ` Michael S. Tsirkin
2019-02-28 14:08                   ` Michael S. Tsirkin
