qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH] libvhost-user: handle NOFD flag in call/kick/err better
@ 2019-09-17 12:25 Johannes Berg
  2019-09-17 17:13 ` no-reply
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Johannes Berg @ 2019-09-17 12:25 UTC (permalink / raw)
  To: qemu-devel; +Cc: Johannes Berg, Michael S . Tsirkin

From: Johannes Berg <johannes.berg@intel.com>

The code here is odd; for example, it will print out invalid
file descriptor numbers that were never sent in the message.

Clean that up a bit so it's actually possible to implement
a device that uses polling.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
---
 contrib/libvhost-user/libvhost-user.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
index f1677da21201..17b7833d1f6b 100644
--- a/contrib/libvhost-user/libvhost-user.c
+++ b/contrib/libvhost-user/libvhost-user.c
@@ -920,6 +920,7 @@ static bool
 vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
 {
     int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
 
     if (index >= dev->max_queues) {
         vmsg_close_fds(vmsg);
@@ -927,8 +928,12 @@ vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
         return false;
     }
 
-    if (vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK ||
-        vmsg->fd_num != 1) {
+    if (nofd) {
+        vmsg_close_fds(vmsg);
+        return true;
+    }
+
+    if (vmsg->fd_num != 1) {
         vmsg_close_fds(vmsg);
         vu_panic(dev, "Invalid fds in request: %d", vmsg->request);
         return false;
@@ -1025,6 +1030,7 @@ static bool
 vu_set_vring_kick_exec(VuDev *dev, VhostUserMsg *vmsg)
 {
     int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
 
     DPRINT("u64: 0x%016"PRIx64"\n", vmsg->payload.u64);
 
@@ -1038,8 +1044,8 @@ vu_set_vring_kick_exec(VuDev *dev, VhostUserMsg *vmsg)
         dev->vq[index].kick_fd = -1;
     }
 
-    dev->vq[index].kick_fd = vmsg->fds[0];
-    DPRINT("Got kick_fd: %d for vq: %d\n", vmsg->fds[0], index);
+    dev->vq[index].kick_fd = nofd ? -1 : vmsg->fds[0];
+    DPRINT("Got kick_fd: %d for vq: %d\n", dev->vq[index].kick_fd, index);
 
     dev->vq[index].started = true;
     if (dev->iface->queue_set_started) {
@@ -1116,6 +1122,7 @@ static bool
 vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
 {
     int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
 
     DPRINT("u64: 0x%016"PRIx64"\n", vmsg->payload.u64);
 
@@ -1128,14 +1135,14 @@ vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
         dev->vq[index].call_fd = -1;
     }
 
-    dev->vq[index].call_fd = vmsg->fds[0];
+    dev->vq[index].call_fd = nofd ? -1 : vmsg->fds[0];
 
     /* in case of I/O hang after reconnecting */
-    if (eventfd_write(vmsg->fds[0], 1)) {
+    if (dev->vq[index].call_fd != -1 && eventfd_write(vmsg->fds[0], 1)) {
         return -1;
     }
 
-    DPRINT("Got call_fd: %d for vq: %d\n", vmsg->fds[0], index);
+    DPRINT("Got call_fd: %d for vq: %d\n", dev->vq[index].call_fd, index);
 
     return false;
 }
@@ -1144,6 +1151,7 @@ static bool
 vu_set_vring_err_exec(VuDev *dev, VhostUserMsg *vmsg)
 {
     int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
 
     DPRINT("u64: 0x%016"PRIx64"\n", vmsg->payload.u64);
 
@@ -1156,7 +1164,7 @@ vu_set_vring_err_exec(VuDev *dev, VhostUserMsg *vmsg)
         dev->vq[index].err_fd = -1;
     }
 
-    dev->vq[index].err_fd = vmsg->fds[0];
+    dev->vq[index].err_fd = nofd ? -1 : vmsg->fds[0];
 
     return false;
 }
-- 
2.20.1
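
A minimal sketch of how a device built on libvhost-user might use this:
with kick_fd reliably -1 for NOFD rings, the queue_set_started() callback
can decide whether to wait on the eventfd or start polling.  The callback,
vu_get_queue() and VuVirtq come from libvhost-user.h; start_poll_thread()
is a hypothetical helper, so treat this as an illustration rather than a
reference implementation.

static void my_queue_set_started(VuDev *dev, int qidx, bool started)
{
    VuVirtq *vq = vu_get_queue(dev, qidx);

    if (!started) {
        return;
    }

    if (vq->kick_fd == -1) {
        /* NOFD was set: there is no eventfd to wait on, so poll the ring. */
        start_poll_thread(dev, qidx);
    } else {
        /*
         * Normal case: the library dispatch loop (vu_dispatch()) will
         * invoke the queue handler when the frontend kicks vq->kick_fd.
         */
    }
}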




* Re: [Qemu-devel] [PATCH] libvhost-user: handle NOFD flag in call/kick/err better
  2019-09-17 12:25 [Qemu-devel] [PATCH] libvhost-user: handle NOFD flag in call/kick/err better Johannes Berg
@ 2019-09-17 17:13 ` no-reply
  2019-09-17 17:14 ` no-reply
  2019-09-18  9:39 ` Stefan Hajnoczi
  2 siblings, 0 replies; 6+ messages in thread
From: no-reply @ 2019-09-17 17:13 UTC (permalink / raw)
  To: johannes; +Cc: qemu-devel, johannes.berg, mst

Patchew URL: https://patchew.org/QEMU/20190917122559.15555-1-johannes@sipsolutions.net/



Hi,

This series failed the asan build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-fedora V=1 NETWORK=1
time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
=== TEST SCRIPT END ===

./tests/docker/docker.py --engine auto build qemu:fedora tests/docker/dockerfiles/fedora.docker   --add-current-user  
Image is up to date.
  LD      docker-test-debug@fedora.mo
cc: fatal error: no input files
compilation terminated.
make: *** [docker-test-debug@fedora.mo] Error 4



The full log is available at
http://patchew.org/logs/20190917122559.15555-1-johannes@sipsolutions.net/testing.asan/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com


* Re: [Qemu-devel] [PATCH] libvhost-user: handle NOFD flag in call/kick/err better
  2019-09-17 12:25 [Qemu-devel] [PATCH] libvhost-user: handle NOFD flag in call/kick/err better Johannes Berg
  2019-09-17 17:13 ` no-reply
@ 2019-09-17 17:14 ` no-reply
  2019-09-18  9:39 ` Stefan Hajnoczi
  2 siblings, 0 replies; 6+ messages in thread
From: no-reply @ 2019-09-17 17:14 UTC (permalink / raw)
  To: johannes; +Cc: qemu-devel, johannes.berg, mst

Patchew URL: https://patchew.org/QEMU/20190917122559.15555-1-johannes@sipsolutions.net/



Hi,

This series failed the docker-mingw@fedora build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#! /bin/bash
make docker-image-fedora V=1 NETWORK=1
time make docker-test-mingw@fedora J=14 NETWORK=1
=== TEST SCRIPT END ===

./tests/docker/docker.py --engine auto build qemu:fedora tests/docker/dockerfiles/fedora.docker   --add-current-user  
Image is up to date.
  LD      docker-test-mingw@fedora.mo
cc: fatal error: no input files
compilation terminated.
make: *** [docker-test-mingw@fedora.mo] Error 4



The full log is available at
http://patchew.org/logs/20190917122559.15555-1-johannes@sipsolutions.net/testing.docker-mingw@fedora/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com


* Re: [Qemu-devel] [PATCH] libvhost-user: handle NOFD flag in call/kick/err better
  2019-09-17 12:25 [Qemu-devel] [PATCH] libvhost-user: handle NOFD flag in call/kick/err better Johannes Berg
  2019-09-17 17:13 ` no-reply
  2019-09-17 17:14 ` no-reply
@ 2019-09-18  9:39 ` Stefan Hajnoczi
  2019-09-18  9:49   ` Johannes Berg
  2 siblings, 1 reply; 6+ messages in thread
From: Stefan Hajnoczi @ 2019-09-18  9:39 UTC (permalink / raw)
  To: Johannes Berg; +Cc: qemu-devel, Johannes Berg, Michael S . Tsirkin


On Tue, Sep 17, 2019 at 02:25:59PM +0200, Johannes Berg wrote:
> diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
> index f1677da21201..17b7833d1f6b 100644
> --- a/contrib/libvhost-user/libvhost-user.c
> +++ b/contrib/libvhost-user/libvhost-user.c
> @@ -920,6 +920,7 @@ static bool
>  vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
>  {
>      int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
> +    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
>  
>      if (index >= dev->max_queues) {
>          vmsg_close_fds(vmsg);
> @@ -927,8 +928,12 @@ vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
>          return false;
>      }
>  
> -    if (vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK ||
> -        vmsg->fd_num != 1) {
> +    if (nofd) {
> +        vmsg_close_fds(vmsg);
> +        return true;
> +    }

With the following change to vmsg_close_fds():

  for (i = 0; i < vmsg->fd_num; i++) {
      close(vmsg->fds[i]);
  }
+ for (i = 0; i < sizeof(vmsg->fds) / sizeof(vmsg->fds[0]); i++) {
+     vmsg->fds[i] = -1;
+ }
+ vmsg->fd_num = 0;

...the message handler functions below can use vmsg->fds[0] (-1) without
worrying about NOFD.  This makes the code simpler.
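
As a rough illustration of that simplification for the handler itself
(ignoring the fd_num check in vu_check_queue_msg_file(), which still
needs its own handling, and keeping a guard so eventfd_write() is never
called with -1), vu_set_vring_call_exec() could then collapse to
something like:

static bool
vu_set_vring_call_exec(VuDev *dev, VhostUserMsg *vmsg)
{
    int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;

    if (dev->vq[index].call_fd != -1) {
        close(dev->vq[index].call_fd);
    }

    /* fds[0] is already -1 here when NOFD was set. */
    dev->vq[index].call_fd = vmsg->fds[0];

    /* in case of I/O hang after reconnecting */
    if (dev->vq[index].call_fd != -1 &&
        eventfd_write(dev->vq[index].call_fd, 1)) {
        return -1;
    }

    return false;
}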



* Re: [Qemu-devel] [PATCH] libvhost-user: handle NOFD flag in call/kick/err better
  2019-09-18  9:39 ` Stefan Hajnoczi
@ 2019-09-18  9:49   ` Johannes Berg
  2019-09-20  8:58     ` Stefan Hajnoczi
  0 siblings, 1 reply; 6+ messages in thread
From: Johannes Berg @ 2019-09-18  9:49 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel, Michael S . Tsirkin

On Wed, 2019-09-18 at 10:39 +0100, Stefan Hajnoczi wrote:
> 
> >  vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
> >  {
> >      int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
> > +    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
> >  
> >      if (index >= dev->max_queues) {
> >          vmsg_close_fds(vmsg);
> > @@ -927,8 +928,12 @@ vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
> >          return false;
> >      }
> >  
> > -    if (vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK ||
> > -        vmsg->fd_num != 1) {
> > +    if (nofd) {
> > +        vmsg_close_fds(vmsg);
> > +        return true;
> > +    }

So in this particular code you quoted, I actually just aligned it to use
the same "bool nofd" variable - and I made it return "true" when no FD
was given.

It couldn't make use of what you proposed:

> With the following change to vmsg_close_fds():
> 
>   for (i = 0; i < vmsg->fd_num; i++) {
>       close(vmsg->fds[i]);
>   }
> + for (i = 0; i < sizeof(vmsg->fds) / sizeof(vmsg->fds[0]); i++) {
> +     vmsg->fds[i] = -1;
> + }
> + vmsg->fd_num = 0;
> 
> ...the message handler functions below can use vmsg->fds[0] (-1) without
> worrying about NOFD.  This makes the code simpler.

because fd_num != 1 leads to the original code returning false, which
leads to the ring not getting started in vu_set_vring_kick_exec(). So we
need the special code here; it can be argued whether I should pull the
test out into the "bool nofd" variable or not ... *shrug*

The changes in vu_set_vring_kick_exec() and vu_set_vring_err_exec()
would indeed then not be necessary, but in vu_set_vring_call_exec() we
should still avoid the eventfd_write() if it's going to get -1.


So, yeah - could be a bit simpler there. I'd say being explicit here is
easier to understand and thus nicer, but your (or Michael's I guess?)
call.

johannes




* Re: [Qemu-devel] [PATCH] libvhost-user: handle NOFD flag in call/kick/err better
  2019-09-18  9:49   ` Johannes Berg
@ 2019-09-20  8:58     ` Stefan Hajnoczi
  0 siblings, 0 replies; 6+ messages in thread
From: Stefan Hajnoczi @ 2019-09-20  8:58 UTC (permalink / raw)
  To: Johannes Berg; +Cc: qemu-devel, Michael S . Tsirkin


On Wed, Sep 18, 2019 at 11:49:14AM +0200, Johannes Berg wrote:
> On Wed, 2019-09-18 at 10:39 +0100, Stefan Hajnoczi wrote:
> > 
> > >  vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
> > >  {
> > >      int index = vmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
> > > +    bool nofd = vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK;
> > >  
> > >      if (index >= dev->max_queues) {
> > >          vmsg_close_fds(vmsg);
> > > @@ -927,8 +928,12 @@ vu_check_queue_msg_file(VuDev *dev, VhostUserMsg *vmsg)
> > >          return false;
> > >      }
> > >  
> > > -    if (vmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK ||
> > > -        vmsg->fd_num != 1) {
> > > +    if (nofd) {
> > > +        vmsg_close_fds(vmsg);
> > > +        return true;
> > > +    }
> 
> So in this particular code you quoted, I actually just aligned to have
> the same "bool nofd" variable - and I made it return "true" when no FD
> was given.
> 
> It couldn't make use of what you proposed:
> 
> > With the following change to vmsg_close_fds():
> > 
> >   for (i = 0; i < vmsg->fd_num; i++) {
> >       close(vmsg->fds[i]);
> >   }
> > + for (i = 0; i < sizeof(vmsg->fds) / sizeof(vmsg->fds[0]); i++) {
> > +     vmsg->fds[i] = -1;
> > + }
> > + vmsg->fd_num = 0;
> > 
> > ...the message handler functions below can use vmsg->fds[0] (-1) without
> > worrying about NOFD.  This makes the code simpler.
> 
> because fd_num != 1 leads to the original code returning false, which
> leads to the ring not getting started in vu_set_vring_kick_exec(). So we
> need the special code here, can be argued if I should pull out the test
> into the "bool nofd" variable or not ... *shrug*
> 
> The changes in vu_set_vring_kick_exec() and vu_set_vring_err_exec()
> would indeed then not be necessary, but in vu_set_vring_call_exec() we
> should still avoid the eventfd_write() if it's going to get -1.
> 
> 
> So, yeah - could be a bit simpler there. I'd say being explicit here is
> easier to understand and thus nicer, but your (or Michael's I guess?)
> call.

Yeah, there is a trade-off to hiding NOFD, and if what I proposed isn't
convincing then it wasn't a good proposal :-):

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>


