* Re: [Virtio-fs] vhost_set_vring_kick failed
@ 2022-05-26  5:34 Prashant Dewan
  2022-05-26  7:49 ` Stefan Hajnoczi
  0 siblings, 1 reply; 13+ messages in thread
From: Prashant Dewan @ 2022-05-26  5:34 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: virtio-fs

[-- Attachment #1: Type: text/plain, Size: 1677 bytes --]

Hello Stefan,

Here is the output from virtiofsd:

------------------------------------
- Journal begins at Thu 2022-05-26 04:23:42 UTC, ends at Thu 2022-05-26 04:41:57 UTC. --
May 26 04:24:13 localhost systemd[1]: Started virtio-fs vhost-user device daemon.<<Placeholder: run this daemon as non-privileged user>>.
May 26 04:24:13 localhost virtiofsd[505]: [2022-05-26T04:24:13Z INFO  virtiofsd] Waiting for vhost-user socket connection...
May 26 04:27:11 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:11Z INFO  virtiofsd] Client connected, servicing requests
May 26 04:27:33 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:33Z ERROR virtiofsd] Waiting for daemon failed: HandleRequest(InvalidParam)
May 26 04:27:33 device-3e4b479e2ee9c647 systemd[1]: virtiofsd.service: Deactivated successfully.
May 26 04:27:33 device-3e4b479e2ee9c647 systemd[1]: virtiofsd.service: Scheduled restart job, restart counter is at 1.

________________________________
From: Stefan Hajnoczi
Sent: Wednesday, May 25, 2022 4:02 PM
To: Prashant Dewan
Cc: virtio-fs@redhat.com
Subject: Re: [Virtio-fs] vhost_set_vring_kick failed

On Tue, May 24, 2022 at 02:29:12PM +0000, Pra Dew wrote:
> I am trying to setup virtfs using KVM and QEMU.
>
>
>   1.  Inside the VM, I do
>      *   mount -t virtiofs myfs /dev/myfile  --succeeds
>      *   cd /dev/myfile  -hangs
>
>
>   1.  The host logs -
>
>  qemu-system: Failed to write msg. Wrote -1 instead of 20.

Sending a vhost-user protocol message failed. It is likely that the
virtiofsd process terminated and the UNIX domain socket closed on QEMU.

Is there any output from virtiofsd?

Stefan

[-- Attachment #2: Type: text/html, Size: 3268 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Virtio-fs] vhost_set_vring_kick failed
  2022-05-26  5:34 [Virtio-fs] vhost_set_vring_kick failed Prashant Dewan
@ 2022-05-26  7:49 ` Stefan Hajnoczi
  2022-05-26  8:06   ` Sergio Lopez
  0 siblings, 1 reply; 13+ messages in thread
From: Stefan Hajnoczi @ 2022-05-26  7:49 UTC (permalink / raw)
  To: Prashant Dewan; +Cc: virtio-fs, slp, gmaglione

[-- Attachment #1: Type: text/plain, Size: 1940 bytes --]

On Thu, May 26, 2022 at 05:34:42AM +0000, Prashant Dewan wrote:
> Hello Stefan,
> 
> Here is the output from virtofsd..
> 
> ------------------------------------
> - Journal begins at Thu 2022-05-26 04:23:42 UTC, ends at Thu 2022-05-26 04:41:57 UTC. --
> May 26 04:24:13 localhost systemd[1]: Started virtio-fs vhost-user device daemon.<<Placeholder: run this daemon as non-privileged user>>.
> May 26 04:24:13 localhost virtiofsd[505]: [2022-05-26T04:24:13Z INFO  virtiofsd] Waiting for vhost-user socket connection...
> May 26 04:27:11 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:11Z INFO  virtiofsd] Client connected, servicing requests
> May 26 04:27:33 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:33Z ERROR virtiofsd] Waiting for daemon failed: HandleRequest(InvalidParam)

virtiofsd terminated. I have CCed Sergio and German, who may have
suggestions about what to check next.

Stefan

> May 26 04:27:33 device-3e4b479e2ee9c647 systemd[1]: virtiofsd.service: Deactivated successfully.
> May 26 04:27:33 device-3e4b479e2ee9c647 systemd[1]: virtiofsd.service: Scheduled restart job, restart counter is at 1.
> 
> ________________________________
> From: Stefan Hajnoczi
> Sent: Wednesday, May 25, 2022 4:02 PM
> To: Prashant Dewan
> Cc: virtio-fs@redhat.com
> Subject: Re: [Virtio-fs] vhost_set_vring_kick failed
> 
> On Tue, May 24, 2022 at 02:29:12PM +0000, Pra Dew wrote:
> > I am trying to setup virtfs using KVM and QEMU.
> >
> >
> >   1.  Inside the VM, I do
> >      *   mount -t virtiofs myfs /dev/myfile  --succeeds
> >      *   cd /dev/myfile  -hangs
> >
> >
> >   1.  The host logs -
> >
> >  qemu-system: Failed to write msg. Wrote -1 instead of 20.
> 
> Sending a vhost-user protocol message failed. It is likely that the
> virtiofsd process terminated and the UNIX domain socket closed on QEMU.
> 
> Is there any output from virtiofsd?
> 
> Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Virtio-fs] vhost_set_vring_kick failed
  2022-05-26  7:49 ` Stefan Hajnoczi
@ 2022-05-26  8:06   ` Sergio Lopez
  2022-05-30 22:11     ` Vivek Goyal
  0 siblings, 1 reply; 13+ messages in thread
From: Sergio Lopez @ 2022-05-26  8:06 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Prashant Dewan, virtio-fs, gmaglione

[-- Attachment #1: Type: text/plain, Size: 1148 bytes --]

On Thu, May 26, 2022 at 08:49:34AM +0100, Stefan Hajnoczi wrote:
> On Thu, May 26, 2022 at 05:34:42AM +0000, Prashant Dewan wrote:
> > Hello Stefan,
> > 
> > Here is the output from virtofsd..
> > 
> > ------------------------------------
> > - Journal begins at Thu 2022-05-26 04:23:42 UTC, ends at Thu 2022-05-26 04:41:57 UTC. --
> > May 26 04:24:13 localhost systemd[1]: Started virtio-fs vhost-user device daemon.<<Placeholder: run this daemon as non-privileged user>>.
> > May 26 04:24:13 localhost virtiofsd[505]: [2022-05-26T04:24:13Z INFO  virtiofsd] Waiting for vhost-user socket connection...
> > May 26 04:27:11 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:11Z INFO  virtiofsd] Client connected, servicing requests
> > May 26 04:27:33 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:33Z ERROR virtiofsd] Waiting for daemon failed: HandleRequest(InvalidParam)
> 
> virtiofsd terminated. I have CCed Sergio and German, who may have
> suggestions about what to check next.

Hi,

Could you please tell us which QEMU version you are running, and paste
its command line here?

Thanks,
Sergio.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Virtio-fs] vhost_set_vring_kick failed
  2022-05-26  8:06   ` Sergio Lopez
@ 2022-05-30 22:11     ` Vivek Goyal
  2022-05-31 22:25       ` Pra.. Dew..
  0 siblings, 1 reply; 13+ messages in thread
From: Vivek Goyal @ 2022-05-30 22:11 UTC (permalink / raw)
  To: Sergio Lopez, Prashant Dewan; +Cc: Stefan Hajnoczi, virtio-fs

On Thu, May 26, 2022 at 10:06:46AM +0200, Sergio Lopez wrote:
> On Thu, May 26, 2022 at 08:49:34AM +0100, Stefan Hajnoczi wrote:
> > On Thu, May 26, 2022 at 05:34:42AM +0000, Prashant Dewan wrote:
> > > Hello Stefan,
> > > 
> > > Here is the output from virtofsd..
> > > 
> > > ------------------------------------
> > > - Journal begins at Thu 2022-05-26 04:23:42 UTC, ends at Thu 2022-05-26 04:41:57 UTC. --
> > > May 26 04:24:13 localhost systemd[1]: Started virtio-fs vhost-user device daemon.<<Placeholder: run this daemon as non-privileged user>>.
> > > May 26 04:24:13 localhost virtiofsd[505]: [2022-05-26T04:24:13Z INFO  virtiofsd] Waiting for vhost-user socket connection...
> > > May 26 04:27:11 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:11Z INFO  virtiofsd] Client connected, servicing requests
> > > May 26 04:27:33 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:33Z ERROR virtiofsd] Waiting for daemon failed: HandleRequest(InvalidParam)

So it looks like the following code gave the error message. There is no
error message from the actual thread that failed; this is just the main
thread waiting.

    if let Err(e) = daemon.wait() {
        match e {
            HandleRequest(PartialMessage) => info!("Client disconnected, shutting down"),
            _ => error!("Waiting for daemon failed: {:?}", e),
        }
    }

I can't find "InvalidParam" in rust virtiofsd code. Does that mean,
that this error is being returned by one of the crates we are using.

Prashant, you could try running rust virtiofsd with option "-d" to enable
debug messages and that might show something interesting.

Also, if you are able to reproduce the problem all the time, I think
you can start virtiofsd under gdb and see where exactly is it failing.

Thanks
Vivek

> > 
> > virtiofsd terminated. I have CCed Sergio and German, who may have
> > suggestions about what to check next.
> 
> Hi,
> 
> Could you please tell us which QEMU version are you running, and paste
> here the its command line?
> 
> Thanks,
> Sergio.



> _______________________________________________
> Virtio-fs mailing list
> Virtio-fs@redhat.com
> https://listman.redhat.com/mailman/listinfo/virtio-fs


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Virtio-fs] vhost_set_vring_kick failed
  2022-05-30 22:11     ` Vivek Goyal
@ 2022-05-31 22:25       ` Pra.. Dew..
  2022-05-31 22:50         ` Vivek Goyal
  2022-06-01  6:01         ` Sergio Lopez
  0 siblings, 2 replies; 13+ messages in thread
From: Pra.. Dew.. @ 2022-05-31 22:25 UTC (permalink / raw)
  To: Vivek Goyal, Sergio Lopez; +Cc: Stefan Hajnoczi, virtio-fs

[-- Attachment #1: Type: text/plain, Size: 3247 bytes --]

Thank you so much, Vivek. I enabled the trace in the code itself and ran it. The only message I got was the InvalidParam message below; I did not get any other message. We are going to try changing the padding of the [VhostUserMemRegMsg] structure to see if that fixes the problem. Is there anything else we should try?
________________________________
From: Vivek Goyal <vgoyal@redhat.com>
Sent: Monday, May 30, 2022 10:11 PM
To: Sergio Lopez <slp@redhat.com>; Prashant Dewan <linux_learner@outlook.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>; virtio-fs@redhat.com <virtio-fs@redhat.com>
Subject: Re: [Virtio-fs] vhost_set_vring_kick failed

On Thu, May 26, 2022 at 10:06:46AM +0200, Sergio Lopez wrote:
> On Thu, May 26, 2022 at 08:49:34AM +0100, Stefan Hajnoczi wrote:
> > On Thu, May 26, 2022 at 05:34:42AM +0000, Prashant Dewan wrote:
> > > Hello Stefan,
> > >
> > > Here is the output from virtofsd..
> > >
> > > ------------------------------------
> > > - Journal begins at Thu 2022-05-26 04:23:42 UTC, ends at Thu 2022-05-26 04:41:57 UTC. --
> > > May 26 04:24:13 localhost systemd[1]: Started virtio-fs vhost-user device daemon.<<Placeholder: run this daemon as non-privileged user>>.
> > > May 26 04:24:13 localhost virtiofsd[505]: [2022-05-26T04:24:13Z INFO  virtiofsd] Waiting for vhost-user socket connection...
> > > May 26 04:27:11 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:11Z INFO  virtiofsd] Client connected, servicing requests
> > > May 26 04:27:33 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:33Z ERROR virtiofsd] Waiting for daemon failed: HandleRequest(InvalidParam)

So looks like following gave error message. There is no error messsage
from the actual thread which failed. This is just the main thread
waiting.

    if let Err(e) = daemon.wait() {
        match e {
            HandleRequest(PartialMessage) => info!("Client disconnected, shutting down"),
            _ => error!("Waiting for daemon failed: {:?}", e),
        }
    }

I can't find "InvalidParam" in rust virtiofsd code. Does that mean,
that this error is being returned by one of the crates we are using.

Prashant, you could try running rust virtiofsd with option "-d" to enable
debug messages and that might show something interesting.

Also, if you are able to reproduce the problem all the time, I think
you can start virtiofsd under gdb and see where exactly is it failing.

Thanks
Vivek

> >
> > virtiofsd terminated. I have CCed Sergio and German, who may have
> > suggestions about what to check next.
>
> Hi,
>
> Could you please tell us which QEMU version are you running, and paste
> here the its command line?
>
> Thanks,
> Sergio.



> _______________________________________________
> Virtio-fs mailing list
> Virtio-fs@redhat.com
> https://listman.redhat.com/mailman/listinfo/virtio-fs


[-- Attachment #2: Type: text/html, Size: 5458 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Virtio-fs] vhost_set_vring_kick failed
  2022-05-31 22:25       ` Pra.. Dew..
@ 2022-05-31 22:50         ` Vivek Goyal
  2022-06-01  6:01         ` Sergio Lopez
  1 sibling, 0 replies; 13+ messages in thread
From: Vivek Goyal @ 2022-05-31 22:50 UTC (permalink / raw)
  To: Pra.. Dew..; +Cc: Sergio Lopez, Stefan Hajnoczi, virtio-fs

On Tue, May 31, 2022 at 10:25:12PM +0000, Pra.. Dew.. wrote:
> Thank you so much Vivek. I enabled the trace in the code itself and ran it. The only message I got was InvalidParam message below. I did not get any other message.  We are going to try changing the structure [VhostUserMemRegMsg]  padding to see if that fixes the problem. Is there anything else we should try?

Hi Prashant,

I was trying to look at some of the code today, and it looks like the
error message might be coming from the vhost crate
(src/vhost_user/slave_req_handler.rs).

I don't understand the whole logic very well, so I will defer to
Sergio on whether something can be done to support older QEMU.
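
To make that concrete, here is a purely hypothetical Rust sketch (not code
from the vhost crate; the names check_msg_size and HandlerError are made
up) of the kind of payload-size validation a vhost-user request handler
performs, which would surface as HandleRequest(InvalidParam) when the
frontend announces a shorter payload than the backend expects:

    // Hypothetical illustration only; not taken from the vhost crate.
    #[derive(Debug)]
    enum HandlerError {
        InvalidParam,
    }

    /// Reject a request whose payload size does not match what the backend
    /// expects (for example, 36 bytes announced where 40 are required).
    fn check_msg_size(announced: usize, expected: usize) -> Result<(), HandlerError> {
        if announced != expected {
            return Err(HandlerError::InvalidParam);
        }
        Ok(())
    }

    fn main() {
        assert!(check_msg_size(40, 40).is_ok());
        println!("{:?}", check_msg_size(36, 40)); // prints: Err(InvalidParam)
    }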

Thanks
Vivek

> ________________________________
> From: Vivek Goyal <vgoyal@redhat.com>
> Sent: Monday, May 30, 2022 10:11 PM
> To: Sergio Lopez <slp@redhat.com>; Prashant Dewan <linux_learner@outlook.com>
> Cc: Stefan Hajnoczi <stefanha@redhat.com>; virtio-fs@redhat.com <virtio-fs@redhat.com>
> Subject: Re: [Virtio-fs] vhost_set_vring_kick failed
> 
> On Thu, May 26, 2022 at 10:06:46AM +0200, Sergio Lopez wrote:
> > On Thu, May 26, 2022 at 08:49:34AM +0100, Stefan Hajnoczi wrote:
> > > On Thu, May 26, 2022 at 05:34:42AM +0000, Prashant Dewan wrote:
> > > > Hello Stefan,
> > > >
> > > > Here is the output from virtofsd..
> > > >
> > > > ------------------------------------
> > > > - Journal begins at Thu 2022-05-26 04:23:42 UTC, ends at Thu 2022-05-26 04:41:57 UTC. --
> > > > May 26 04:24:13 localhost systemd[1]: Started virtio-fs vhost-user device daemon.<<Placeholder: run this daemon as non-privileged user>>.
> > > > May 26 04:24:13 localhost virtiofsd[505]: [2022-05-26T04:24:13Z INFO  virtiofsd] Waiting for vhost-user socket connection...
> > > > May 26 04:27:11 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:11Z INFO  virtiofsd] Client connected, servicing requests
> > > > May 26 04:27:33 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:33Z ERROR virtiofsd] Waiting for daemon failed: HandleRequest(InvalidParam)
> 
> So looks like following gave error message. There is no error messsage
> from the actual thread which failed. This is just the main thread
> waiting.
> 
>     if let Err(e) = daemon.wait() {
>         match e {
>             HandleRequest(PartialMessage) => info!("Client disconnected, shutting down"),
>             _ => error!("Waiting for daemon failed: {:?}", e),
>         }
>     }
> 
> I can't find "InvalidParam" in rust virtiofsd code. Does that mean,
> that this error is being returned by one of the crates we are using.
> 
> Prashant, you could try running rust virtiofsd with option "-d" to enable
> debug messages and that might show something interesting.
> 
> Also, if you are able to reproduce the problem all the time, I think
> you can start virtiofsd under gdb and see where exactly is it failing.
> 
> Thanks
> Vivek
> 
> > >
> > > virtiofsd terminated. I have CCed Sergio and German, who may have
> > > suggestions about what to check next.
> >
> > Hi,
> >
> > Could you please tell us which QEMU version are you running, and paste
> > here the its command line?
> >
> > Thanks,
> > Sergio.
> 
> 
> 
> > _______________________________________________
> > Virtio-fs mailing list
> > Virtio-fs@redhat.com
> > https://listman.redhat.com/mailman/listinfo/virtio-fs
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Virtio-fs] vhost_set_vring_kick failed
  2022-05-31 22:25       ` Pra.. Dew..
  2022-05-31 22:50         ` Vivek Goyal
@ 2022-06-01  6:01         ` Sergio Lopez
  1 sibling, 0 replies; 13+ messages in thread
From: Sergio Lopez @ 2022-06-01  6:01 UTC (permalink / raw)
  To: Pra.. Dew..; +Cc: Vivek Goyal, Stefan Hajnoczi, virtio-fs


[-- Attachment #1.1.1: Type: text/plain, Size: 610 bytes --]

On Tue, May 31, 2022 at 10:25:12PM +0000, Pra.. Dew.. wrote:
> Thank you so much Vivek. I enabled the trace in the code itself and
> ran it. The only message I got was InvalidParam message below. I did
> not get any other message.  We are going to try changing the
> structure [VhostUserMemRegMsg]  padding to see if that fixes the
> problem. Is there anything else we should try?

Applying 3009edff81 to qemu-5.1.0 should do the trick. I'm attaching
the patch to this email, so you can simply apply it with something
like:

patch -p1 < 0001-vhost-user-fix-VHOST_USER_ADD-REM_MEM_REG-truncation.patch

Sergio.

[-- Attachment #1.1.2: 0001-vhost-user-fix-VHOST_USER_ADD-REM_MEM_REG-truncation.patch --]
[-- Type: text/plain, Size: 5945 bytes --]

From 3009edff8192991293fe9e2b50b0d90db83c4a89 Mon Sep 17 00:00:00 2001
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Mon, 9 Nov 2020 17:43:55 +0000
Subject: [PATCH] vhost-user: fix VHOST_USER_ADD/REM_MEM_REG truncation

QEMU currently truncates the mmap_offset field when sending
VHOST_USER_ADD_MEM_REG and VHOST_USER_REM_MEM_REG messages. The struct
layout looks like this:

  typedef struct VhostUserMemoryRegion {
      uint64_t guest_phys_addr;
      uint64_t memory_size;
      uint64_t userspace_addr;
      uint64_t mmap_offset;
  } VhostUserMemoryRegion;

  typedef struct VhostUserMemRegMsg {
      uint32_t padding;
      /* WARNING: there is a 32-bit hole here! */
      VhostUserMemoryRegion region;
  } VhostUserMemRegMsg;

The payload size is calculated as follows when sending the message in
hw/virtio/vhost-user.c:

  msg->hdr.size = sizeof(msg->payload.mem_reg.padding) +
      sizeof(VhostUserMemoryRegion);

This calculation produces an incorrect result of only 36 bytes.
sizeof(VhostUserMemRegMsg) is actually 40 bytes.

The consequence of this is that the final field, mmap_offset, is
truncated. This breaks x86_64 TCG guests on s390 hosts. Other guest/host
combinations may get lucky if either of the following holds:
1. The guest memory layout does not need mmap_offset != 0.
2. The host is little-endian and mmap_offset <= 0xffffffff so the
   truncation has no effect.

Fix this by extending the existing 32-bit padding field to 64-bit. Now
the padding reflects the actual compiler padding. This can be verified
using pahole(1).

Also document the layout properly in the vhost-user specification.  The
vhost-user spec did not document the exact layout. It would be
impossible to implement the spec without looking at the QEMU source
code.

Existing vhost-user frontends and device backends continue to work after
this fix has been applied. The only change in the wire protocol is that
QEMU now sets hdr.size to 40 instead of 36. If a vhost-user
implementation has a hardcoded size check for 36 bytes, then it will
fail with new QEMUs. Both QEMU and DPDK/SPDK don't check the exact
payload size, so they continue to work.

Fixes: f1aeb14b0809e313c74244d838645ed25e85ea63 ("Transmit vhost-user memory regions individually")
Cc: Raphael Norwitz <raphael.norwitz@nutanix.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20201109174355.1069147-1-stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Fixes: f1aeb14b0809 ("Transmit vhost-user memory regions individually")
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
 contrib/libvhost-user/libvhost-user.h |  2 +-
 docs/interop/vhost-user.rst           | 21 +++++++++++++++++++--
 hw/virtio/vhost-user.c                |  5 ++---
 3 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
index a1539dbb69..7d47f1364a 100644
--- a/contrib/libvhost-user/libvhost-user.h
+++ b/contrib/libvhost-user/libvhost-user.h
@@ -136,7 +136,7 @@ typedef struct VhostUserMemory {
 } VhostUserMemory;
 
 typedef struct VhostUserMemRegMsg {
-    uint32_t padding;
+    uint64_t padding;
     VhostUserMemoryRegion region;
 } VhostUserMemRegMsg;
 
diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 988f154144..6d4025ba6a 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -131,6 +131,23 @@ A region is:
 
 :mmap offset: 64-bit offset where region starts in the mapped memory
 
+Single memory region description
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
++---------+---------------+------+--------------+-------------+
+| padding | guest address | size | user address | mmap offset |
++---------+---------------+------+--------------+-------------+
+
+:padding: 64-bit
+
+:guest address: a 64-bit guest address of the region
+
+:size: a 64-bit size
+
+:user address: a 64-bit user address
+
+:mmap offset: 64-bit offset where region starts in the mapped memory
+
 Log description
 ^^^^^^^^^^^^^^^
 
@@ -1281,7 +1298,7 @@ Master message types
 ``VHOST_USER_ADD_MEM_REG``
   :id: 37
   :equivalent ioctl: N/A
-  :slave payload: memory region
+  :slave payload: single memory region description
 
   When the ``VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS`` protocol
   feature has been successfully negotiated, this message is submitted
@@ -1296,7 +1313,7 @@ Master message types
 ``VHOST_USER_REM_MEM_REG``
   :id: 38
   :equivalent ioctl: N/A
-  :slave payload: memory region
+  :slave payload: single memory region description
 
   When the ``VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS`` protocol
   feature has been successfully negotiated, this message is submitted
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 9c5b4f7fbc..2fdd5daf74 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -149,7 +149,7 @@ typedef struct VhostUserMemory {
 } VhostUserMemory;
 
 typedef struct VhostUserMemRegMsg {
-    uint32_t padding;
+    uint64_t padding;
     VhostUserMemoryRegion region;
 } VhostUserMemRegMsg;
 
@@ -800,8 +800,7 @@ static int vhost_user_add_remove_regions(struct vhost_dev *dev,
     uint64_t shadow_pcb[VHOST_USER_MAX_RAM_SLOTS] = {};
     int nr_add_reg, nr_rem_reg;
 
-    msg->hdr.size = sizeof(msg->payload.mem_reg.padding) +
-        sizeof(VhostUserMemoryRegion);
+    msg->hdr.size = sizeof(msg->payload.mem_reg);
 
     /* Find the regions which need to be removed or added. */
     scrub_shadow_regions(dev, add_reg, &nr_add_reg, rem_reg, &nr_rem_reg,
-- 
2.36.1


[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [Virtio-fs] vhost_set_vring_kick failed
@ 2022-06-01 19:12 Pra.. Dew..
  0 siblings, 0 replies; 13+ messages in thread
From: Pra.. Dew.. @ 2022-06-01 19:12 UTC (permalink / raw)
  To: Sergio Lopez; +Cc: Vivek Goyal, Stefan Hajnoczi, virtio-fs

[-- Attachment #1: Type: text/plain, Size: 4855 bytes --]

Thank you, Sergio. When I applied the patch, I got stuck here. The header size is correct (40 bytes).

vhost-user.c
 390     ret = qemu_chr_fe_write_all(chr, (const uint8_t *) msg, size);
 391     if (ret != size) {
 392         error_report("Failed to write msg."
 393                      " Wrote %d instead of %d.", ret, size);
 394         return -1;
 395     }


----
Jun 01 19:01:52 device-d1fe4f98f7091676 systemd[1]: Started Application Runtime Environment VMM.
Jun 01 19:01:52 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info: Passing through UIO device usb-phy@29910000:
Jun 01 19:01:52 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info:         Region: base: 0000000029910000 size: 0000000000010000
Jun 01 19:01:52 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info:         IRQ: 104
Jun 01 19:01:52 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info: Passing through UIO device gpio@2d010000:
Jun 01 19:01:52 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info:         Region: base: 000000002d010000 size: 0000000000002000
Jun 01 19:01:52 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info:         IRQ: 131
Jun 01 19:01:52 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info: Passing through UIO device ram:
Jun 01 19:01:52 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info:         Region: base: 00000000b0000000 size: 0000000050000000
Jun 01 19:01:55 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info: Passing through UIO device uio@298c0000:
Jun 01 19:01:55 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info:         Region: base: 00000000298c0000 size: 0000000000010000
Jun 01 19:01:55 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info: Passing through UIO device uio@29800000:
Jun 01 19:01:55 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info:         Region: base: 0000000029800000 size: 0000000000010000
Jun 01 19:01:55 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info: Passing through UIO device usb@29900000:
Jun 01 19:01:55 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info:         Region: base: 0000000029900000 size: 0000000000001000
Jun 01 19:01:55 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info:         IRQ: 103
Jun 01 19:01:55 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: info: Successfully realized 6 UIO devices
Jun 01 19:01:57 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: warning: Could not locate arm,scmi-shmem node
Jun 01 19:02:23 device-d1fe4f98f7091676 are-launcher[651]: AS_DEBUG: hdr_size=0x28
Jun 01 19:02:23 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: Failed to write msg. Wrote -1 instead of 52.
Jun 01 19:02:23 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: vhost_set_vring_addr failed: Invalid argument (22)
Jun 01 19:02:23 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: Failed to set msg fds.
Jun 01 19:02:23 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: vhost VQ 0 ring restore failed: -1: Invalid argument (22)
Jun 01 19:02:23 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: Error starting vhost: 22
Jun 01 19:02:23 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: Failed to set msg fds.
Jun 01 19:02:23 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: vhost_set_vring_call failed: Resource temporarily unavailable (11)
Jun 01 19:02:23 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: Failed to set msg fds.
Jun 01 19:02:23 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: vhost_set_vring_call failed: Resource temporarily unavailable (11)
Jun 01 19:02:23 device-d1fe4f98f7091676 are-launcher[651]: qemu-system-device: Failed to read from slave.
device-d1fe4f98f7091676:~$

________________________________
From: Sergio Lopez
Sent: Wednesday, June 1, 2022 6:01 AM
To: Pra.. Dew..
Cc: Vivek Goyal; Stefan Hajnoczi; virtio-fs@redhat.com
Subject: Re: [Virtio-fs] vhost_set_vring_kick failed

On Tue, May 31, 2022 at 10:25:12PM +0000, Pra.. Dew.. wrote:
> Thank you so much Vivek. I enabled the trace in the code itself and
> ran it. The only message I got was InvalidParam message below. I did
> not get any other message.  We are going to try changing the
> structure [VhostUserMemRegMsg]  padding to see if that fixes the
> problem. Is there anything else we should try?

Applying 3009edff81 to qemu-5.1.0 should do the trick. I'm attaching
the patch to this email, so you can simply apply it with something
like:

patch -p1 < 0001-vhost-user-fix-VHOST_USER_ADD-REM_MEM_REG-truncation.patch

Sergio.

[-- Attachment #2: Type: text/html, Size: 7297 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Virtio-fs] vhost_set_vring_kick failed
@ 2022-05-31 18:33 Pra.. Dew..
  0 siblings, 0 replies; 13+ messages in thread
From: Pra.. Dew.. @ 2022-05-31 18:33 UTC (permalink / raw)
  To: Sergio Lopez; +Cc: Stefan Hajnoczi, virtio-fs, gmaglione, Vivek Goyal

[-- Attachment #1: Type: text/plain, Size: 2305 bytes --]

Thank you, Sergio. Is there a patch I could use to make it work with an older version?

Thanks
Prashant



________________________________
From: Sergio Lopez
Sent: Tuesday, May 31, 2022 10:50 AM
To: Pra.. Dew..
Cc: Stefan Hajnoczi; virtio-fs@redhat.com; gmaglione@redhat.com; Vivek Goyal
Subject: Re: [Virtio-fs] vhost_set_vring_kick failed

On Thu, May 26, 2022 at 04:07:36PM +0000, Pra.. Dew.. wrote:
> Thanks for the response.  The  version of the qemu is 5.1.0 and  here is the comandline..
>
>
> [/usr/bin/qemu-system-test-machine test --enable-kvm -display none -mon mon-console,mode=readline -chardev socket,host=,port=5001,server,nowait,id=mon-console -device virtio-serial-device,max_ports=2 -device virtconsole,chardev=console,name=console -chardev socket,host=,port=5000,server,nowait,id=console -netdev tap,id=tap0,ifname=tap0,vhost=on,script=no,downscript=no -device virtio-net-device,netdev=tap0 -device vhost-vsock-device,guest-cid=4 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-device,rng=rng0 -chardev socket,id=char0,path=/run/vm001-vhost-fs.sock -device vhost-user-fs-device,queue-size=2,chardev=char0,tag=myfs -object memory-backend-memfd,id=mem,size=4G,share=on -kernel /var/lib/are/images/active/kernel.bin -dtb /var/lib/are/images/active/device-tree.dtb -device virtio-blk-device,drive=writable,serial=writable -drive file=/var/lib/are/volumes/writable.bin,if=none,id=writable,format=raw -device virtio-blk-device,drive=rootfs,serial=rootfs -drive file=/var/lib/are/images/active/rootfs/rootfs.bin,if=none,id=rootfs,format=raw,readonly]

Thanks for the details. I knew that we didn't support older versions,
but didn't want to reply that without explaining why.

I've just took a look, and the origin of this incompatibility is this
change in "hw/virtio/vhost-user.c":

qemu-5.1.0:

typedef struct VhostUserMemRegMsg {
    uint32_t padding;
    VhostUserMemoryRegion region;
} VhostUserMemRegMsg;

qemu-upstream (since 3009edff8192991293fe9e2b50b0d90db83c4a89):

typedef struct VhostUserMemRegMsg {
    uint64_t padding;
    VhostUserMemoryRegion region;
} VhostUserMemRegMsg;

Note how "padding" has changed from uint32_t to uint64_t.

This means we only support QEMU versions starting 5.2.0.

Regards,
Sergio.

[-- Attachment #2: Type: text/html, Size: 3792 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Virtio-fs] vhost_set_vring_kick failed
  2022-05-26 16:07 Pra.. Dew..
@ 2022-05-31 10:50 ` Sergio Lopez
  0 siblings, 0 replies; 13+ messages in thread
From: Sergio Lopez @ 2022-05-31 10:50 UTC (permalink / raw)
  To: Pra.. Dew..; +Cc: Stefan Hajnoczi, virtio-fs, gmaglione, Vivek Goyal


[-- Attachment #1.1: Type: text/plain, Size: 1950 bytes --]

On Thu, May 26, 2022 at 04:07:36PM +0000, Pra.. Dew.. wrote:
> Thanks for the response.  The  version of the qemu is 5.1.0 and  here is the comandline..
> 
> 
> [/usr/bin/qemu-system-test-machine test --enable-kvm -display none -mon mon-console,mode=readline -chardev socket,host=,port=5001,server,nowait,id=mon-console -device virtio-serial-device,max_ports=2 -device virtconsole,chardev=console,name=console -chardev socket,host=,port=5000,server,nowait,id=console -netdev tap,id=tap0,ifname=tap0,vhost=on,script=no,downscript=no -device virtio-net-device,netdev=tap0 -device vhost-vsock-device,guest-cid=4 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-device,rng=rng0 -chardev socket,id=char0,path=/run/vm001-vhost-fs.sock -device vhost-user-fs-device,queue-size=2,chardev=char0,tag=myfs -object memory-backend-memfd,id=mem,size=4G,share=on -kernel /var/lib/are/images/active/kernel.bin -dtb /var/lib/are/images/active/device-tree.dtb -device virtio-blk-device,drive=writable,serial=writable -drive file=/var/lib/are/volumes/writable.bin,if=none,id=writable,format=raw -device virtio-blk-device,drive=rootfs,serial=rootfs -drive file=/var/lib/are/images/active/rootfs/rootfs.bin,if=none,id=rootfs,format=raw,readonly]

Thanks for the details. I knew that we didn't support older versions,
but I didn't want to reply without explaining why.

I've just taken a look, and the origin of this incompatibility is this
change in "hw/virtio/vhost-user.c":

qemu-5.1.0:

typedef struct VhostUserMemRegMsg {
    uint32_t padding;
    VhostUserMemoryRegion region;
} VhostUserMemRegMsg;

qemu-upstream (since 3009edff8192991293fe9e2b50b0d90db83c4a89):

typedef struct VhostUserMemRegMsg {
    uint64_t padding;
    VhostUserMemoryRegion region;
} VhostUserMemRegMsg;

Note how "padding" has changed from uint32_t to uint64_t.

This means we only support QEMU versions starting with 5.2.0.
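
For anyone who wants to see the mismatch without building QEMU, here is a
minimal, self-contained Rust sketch (the #[repr(C)] structs merely mirror
the C definitions above; this is not code from QEMU or virtiofsd) showing
that the old by-hand payload calculation gives 36 bytes while the structure
the compiler actually lays out is 40 bytes, because of the alignment hole
after the 32-bit padding field:

    #![allow(dead_code)] // fields exist only so size_of reports the layout

    use std::mem::size_of;

    #[repr(C)]
    struct VhostUserMemoryRegion {
        guest_phys_addr: u64,
        memory_size: u64,
        userspace_addr: u64,
        mmap_offset: u64,
    }

    #[repr(C)]
    struct OldMemRegMsg {
        padding: u32, // qemu-5.1.0 layout
        region: VhostUserMemoryRegion,
    }

    #[repr(C)]
    struct NewMemRegMsg {
        padding: u64, // layout after commit 3009edff81
        region: VhostUserMemoryRegion,
    }

    fn main() {
        // qemu-5.1.0 computed the payload size by hand: 4 + 32 = 36 bytes.
        println!("{}", size_of::<u32>() + size_of::<VhostUserMemoryRegion>());
        // The struct it actually sends occupies 40 bytes (4-byte hole after padding).
        println!("{}", size_of::<OldMemRegMsg>());
        // The fixed layout is also 40 bytes, with the hole made explicit.
        println!("{}", size_of::<NewMemRegMsg>());
    }

This is the same 36-byte vs. 40-byte hdr.size difference that commit
3009edff81 fixes.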

Regards,
Sergio.

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Virtio-fs] vhost_set_vring_kick failed
@ 2022-05-26 16:07 Pra.. Dew..
  2022-05-31 10:50 ` Sergio Lopez
  0 siblings, 1 reply; 13+ messages in thread
From: Pra.. Dew.. @ 2022-05-26 16:07 UTC (permalink / raw)
  To: Sergio Lopez, Stefan Hajnoczi; +Cc: virtio-fs, gmaglione

[-- Attachment #1: Type: text/plain, Size: 2560 bytes --]

Thanks for the response. The version of QEMU is 5.1.0, and here is the command line:


[/usr/bin/qemu-system-test-machine test --enable-kvm -display none -mon mon-console,mode=readline -chardev socket,host=,port=5001,server,nowait,id=mon-console -device virtio-serial-device,max_ports=2 -device virtconsole,chardev=console,name=console -chardev socket,host=,port=5000,server,nowait,id=console -netdev tap,id=tap0,ifname=tap0,vhost=on,script=no,downscript=no -device virtio-net-device,netdev=tap0 -device vhost-vsock-device,guest-cid=4 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-device,rng=rng0 -chardev socket,id=char0,path=/run/vm001-vhost-fs.sock -device vhost-user-fs-device,queue-size=2,chardev=char0,tag=myfs -object memory-backend-memfd,id=mem,size=4G,share=on -kernel /var/lib/are/images/active/kernel.bin -dtb /var/lib/are/images/active/device-tree.dtb -device virtio-blk-device,drive=writable,serial=writable -drive file=/var/lib/are/volumes/writable.bin,if=none,id=writable,format=raw -device virtio-blk-device,drive=rootfs,serial=rootfs -drive file=/var/lib/are/images/active/rootfs/rootfs.bin,if=none,id=rootfs,format=raw,readonly]


________________________________
From: Sergio Lopez
Sent: Thursday, May 26, 2022 8:06 AM
To: Stefan Hajnoczi
Cc: Prashant Dewan; virtio-fs@redhat.com; gmaglione@redhat.com
Subject: Re: [Virtio-fs] vhost_set_vring_kick failed

On Thu, May 26, 2022 at 08:49:34AM +0100, Stefan Hajnoczi wrote:
> On Thu, May 26, 2022 at 05:34:42AM +0000, Prashant Dewan wrote:
> > Hello Stefan,
> >
> > Here is the output from virtofsd..
> >
> > ------------------------------------
> > - Journal begins at Thu 2022-05-26 04:23:42 UTC, ends at Thu 2022-05-26 04:41:57 UTC. --
> > May 26 04:24:13 localhost systemd[1]: Started virtio-fs vhost-user device daemon.<<Placeholder: run this daemon as non-privileged user>>.
> > May 26 04:24:13 localhost virtiofsd[505]: [2022-05-26T04:24:13Z INFO  virtiofsd] Waiting for vhost-user socket connection...
> > May 26 04:27:11 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:11Z INFO  virtiofsd] Client connected, servicing requests
> > May 26 04:27:33 device-3e4b479e2ee9c647 virtiofsd[505]: [2022-05-26T04:27:33Z ERROR virtiofsd] Waiting for daemon failed: HandleRequest(InvalidParam)
>
> virtiofsd terminated. I have CCed Sergio and German, who may have
> suggestions about what to check next.

Hi,

Could you please tell us which QEMU version are you running, and paste
here the its command line?

Thanks,
Sergio.

[-- Attachment #2: Type: text/html, Size: 3888 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Virtio-fs] vhost_set_vring_kick failed
  2022-05-24 14:29 Prashant Dewan
@ 2022-05-25 16:02 ` Stefan Hajnoczi
  0 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2022-05-25 16:02 UTC (permalink / raw)
  To: Prashant Dewan; +Cc: virtio-fs

[-- Attachment #1: Type: text/plain, Size: 539 bytes --]

On Tue, May 24, 2022 at 02:29:12PM +0000, Prashant Dewan wrote:
> I am trying to setup virtfs using KVM and QEMU.
> 
> 
>   1.  Inside the VM, I do
>      *   mount -t virtiofs myfs /dev/myfile  --succeeds
>      *   cd /dev/myfile  -hangs
> 
> 
>   1.  The host logs -
> 
>  qemu-system: Failed to write msg. Wrote -1 instead of 20.

Sending a vhost-user protocol message failed. It is likely that the
virtiofsd process terminated and the UNIX domain socket closed on QEMU.

Is there any output from virtiofsd?

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [Virtio-fs] vhost_set_vring_kick failed
@ 2022-05-24 14:29 Prashant Dewan
  2022-05-25 16:02 ` Stefan Hajnoczi
  0 siblings, 1 reply; 13+ messages in thread
From: Prashant Dewan @ 2022-05-24 14:29 UTC (permalink / raw)
  To: virtio-fs

[-- Attachment #1: Type: text/plain, Size: 1297 bytes --]

I am trying to set up virtiofs using KVM and QEMU.


  1.  Inside the VM, I do
     *   mount -t virtiofs myfs /dev/myfile  --succeeds
     *   cd /dev/myfile  -hangs


  2.  The host logs -

 qemu-system: Failed to write msg. Wrote -1 instead of 20.
May 24 14:01:08 device-myhost[644]: qemu-system: vhost_set_vring_kick failed: Invalid argument (22)
May 24 14:01:08 device-myhost[644]: qemu-system: Failed to set msg fds.
May 24 14:01:08 device-myhost[644]: qemu-system: vhost VQ 0 ring restore failed: -1: Invalid argument (22)
May 24 14:01:08 device-myhost[644]: qemu-system: Error starting vhost: 22
May 24 14:01:08 device-myhost[644]: qemu-system: Failed to set msg fds.
May 24 14:01:08 device-myhost[644]: qemu-system: vhost_set_vring_call failed: Resource temporarily unavailable (11)
May 24 14:01:08 device-myhost[644]: qemu-system: Failed to set msg fds.
May 24 14:01:08 device-myhost[644]: qemu-system: vhost_set_vring_call failed: Resource temporarily unavailable (11)
May 24 14:01:08 device-myhost[644]: qemu-system: Failed to read from slave.


  3.  QEMU parameters -

 "-device",
    "vhost-user-fs-device,queue-size=1024,chardev=char0,tag=myfs",
    "-object",
    "memory-backend-memfd,id=mem,size=4G,share=on"

Any input is appreciated...

Thanks


[-- Attachment #2: Type: text/html, Size: 3131 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2022-06-01 19:12 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-26  5:34 [Virtio-fs] vhost_set_vring_kick failed Prashant Dewan
2022-05-26  7:49 ` Stefan Hajnoczi
2022-05-26  8:06   ` Sergio Lopez
2022-05-30 22:11     ` Vivek Goyal
2022-05-31 22:25       ` Pra.. Dew..
2022-05-31 22:50         ` Vivek Goyal
2022-06-01  6:01         ` Sergio Lopez
  -- strict thread matches above, loose matches on Subject: below --
2022-06-01 19:12 Pra.. Dew..
2022-05-31 18:33 Pra.. Dew..
2022-05-26 16:07 Pra.. Dew..
2022-05-31 10:50 ` Sergio Lopez
2022-05-24 14:29 Prashant Dewan
2022-05-25 16:02 ` Stefan Hajnoczi
