* [PATCH v5] Prevent vhost-user-blk-test hang
@ 2021-09-27 17:17 Raphael Norwitz
  2021-09-28  8:55 ` Stefan Hajnoczi
  2021-10-19 10:15 ` Stefan Hajnoczi
  0 siblings, 2 replies; 10+ messages in thread
From: Raphael Norwitz @ 2021-09-27 17:17 UTC
  To: mst, peter.maydell, eblake
  Cc: qemu-devel, Raphael Norwitz, stefanha, pbonzini, mreitz, sgarzare

In the vhost-user-blk-test, as of now there is nothing stopping
vhost-user-blk in QEMU from writing to the socket right after forking off
the storage daemon, before it has a chance to come up properly, leaving the
test hanging forever. This intermittently hanging test has caused QEMU
automation failures reported multiple times on the mailing list [1].

This change makes the storage-daemon notify the vhost-user-blk-test
that it is fully initialized and ready to handle client connections by
creating a pidfile on initialization. This ensures that the storage-daemon
backend won't miss vhost-user messages and thereby resolves the hang.

[1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/

Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 tests/qtest/vhost-user-blk-test.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
index 6f108a1b62..5fed262da1 100644
--- a/tests/qtest/vhost-user-blk-test.c
+++ b/tests/qtest/vhost-user-blk-test.c
@@ -24,6 +24,7 @@
 #define TEST_IMAGE_SIZE         (64 * 1024 * 1024)
 #define QVIRTIO_BLK_TIMEOUT_US  (30 * 1000 * 1000)
 #define PCI_SLOT_HP             0x06
+#define PIDFILE_RETRIES         5
 
 typedef struct {
     pid_t pid;
@@ -885,7 +886,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
                                  int num_queues)
 {
     const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
-    int i;
+    int i, retries;
+    char *daemon_pidfile_path;
     gchar *img_path;
     GString *storage_daemon_command = g_string_new(NULL);
     QemuStorageDaemonState *qsd;
@@ -898,6 +900,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
             " -object memory-backend-memfd,id=mem,size=256M,share=on "
             " -M memory-backend=mem -m 256M ");
 
+    daemon_pidfile_path = g_strdup_printf("/tmp/daemon-%d", getpid());
+
     for (i = 0; i < vus_instances; i++) {
         int fd;
         char *sock_path = create_listen_socket(&fd);
@@ -914,6 +918,9 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
                                i + 1, sock_path);
     }
 
+    g_string_append_printf(storage_daemon_command, "--pidfile %s ",
+                           daemon_pidfile_path);
+
     g_test_message("starting vhost-user backend: %s",
                    storage_daemon_command->str);
     pid_t pid = fork();
@@ -930,7 +937,24 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
         execlp("/bin/sh", "sh", "-c", storage_daemon_command->str, NULL);
         exit(1);
     }
+
+    /*
+     * Ensure the storage-daemon has come up properly before allowing the
+     * test to proceed.
+     */
+    retries = 0;
+    while (access(daemon_pidfile_path, F_OK) != 0) {
+        g_assert_cmpint(retries, <, PIDFILE_RETRIES);
+
+        retries++;
+        g_usleep(1000);
+    }
+
     g_string_free(storage_daemon_command, true);
+    if (access(daemon_pidfile_path, F_OK) == 0) {
+        unlink(daemon_pidfile_path);
+    }
+    g_free(daemon_pidfile_path);
 
     qsd = g_new(QemuStorageDaemonState, 1);
     qsd->pid = pid;
-- 
2.20.1


* Re: [PATCH v5] Prevent vhost-user-blk-test hang
  2021-09-27 17:17 [PATCH v5] Prevent vhost-user-blk-test hang Raphael Norwitz
@ 2021-09-28  8:55 ` Stefan Hajnoczi
  2021-09-30  5:29   ` Raphael Norwitz
  2021-10-19 10:15 ` Stefan Hajnoczi
  1 sibling, 1 reply; 10+ messages in thread
From: Stefan Hajnoczi @ 2021-09-28  8:55 UTC
  To: Raphael Norwitz
  Cc: peter.maydell, mst, qemu-devel, mreitz, pbonzini, eblake, sgarzare

On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> In the vhost-user-blk-test, as of now there is nothing stopping
> vhost-user-blk in QEMU from writing to the socket right after forking off
> the storage daemon, before it has a chance to come up properly, leaving the
> test hanging forever. This intermittently hanging test has caused QEMU
> automation failures reported multiple times on the mailing list [1].
> 
> This change makes the storage-daemon notify the vhost-user-blk-test
> that it is fully initialized and ready to handle client connections by
> creating a pidfile on initialization. This ensures that the storage-daemon
> backend won't miss vhost-user messages and thereby resolves the hang.
> 
> [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/

Hi Raphael,
I would like to understand the issue that is being worked around in the
patch.

QEMU should be okay with listen fd passing. The qemu-storage-daemon
documentation even contains example code for this
(docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
listen fd passing is fundamentally broken.

Can you share more details about the problem?

Does "writing to the socket" mean writing vhost-user protocol messages
or does it mean connect(2)?

Could the problem be that vhost-user-blk-test.c creates the listen fds
and does not close them? This means the host network stack doesn't
consider the socket closed after QEMU terminates and therefore the test
process hangs after QEMU is gone? In that case vhost-user-blk-test needs
to close the fds after spawning qemu-storage-daemon.
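
If that turns out to be the cause, the fix would just be closing the
parent's copy of the fd after the fork, along these lines (only a sketch;
listen_fd stands in for whatever variable the test keeps the fd in, while
the fork/exec lines mirror the test code in the patch):

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: qemu-storage-daemon inherits listen_fd across exec. */
        execlp("/bin/sh", "sh", "-c", storage_daemon_command->str, NULL);
        exit(1);
    }

    /*
     * Parent: drop our reference so the daemon holds the only one;
     * once the daemon exits, peers then see the socket as closed.
     */
    close(listen_fd);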

Stefan

> 
> Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> Reviewed-by: Eric Blake <eblake@redhat.com>
> ---
>  tests/qtest/vhost-user-blk-test.c | 26 +++++++++++++++++++++++++-
>  1 file changed, 25 insertions(+), 1 deletion(-)
> 
> diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
> index 6f108a1b62..5fed262da1 100644
> --- a/tests/qtest/vhost-user-blk-test.c
> +++ b/tests/qtest/vhost-user-blk-test.c
> @@ -24,6 +24,7 @@
>  #define TEST_IMAGE_SIZE         (64 * 1024 * 1024)
>  #define QVIRTIO_BLK_TIMEOUT_US  (30 * 1000 * 1000)
>  #define PCI_SLOT_HP             0x06
> +#define PIDFILE_RETRIES         5
>  
>  typedef struct {
>      pid_t pid;
> @@ -885,7 +886,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
>                                   int num_queues)
>  {
>      const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
> -    int i;
> +    int i, retries;
> +    char *daemon_pidfile_path;
>      gchar *img_path;
>      GString *storage_daemon_command = g_string_new(NULL);
>      QemuStorageDaemonState *qsd;
> @@ -898,6 +900,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
>              " -object memory-backend-memfd,id=mem,size=256M,share=on "
>              " -M memory-backend=mem -m 256M ");
>  
> +    daemon_pidfile_path = g_strdup_printf("/tmp/daemon-%d", getpid());
> +
>      for (i = 0; i < vus_instances; i++) {
>          int fd;
>          char *sock_path = create_listen_socket(&fd);
> @@ -914,6 +918,9 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
>                                 i + 1, sock_path);
>      }
>  
> +    g_string_append_printf(storage_daemon_command, "--pidfile %s ",
> +                           daemon_pidfile_path);
> +
>      g_test_message("starting vhost-user backend: %s",
>                     storage_daemon_command->str);
>      pid_t pid = fork();
> @@ -930,7 +937,24 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
>          execlp("/bin/sh", "sh", "-c", storage_daemon_command->str, NULL);
>          exit(1);
>      }
> +
> +    /*
> +     * Ensure the storage-daemon has come up properly before allowing the
> +     * test to proceed.
> +     */
> +    retries = 0;
> +    while (access(daemon_pidfile_path, F_OK) != 0) {
> +        g_assert_cmpint(retries, <, PIDFILE_RETRIES);
> +
> +        retries++;
> +        g_usleep(1000);
> +    }
> +
>      g_string_free(storage_daemon_command, true);
> +    if (access(daemon_pidfile_path, F_OK) == 0) {
> +        unlink(daemon_pidfile_path);
> +    }
> +    g_free(daemon_pidfile_path);
>  
>      qsd = g_new(QemuStorageDaemonState, 1);
>      qsd->pid = pid;
> -- 
> 2.20.1
> 


* Re: [PATCH v5] Prevent vhost-user-blk-test hang
  2021-09-28  8:55 ` Stefan Hajnoczi
@ 2021-09-30  5:29   ` Raphael Norwitz
  2021-09-30  9:48     ` Stefan Hajnoczi
  0 siblings, 1 reply; 10+ messages in thread
From: Raphael Norwitz @ 2021-09-30  5:29 UTC
  To: Stefan Hajnoczi
  Cc: peter.maydell, mst, qemu-devel, Raphael Norwitz, pbonzini,
	mreitz, eblake, sgarzare

On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > In the vhost-user-blk-test, as of now there is nothing stopping
> > vhost-user-blk in QEMU from writing to the socket right after forking off
> > the storage daemon, before it has a chance to come up properly, leaving the
> > test hanging forever. This intermittently hanging test has caused QEMU
> > automation failures reported multiple times on the mailing list [1].
> > 
> > This change makes the storage-daemon notify the vhost-user-blk-test
> > that it is fully initialized and ready to handle client connections by
> > creating a pidfile on initialization. This ensures that the storage-daemon
> > backend won't miss vhost-user messages and thereby resolves the hang.
> > 
> > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> 

Hey Stefan,

> Hi Raphael,
> I would like to understand the issue that is being worked around in the
> patch.
> 
> QEMU should be okay with listen fd passing. The qemu-storage-daemon
> documentation even contains example code for this
> (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> listen fd passing is fundamentally broken.
> 

The issue is that the "client" (in this case vhost-user-blk in QEMU) can
proceed to use the socket before the storage-daemon has a chance to
properly start up and monitor it. This is nothing unique to the
storage-daemon - I've seen races like this happen with different
vhost-user backends before.

Yes - I do think the docs can be improved to explicitly state that the
storage-daemon must be allowed to properly initialize before any data is
sent over the socket. Maybe we should even prescribe the use of the pidfile
option?
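
Concretely, the handshake this patch implements boils down to the
following sketch (spawn_daemon() here is just a stand-in for the test's
fork/exec logic; the rest mirrors the patch):

    g_string_append_printf(storage_daemon_command, "--pidfile %s ",
                           daemon_pidfile_path);
    spawn_daemon(storage_daemon_command->str);

    /* The daemon only writes the pidfile once it is fully initialized. */
    while (access(daemon_pidfile_path, F_OK) != 0) {
        g_usleep(1000);
    }

    /* Only now is it safe to send anything over the vhost-user socket. */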

> Can you share more details about the problem?
> 

Did you see my analysis [1]?

[1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/

Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
socket and the storage daemon never receives it. Looking at the
QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
the storage-daemon never receives any message to begin with. AFAICT
there is nothing stopping QEMU from running first and sending a message
before vhost-user-blk comes up, and from testing we can see that waiting
for the storage-daemon to come up resolves the problem completely.

> Does "writing to the socket" mean writing vhost-user protocol messages
> or does it mean connect(2)?
> 

Yes - it means writing vhost-user messages. We see a message sent from
QEMU to the backend.

Note that in qtest_socket_server() (called from create_listen_socket())
we have already called listen() on the socket, so I would expect QEMU
calling connect(2) to succeed and proceed to successfully send messages
whether or not there is another listener. I even tried commenting out the
execlp for the storage-daemon and I saw the same behavior from QEMU - it
sends the message and hangs indefinitely.
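
That matches my understanding of the Sockets API: once listen() has been
called, a client can connect(2) and write(2) before the server ever calls
accept(2), and the kernel queues both the connection and the data. A
self-contained sketch, independent of QEMU, demonstrating this:

    #include <assert.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        int server = socket(AF_UNIX, SOCK_STREAM, 0);
        int client = socket(AF_UNIX, SOCK_STREAM, 0);
        char buf[3];

        strcpy(addr.sun_path, "/tmp/demo.sock");
        unlink(addr.sun_path);
        assert(bind(server, (struct sockaddr *)&addr, sizeof(addr)) == 0);
        assert(listen(server, 1) == 0);

        /* No accept() yet, but connect() and write() still succeed. */
        assert(connect(client, (struct sockaddr *)&addr, sizeof(addr)) == 0);
        assert(write(client, "msg", 3) == 3);

        /* The data is waiting once the (late) server finally accepts. */
        assert(read(accept(server, NULL, NULL), buf, 3) == 3);
        return 0;
    }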

> Could the problem be that vhost-user-blk-test.c creates the listen fds
> and does not close them? This means the host network stack doesn't
> consider the socket closed after QEMU terminates and therefore the test
> process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> to close the fds after spawning qemu-storage-daemon.
> 

When the test hangs both QEMU and storage-daemon are still up and
connected to the socket and waiting for messages from each other. I don't
see how we would close the FD in this state or how it would help.

We may want to think about implementing some kind of timeout for initial
vhost-user messages so that we fail instead of hang in cases like these,
as I proposed in [1]. What do you think?
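
Something along these lines, just to sketch the shape of it (not the
actual libvhost-user interfaces):

    #include <poll.h>
    #include <unistd.h>

    /*
     * Bound the wait for the first reply so a silent peer fails the
     * test instead of hanging it forever.
     */
    static ssize_t read_reply_with_timeout(int fd, void *buf, size_t len,
                                           int timeout_ms)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        int ret = poll(&pfd, 1, timeout_ms);

        if (ret <= 0) {
            return -1;    /* 0 means timed out, < 0 means poll error */
        }
        return read(fd, buf, len);
    }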

> Stefan
> 
> > 
> > Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> > Reviewed-by: Eric Blake <eblake@redhat.com>
> > ---
> >  tests/qtest/vhost-user-blk-test.c | 26 +++++++++++++++++++++++++-
> >  1 file changed, 25 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
> > index 6f108a1b62..5fed262da1 100644
> > --- a/tests/qtest/vhost-user-blk-test.c
> > +++ b/tests/qtest/vhost-user-blk-test.c
> > @@ -24,6 +24,7 @@
> >  #define TEST_IMAGE_SIZE         (64 * 1024 * 1024)
> >  #define QVIRTIO_BLK_TIMEOUT_US  (30 * 1000 * 1000)
> >  #define PCI_SLOT_HP             0x06
> > +#define PIDFILE_RETRIES         5
> >  
> >  typedef struct {
> >      pid_t pid;
> > @@ -885,7 +886,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> >                                   int num_queues)
> >  {
> >      const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
> > -    int i;
> > +    int i, retries;
> > +    char *daemon_pidfile_path;
> >      gchar *img_path;
> >      GString *storage_daemon_command = g_string_new(NULL);
> >      QemuStorageDaemonState *qsd;
> > @@ -898,6 +900,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> >              " -object memory-backend-memfd,id=mem,size=256M,share=on "
> >              " -M memory-backend=mem -m 256M ");
> >  
> > +    daemon_pidfile_path = g_strdup_printf("/tmp/daemon-%d", getpid());
> > +
> >      for (i = 0; i < vus_instances; i++) {
> >          int fd;
> >          char *sock_path = create_listen_socket(&fd);
> > @@ -914,6 +918,9 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> >                                 i + 1, sock_path);
> >      }
> >  
> > +    g_string_append_printf(storage_daemon_command, "--pidfile %s ",
> > +                           daemon_pidfile_path);
> > +
> >      g_test_message("starting vhost-user backend: %s",
> >                     storage_daemon_command->str);
> >      pid_t pid = fork();
> > @@ -930,7 +937,24 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> >          execlp("/bin/sh", "sh", "-c", storage_daemon_command->str, NULL);
> >          exit(1);
> >      }
> > +
> > +    /*
> > +     * Ensure the storage-daemon has come up properly before allowing the
> > +     * test to proceed.
> > +     */
> > +    retries = 0;
> > +    while (access(daemon_pidfile_path, F_OK) != 0) {
> > +        g_assert_cmpint(retries, <, PIDFILE_RETRIES);
> > +
> > +        retries++;
> > +        g_usleep(1000);
> > +    }
> > +
> >      g_string_free(storage_daemon_command, true);
> > +    if (access(daemon_pidfile_path, F_OK) == 0) {
> > +        unlink(daemon_pidfile_path);
> > +    }
> > +    g_free(daemon_pidfile_path);
> >  
> >      qsd = g_new(QemuStorageDaemonState, 1);
> >      qsd->pid = pid;
> > -- 
> > 2.20.1
> > 



* Re: [PATCH v5] Prevent vhost-user-blk-test hang
  2021-09-30  5:29   ` Raphael Norwitz
@ 2021-09-30  9:48     ` Stefan Hajnoczi
  2021-10-01 20:59       ` Raphael Norwitz
  2021-10-05 14:31       ` Michael S. Tsirkin
  0 siblings, 2 replies; 10+ messages in thread
From: Stefan Hajnoczi @ 2021-09-30  9:48 UTC
  To: Raphael Norwitz
  Cc: peter.maydell, mst, qemu-devel, mreitz, pbonzini, eblake, sgarzare

On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > In the vhost-user-blk-test, as of now there is nothing stopping
> > > vhost-user-blk in QEMU from writing to the socket right after forking off
> > > the storage daemon, before it has a chance to come up properly, leaving the
> > > test hanging forever. This intermittently hanging test has caused QEMU
> > > automation failures reported multiple times on the mailing list [1].
> > > 
> > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > that it is fully initialized and ready to handle client connections by
> > > creating a pidfile on initialization. This ensures that the storage-daemon
> > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > 
> > > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> > 
> 
> Hey Stefan,
> 
> > Hi Raphael,
> > I would like to understand the issue that is being worked around in the
> > patch.
> > 
> > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > documentation even contains example code for this
> > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > listen fd passing is fundamentally broken.
> > 
> 
> The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> proceed to use the socket before the storage-daemon has a chance to
> properly start up and monitor it. This is nothing unique to the
> storage-daemon - I've seen races like this happen with different
> vhost-user backends before.
> 
> Yes - I do think the docs can be improved to explicitly state that the
> storage-daemon must be allowed to properly initialize before any data is
> sent over the socket. Maybe we should even prescribe the use of the pidfile
> option?
> 
> > Can you share more details about the problem?
> > 
> 
> Did you see my analysis [1]?
> 
> [1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
> 
> Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> socket and the storage daemon never receives it. Looking at the
> QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> the storage-daemon never receives any message to begin with. AFAICT
> there is nothing stopping QEMU from running first and sending a message
> before vhost-user-blk comes up, and from testing we can see that waiting
> for the storage-daemon to come up resolves the problem completely.

The root cause has not been determined yet. QEMU should accept the
incoming connection and then read the previously-sent
VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
Sockets API level why the message should get lost, so there is probably
a QEMU bug here.

> > Does "writing to the socket" mean writing vhost-user protocol messages
> > or does it mean connect(2)?
> > 
> 
> Yes - it means writing vhost-user messages. We see a message sent from
> QEMU to the backend.
> 
> Note that in qtest_socket_server() (called from create_listen_socket())
> we have already called listen() on the socket, so I would expect QEMU
> calling connect(2) to succeed and proceed to successfully send messages
> whether or not there is another listener. I even tried commenting out the
> execlp for the storage-daemon and I saw the same behavior from QEMU - it
> sends the message and hangs indefinitely.

QEMU is correct in waiting for a vhost-user reply. The question is why
qemu-storage-daemon's vhost-user-block export isn't processing the
request and replying to it?

> > Could the problem be that vhost-user-blk-test.c creates the listen fds
> > and does not close them? This means the host network stack doesn't
> > consider the socket closed after QEMU terminates and therefore the test
> > process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> > to close the fds after spawning qemu-storage-daemon.
> > 
> 
> When the test hangs both QEMU and storage-daemon are still up and
> connected to the socket and waiting for messages from each other. I don't
> see how we would close the FD in this state or how it would help.

Yes, I see. In that case the theory about fds doesn't apply.

> We may want to think about implementing some kind of timeout for initial
> vhost-user messages so that we fail instead of hang in cases like these,
> as I proposed in [1]. What do you think?

Let's hold off on workarounds until the root cause has been found.

Do you have time to debug why vu_accept() and vu_message_read() don't
see the pending VHOST_USER_GET_PROTOCOL_FEATURES message?

Thanks,
Stefan


* Re: [PATCH v5] Prevent vhost-user-blk-test hang
  2021-09-30  9:48     ` Stefan Hajnoczi
@ 2021-10-01 20:59       ` Raphael Norwitz
  2021-10-05 14:31       ` Michael S. Tsirkin
  1 sibling, 0 replies; 10+ messages in thread
From: Raphael Norwitz @ 2021-10-01 20:59 UTC
  To: Stefan Hajnoczi
  Cc: peter.maydell, mst, qemu-devel, Raphael Norwitz, pbonzini,
	mreitz, eblake, sgarzare

On Thu, Sep 30, 2021 at 10:48:09AM +0100, Stefan Hajnoczi wrote:
> On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> > On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > > In the vhost-user-blk-test, as of now there is nothing stopping
> > > > vhost-user-blk in QEMU from writing to the socket right after forking off
> > > > the storage daemon, before it has a chance to come up properly, leaving the
> > > > test hanging forever. This intermittently hanging test has caused QEMU
> > > > automation failures reported multiple times on the mailing list [1].
> > > > 
> > > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > > that it is fully initialized and ready to handle client connections by
> > > > creating a pidfile on initialization. This ensures that the storage-daemon
> > > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > > 
> > > > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> > > 
> > 
> > Hey Stefan,
> > 
> > > Hi Raphael,
> > > I would like to understand the issue that is being worked around in the
> > > patch.
> > > 
> > > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > > documentation even contains example code for this
> > > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > > listen fd passing is fundamentally broken.
> > > 
> > 
> > The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> > proceed to use the socket before the storage-daemon has a chance to
> > properly start up and monitor it. This is nothing unique to the
> > storage-daemon - I've seen races like this happen with different
> > vhost-user backends before.
> > 
> > Yes - I do think the docs can be improved to explicitly state that the
> > storage-daemon must be allowed to properly initialize before any data is
> > sent over the socket. Maybe we should even prescribe the use of the pidfile
> > option?
> > 
> > > Can you share more details about the problem?
> > > 
> > 
> > Did you see my analysis [1]?
> > 
> > [1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
> > 
> > Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> > socket and the storage daemon never receives it. Looking at the
> > QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> > the storage-daemon never receives any message to begin with. AFAICT
> > there is nothing stopping QEMU from running first and sending a message
> > before vhost-user-blk comes up, and from testing we can see that waiting
> > for the storage-daemon to come up resolves the problem completely.
> 
> The root cause has not been determined yet. QEMU should accept the
> incoming connection and then read the previously-sent
> VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
> Sockets API level why the message should get lost, so there is probably
> a QEMU bug here.
> 
> > > Does "writing to the socket" mean writing vhost-user protocol messages
> > > or does it mean connect(2)?
> > > 
> > 
> > Yes - it means writing vhost-user messages. We see a message sent from
> > QEMU to the backend.
> > 
> > Note that in qtest_socket_server() (called from create_listen_socket())
> > we have already called listen() on the socket, so I would expect QEMU
> > calling connect(2) to succeed and proceed to successfully send messages
> > whether or not there is another listener. I even tried commenting out the
> > execlp for the storage-daemon and I saw the same behavior from QEMU - it
> > sends the message and hangs indefinitely.
> 
> QEMU is correct in waiting for a vhost-user reply. The question is why
> qemu-storage-daemon's vhost-user-block export isn't processing the
> request and replying to it?
> 
> > > Could the problem be that vhost-user-blk-test.c creates the listen fds
> > > and does not close them? This means the host network stack doesn't
> > > consider the socket closed after QEMU terminates and therefore the test
> > > process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> > > to close the fds after spawning qemu-storage-daemon.
> > > 
> > 
> > When the test hangs both QEMU and storage-daemon are still up and
> > connected to the socket and waiting for messages from each other. I don't
> > see how we would close the FD in this state or how it would help.
> 
> Yes, I see. In that case the theory about fds doesn't apply.
> 
> > We may want to think about implementing some kind of timeout for initial
> > vhost-user messages so that we fail instead of hang in cases like these,
> > as I proposed in [1]. What do you think?
> 
> Let's hold off on workarounds until the root cause has been found.
> 
> Do you have time to debug why vu_accept() and vu_message_read() don't
> see the pending VHOST_USER_GET_PROTOCOL_FEATURES message?

Sure I'll spend some more time looking into it.

> 
> Thanks,
> Stefan



* Re: [PATCH v5] Prevent vhost-user-blk-test hang
  2021-09-30  9:48     ` Stefan Hajnoczi
  2021-10-01 20:59       ` Raphael Norwitz
@ 2021-10-05 14:31       ` Michael S. Tsirkin
  2021-10-13  6:25         ` Raphael Norwitz
  1 sibling, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2021-10-05 14:31 UTC
  To: Stefan Hajnoczi
  Cc: peter.maydell, qemu-devel, mreitz, pbonzini, Raphael Norwitz,
	eblake, sgarzare

On Thu, Sep 30, 2021 at 10:48:09AM +0100, Stefan Hajnoczi wrote:
> On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> > On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > > In the vhost-user-blk-test, as of now there is nothing stopping
> > > > vhost-user-blk in QEMU from writing to the socket right after forking off
> > > > the storage daemon, before it has a chance to come up properly, leaving the
> > > > test hanging forever. This intermittently hanging test has caused QEMU
> > > > automation failures reported multiple times on the mailing list [1].
> > > > 
> > > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > > that it is fully initialized and ready to handle client connections by
> > > > creating a pidfile on initialization. This ensures that the storage-daemon
> > > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > > 
> > > > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> > > 
> > 
> > Hey Stefan,
> > 
> > > Hi Raphael,
> > > I would like to understand the issue that is being worked around in the
> > > patch.
> > > 
> > > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > > documentation even contains example code for this
> > > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > > listen fd passing is fundamentally broken.
> > > 
> > 
> > The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> > proceed to use the socket before the storage-daemon has a chance to
> > properly start up and monitor it. This is nothing unique to the
> > storage-daemon - I've seen races like this happen with different
> > vhost-user backends before.
> > 
> > Yes - I do think the docs can be improved to explicitly state that the
> > storage-daemon must be allowed to properly initialize before any data is
> > sent over the socket. Maybe we should even prescribe the use of the pidfile
> > option?
> > 
> > > Can you share more details about the problem?
> > > 
> > 
> > Did you see my analysis [1]?
> > 
> > [1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
> > 
> > Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> > socket and the storage daemon never receives it. Looking at the
> > QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> > the storage-daemon never receives any message to begin with. AFAICT
> > there is nothing stopping QEMU from running first and sending a message
> > before vhost-user-blk comes up, and from testing we can see that waiting
> > for the storage-daemon to come up resolves the problem completely.
> 
> The root cause has not been determined yet. QEMU should accept the
> incoming connection and then read the previously-sent
> VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
> Sockets API level why the message should get lost, so there is probably
> a QEMU bug here.

Right. However the test does randomly hang for people and it's
not really of interest to anyone. I think we should apply the
work-around but yes we should keep working on the root cause, too.


> > > Does "writing to the socket" mean writing vhost-user protocol messages
> > > or does it mean connect(2)?
> > > 
> > 
> > Yes - it means writing vhost-user messages. We see a message sent from
> > QEMU to the backend.
> > 
> > Note that in qtest_socket_server() (called from create_listen_socket())
> > we have already called listen() on the socket, so I would expect QEMU
> > calling connect(2) to succeed and proceed to successfully send messages
> > whether or not there is another listener. I even tried commenting out the
> > execlp for the storage-daemon and I saw the same behavior from QEMU - it
> > sends the message and hangs indefinitely.
> 
> QEMU is correct in waiting for a vhost-user reply. The question is why
> qemu-storage-daemon's vhost-user-block export isn't processing the
> request and replying to it?
> 
> > > Could the problem be that vhost-user-blk-test.c creates the listen fds
> > > and does not close them? This means the host network stack doesn't
> > > consider the socket closed after QEMU terminates and therefore the test
> > > process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> > > to close the fds after spawning qemu-storage-daemon.
> > > 
> > 
> > When the test hangs both QEMU and storage-daemon are still up and
> > connected to the socket and waiting for messages from each other. I don't
> > see how we would close the FD in this state or how it would help.
> 
> Yes, I see. In that case the theory about fds doesn't apply.
> 
> > We may want to think about implementing some kind of timeout for initial
> > vhost-user messages so that we fail instead of hang in cases like these,
> > as I proposed in [1]. What do you think?
> 
> Let's hold off on workarounds until the root cause has been found.
> 
> Do you have time to debug why vu_accept() and vu_message_read() don't
> see the pending VHOST_USER_GET_PROTOCOL_FEATURES message?
> 
> Thanks,
> Stefan




* Re: [PATCH v5] Prevent vhost-user-blk-test hang
  2021-10-05 14:31       ` Michael S. Tsirkin
@ 2021-10-13  6:25         ` Raphael Norwitz
  2021-10-13  9:02           ` Michael S. Tsirkin
  2021-10-13  9:20           ` Stefan Hajnoczi
  0 siblings, 2 replies; 10+ messages in thread
From: Raphael Norwitz @ 2021-10-13  6:25 UTC
  To: Michael S. Tsirkin
  Cc: peter.maydell, qemu-devel, Raphael Norwitz, Stefan Hajnoczi,
	pbonzini, mreitz, eblake, sgarzare

On Tue, Oct 05, 2021 at 10:31:06AM -0400, Michael S. Tsirkin wrote:
> On Thu, Sep 30, 2021 at 10:48:09AM +0100, Stefan Hajnoczi wrote:
> > On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> > > On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > > > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > > > In the vhost-user-blk-test, as of now there is nothing stopping
> > > > > vhost-user-blk in QEMU from writing to the socket right after forking off
> > > > > the storage daemon, before it has a chance to come up properly, leaving the
> > > > > test hanging forever. This intermittently hanging test has caused QEMU
> > > > > automation failures reported multiple times on the mailing list [1].
> > > > > 
> > > > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > > > that it is fully initialized and ready to handle client connections by
> > > > > creating a pidfile on initialization. This ensures that the storage-daemon
> > > > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > > > 
> > > > > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> > > > 
> > > 
> > > Hey Stefan,
> > > 
> > > > Hi Raphael,
> > > > I would like to understand the issue that is being worked around in the
> > > > patch.
> > > > 
> > > > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > > > documentation even contains example code for this
> > > > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > > > listen fd passing is fundamentally broken.
> > > > 
> > > 
> > > The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> > > proceed to use the socket before the storage-daemon has a chance to
> > > properly start up and monitor it. This is nothing unique to the
> > > storage-daemon - I've seen races like this happen with different
> > > vhost-user backends before.
> > > 
> > > Yes - I do think the docs can be improved to explicitly state that the
> > > storage-daemon must be allowed to properly initialize before any data is
> > > sent over the socket. Maybe we should even prescribe the use of the pidfile
> > > option?
> > > 
> > > > Can you share more details about the problem?
> > > > 
> > > 
> > > Did you see my analysis [1]?
> > > 
> > > [1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
> > > 
> > > Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> > > socket and the storage daemon never receives it. Looking at the
> > > QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> > > the storage-daemon never receives any message to begin with. AFAICT
> > > there is nothing stopping QEMU from running first and sending a message
> > > before vhost-user-blk comes up, and from testing we can see that waiting
> > > for the storage-daemon to come up resolves the problem completely.
> > 
> > The root cause has not been determined yet. QEMU should accept the
> > incoming connection and then read the previously-sent
> > VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
> > Sockets API level why the message should get lost, so there is probably
> > a QEMU bug here.
> 
> Right. However the test does randomly hang for people and it's
> not really of interest to anyone. I think we should apply the
> work-around but yes we should keep working on the root cause, too.
>

From my end I have spent some more time looking at it but have not made
much progress. I was hopeful that David Hildenbrand’s libvhost-user bug
fixes may have resolved it, but I tested and even with his patches I
still see the hang.

I am determined to get to the bottom of it, but I’m not sure how long it
will take. If this is impacting people then I agree with merging the
patch as a workaround.

From my end, I will send a v6 updating the commit message and adding
comments to make it clear that the patch is a workaround and the root
cause has not been determined yet. Sound good?

>
> > > > Does "writing to the socket" mean writing vhost-user protocol messages
> > > > or does it mean connect(2)?
> > > > 
> > > 
> > > Yes - it means writing vhost-user messages. We see a message sent from
> > > QEMU to the backend.
> > > 
> > > Note that in qtest_socket_server() (called from create_listen_socket())
> > > we have already called listen() on the socket, so I would expect QEMU
> > > calling connect(2) to succeed and proceed to successfully send messages
> > > whether or not there is another listener. I even tried commenting out the
> > > execlp for the storage-daemon and I saw the same behavior from QEMU - it
> > > sends the message and hangs indefinitely.
> > 
> > QEMU is correct in waiting for a vhost-user reply. The question is why
> > qemu-storage-daemon's vhost-user-block export isn't processing the
> > request and replying to it?
> > 
> > > > Could the problem be that vhost-user-blk-test.c creates the listen fds
> > > > and does not close them? This means the host network stack doesn't
> > > > consider the socket closed after QEMU terminates and therefore the test
> > > > process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> > > > to close the fds after spawning qemu-storage-daemon.
> > > > 
> > > 
> > > When the test hangs both QEMU and storage-daemon are still up and
> > > connected to the socket and waiting for messages from each other. I don't
> > > see how we would close the FD in this state or how it would help.
> > 
> > Yes, I see. In that case the theory about fds doesn't apply.
> > 
> > > We may want to think about implementing some kind of timeout for initial
> > > vhost-user messages so that we fail instead of hang in cases like these,
> > > as I proposed in [1]. What do you think?
> > 
> > Let's hold off on workarounds until the root cause has been found.
> > 
> > Do you have time to debug why vu_accept() and vu_message_read() don't
> > see the pending VHOST_USER_GET_PROTOCOL_FEATURES message?
> > 
> > Thanks,
> > Stefan
> 
> 

* Re: [PATCH v5] Prevent vhost-user-blk-test hang
  2021-10-13  6:25         ` Raphael Norwitz
@ 2021-10-13  9:02           ` Michael S. Tsirkin
  2021-10-13  9:20           ` Stefan Hajnoczi
  1 sibling, 0 replies; 10+ messages in thread
From: Michael S. Tsirkin @ 2021-10-13  9:02 UTC
  To: Raphael Norwitz
  Cc: peter.maydell, qemu-devel, mreitz, Stefan Hajnoczi, pbonzini,
	eblake, sgarzare

On Wed, Oct 13, 2021 at 06:25:33AM +0000, Raphael Norwitz wrote:
> On Tue, Oct 05, 2021 at 10:31:06AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Sep 30, 2021 at 10:48:09AM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> > > > On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > > > > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > > > > In the vhost-user-blk-test, as of now there is nothing stopping
> > > > > > vhost-user-blk in QEMU from writing to the socket right after forking off
> > > > > > the storage daemon, before it has a chance to come up properly, leaving the
> > > > > > test hanging forever. This intermittently hanging test has caused QEMU
> > > > > > automation failures reported multiple times on the mailing list [1].
> > > > > > 
> > > > > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > > > > that it is fully initialized and ready to handle client connections by
> > > > > > creating a pidfile on initialization. This ensures that the storage-daemon
> > > > > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > > > > 
> > > > > > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> > > > > 
> > > > 
> > > > Hey Stefan,
> > > > 
> > > > > Hi Raphael,
> > > > > I would like to understand the issue that is being worked around in the
> > > > > patch.
> > > > > 
> > > > > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > > > > documentation even contains example code for this
> > > > > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > > > > listen fd passing is fundamentally broken.
> > > > > 
> > > > 
> > > > The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> > > > proceed to use the socket before the storage-daemon has a chance to
> > > > properly start up and monitor it. This is nothing unique to the
> > > > storage-daemon - I've seen races like this happen with different
> > > > vhost-user backends before.
> > > > 
> > > > Yes - I do think the docs can be improved to explicitly state that the
> > > > storage-daemon must be allowed to properly initialize before any data is
> > > > sent over the socket. Maybe we should even prescribe the use of the pidfile
> > > > option?
> > > > 
> > > > > Can you share more details about the problem?
> > > > > 
> > > > 
> > > > Did you see my analysis [1]?
> > > > 
> > > > [1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
> > > > 
> > > > Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> > > > socket and the storage daemon never receives it. Looking at the
> > > > QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> > > > the storage-daemon never receives any message to begin with. AFAICT
> > > > there is nothing stopping QEMU from running first and sending a message
> > > > before vhost-user-blk comes up, and from testing we can see that waiting
> > > > for the storage-daemon to come up resolves the problem completely.
> > > 
> > > The root cause has not been determined yet. QEMU should accept the
> > > incoming connection and then read the previously-sent
> > > VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
> > > Sockets API level why the message should get lost, so there is probably
> > > a QEMU bug here.
> > 
> > Right. However the test does randomly hang for people and it's
> > not really of interest to anyone. I think we should apply the
> > work-around but yes we should keep working on the root cause, too.
> >
> 
> From my end I have spent some more time looking at it but have not made
> much progress. I was hopeful that David Hildenbrand’s libvhost-user bug
> fixes may have resolved it, but I tested and even with his patches I
> still see the hang.
> 
> I am determined to get to the bottom of it, but I’m not sure how long it
> will take. If this is impacting people then I agree with merging the
> patch as a workaround.
> 
> From my end, I will send a v6 updating the commit message and adding
> comments to make it clear that the patch is a workaround and the root
> cause has not been determined yet. Sound good?


Makes sense for now.

> >
> > > > > Does "writing to the socket" mean writing vhost-user protocol messages
> > > > > or does it mean connect(2)?
> > > > > 
> > > > 
> > > > Yes - it means writing vhost-user messages. We see a message sent from
> > > > QEMU to the backend.
> > > > 
> > > > Note that in qtest_socket_server() (called from create_listen_socket())
> > > > we have already called listen() on the socket, so I would expect QEMU
> > > > calling connect(2) to succeed and proceed to successfully send messages
> > > > whether or not there is another listener. I even tried commenting out the
> > > > execlp for the storage-daemon and I saw the same behavior from QEMU - it
> > > > sends the message and hangs indefinitely.
> > > 
> > > QEMU is correct in waiting for a vhost-user reply. The question is why
> > > qemu-storage-daemon's vhost-user-block export isn't processing the
> > > request and replying to it?
> > > 
> > > > > Could the problem be that vhost-user-blk-test.c creates the listen fds
> > > > > and does not close them? This means the host network stack doesn't
> > > > > consider the socket closed after QEMU terminates and therefore the test
> > > > > process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> > > > > to close the fds after spawning qemu-storage-daemon.
> > > > > 
> > > > 
> > > > When the test hangs both QEMU and storage-daemon are still up and
> > > > connected to the socket and waiting for messages from each other. I don't
> > > > see how we would close the FD in this state or how it would help.
> > > 
> > > Yes, I see. In that case the theory about fds doesn't apply.
> > > 
> > > > We may want to think about implementing some kind of timeout for initial
> > > > vhost-user messages so that we fail instead of hang in cases like these,
> > > > as I proposed in [1]. What do you think?
> > > 
> > > Let's hold off on workarounds until the root cause has been found.
> > > 
> > > Do you have time to debug why vu_accept() and vu_message_read() don't
> > > see the pending VHOST_USER_GET_PROTOCOL_FEATURES message?
> > > 
> > > Thanks,
> > > Stefan
> > 
> > 



* Re: [PATCH v5] Prevent vhost-user-blk-test hang
  2021-10-13  6:25         ` Raphael Norwitz
  2021-10-13  9:02           ` Michael S. Tsirkin
@ 2021-10-13  9:20           ` Stefan Hajnoczi
  1 sibling, 0 replies; 10+ messages in thread
From: Stefan Hajnoczi @ 2021-10-13  9:20 UTC
  To: Raphael Norwitz
  Cc: peter.maydell, Michael S. Tsirkin, qemu-devel, mreitz, pbonzini,
	eblake, sgarzare

[-- Attachment #1: Type: text/plain, Size: 5010 bytes --]

On Wed, Oct 13, 2021 at 06:25:33AM +0000, Raphael Norwitz wrote:
> On Tue, Oct 05, 2021 at 10:31:06AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Sep 30, 2021 at 10:48:09AM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> > > > On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > > > > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > > > > In the vhost-user-blk-test, as of now there is nothing stopping
> > > > > > vhost-user-blk in QEMU from writing to the socket right after forking off
> > > > > > the storage daemon, before it has a chance to come up properly, leaving the
> > > > > > test hanging forever. This intermittently hanging test has caused QEMU
> > > > > > automation failures reported multiple times on the mailing list [1].
> > > > > > 
> > > > > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > > > > that it is fully initialized and ready to handle client connections by
> > > > > > creating a pidfile on initialization. This ensures that the storage-daemon
> > > > > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > > > > 
> > > > > > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> > > > > 
> > > > 
> > > > Hey Stefan,
> > > > 
> > > > > Hi Raphael,
> > > > > I would like to understand the issue that is being worked around in the
> > > > > patch.
> > > > > 
> > > > > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > > > > documentation even contains example code for this
> > > > > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > > > > listen fd passing is fundamentally broken.
> > > > > 
> > > > 
> > > > The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> > > > proceed to use the socket before the storage-daemon has a chance to
> > > > properly start up and monitor it. This is nothing unique to the
> > > > storage-daemon - I've seen races like this happen with different
> > > > vhost-user backends before.
> > > > 
> > > > Yes - I do think the docs can be improved to explicitly state that the
> > > > storage-daemon must be allowed to properly initialize before any data is
> > > > sent over the socket. Maybe we should even prescribe the use of the pidfile
> > > > option?
> > > > 
> > > > > Can you share more details about the problem?
> > > > > 
> > > > 
> > > > Did you see my analysis [1]?
> > > > 
> > > > [1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
> > > > 
> > > > Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> > > > socket and the storage daemon never receives it. Looking at the
> > > > QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> > > > the storage-daemon never receives any message to begin with. AFAICT
> > > > there is nothing stopping QEMU from running first and sending a message
> > > > before vhost-user-blk comes up, and from testing we can see that waiting
> > > > for the storage-daemon to come up resolves the problem completely.
> > > 
> > > The root cause has not been determined yet. QEMU should accept the
> > > incoming connection and then read the previously-sent
> > > VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
> > > Sockets API level why the message should get lost, so there is probably
> > > a QEMU bug here.
> > 
> > Right. However the test does randomly hang for people and it's
> > not really of interest to anyone. I think we should apply the
> > work-around but yes we should keep working on the root cause, too.
> >
> 
> From my end I have spent some more time looking at it but have not made
> much progress. I was hopeful that David Hildenbrand’s libvhost-user bug
> fixes may have resolved it, but I tested and even with his patches I
> still see the hang.
> 
> I am determined to get to the bottom of it, but I’m not sure how long it
> will take. If this is impacting people then I agree with merging the
> patch as a workaround.
> 
> From my end, I will send a v6 updating the commit message and adding
> comments to make it clear that the patch is a workaround and the root
> cause has not been determined yet. Sound good?

Michael is eager to merge a fix so the tests stop hanging, so that
sounds good.

If I have time I'll take a look too and share anything I find.

Stefan


* Re: [PATCH v5] Prevent vhost-user-blk-test hang
  2021-09-27 17:17 [PATCH v5] Prevent vhost-user-blk-test hang Raphael Norwitz
  2021-09-28  8:55 ` Stefan Hajnoczi
@ 2021-10-19 10:15 ` Stefan Hajnoczi
  1 sibling, 0 replies; 10+ messages in thread
From: Stefan Hajnoczi @ 2021-10-19 10:15 UTC
  To: Raphael Norwitz
  Cc: peter.maydell, mst, qemu-devel, mreitz, pbonzini, eblake, sgarzare

On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> In the vhost-user-blk-test, as of now there is nothing stopping
> vhost-user-blk in QEMU from writing to the socket right after forking off
> the storage daemon, before it has a chance to come up properly, leaving the
> test hanging forever. This intermittently hanging test has caused QEMU
> automation failures reported multiple times on the mailing list [1].
> 
> This change makes the storage-daemon notify the vhost-user-blk-test
> that it is fully initialized and ready to handle client connections by
> creating a pidfile on initialization. This ensures that the storage-daemon
> backend won't miss vhost-user messages and thereby resolves the hang.
> 
> [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/

The following hack triggers the hang. The stack traces match what Peter
posted in the thread linked above. I'll take a closer look.

---

diff --git a/storage-daemon/qemu-storage-daemon.c b/storage-daemon/qemu-storage-daemon.c
index 10a1a33761..50c9caa1b2 100644
--- a/storage-daemon/qemu-storage-daemon.c
+++ b/storage-daemon/qemu-storage-daemon.c
@@ -316,6 +316,8 @@ int main(int argc, char *argv[])
     signal(SIGPIPE, SIG_IGN);
 #endif

+    sleep(5);
+
     error_init(argv[0]);
     qemu_init_exec_dir(argv[0]);
     os_setup_signal_handling();
