From: Raphael Norwitz <raphael.norwitz@nutanix.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: "peter.maydell@linaro.org" <peter.maydell@linaro.org>,
	"mst@redhat.com" <mst@redhat.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Raphael Norwitz <raphael.norwitz@nutanix.com>,
	"pbonzini@redhat.com" <pbonzini@redhat.com>,
	"mreitz@redhat.com" <mreitz@redhat.com>,
	"eblake@redhat.com" <eblake@redhat.com>,
	"sgarzare@redhat.com" <sgarzare@redhat.com>
Subject: Re: [PATCH v5] Prevent vhost-user-blk-test hang
Date: Thu, 30 Sep 2021 05:29:06 +0000	[thread overview]
Message-ID: <20210930052900.GA15735@raphael-debian-dev> (raw)
In-Reply-To: <YVLYZCZ0NlhhrStG@stefanha-x1.localdomain>

On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > In the vhost-user-blk-test, as of now there is nothing stopping
> > vhost-user-blk in QEMU from writing to the socket right after forking off
> > the storage daemon, before the daemon has a chance to come up properly,
> > leaving the test hanging forever. This intermittently hanging test has
> > caused QEMU automation failures reported multiple times on the mailing
> > list [1].
> > automation failures reported multiple times on the mailing list [1].
> > 
> > This change makes the storage-daemon notify the vhost-user-blk-test
> > that it is fully initialized and ready to handle client connections by
> > creating a pidfile on initialization. This ensures that the storage-daemon
> > backend won't miss vhost-user messages and thereby resolves the hang.
> > 
> > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> 

Hey Stefan,

> Hi Raphael,
> I would like to understand the issue that is being worked around in the
> patch.
> 
> QEMU should be okay with listen fd passing. The qemu-storage-daemon
> documentation even contains example code for this
> (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> listen fd passing is fundamentally broken.
> 

The issue is that the "client" (in this case vhost-user-blk in QEMU) can
proceed to use the socket before the storage-daemon has a chance to
properly start up and monitor it. This is nothing unique to the
storage-daemon - I've seen races like this happen with different
vhost-user backends before.

Yes - I do think the docs can be improved to explicitly state that the
storage-daemon must be allowed to properly initialize before any data is
sent over the socket. Maybe we should even prescribe the use of the pidfile
option?

> Can you share more details about the problem?
> 

Did you see my analysis [1]?

[1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/

Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
socket and the storage daemon never receives it. Looking at the
QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
the storage-daemon never receives any message to begin with. AFAICT
there is nothing stopping QEMU from running first and sending a message
before vhost-user-blk comes up, and from testing we can see that waiting
for the storage-daemon to come up resolves the problem completely.

> Does "writing to the socket" mean writing vhost-user protocol messages
> or does it mean connect(2)?
> 

Yes - it means writing vhost-user messages. We see a message sent from
QEMU to the backend.

Note that in qtest_socket_server() (called from create_listen_socket())
we have already called listen() on the socket, so I would expect QEMU
calling connect(2) to succeed and proceed to successfully send messages
whether or not there is another listener. I even tried commenting out the
execlp for the storage-daemon and I saw the same behavior from QEMU - it
sends the message and hangs indefinitely.

> Could the problem be that vhost-user-blk-test.c creates the listen fds
> and does not close them? This means the host network stack doesn't
> consider the socket closed after QEMU terminates and therefore the test
> process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> to close the fds after spawning qemu-storage-daemon.
> 

When the test hangs both QEMU and storage-daemon are still up and
connected to the socket and waiting for messages from each other. I don't
see how we would close the FD in this state or how it would help.

We may want to think about implementing some kind of timeout for initial
vhost-user messages so that we fail instead of hang in cases like these,
as I proposed in [1]. What do you think?

> Stefan
> 
> > 
> > Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
> > Reviewed-by: Eric Blake <eblake@redhat.com>
> > ---
> >  tests/qtest/vhost-user-blk-test.c | 26 +++++++++++++++++++++++++-
> >  1 file changed, 25 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tests/qtest/vhost-user-blk-test.c b/tests/qtest/vhost-user-blk-test.c
> > index 6f108a1b62..5fed262da1 100644
> > --- a/tests/qtest/vhost-user-blk-test.c
> > +++ b/tests/qtest/vhost-user-blk-test.c
> > @@ -24,6 +24,7 @@
> >  #define TEST_IMAGE_SIZE         (64 * 1024 * 1024)
> >  #define QVIRTIO_BLK_TIMEOUT_US  (30 * 1000 * 1000)
> >  #define PCI_SLOT_HP             0x06
> > +#define PIDFILE_RETRIES         5
> >  
> >  typedef struct {
> >      pid_t pid;
> > @@ -885,7 +886,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> >                                   int num_queues)
> >  {
> >      const char *vhost_user_blk_bin = qtest_qemu_storage_daemon_binary();
> > -    int i;
> > +    int i, retries;
> > +    char *daemon_pidfile_path;
> >      gchar *img_path;
> >      GString *storage_daemon_command = g_string_new(NULL);
> >      QemuStorageDaemonState *qsd;
> > @@ -898,6 +900,8 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> >              " -object memory-backend-memfd,id=mem,size=256M,share=on "
> >              " -M memory-backend=mem -m 256M ");
> >  
> > +    daemon_pidfile_path = g_strdup_printf("/tmp/daemon-%d", getpid());
> > +
> >      for (i = 0; i < vus_instances; i++) {
> >          int fd;
> >          char *sock_path = create_listen_socket(&fd);
> > @@ -914,6 +918,9 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> >                                 i + 1, sock_path);
> >      }
> >  
> > +    g_string_append_printf(storage_daemon_command, "--pidfile %s ",
> > +                           daemon_pidfile_path);
> > +
> >      g_test_message("starting vhost-user backend: %s",
> >                     storage_daemon_command->str);
> >      pid_t pid = fork();
> > @@ -930,7 +937,24 @@ static void start_vhost_user_blk(GString *cmd_line, int vus_instances,
> >          execlp("/bin/sh", "sh", "-c", storage_daemon_command->str, NULL);
> >          exit(1);
> >      }
> > +
> > +    /*
> > +     * Ensure the storage-daemon has come up properly before allowing the
> > +     * test to proceed.
> > +     */
> > +    retries = 0;
> > +    while (access(daemon_pidfile_path, F_OK) != 0) {
> > +        g_assert_cmpint(retries, <, PIDFILE_RETRIES);
> > +
> > +        retries++;
> > +        g_usleep(1000);
> > +    }
> > +
> >      g_string_free(storage_daemon_command, true);
> > +    if (access(daemon_pidfile_path, F_OK) == 0) {
> > +        unlink(daemon_pidfile_path);
> > +    }
> > +    g_free(daemon_pidfile_path);
> >  
> >      qsd = g_new(QemuStorageDaemonState, 1);
> >      qsd->pid = pid;
> > -- 
> > 2.20.1
> > 



Thread overview: 10+ messages
2021-09-27 17:17 [PATCH v5] Prevent vhost-user-blk-test hang Raphael Norwitz
2021-09-28  8:55 ` Stefan Hajnoczi
2021-09-30  5:29   ` Raphael Norwitz [this message]
2021-09-30  9:48     ` Stefan Hajnoczi
2021-10-01 20:59       ` Raphael Norwitz
2021-10-05 14:31       ` Michael S. Tsirkin
2021-10-13  6:25         ` Raphael Norwitz
2021-10-13  9:02           ` Michael S. Tsirkin
2021-10-13  9:20           ` Stefan Hajnoczi
2021-10-19 10:15 ` Stefan Hajnoczi
