From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: "peter.maydell@linaro.org" <peter.maydell@linaro.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"mreitz@redhat.com" <mreitz@redhat.com>,
	"pbonzini@redhat.com" <pbonzini@redhat.com>,
	Raphael Norwitz <raphael.norwitz@nutanix.com>,
	"eblake@redhat.com" <eblake@redhat.com>,
	"sgarzare@redhat.com" <sgarzare@redhat.com>
Subject: Re: [PATCH v5] Prevent vhost-user-blk-test hang
Date: Tue, 5 Oct 2021 10:31:06 -0400
Message-ID: <20211005102944-mutt-send-email-mst@kernel.org>
In-Reply-To: <YVWH2YUDLzaRxHYW@stefanha-x1.localdomain>

On Thu, Sep 30, 2021 at 10:48:09AM +0100, Stefan Hajnoczi wrote:
> On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> > On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > > In the vhost-user-blk-test, as of now there is nothing stopping
> > > > vhost-user-blk in QEMU from writing to the socket right after forking
> > > > off the storage daemon, before it has a chance to come up properly,
> > > > leaving the test hanging forever. This intermittent hang has caused
> > > > QEMU automation failures reported multiple times on the mailing list [1].
> > > > 
> > > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > > that it is fully initialized and ready to handle client connections by
> > > > creating a pidfile on initialization. This ensures that the storage-daemon
> > > > backend won't miss vhost-user messages and thereby resolves the hang.
> > > > 
> > > > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
> > > 
> > 
> > Hey Stefan,
> > 
> > > Hi Raphael,
> > > I would like to understand the issue that is being worked around in the
> > > patch.
> > > 
> > > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > > documentation even contains example code for this
> > > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated if
> > > listen fd passing is fundamentally broken.
> > > 
> > 
> > The issue is that the "client" (in this case vhost-user-blk in QEMU) can
> > proceed to use the socket before the storage-daemon has a chance to
> > properly start up and monitor it. This is nothing unique to the
> > storage-daemon - I've seen races like this happen with different
> > vhost-user backends before.
> > 
> > Yes - I do think the docs can be improved to explicitly state that the
> > storage-daemon must be allowed to properly initialize before any data is
> > sent over the socket. Maybe we should even prescribe the use of the pidfile
> > option?
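> > 
> > Something along these lines on the test side would make the ordering
> > explicit (a hypothetical sketch, not the actual patch code, assuming
> > the daemon writes its pidfile only after its exports are set up):
> > 
> >     #include <unistd.h>
> > 
> >     /* Block until the storage-daemon signals readiness by creating
> >      * its pidfile; only then let QEMU connect and send messages. */
> >     static void wait_for_pidfile(const char *path)
> >     {
> >         while (access(path, F_OK) != 0) {
> >             usleep(1000); /* daemon still initializing */
> >         }
> >     }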
> > 
> > > Can you share more details about the problem?
> > > 
> > 
> > Did you see my analysis [1]?
> > 
> > [1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
> > 
> > Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> > socket and the storage daemon never receives it. Looking at the
> > QEMU state we see it is stuck waiting for a vhost-user response. Meanwhile
> > the storage-daemon never receives any message to begin with. AFAICT
> > there is nothing stopping QEMU from running first and sending a message
> > before the storage-daemon's vhost-user-blk export comes up, and from
> > testing we can see that waiting for the storage-daemon to come up
> > resolves the problem completely.
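> > 
> > For reference, the message that goes missing is just a 12-byte
> > vhost-user header with an empty payload (layout per
> > docs/interop/vhost-user.rst; the reply carries the feature bits in a
> > u64 payload):
> > 
> >     #include <stdint.h>
> > 
> >     /* vhost-user wire header, as sent by QEMU over the Unix socket */
> >     typedef struct VhostUserHeader {
> >         uint32_t request; /* VHOST_USER_GET_PROTOCOL_FEATURES == 15 */
> >         uint32_t flags;   /* bits 0-1: version; bit 2: reply marker */
> >         uint32_t size;    /* payload size in bytes, 0 here */
> >     } VhostUserHeader;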
> 
> The root cause has not been determined yet. QEMU should accept the
> incoming connection and then read the previously-sent
> VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
> Sockets API level why the message should get lost, so there is probably
> a QEMU bug here.
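> 
> That guarantee is worth spelling out: bytes written by a connected
> client are queued by the kernel even if the server has not called
> accept(2) yet, and become readable once it does. A minimal
> illustration (error handling omitted):
> 
>     /* lfd: a socket on which listen(2) has already been called;
>      * addr: the sockaddr_un it is bound to */
>     int c = socket(AF_UNIX, SOCK_STREAM, 0);
>     connect(c, (struct sockaddr *)&addr, sizeof(addr)); /* via backlog */
>     write(c, "msg", 3);                  /* queued by the kernel */
> 
>     /* ... arbitrarily later, in the server process ... */
>     char buf[3];
>     int s = accept(lfd, NULL, NULL);
>     read(s, buf, sizeof(buf));           /* returns the queued "msg" */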

Right. However, the test does randomly hang for people, and the hang
itself is of no interest to anyone. I think we should apply the
work-around, but yes, we should keep working on the root cause too.


> > > Does "writing to the socket" mean writing vhost-user protocol messages
> > > or does it mean connect(2)?
> > > 
> > 
> > Yes - it means writing vhost-user messages. We see a message sent from
> > QEMU to the backend.
> > 
> > Note that in qtest_socket_server() (called from create_listen_socket())
> > we have already called listen() on the socket, so I would expect QEMU
> > calling connect(2) to succeed and proceed to successfully send messages
> > whether or not there is another listener. I even tried commenting out the
> > execlp for the storage-daemon and I saw the same behavior from QEMU - it
> > sends the message and hangs indefinitely.
> 
> QEMU is correct in waiting for a vhost-user reply. The question is why
> qemu-storage-daemon's vhost-user-blk export isn't processing the
> request and replying to it.
> 
> > > Could the problem be that vhost-user-blk-test.c creates the listen fds
> > > and does not close them? This means the host network stack doesn't
> > > consider the socket closed after QEMU terminates and therefore the test
> > > process hangs after QEMU is gone? In that case vhost-user-blk-test needs
> > > to close the fds after spawning qemu-storage-daemon.
> > > 
> > 
> > When the test hangs, both QEMU and the storage-daemon are still up,
> > connected to the socket, and waiting for messages from each other. I don't
> > see how we would close the FD in this state or how doing so would help.
> 
> Yes, I see. In that case the theory about fds doesn't apply.
> 
> > We may want to think about implementing some kind of timeout for the
> > initial vhost-user messages so that we fail instead of hanging in cases
> > like these, as I proposed in [1]. What do you think?
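> > 
> > The shape of the idea, with illustrative names rather than actual
> > QEMU code - bound the initial read with poll(2) so the test fails
> > loudly instead of hanging:
> > 
> >     #include <glib.h>
> >     #include <poll.h>
> > 
> >     struct pollfd pfd = { .fd = vhost_fd, .events = POLLIN };
> > 
> >     /* Abort the test after 60s rather than hanging forever. */
> >     if (poll(&pfd, 1, 60 * 1000) <= 0) {
> >         g_error("timed out waiting for initial vhost-user message");
> >     }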
> 
> Let's hold off on workarounds until the root cause has been found.
> 
> Do you have time to debug why vu_accept() and vu_message_read() don't
> see the pending VHOST_USER_GET_PROTOCOL_FEATURES message?
> 
> Thanks,
> Stefan




Thread overview: 10+ messages
2021-09-27 17:17 [PATCH v5] Prevent vhost-user-blk-test hang Raphael Norwitz
2021-09-28  8:55 ` Stefan Hajnoczi
2021-09-30  5:29   ` Raphael Norwitz
2021-09-30  9:48     ` Stefan Hajnoczi
2021-10-01 20:59       ` Raphael Norwitz
2021-10-05 14:31       ` Michael S. Tsirkin [this message]
2021-10-13  6:25         ` Raphael Norwitz
2021-10-13  9:02           ` Michael S. Tsirkin
2021-10-13  9:20           ` Stefan Hajnoczi
2021-10-19 10:15 ` Stefan Hajnoczi
