On Thu, Sep 30, 2021 at 05:29:06AM +0000, Raphael Norwitz wrote:
> On Tue, Sep 28, 2021 at 10:55:00AM +0200, Stefan Hajnoczi wrote:
> > On Mon, Sep 27, 2021 at 05:17:01PM +0000, Raphael Norwitz wrote:
> > > In the vhost-user-blk-test, as of now there is nothing stopping
> > > vhost-user-blk in QEMU writing to the socket right after forking off
> > > the storage daemon, before it has a chance to come up properly,
> > > leaving the test hanging forever. This intermittently hanging test
> > > has caused QEMU automation failures reported multiple times on the
> > > mailing list [1].
> > >
> > > This change makes the storage-daemon notify the vhost-user-blk-test
> > > that it is fully initialized and ready to handle client connections
> > > by creating a pidfile on initialization. This ensures that the
> > > storage-daemon backend won't miss vhost-user messages and thereby
> > > resolves the hang.
> > >
> > > [1] https://lore.kernel.org/qemu-devel/CAFEAcA8kYpz9LiPNxnWJAPSjc=nv532bEdyfynaBeMeohqBp3A@mail.gmail.com/
>
> Hey Stefan,
>
> > Hi Raphael,
> >
> > I would like to understand the issue that is being worked around in
> > the patch.
> >
> > QEMU should be okay with listen fd passing. The qemu-storage-daemon
> > documentation even contains example code for this
> > (docs/tools/qemu-storage-daemon.rst) and that may need to be updated
> > if listen fd passing is fundamentally broken.
>
> The issue is that the "client" (in this case vhost-user-blk in QEMU)
> can proceed to use the socket before the storage-daemon has a chance
> to properly start up and monitor it. This is nothing unique to the
> storage-daemon - I've seen races like this happen with different
> vhost-user backends before.
>
> Yes - I do think the docs can be improved to explicitly state that the
> storage-daemon must be allowed to properly initialize before any data
> is sent over the socket. Maybe we should even prescribe the use of the
> pidfile option?
>
> > Can you share more details about the problem?
>
> Did you see my analysis [1]?
>
> [1] https://lore.kernel.org/qemu-devel/20210827165253.GA14291@raphael-debian-dev/
>
> Basically QEMU sends VHOST_USER_GET_PROTOCOL_FEATURES across the vhost
> socket and the storage daemon never receives it. Looking at the QEMU
> state we see it is stuck waiting for a vhost-user response. Meanwhile
> the storage-daemon never receives any message to begin with. AFAICT
> there is nothing stopping QEMU from running first and sending a
> message before vhost-user-blk comes up, and from testing we can see
> that waiting for the storage-daemon to come up resolves the problem
> completely.

The root cause has not been determined yet. QEMU should accept the
incoming connection and then read the previously-sent
VHOST_USER_GET_PROTOCOL_FEATURES message. There is no reason at the
Sockets API level why the message should get lost, so there is probably
a QEMU bug here.

> > Does "writing to the socket" mean writing vhost-user protocol
> > messages or does it mean connect(2)?
>
> Yes - it means writing vhost-user messages. We see a message sent from
> QEMU to the backend.
>
> Note that in qtest_socket_server() (called from create_listen_socket())
> we have already called listen() on the socket, so I would expect QEMU
> calling connect(2) to succeed and proceed to successfully send messages
> whether or not there is another listener. I even tried commenting out
> the execlp for the storage-daemon and I saw the same behavior from QEMU
> - it sends the message and hangs indefinitely.

QEMU is correct in waiting for a vhost-user reply. The question is why
qemu-storage-daemon's vhost-user-block export isn't processing the
request and replying to it?

> > Could the problem be that vhost-user-blk-test.c creates the listen
> > fds and does not close them? This means the host network stack
> > doesn't consider the socket closed after QEMU terminates and
> > therefore the test process hangs after QEMU is gone?
> > In that case vhost-user-blk-test needs to close the fds after
> > spawning qemu-storage-daemon.
>
> When the test hangs both QEMU and storage-daemon are still up and
> connected to the socket and waiting for messages from each other. I
> don't see how we would close the FD in this state or how it would
> help.

Yes, I see. In that case the theory about fds doesn't apply.

> We may want to think about implementing some kind of timeout for
> initial vhost-user messages so that we fail instead of hang in cases
> like these, as I proposed in [1]. What do you think?

Let's hold off on workarounds until the root cause has been found. Do
you have time to debug why vu_accept() and vu_message_read() don't see
the pending VHOST_USER_GET_PROTOCOL_FEATURES message?

Thanks,
Stefan