From: Johannes Berg <johannes@sipsolutions.net>
To: Benjamin Beichler <benjamin.beichler@uni-rostock.de>,
	jdike@addtoit.com, Richard Weinberger <richard@nod.at>,
	Anton Ivanov <anton.ivanov@cambridgegreys.com>
Cc: linux-um@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] um: read multiple msg from virtio slave request fd
Date: Wed, 01 Jun 2022 19:13:29 +0200	[thread overview]
Message-ID: <360edf352f888f4607e0411df8a994aa087e9db4.camel@sipsolutions.net> (raw)
In-Reply-To: <20220601153722.181427-1-benjamin.beichler@uni-rostock.de>

On Wed, 2022-06-01 at 15:37 +0000, Benjamin Beichler wrote:
> If VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS is activated, the user mode
> Linux virtio irq handler only reads one msg from the corresponding socket.
> This creates issues when the device emulation creates multiple call
> requests (e.g. for multiple virtqueues), as the socket buffer tends to fill
> up and the call requests are delayed.
> 
> This creates a deadlock situation when the device simulation blocks on
> sending a msg and the kernel side blocks while synchronously waiting for
> an acknowledgement of a kick request.
> 
> Actually, inband notifications are meant to be used in combination with
> the time travel protocol, but this is not required, so this corner case
> needs to be handled.

Hmm. How did you run into this? Why would a device send many messages
and not wait for ACK, but the kernel side actually waits for ACK? What
would the use case for that be? Seems a bit odd; if both wait for ACK
there shouldn't be an issue?

Anyway, I guess I don't mind fixing this regardless of whether I see a
use case where it could happen :-)
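
For reference, the pattern the patch is going for is to drain the request
fd in a loop instead of handling a single message per interrupt. A minimal
userspace-style sketch of that idea, using plain POSIX calls rather than
the actual virtio_uml helpers (drain_requests() and handle_request() here
are made-up illustrative names), could look like this:

	#include <errno.h>
	#include <stdbool.h>
	#include <sys/socket.h>
	#include <sys/types.h>

	/*
	 * Illustrative sketch only: read queued messages from a non-blocking
	 * socket until it would block, so call requests don't pile up in the
	 * socket buffer while the device side waits for a reply.
	 */
	static bool drain_requests(int req_fd, void *buf, size_t buflen,
				   void (*handle_request)(void *msg, ssize_t len))
	{
		bool handled = false;

		for (;;) {
			ssize_t n = recv(req_fd, buf, buflen, MSG_DONTWAIT);

			if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
				break;	/* nothing left queued */
			if (n <= 0)
				break;	/* error or peer closed the socket */

			handle_request(buf, n);
			handled = true;
		}

		return handled;
	}

That way the device emulation can queue several call requests (say, one
per virtqueue) before it blocks waiting for a reply, and the kernel side
still sees all of them from a single interrupt.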


> +++ b/arch/um/drivers/virtio_uml.c
> @@ -363,45 +363,47 @@ static irqreturn_t vu_req_read_message(struct virtio_uml_device *vu_dev,
>  		struct vhost_user_msg msg;
>  		u8 extra_payload[512];
>  	} msg;
> -	int rc;
> -
> -	rc = vhost_user_recv_req(vu_dev, &msg.msg,
> -				 sizeof(msg.msg.payload) +
> -				 sizeof(msg.extra_payload));
> -
> -	if (rc)

This code changed a bit; you should rebase onto the uml tree's for-next
branch.

> +	while (1) {
> +		if (vhost_user_recv_req(vu_dev, &msg.msg,
> +					sizeof(msg.msg.payload)
> +					+ sizeof(msg.extra_payload)))

Prefer to keep the + on the previous line.
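
That is, keeping everything else as in the patch (the body isn't quoted
here, hence the ...), something like:

	while (1) {
		if (vhost_user_recv_req(vu_dev, &msg.msg,
					sizeof(msg.msg.payload) +
					sizeof(msg.extra_payload)))
			...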


That said, my attempt at rebasing this made it all fail completely;
maybe you have better luck :)

johannes
