linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] um: read multiple msg from virtio slave request fd
@ 2022-06-07 11:27 Benjamin Beichler
  2022-07-27 20:28 ` Benjamin Beichler
  2022-08-09 18:07 ` Johannes Berg
  0 siblings, 2 replies; 4+ messages in thread
From: Benjamin Beichler @ 2022-06-07 11:27 UTC (permalink / raw)
  To: Richard Weinberger, Anton Ivanov, Johannes Berg
  Cc: Benjamin Beichler, Johannes Berg, linux-um, linux-kernel

If VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS is activated, the user mode
linux virtio irq handler only reads one message from the corresponding
socket per interrupt. This creates issues when the device emulation sends
multiple call requests (e.g. for multiple virtqueues), as the socket
buffer tends to fill up and the call requests are delayed.

This can lead to a deadlock in which the device simulation blocks while
sending a message and the kernel side blocks while synchronously waiting
for the acknowledgement of a kick request.

Inband notifications are primarily meant to be used in combination with
the time travel protocol, but they are not required to be, so this corner
case needs to be handled.

In general it also seems more natural to always consume all messages from
a socket instead of only a single one.
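
For illustration only (not part of this patch): a minimal user-space
sketch of the drain-until-EAGAIN pattern the new loop follows. The names
drain_requests() and handle_one_msg() are hypothetical and do not exist
in virtio_uml.c; they only stand in for the real per-message processing.

	#include <errno.h>
	#include <sys/socket.h>

	/* hypothetical per-message handler, stands in for the real work */
	void handle_one_msg(const char *buf, size_t len);

	static int drain_requests(int fd)
	{
		char buf[512];
		ssize_t n;

		for (;;) {
			n = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);
			if (n > 0) {
				handle_one_msg(buf, n);
				continue;
			}
			if (n == 0)
				return -EPIPE;	/* peer closed the socket */
			if (errno == EAGAIN || errno == EWOULDBLOCK)
				return 0;	/* socket drained, not an error */
			return -errno;		/* real receive error */
		}
	}

In the actual driver, vhost_user_recv_req() plays the role of recv() and
returns -EAGAIN once the non-blocking socket is empty, which is why the
loop below masks that value when storing recv_rc.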

Fixes: 2cd097ba8c05 ("um: virtio: Implement VHOST_USER_PROTOCOL_F_SLAVE_REQ")
Signed-off-by: Benjamin Beichler <benjamin.beichler@uni-rostock.de>
---
 arch/um/drivers/virtio_uml.c | 71 +++++++++++++++++++-----------------
 1 file changed, 37 insertions(+), 34 deletions(-)

diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
index 82ff3785bf69..3716c5f6f9aa 100644
--- a/arch/um/drivers/virtio_uml.c
+++ b/arch/um/drivers/virtio_uml.c
@@ -374,45 +374,48 @@ static irqreturn_t vu_req_read_message(struct virtio_uml_device *vu_dev,
 		u8 extra_payload[512];
 	} msg;
 	int rc;
+	irqreturn_t irq_rc = IRQ_NONE;
 
-	rc = vhost_user_recv_req(vu_dev, &msg.msg,
-				 sizeof(msg.msg.payload) +
-				 sizeof(msg.extra_payload));
-
-	vu_dev->recv_rc = rc;
-	if (rc)
-		return IRQ_NONE;
-
-	switch (msg.msg.header.request) {
-	case VHOST_USER_SLAVE_CONFIG_CHANGE_MSG:
-		vu_dev->config_changed_irq = true;
-		response = 0;
-		break;
-	case VHOST_USER_SLAVE_VRING_CALL:
-		virtio_device_for_each_vq((&vu_dev->vdev), vq) {
-			if (vq->index == msg.msg.payload.vring_state.index) {
-				response = 0;
-				vu_dev->vq_irq_vq_map |= BIT_ULL(vq->index);
-				break;
+	while (1) {
+		rc = vhost_user_recv_req(vu_dev, &msg.msg,
+					 sizeof(msg.msg.payload) +
+					 sizeof(msg.extra_payload));
+		if (rc)
+			break;
+
+		switch (msg.msg.header.request) {
+		case VHOST_USER_SLAVE_CONFIG_CHANGE_MSG:
+			vu_dev->config_changed_irq = true;
+			response = 0;
+			break;
+		case VHOST_USER_SLAVE_VRING_CALL:
+			virtio_device_for_each_vq((&vu_dev->vdev), vq) {
+				if (vq->index == msg.msg.payload.vring_state.index) {
+					response = 0;
+					vu_dev->vq_irq_vq_map |= BIT_ULL(vq->index);
+					break;
+				}
 			}
+			break;
+		case VHOST_USER_SLAVE_IOTLB_MSG:
+			/* not supported - VIRTIO_F_ACCESS_PLATFORM */
+		case VHOST_USER_SLAVE_VRING_HOST_NOTIFIER_MSG:
+			/* not supported - VHOST_USER_PROTOCOL_F_HOST_NOTIFIER */
+		default:
+			vu_err(vu_dev, "unexpected slave request %d\n",
+			       msg.msg.header.request);
 		}
-		break;
-	case VHOST_USER_SLAVE_IOTLB_MSG:
-		/* not supported - VIRTIO_F_ACCESS_PLATFORM */
-	case VHOST_USER_SLAVE_VRING_HOST_NOTIFIER_MSG:
-		/* not supported - VHOST_USER_PROTOCOL_F_HOST_NOTIFIER */
-	default:
-		vu_err(vu_dev, "unexpected slave request %d\n",
-		       msg.msg.header.request);
-	}
-
-	if (ev && !vu_dev->suspended)
-		time_travel_add_irq_event(ev);
 
-	if (msg.msg.header.flags & VHOST_USER_FLAG_NEED_REPLY)
-		vhost_user_reply(vu_dev, &msg.msg, response);
+		if (ev && !vu_dev->suspended)
+			time_travel_add_irq_event(ev);
 
-	return IRQ_HANDLED;
+		if (msg.msg.header.flags & VHOST_USER_FLAG_NEED_REPLY)
+			vhost_user_reply(vu_dev, &msg.msg, response);
+		irq_rc = IRQ_HANDLED;
+	};
+	/* mask EAGAIN as we try non-blocking read until socket is empty */
+	vu_dev->recv_rc = (rc == -EAGAIN) ? 0 : rc;
+	return irq_rc;
 }
 
 static irqreturn_t vu_req_interrupt(int irq, void *data)
-- 
2.25.1


* Re: [PATCH v2] um: read multiple msg from virtio slave request fd
  2022-06-07 11:27 [PATCH v2] um: read multiple msg from virtio slave request fd Benjamin Beichler
@ 2022-07-27 20:28 ` Benjamin Beichler
  2022-07-29  6:40   ` Richard Weinberger
  2022-08-09 18:07 ` Johannes Berg
  1 sibling, 1 reply; 4+ messages in thread
From: Benjamin Beichler @ 2022-07-27 20:28 UTC (permalink / raw)
  To: Richard Weinberger, Anton Ivanov, Johannes Berg
  Cc: Johannes Berg, linux-um, linux-kernel


Are there any issues with that patch?
I would be happy to receive any comments or an acceptance :-D

Sorry for my earlier HTML email.

Kind regards

Benjamin


On 07.06.2022 13:27, Benjamin Beichler wrote:
> If VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS is activated, the user mode
> linux virtio irq handler only reads one message from the corresponding
> socket per interrupt. This creates issues when the device emulation sends
> multiple call requests (e.g. for multiple virtqueues), as the socket
> buffer tends to fill up and the call requests are delayed.
>
> This can lead to a deadlock in which the device simulation blocks while
> sending a message and the kernel side blocks while synchronously waiting
> for the acknowledgement of a kick request.
>
> Inband notifications are primarily meant to be used in combination with
> the time travel protocol, but they are not required to be, so this corner
> case needs to be handled.
>
> In general it also seems more natural to always consume all messages from
> a socket instead of only a single one.
>
> Fixes: 2cd097ba8c05 ("um: virtio: Implement VHOST_USER_PROTOCOL_F_SLAVE_REQ")
> Signed-off-by: Benjamin Beichler <benjamin.beichler@uni-rostock.de>
> ---
>   arch/um/drivers/virtio_uml.c | 71 +++++++++++++++++++-----------------
>   1 file changed, 37 insertions(+), 34 deletions(-)
>
> diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
> index 82ff3785bf69..3716c5f6f9aa 100644
> --- a/arch/um/drivers/virtio_uml.c
> +++ b/arch/um/drivers/virtio_uml.c
> @@ -374,45 +374,48 @@ static irqreturn_t vu_req_read_message(struct virtio_uml_device *vu_dev,
>   		u8 extra_payload[512];
>   	} msg;
>   	int rc;
> +	irqreturn_t irq_rc = IRQ_NONE;
>
> -	rc = vhost_user_recv_req(vu_dev, &msg.msg,
> -				 sizeof(msg.msg.payload) +
> -				 sizeof(msg.extra_payload));
> -
> -	vu_dev->recv_rc = rc;
> -	if (rc)
> -		return IRQ_NONE;
> -
> -	switch (msg.msg.header.request) {
> -	case VHOST_USER_SLAVE_CONFIG_CHANGE_MSG:
> -		vu_dev->config_changed_irq = true;
> -		response = 0;
> -		break;
> -	case VHOST_USER_SLAVE_VRING_CALL:
> -		virtio_device_for_each_vq((&vu_dev->vdev), vq) {
> -			if (vq->index == msg.msg.payload.vring_state.index) {
> -				response = 0;
> -				vu_dev->vq_irq_vq_map |= BIT_ULL(vq->index);
> -				break;
> +	while (1) {
> +		rc = vhost_user_recv_req(vu_dev, &msg.msg,
> +					 sizeof(msg.msg.payload) +
> +					 sizeof(msg.extra_payload));
> +		if (rc)
> +			break;
> +
> +		switch (msg.msg.header.request) {
> +		case VHOST_USER_SLAVE_CONFIG_CHANGE_MSG:
> +			vu_dev->config_changed_irq = true;
> +			response = 0;
> +			break;
> +		case VHOST_USER_SLAVE_VRING_CALL:
> +			virtio_device_for_each_vq((&vu_dev->vdev), vq) {
> +				if (vq->index == msg.msg.payload.vring_state.index) {
> +					response = 0;
> +					vu_dev->vq_irq_vq_map |= BIT_ULL(vq->index);
> +					break;
> +				}
>   			}
> +			break;
> +		case VHOST_USER_SLAVE_IOTLB_MSG:
> +			/* not supported - VIRTIO_F_ACCESS_PLATFORM */
> +		case VHOST_USER_SLAVE_VRING_HOST_NOTIFIER_MSG:
> +			/* not supported - VHOST_USER_PROTOCOL_F_HOST_NOTIFIER */
> +		default:
> +			vu_err(vu_dev, "unexpected slave request %d\n",
> +			       msg.msg.header.request);
>   		}
> -		break;
> -	case VHOST_USER_SLAVE_IOTLB_MSG:
> -		/* not supported - VIRTIO_F_ACCESS_PLATFORM */
> -	case VHOST_USER_SLAVE_VRING_HOST_NOTIFIER_MSG:
> -		/* not supported - VHOST_USER_PROTOCOL_F_HOST_NOTIFIER */
> -	default:
> -		vu_err(vu_dev, "unexpected slave request %d\n",
> -		       msg.msg.header.request);
> -	}
> -
> -	if (ev && !vu_dev->suspended)
> -		time_travel_add_irq_event(ev);
>
> -	if (msg.msg.header.flags & VHOST_USER_FLAG_NEED_REPLY)
> -		vhost_user_reply(vu_dev, &msg.msg, response);
> +		if (ev && !vu_dev->suspended)
> +			time_travel_add_irq_event(ev);
>
> -	return IRQ_HANDLED;
> +		if (msg.msg.header.flags & VHOST_USER_FLAG_NEED_REPLY)
> +			vhost_user_reply(vu_dev, &msg.msg, response);
> +		irq_rc = IRQ_HANDLED;
> +	};
> +	/* mask EAGAIN as we try non-blocking read until socket is empty */
> +	vu_dev->recv_rc = (rc == -EAGAIN) ? 0 : rc;
> +	return irq_rc;
>   }
>
>   static irqreturn_t vu_req_interrupt(int irq, void *data)


--
M.Sc. Benjamin Beichler

Universität Rostock, Fakultät für Informatik und Elektrotechnik
Institut für Angewandte Mikroelektronik und Datentechnik

University of Rostock, Department of CS and EE
Institute of Applied Microelectronics and CE

Richard-Wagner-Straße 31
18119 Rostock
Deutschland/Germany

phone: +49 (0) 381 498 - 7278
email: Benjamin.Beichler@uni-rostock.de
www: http://www.imd.uni-rostock.de/



* Re: [PATCH v2] um: read multiple msg from virtio slave request fd
  2022-07-27 20:28 ` Benjamin Beichler
@ 2022-07-29  6:40   ` Richard Weinberger
  0 siblings, 0 replies; 4+ messages in thread
From: Richard Weinberger @ 2022-07-29  6:40 UTC (permalink / raw)
  To: Benjamin Beichler
  Cc: anton ivanov, Johannes Berg, Johannes Berg, linux-um, linux-kernel

----- Original Message -----
> From: "Benjamin Beichler" <Benjamin.Beichler@uni-rostock.de>
> Are there any issues with that patch?
> I would be happy to receive any comments or an acceptance :-D

Johannes, can you please have a look?

Thanks,
//richard


* Re: [PATCH v2] um: read multiple msg from virtio slave request fd
  2022-06-07 11:27 [PATCH v2] um: read multiple msg from virtio slave request fd Benjamin Beichler
  2022-07-27 20:28 ` Benjamin Beichler
@ 2022-08-09 18:07 ` Johannes Berg
  1 sibling, 0 replies; 4+ messages in thread
From: Johannes Berg @ 2022-08-09 18:07 UTC (permalink / raw)
  To: Benjamin Beichler, Richard Weinberger, Anton Ivanov
  Cc: linux-um, linux-kernel

On Tue, 2022-06-07 at 11:27 +0000, Benjamin Beichler wrote:
> If VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS is activated, the user mode
> linux virtio irq handler only reads one message from the corresponding
> socket per interrupt. This creates issues when the device emulation sends
> multiple call requests (e.g. for multiple virtqueues), as the socket
> buffer tends to fill up and the call requests are delayed.
>
> This can lead to a deadlock in which the device simulation blocks while
> sending a message and the kernel side blocks while synchronously waiting
> for the acknowledgement of a kick request.
>
> Inband notifications are primarily meant to be used in combination with
> the time travel protocol, but they are not required to be, so this corner
> case needs to be handled.
>
> In general it also seems more natural to always consume all messages from
> a socket instead of only a single one.
> 
> Fixes: 2cd097ba8c05 ("um: virtio: Implement VHOST_USER_PROTOCOL_F_SLAVE_REQ")
> Signed-off-by: Benjamin Beichler <benjamin.beichler@uni-rostock.de>
> 

Reviewed-by: Johannes Berg <johannes@sipsolutions.net>


Sorry, should've sent that earlier.

johannes

