From: Sagi Grimberg <sagi@grimberg.me>
To: Keith Busch <kbusch@kernel.org>, linux-nvme@lists.infradead.org
Cc: hch@lst.de
Subject: Re: [RFC PATCH] nvme-tcp: rerun io_work if req_list is not empty
Date: Mon, 17 May 2021 17:42:18 -0700
Message-ID: <2479237f-ed41-6de0-6ffc-bed66046b2c2@grimberg.me>
In-Reply-To: <20210517223643.2934196-1-kbusch@kernel.org>


> A possible race condition exists where a request to send data enqueued
> from nvme_tcp_handle_r2t() will not be observed by nvme_tcp_send_all()
> if it happens to be running. The driver relies on io_work to send the
> enqueued request when it runs again, but the concurrently running
> nvme_tcp_send_all() may not have released the send_mutex at that time.
> If no future commands are enqueued to re-kick the io_work, the request
> will time out in the SEND_H2C state, resulting in a timeout error like:
> 
>    nvme nvme0: queue 1: timeout request 0x3 type 6
> 
> Ensure the io_work continues to run as long as the req_list is not
> empty.
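
If I follow the analysis, the interleaving would be something like this
(the CPU labels and exact ordering are my reading of the report, just
for illustration):

/*
 *   queue_rq direct-send path            io_work
 *   -------------------------            -------
 *   mutex_trylock(send_mutex) -> ok
 *   nvme_tcp_send_all() drains
 *   req_list/send_list, finds no
 *   more work to fetch
 *                                        handle_r2t() adds the request
 *                                        to req_list and kicks io_work
 *                                        io_work: mutex_trylock fails,
 *                                        pending stays false, returns
 *   mutex_unlock(send_mutex)
 *
 * -> the r2t request is stranded on req_list until a future command
 *    re-kicks io_work, so it times out in SEND_H2C.
 */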

There is a version of this patch that I personally suggested before;
however, I couldn't explain why that should happen...

nvme_tcp_send_all() tries to send everything it has queued, which means
it should either be able to send everything, or it should see a full
socket buffer. But if the socket buffer is full, a .write_space() sk
callback should trigger when the socket buffer frees up space... Maybe
there is a chance that write_space triggered and started executing
while the send_mutex was still taken?

Can we maybe try to catch if that is the case?
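
Something like the below might catch it (untested sketch against my
reading of nvme_tcp_write_space(); the mutex_is_locked() check and the
pr_debug are the only additions):

static void nvme_tcp_write_space(struct sock *sk)
{
	struct nvme_tcp_queue *queue;

	read_lock_bh(&sk->sk_callback_lock);
	queue = sk->sk_user_data;
	if (likely(queue && sk_stream_is_writeable(sk))) {
		clear_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
		/*
		 * If a direct sender still owns send_mutex, the io_work
		 * we kick below may fail its trylock and bail out
		 * without draining req_list -- log it so we can
		 * correlate with the SEND_H2C timeout.
		 */
		if (mutex_is_locked(&queue->send_mutex))
			pr_debug("queue %d: write_space while send_mutex held\n",
				 nvme_tcp_queue_id(queue));
		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
	}
	read_unlock_bh(&sk->sk_callback_lock);
}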

> 
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---
> Marking this RFC because the timeout is difficult to recreate, which
> makes the patch difficult to verify. The patch was created purely from
> code inspection, so I'm just hoping for feedback on my analysis right now.
> 
>   drivers/nvme/host/tcp.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index 0222e23f5936..d07eb13d8713 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -1140,7 +1140,8 @@ static void nvme_tcp_io_work(struct work_struct *w)
>   				pending = true;
>   			else if (unlikely(result < 0))
>   				break;
> -		}
> +		} else
> +			pending = !llist_empty(&queue->req_list);

In my version, pending was unconditionally set to true...
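
i.e. against the same hunk, roughly this (a sketch from memory, not the
exact diff I posted back then):

		} else {
			/*
			 * trylock on send_mutex failed: a direct sender
			 * owns it and may have already checked req_list
			 * before we got here, so assume there is still
			 * work and let io_work run again.
			 */
			pending = true;
		}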
