From: Ming Lei <ming.lei@redhat.com>
To: Chao Leng <lengchao@huawei.com>
Cc: Yi Zhang <yi.zhang@redhat.com>, Sagi Grimberg <sagi@grimberg.me>,
	Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, Keith Busch <kbusch@kernel.org>,
	linux-nvme@lists.infradead.org, Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH] block: re-introduce blk_mq_complete_request_sync
Date: Mon, 12 Oct 2020 16:13:06 +0800	[thread overview]
Message-ID: <20201012081306.GB556731@T590> (raw)
In-Reply-To: <6f2a5ae2-2e6a-0386-691c-baefeecb5478@huawei.com>

On Mon, Oct 12, 2020 at 11:59:21AM +0800, Chao Leng wrote:
> 
> 
> On 2020/10/10 14:08, Yi Zhang wrote:
> > 
> > 
> > On 10/10/20 2:29 AM, Sagi Grimberg wrote:
> > > 
> > > 
> > > On 10/9/20 6:55 AM, Yi Zhang wrote:
> > > > Hi Sagi
> > > > 
> > > > On 10/9/20 4:09 PM, Sagi Grimberg wrote:
> > > > > > Hi Sagi
> > > > > > 
> > > > > > I applied this patch on block origin/for-next and can still reproduce it.
> > > > > 
> > > > > That's unexpected, can you try this patch?
> > > > > -- 
> > > > > diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> > > > > index 629b025685d1..46428ff0b0fc 100644
> > > > > --- a/drivers/nvme/host/tcp.c
> > > > > +++ b/drivers/nvme/host/tcp.c
> > > > > @@ -2175,7 +2175,7 @@ static void nvme_tcp_complete_timed_out(struct request *rq)
> > > > >         /* fence other contexts that may complete the command */
> > > > >         mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
> > > > >         nvme_tcp_stop_queue(ctrl, nvme_tcp_queue_id(req->queue));
> > > > > -       if (!blk_mq_request_completed(rq)) {
> > > > > +       if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq)) {
> > > > >                 nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
> > > > >                 blk_mq_complete_request_sync(rq);
> > > > >         }
> This may just reduce the probability. The concurrency of timeout and teardown will cause the same
> request to be handled repeatedly, which is not what we expect.

That is right: unlike SCSI, NVMe doesn't use atomic request completion, so a
request may be completed/freed from both the timeout path and
nvme_cancel_request().

.teardown_lock may still cover the race with Sagi's patch, because teardown
actually cancels requests synchronously.

> In the teardown process, deleting the timer and canceling the timeout work after the queues are
> quiesced may be a better option.

That seems like a better solution, given it is aligned with NVMe PCI's reset
handling. nvme_sync_queues() may be called in nvme_tcp_teardown_io_queues() to
avoid this race.


Thanks, 
Ming


Thread overview: 26 messages
2020-10-08 21:37 [PATCH] block: re-introduce blk_mq_complete_request_sync Sagi Grimberg
2020-10-09  4:39 ` Ming Lei
2020-10-09  5:03   ` Yi Zhang
2020-10-09  8:09     ` Sagi Grimberg
2020-10-09 13:55       ` Yi Zhang
2020-10-09 18:29         ` Sagi Grimberg
2020-10-10  6:08           ` Yi Zhang
2020-10-12  3:59             ` Chao Leng
2020-10-12  8:13               ` Ming Lei [this message]
2020-10-12  9:06                 ` Chao Leng
2020-10-13 22:36                   ` Sagi Grimberg
2020-10-14  1:08                     ` Ming Lei
2020-10-14  1:37                       ` Chao Leng
2020-10-14  2:02                         ` Ming Lei
2020-10-14  2:32                           ` Chao Leng
2020-10-14  2:41                           ` Chao Leng
2020-10-14  3:34                       ` Ming Lei
2020-10-14  9:39                         ` Chao Leng
2020-10-14  9:56                           ` Ming Lei
2020-10-15  6:05                             ` Chao Leng
2020-10-15  7:50                               ` Ming Lei
2020-10-15 10:05                                 ` Chao Leng
2020-10-14  1:32                     ` Chao Leng
2020-10-13 22:31                 ` Sagi Grimberg
2020-10-14  1:25                   ` Chao Leng
2020-10-09  8:11   ` Sagi Grimberg
