From: Ming Lei <ming.lei@redhat.com>
To: Chao Leng <lengchao@huawei.com>
Cc: Sagi Grimberg <sagi@grimberg.me>, Jens Axboe <axboe@kernel.dk>,
	Yi Zhang <yi.zhang@redhat.com>,
	linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
	Keith Busch <kbusch@kernel.org>, Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH] block: re-introduce blk_mq_complete_request_sync
Date: Wed, 14 Oct 2020 10:02:57 +0800	[thread overview]
Message-ID: <20201014020257.GB775684@T590> (raw)
In-Reply-To: <a1221820-55db-e550-0a9c-684ab96608e3@huawei.com>

On Wed, Oct 14, 2020 at 09:37:07AM +0800, Chao Leng wrote:
> rdma also needs this patch.
> We have been testing this patch with nvme over roce for a few days, and it works well.
> 
> On 2020/10/14 9:08, Ming Lei wrote:
> > On Tue, Oct 13, 2020 at 03:36:08PM -0700, Sagi Grimberg wrote:
> > > 
> > > > > > This may just reduce the probability. Concurrency between timeout
> > > > > > and teardown can cause the same request to be handled repeatedly,
> > > > > > which is not what we expect.
> > > > > 
> > > > > That is right. Unlike SCSI, NVMe doesn't complete requests
> > > > > atomically, so a request may be completed/freed from both the
> > > > > timeout handler and nvme_cancel_request().
> > > > > 
> > > > > .teardown_lock may still cover the race with Sagi's patch, because
> > > > > teardown actually cancels requests synchronously.
> > > > In extreme scenarios, the request may already have been retried
> > > > successfully (rq state changed back to in-flight).
> > > > Timeout processing may then wrongly stop the queue and abort the request.
> > > > teardown_lock serializes timeout and teardown, but it does not
> > > > avoid the race.
> > > > It might not be safe.
> > > 
> > > Not sure I understand the scenario you are describing.
> > > 
> > > What do you mean by "In extreme scenarios, the request may already have
> > > been retried successfully (rq state changed back to in-flight)"?
> > > 
> > > What will retry the request? The request will only be retried when
> > > the host reconnects.
> > > 
> > > We can call nvme_sync_queues in the last part of the teardown, but
> > > I still don't understand the race here.
> > 
> > Unlike SCSI, NVMe doesn't complete requests atomically, so a double
> > completion/free can happen from both the timeout handler and nvme_cancel_request() (via teardown).
> > 
> > Given that the request is completed remotely or asynchronously in the two code
> > paths, the teardown_lock can't protect against this case.
> > 
> > One solution is to apply the re-introduced blk_mq_complete_request_sync()
> > in both code paths.
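> > 
> > For example (untested, only to show the idea), the cancel path could
> > complete the request synchronously, something like:
> > 
> > bool nvme_cancel_request(struct request *req, void *data, bool reserved)
> > {
> > 	dev_dbg_ratelimited(((struct nvme_ctrl *) data)->device,
> > 				"Cancelling I/O %d", req->tag);
> > 
> > 	/* don't abort one completed request */
> > 	if (blk_mq_request_completed(req))
> > 		return true;
> > 
> > 	nvme_req(req)->status = NVME_SC_HOST_ABORTED_CMD;
> > 	/* return only after ->complete() has run */
> > 	blk_mq_complete_request_sync(req);
> > 	return true;
> > }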
> > 
> > Another candidate is to use nvme_sync_queues() before teardown, such as
> > the following change:
> > 
> > diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> > index d6a3e1487354..dc3561ca0074 100644
> > --- a/drivers/nvme/host/tcp.c
> > +++ b/drivers/nvme/host/tcp.c
> > @@ -1909,6 +1909,7 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
> >   	blk_mq_quiesce_queue(ctrl->admin_q);
> >   	nvme_start_freeze(ctrl);
> >   	nvme_stop_queues(ctrl);
> > +	nvme_sync_queues(ctrl);
> nvme_sync_queues now syncs both the io queues and the admin queue, so we may
> need something like this:
> 
> void nvme_sync_io_queues(struct nvme_ctrl *ctrl)
> {
> 	struct nvme_ns *ns;
> 
> 	down_read(&ctrl->namespaces_rwsem);
> 	list_for_each_entry(ns, &ctrl->namespaces, list)
> 		blk_sync_queue(ns->queue);
> 	up_read(&ctrl->namespaces_rwsem);
> }
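> 
> Then nvme_tcp_teardown_io_queues() would call this helper instead,
> e.g. (untested sketch):
> 
> 	nvme_start_freeze(ctrl);
> 	nvme_stop_queues(ctrl);
> 	nvme_sync_io_queues(ctrl);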

That doesn't look necessary, because the admin queue is quiesced in
nvme_tcp_teardown_io_queues(), so it is safe to sync the admin queue here too.


Thanks,
Ming


Thread overview: 26+ messages
2020-10-08 21:37 [PATCH] block: re-introduce blk_mq_complete_request_sync Sagi Grimberg
2020-10-09  4:39 ` Ming Lei
2020-10-09  5:03   ` Yi Zhang
2020-10-09  8:09     ` Sagi Grimberg
2020-10-09 13:55       ` Yi Zhang
2020-10-09 18:29         ` Sagi Grimberg
2020-10-10  6:08           ` Yi Zhang
2020-10-12  3:59             ` Chao Leng
2020-10-12  8:13               ` Ming Lei
2020-10-12  9:06                 ` Chao Leng
2020-10-13 22:36                   ` Sagi Grimberg
2020-10-14  1:08                     ` Ming Lei
2020-10-14  1:37                       ` Chao Leng
2020-10-14  2:02                         ` Ming Lei [this message]
2020-10-14  2:32                           ` Chao Leng
2020-10-14  2:41                           ` Chao Leng
2020-10-14  3:34                       ` Ming Lei
2020-10-14  9:39                         ` Chao Leng
2020-10-14  9:56                           ` Ming Lei
2020-10-15  6:05                             ` Chao Leng
2020-10-15  7:50                               ` Ming Lei
2020-10-15 10:05                                 ` Chao Leng
2020-10-14  1:32                     ` Chao Leng
2020-10-13 22:31                 ` Sagi Grimberg
2020-10-14  1:25                   ` Chao Leng
2020-10-09  8:11   ` Sagi Grimberg
