From: Sagi Grimberg <sagi@grimberg.me>
To: Chao Leng <lengchao@huawei.com>, Ming Lei <ming.lei@redhat.com>
Cc: Yi Zhang <yi.zhang@redhat.com>, Jens Axboe <axboe@kernel.dk>,
	linux-block@vger.kernel.org, Keith Busch <kbusch@kernel.org>,
	linux-nvme@lists.infradead.org, Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH] block: re-introduce blk_mq_complete_request_sync
Date: Tue, 13 Oct 2020 15:36:08 -0700	[thread overview]
Message-ID: <e19073e4-06da-ce3c-519c-ece2c4d942fa@grimberg.me> (raw)
In-Reply-To: <5e05fc3b-ad81-aacc-1f8e-7ff0d1ad58fe@huawei.com>


>>> This may just reduce the probability. The concurrency of timeout and
>>> teardown will cause the same request to be handled repeatedly, which
>>> is not what we expect.
>>
>> That is right. Unlike SCSI, NVMe doesn't apply atomic request
>> completion, so a request may be completed/freed from both the timeout
>> path and nvme_cancel_request().
>>
>> .teardown_lock may still cover the race with Sagi's patch because
>> teardown actually cancels requests synchronously.
> In extreme scenarios, the request may already have been retried
> successfully (rq state changed back to in-flight).
> Timeout processing may then wrongly stop the queue and abort the request.
> teardown_lock serializes timeout and teardown processing, but it does
> not avoid the race.
> It might not be safe.

Not sure I understand the scenario you are describing.

What do you mean by "in extreme scenarios, the request may already have
been retried successfully (rq state changed back to in-flight)"?

What will retry the request? Only when the host reconnects will
the request be retried.

We can call nvme_sync_queues in the last part of the teardown, but
I still don't understand the race here.



Thread overview: 52+ messages
2020-10-08 21:37 [PATCH] block: re-introduce blk_mq_complete_request_sync Sagi Grimberg
2020-10-09  4:39 ` Ming Lei
2020-10-09  5:03   ` Yi Zhang
2020-10-09  8:09     ` Sagi Grimberg
2020-10-09 13:55       ` Yi Zhang
2020-10-09 18:29         ` Sagi Grimberg
2020-10-10  6:08           ` Yi Zhang
2020-10-12  3:59             ` Chao Leng
2020-10-12  8:13               ` Ming Lei
2020-10-12  9:06                 ` Chao Leng
2020-10-13 22:36                   ` Sagi Grimberg [this message]
2020-10-14  1:08                     ` Ming Lei
2020-10-14  1:37                       ` Chao Leng
2020-10-14  2:02                         ` Ming Lei
2020-10-14  2:32                           ` Chao Leng
2020-10-14  2:41                           ` Chao Leng
2020-10-14  3:34                       ` Ming Lei
2020-10-14  9:39                         ` Chao Leng
2020-10-14  9:56                           ` Ming Lei
2020-10-15  6:05                             ` Chao Leng
2020-10-15  7:50                               ` Ming Lei
2020-10-15 10:05                                 ` Chao Leng
2020-10-14  1:32                     ` Chao Leng
2020-10-13 22:31                 ` Sagi Grimberg
2020-10-14  1:25                   ` Chao Leng
2020-10-09  8:11   ` Sagi Grimberg
