Linux-NVME Archive on lore.kernel.org
* [PATCH 1/3] nvme-core: introduce sync io queues
@ 2020-10-20  9:08 Chao Leng
  2020-10-20 17:06 ` Chaitanya Kulkarni
  0 siblings, 1 reply; 3+ messages in thread
From: Chao Leng @ 2020-10-20  9:08 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, axboe, hch, lengchao, sagi

Introduce nvme_sync_io_queues() for scenarios that only need to sync
the I/O queues rather than all queues (I/O plus admin).

Signed-off-by: Chao Leng <lengchao@huawei.com>
---
 drivers/nvme/host/core.c | 8 ++++++--
 drivers/nvme/host/nvme.h | 1 +
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e85f6304efd7..2211cf155e89 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4573,8 +4573,7 @@ void nvme_start_queues(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvme_start_queues);
 
-
-void nvme_sync_queues(struct nvme_ctrl *ctrl)
+void nvme_sync_io_queues(struct nvme_ctrl *ctrl)
 {
 	struct nvme_ns *ns;
 
@@ -4582,7 +4581,12 @@ void nvme_sync_queues(struct nvme_ctrl *ctrl)
 	list_for_each_entry(ns, &ctrl->namespaces, list)
 		blk_sync_queue(ns->queue);
 	up_read(&ctrl->namespaces_rwsem);
+}
+EXPORT_SYMBOL_GPL(nvme_sync_io_queues);
 
+void nvme_sync_queues(struct nvme_ctrl *ctrl)
+{
+	nvme_sync_io_queues(ctrl);
 	if (ctrl->admin_q)
 		blk_sync_queue(ctrl->admin_q);
 }
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 566776100126..49d7f43511d4 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -602,6 +602,7 @@ void nvme_stop_queues(struct nvme_ctrl *ctrl);
 void nvme_start_queues(struct nvme_ctrl *ctrl);
 void nvme_kill_queues(struct nvme_ctrl *ctrl);
 void nvme_sync_queues(struct nvme_ctrl *ctrl);
+void nvme_sync_io_queues(struct nvme_ctrl *ctrl);
 void nvme_unfreeze(struct nvme_ctrl *ctrl);
 void nvme_wait_freeze(struct nvme_ctrl *ctrl);
 int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout);
-- 
2.16.4


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme


* Re: [PATCH 1/3] nvme-core: introduce sync io queues
  2020-10-20  9:08 [PATCH 1/3] nvme-core: introduce sync io queues Chao Leng
@ 2020-10-20 17:06 ` Chaitanya Kulkarni
  2020-10-21  2:10   ` Chao Leng
  0 siblings, 1 reply; 3+ messages in thread
From: Chaitanya Kulkarni @ 2020-10-20 17:06 UTC (permalink / raw)
  To: Chao Leng, linux-nvme; +Cc: kbusch, axboe, hch, sagi

On 10/20/20 02:12, Chao Leng wrote:
> Introduce nvme_sync_io_queues() for scenarios that only need to sync
> the I/O queues rather than all queues (I/O plus admin).
>
> Signed-off-by: Chao Leng <lengchao@huawei.com>

Can you please explain the scenario in detail?




* Re: [PATCH 1/3] nvme-core: introduce sync io queues
  2020-10-20 17:06 ` Chaitanya Kulkarni
@ 2020-10-21  2:10   ` Chao Leng
  0 siblings, 0 replies; 3+ messages in thread
From: Chao Leng @ 2020-10-21  2:10 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: kbusch, axboe, hch, sagi



On 2020/10/21 1:06, Chaitanya Kulkarni wrote:
> On 10/20/20 02:12, Chao Leng wrote:
>> Introduce nvme_sync_io_queues() for scenarios that only need to sync
>> the I/O queues rather than all queues (I/O plus admin).
>>
>> Signed-off-by: Chao Leng <lengchao@huawei.com>
> 
> Can you please explain the scenario in detail ?
It is used to avoid a race between timeout and teardown; see patch 2/3.
The race may cause the following problems:
1. The problem reported by Yi Zhang <yi.zhang@redhat.com>
detail: https://lore.kernel.org/linux-nvme/1934331639.3314730.1602152202454.JavaMail.zimbra@redhat.com/
2. BUG_ON in blk_mq_requeue_request
Error recovery and timeout may both complete the same request.
First, error recovery cancels the request in the teardown process; the
request is retried on completion and rq->state changes back to IDLE.
Then the timeout handler completes the request again and likewise
retries it, so the BUG_ON in blk_mq_requeue_request fires.
3. Abnormal link disconnection
First, error recovery cancels all requests; after a successful
reconnect the requests are restarted. Then the timeout handler
completes a request again and the queue is stopped in
nvme_rdma(tcp)_complete_timed_out, which causes an abnormal link
disconnection. This only happens when the timeout work is delayed for
a long time (e.g. by a hardware interrupt), so the probability is low.


