* [PATCH V2 0/2] nvme-pci: check CQ after batch submission for Microsoft device
@ 2019-11-13 13:42 Ming Lei
  2019-11-13 13:42 ` [PATCH V2 1/2] nvme-pci: move sq/cq_poll lock initialization into nvme_init_queue Ming Lei
  2019-11-13 13:42 ` [PATCH V2 2/2] nvme-pci: check CQ after batch submission for Microsoft device Ming Lei
  0 siblings, 2 replies; 8+ messages in thread
From: Ming Lei @ 2019-11-13 13:42 UTC (permalink / raw)
  To: linux-nvme
  Cc: Sagi Grimberg, Long Li, Ming Lei, Jens Axboe, Keith Busch,
	Christoph Hellwig

Hi,

The two patches fix a performance regression on the Microsoft Corporation
NVMe device. The root cause is that this device applies aggressive
interrupt coalescing, so single-job fio performance dropped sharply after
the CQ check was removed in f9dde187fa92 ("nvme-pci: remove cq check
after submission").

It turns out the issue is specific to the Microsoft device, so add a
quirk that enables checking the CQ on this device only.
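
For reference, a sketch of the quirk wiring (excerpted from patch 2
below; the flag name and PCI IDs are the ones used there):

	NVME_QUIRK_CHECK_CQ			= (1 << 14),
	...
	{ PCI_DEVICE(0x1414, 0xb111),   /* Microsoft Corporation device */
		.driver_data = NVME_QUIRK_CHECK_CQ, },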

V2:
	- only check the CQ for the Microsoft device.


Ming Lei (2):
  nvme-pci: move sq/cq_poll lock initialization into nvme_init_queue
  nvme-pci: check CQ after batch submission for Microsoft device

 drivers/nvme/host/nvme.h |  6 +++
 drivers/nvme/host/pci.c  | 98 ++++++++++++++++++++++++++++++++++------
 2 files changed, 91 insertions(+), 13 deletions(-)

Cc: Keith Busch <kbusch@kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Long Li <longli@microsoft.com>
-- 
2.20.1



* [PATCH V2 1/2] nvme-pci: move sq/cq_poll lock initialization into nvme_init_queue
  2019-11-13 13:42 [PATCH V2 0/2] nvme-pci: check CQ after batch submission for Microsoft device Ming Lei
@ 2019-11-13 13:42 ` Ming Lei
  2019-11-13 13:42 ` [PATCH V2 2/2] nvme-pci: check CQ after batch submission for Microsoft device Ming Lei
  1 sibling, 0 replies; 8+ messages in thread
From: Ming Lei @ 2019-11-13 13:42 UTC (permalink / raw)
  To: linux-nvme
  Cc: Sagi Grimberg, Long Li, Ming Lei, Jens Axboe, Keith Busch,
	Christoph Hellwig

Prepare for adding a new cq lock to improve IO performance on devices
that apply aggressive interrupt coalescing. The new cq_lock can share
space with cq_poll_lock because the two are mutually exclusive.

Move the sq/cq_poll lock initialization into nvme_init_queue so that the
added cq_lock can be initialized correctly and a lockdep warning can be
avoided.

This is safe because no IO can be started until nvme_init_queue
returns.
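
For reference, this is the shape of the shared lock as added in the next
patch (a sketch of the union from patch 2 of this series):

	union {
		spinlock_t cq_poll_lock;	/* poll queues */
		spinlock_t cq_lock;		/* irq queues with the CQ-check quirk */
	} ____cacheline_aligned_in_smp;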

Cc: Keith Busch <kbusch@kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Long Li <longli@microsoft.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/nvme/host/pci.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 869f462e6b6e..01728409a7ea 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1481,8 +1481,6 @@ static int nvme_alloc_queue(struct nvme_dev *dev, int qid, int depth)
 		goto free_cqdma;
 
 	nvmeq->dev = dev;
-	spin_lock_init(&nvmeq->sq_lock);
-	spin_lock_init(&nvmeq->cq_poll_lock);
 	nvmeq->cq_head = 0;
 	nvmeq->cq_phase = 1;
 	nvmeq->q_db = &dev->dbs[qid * 2 * dev->db_stride];
@@ -1516,6 +1514,9 @@ static void nvme_init_queue(struct nvme_queue *nvmeq, u16 qid)
 {
 	struct nvme_dev *dev = nvmeq->dev;
 
+	spin_lock_init(&nvmeq->sq_lock);
+	spin_lock_init(&nvmeq->cq_poll_lock);
+
 	nvmeq->sq_tail = 0;
 	nvmeq->last_sq_tail = 0;
 	nvmeq->cq_head = 0;
-- 
2.20.1



* [PATCH V2 2/2] nvme-pci: check CQ after batch submission for Microsoft device
  2019-11-13 13:42 [PATCH V2 0/2] nvme-pci: check CQ after batch submission for Microsoft device Ming Lei
  2019-11-13 13:42 ` [PATCH V2 1/2] nvme-pci: move sq/cq_poll lock initialization into nvme_init_queue Ming Lei
@ 2019-11-13 13:42 ` Ming Lei
  2019-11-13 15:53   ` Keith Busch
  2019-11-13 23:11   ` Nadolski, Edmund
  1 sibling, 2 replies; 8+ messages in thread
From: Ming Lei @ 2019-11-13 13:42 UTC (permalink / raw)
  To: linux-nvme
  Cc: Sagi Grimberg, Long Li, Ming Lei, Jens Axboe, Keith Busch,
	Christoph Hellwig

f9dde187fa92 ("nvme-pci: remove cq check after submission") removed the
cq check after submission. That change causes a performance regression
on the Microsoft Corporation device, which applies aggressive interrupt
coalescing: for example, the interrupt rate tops out at only 40~50K
interrupts per second.

Checking the CQ after submission improves IO completion for the
Microsoft Corporation device:

1) the check is very cheap, and a lockless check on the cq is enough,
see nvme_cqe_pending() and the sketch after this list. In particular,
the check can be done once batch submission is done. Compared with the
big latency added by aggressive interrupt coalescing, the cost of the
check is negligible.

2) when IO completion is observed by checking the CQ during submission,
the request may be completed without any interrupt involved, or the
interrupt handler load can be decreased.

3) when a single sw queue is submitting IOs to this hw queue, and IO
completion is observed via this check, the IO can be completed locally,
which is cheaper than remote completion.
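
For reference, the lockless check in 1) is just a phase-bit comparison;
a sketch of nvme_cqe_pending() as it reads in v5.3's
drivers/nvme/host/pci.c:

	static inline bool nvme_cqe_pending(struct nvme_queue *nvmeq)
	{
		return (le16_to_cpu(nvmeq->cqes[nvmeq->cq_head].status) & 1) ==
				nvmeq->cq_phase;
	}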

The following test was run on an Azure L80sv2 guest with an NVMe drive
(Microsoft Corporation Device b111). The guest has 80 CPUs and 10 NUMA
nodes, and each NVMe drive supports 8 hw queues.

1) test script:
fio --bs=4k --ioengine=libaio --iodepth=64 --filename=/dev/nvme0n1 \
	--iodepth_batch_submit=16 --iodepth_batch_complete_min=16 \
	--direct=1 --runtime=30 --numjobs=1 --rw=randread \
	--name=test --group_reporting --gtod_reduce=1

2) test result:
     | v5.3       | v5.3 with this patchset
-------------------------------------------
IOPS | 130K       | 424K

Given IO is handled more efficiently this way, the originally reported
CPU lockup[1] on Hyper-V can no longer be observed after this patch is
applied, since the patch moves much of the IO completion work out of the
interrupt handler.

[1] https://lore.kernel.org/lkml/1566281669-48212-1-git-send-email-longli@linuxonhyperv.com

Cc: Keith Busch <kbusch@kernel.org>
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Long Li <longli@microsoft.com>
Fixes: f9dde187fa92("nvme-pci: remove cq check after submission")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/nvme/host/nvme.h |  6 +++
 drivers/nvme/host/pci.c  | 95 +++++++++++++++++++++++++++++++++++-----
 2 files changed, 89 insertions(+), 12 deletions(-)

diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 22e8401352c2..5575827016d1 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -115,6 +115,12 @@ enum nvme_quirks {
 	 * Prevent tag overlap between queues
 	 */
 	NVME_QUIRK_SHARED_TAGS                  = (1 << 13),
+
+	/*
+	 * Check CQ in submission for improving performance since
+	 * the device applies aggressive interrupt coalescing
+	 */
+	NVME_QUIRK_CHECK_CQ			= (1 << 14),
 };
 
 /*
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 01728409a7ea..87d7f55c2d6d 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -83,6 +83,7 @@ struct nvme_queue;
 
 static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown);
 static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode);
+static void nvme_check_cq(struct nvme_queue *nvmeq);
 
 /*
  * Represents an NVM Express device.  Each nvme_dev is a PCI function.
@@ -165,7 +166,10 @@ struct nvme_queue {
 	spinlock_t sq_lock;
 	void *sq_cmds;
 	 /* only used for poll queues: */
-	spinlock_t cq_poll_lock ____cacheline_aligned_in_smp;
+	union {
+		spinlock_t cq_poll_lock;
+		spinlock_t cq_lock;
+	}____cacheline_aligned_in_smp;
 	volatile struct nvme_completion *cqes;
 	struct blk_mq_tags **tags;
 	dma_addr_t sq_dma_addr;
@@ -185,6 +189,7 @@ struct nvme_queue {
 #define NVMEQ_SQ_CMB		1
 #define NVMEQ_DELETE_ERROR	2
 #define NVMEQ_POLLED		3
+#define NVMEQ_CHECK_CQ		4
 	u32 *dbbuf_sq_db;
 	u32 *dbbuf_cq_db;
 	u32 *dbbuf_sq_ei;
@@ -912,6 +917,10 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 	blk_mq_start_request(req);
 	nvme_submit_cmd(nvmeq, &cmnd, bd->last);
+
+	if (bd->last)
+		nvme_check_cq(nvmeq);
+
 	return BLK_STS_OK;
 out_unmap_data:
 	nvme_unmap_data(dev, req);
@@ -1017,6 +1026,19 @@ static inline int nvme_process_cq(struct nvme_queue *nvmeq, u16 *start,
 	return found;
 }
 
+static inline irqreturn_t
+nvme_update_cq(struct nvme_queue *nvmeq, u16 *start, u16 *end)
+{
+	irqreturn_t ret = IRQ_NONE;
+
+	if (nvmeq->cq_head != nvmeq->last_cq_head)
+		ret = IRQ_HANDLED;
+	nvme_process_cq(nvmeq, start, end, -1);
+	nvmeq->last_cq_head = nvmeq->cq_head;
+
+	return ret;
+}
+
 static irqreturn_t nvme_irq(int irq, void *data)
 {
 	struct nvme_queue *nvmeq = data;
@@ -1028,10 +1050,7 @@ static irqreturn_t nvme_irq(int irq, void *data)
 	 * the irq handler, even if that was on another CPU.
 	 */
 	rmb();
-	if (nvmeq->cq_head != nvmeq->last_cq_head)
-		ret = IRQ_HANDLED;
-	nvme_process_cq(nvmeq, &start, &end, -1);
-	nvmeq->last_cq_head = nvmeq->cq_head;
+	ret = nvme_update_cq(nvmeq, &start, &end);
 	wmb();
 
 	if (start != end) {
@@ -1042,6 +1061,24 @@ static irqreturn_t nvme_irq(int irq, void *data)
 	return ret;
 }
 
+static irqreturn_t nvme_irq_for_check_cq(int irq, void *data)
+{
+	struct nvme_queue *nvmeq = data;
+	irqreturn_t ret = IRQ_NONE;
+	u16 start, end;
+
+	spin_lock(&nvmeq->cq_lock);
+	ret = nvme_update_cq(nvmeq, &start, &end);
+	spin_unlock(&nvmeq->cq_lock);
+
+	if (start != end) {
+		nvme_complete_cqes(nvmeq, start, end);
+		return IRQ_HANDLED;
+	}
+
+	return ret;
+}
+
 static irqreturn_t nvme_irq_check(int irq, void *data)
 {
 	struct nvme_queue *nvmeq = data;
@@ -1050,6 +1087,24 @@ static irqreturn_t nvme_irq_check(int irq, void *data)
 	return IRQ_NONE;
 }
 
+static void nvme_check_cq(struct nvme_queue *nvmeq)
+{
+	if (test_bit(NVMEQ_CHECK_CQ, &nvmeq->flags) &&
+			nvme_cqe_pending(nvmeq)) {
+		unsigned long flags;
+
+		if (spin_trylock_irqsave(&nvmeq->cq_lock, flags)) {
+			u16 start, end;
+
+			nvme_update_cq(nvmeq, &start, &end);
+			spin_unlock_irqrestore(&nvmeq->cq_lock, flags);
+
+			if (start != end)
+				nvme_complete_cqes(nvmeq, start, end);
+		}
+	}
+}
+
 /*
  * Poll for completions any queue, including those not dedicated to polling.
  * Can be called from any context.
@@ -1500,12 +1555,14 @@ static int queue_request_irq(struct nvme_queue *nvmeq)
 {
 	struct pci_dev *pdev = to_pci_dev(nvmeq->dev->dev);
 	int nr = nvmeq->dev->ctrl.instance;
+	irq_handler_t handler = test_bit(NVMEQ_CHECK_CQ, &nvmeq->flags) ?
+		nvme_irq_for_check_cq : nvme_irq;
 
 	if (use_threaded_interrupts) {
 		return pci_request_irq(pdev, nvmeq->cq_vector, nvme_irq_check,
-				nvme_irq, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
+				handler, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
 	} else {
-		return pci_request_irq(pdev, nvmeq->cq_vector, nvme_irq,
+		return pci_request_irq(pdev, nvmeq->cq_vector, handler,
 				NULL, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
 	}
 }
@@ -1515,7 +1572,13 @@ static void nvme_init_queue(struct nvme_queue *nvmeq, u16 qid)
 	struct nvme_dev *dev = nvmeq->dev;
 
 	spin_lock_init(&nvmeq->sq_lock);
-	spin_lock_init(&nvmeq->cq_poll_lock);
+
+	if (test_bit(NVMEQ_CHECK_CQ, &nvmeq->flags)) {
+		WARN_ON_ONCE(test_bit(NVMEQ_POLLED, &nvmeq->flags));
+		spin_lock_init(&nvmeq->cq_lock);
+	} else {
+		spin_lock_init(&nvmeq->cq_poll_lock);
+	}
 
 	nvmeq->sq_tail = 0;
 	nvmeq->last_sq_tail = 0;
@@ -1528,22 +1591,27 @@ static void nvme_init_queue(struct nvme_queue *nvmeq, u16 qid)
 	wmb(); /* ensure the first interrupt sees the initialization */
 }
 
-static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled)
+static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled,
+		bool check_cq)
 {
 	struct nvme_dev *dev = nvmeq->dev;
 	int result;
 	u16 vector = 0;
 
 	clear_bit(NVMEQ_DELETE_ERROR, &nvmeq->flags);
+	clear_bit(NVMEQ_CHECK_CQ, &nvmeq->flags);
 
 	/*
 	 * A queue's vector matches the queue identifier unless the controller
 	 * has only one vector available.
 	 */
-	if (!polled)
+	if (!polled) {
 		vector = dev->num_vecs == 1 ? 0 : qid;
-	else
+		if (vector && check_cq)
+			set_bit(NVMEQ_CHECK_CQ, &nvmeq->flags);
+	} else {
 		set_bit(NVMEQ_POLLED, &nvmeq->flags);
+	}
 
 	result = adapter_alloc_cq(dev, qid, nvmeq, vector);
 	if (result)
@@ -1740,7 +1808,8 @@ static int nvme_create_io_queues(struct nvme_dev *dev)
 	for (i = dev->online_queues; i <= max; i++) {
 		bool polled = i > rw_queues;
 
-		ret = nvme_create_queue(&dev->queues[i], i, polled);
+		ret = nvme_create_queue(&dev->queues[i], i, polled,
+				dev->ctrl.quirks & NVME_QUIRK_CHECK_CQ);
 		if (ret)
 			break;
 	}
@@ -3112,6 +3181,8 @@ static const struct pci_device_id nvme_id_table[] = {
 	{ PCI_DEVICE(0x1cc1, 0x8201),   /* ADATA SX8200PNP 512GB */
 		.driver_data = NVME_QUIRK_NO_DEEPEST_PS |
 				NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+	{ PCI_DEVICE(0x1414, 0xb111),   /* Microsoft Corporation device */
+		.driver_data = NVME_QUIRK_CHECK_CQ, },
 	{ PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2001) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2003) },
-- 
2.20.1



* Re: [PATCH V2 2/2] nvme-pci: check CQ after batch submission for Microsoft device
  2019-11-13 13:42 ` [PATCH V2 2/2] nvme-pci: check CQ after batch submission for Microsoft device Ming Lei
@ 2019-11-13 15:53   ` Keith Busch
  2019-11-14  1:30     ` Ming Lei
  2019-11-13 23:11   ` Nadolski, Edmund
  1 sibling, 1 reply; 8+ messages in thread
From: Keith Busch @ 2019-11-13 15:53 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Long Li, Christoph Hellwig, linux-nvme, Sagi Grimberg

On Wed, Nov 13, 2019 at 09:42:48PM +0800, Ming Lei wrote:
> +static void nvme_check_cq(struct nvme_queue *nvmeq)
> +{
> +	if (test_bit(NVMEQ_CHECK_CQ, &nvmeq->flags) &&
> +			nvme_cqe_pending(nvmeq)) {
> +		unsigned long flags;
> +
> +		if (spin_trylock_irqsave(&nvmeq->cq_lock, flags)) {

What's with the irqsave? This isn't called from an irq disabled context.

Not saying I'm on board with this approach, though. Checking the cq
during submission was cheap when the submission and completion sides
shared the same lock, but this is quite different from what we
previously had.


* Re: [PATCH V2 2/2] nvme-pci: check CQ after batch submission for Microsoft device
  2019-11-13 13:42 ` [PATCH V2 2/2] nvme-pci: check CQ after batch submission for Microsoft device Ming Lei
  2019-11-13 15:53   ` Keith Busch
@ 2019-11-13 23:11   ` Nadolski, Edmund
  2019-11-14  1:34     ` Ming Lei
  1 sibling, 1 reply; 8+ messages in thread
From: Nadolski, Edmund @ 2019-11-13 23:11 UTC (permalink / raw)
  To: Ming Lei, linux-nvme
  Cc: Jens Axboe, Keith Busch, Long Li, Sagi Grimberg, Christoph Hellwig

On 11/13/2019 6:42 AM, Ming Lei wrote:
>   	spinlock_t sq_lock;
>   	void *sq_cmds;
>   	 /* only used for poll queues: */
> -	spinlock_t cq_poll_lock ____cacheline_aligned_in_smp;
> +	union {
> +		spinlock_t cq_poll_lock;
> +		spinlock_t cq_lock;
> +	}____cacheline_aligned_in_smp;
>   	volatile struct nvme_completion *cqes;
>   	struct blk_mq_tags **tags;
>   	dma_addr_t sq_dma_addr;

Still not sure I follow the case for another lock (seems to me like just
a semantic distinction, as they both basically bracket nvme_process_cq()
calls). What have I overlooked?

(Anyway, the poll queues comment doesn't seem to apply to cq_lock.)

Thx,
Ed


* Re: [PATCH V2 2/2] nvme-pci: check CQ after batch submission for Microsoft device
  2019-11-13 15:53   ` Keith Busch
@ 2019-11-14  1:30     ` Ming Lei
  2019-11-14  1:39       ` Ming Lei
  0 siblings, 1 reply; 8+ messages in thread
From: Ming Lei @ 2019-11-14  1:30 UTC (permalink / raw)
  To: Keith Busch
  Cc: Jens Axboe, Long Li, Christoph Hellwig, linux-nvme, Sagi Grimberg

On Thu, Nov 14, 2019 at 12:53:06AM +0900, Keith Busch wrote:
> On Wed, Nov 13, 2019 at 09:42:48PM +0800, Ming Lei wrote:
> > +static void nvme_check_cq(struct nvme_queue *nvmeq)
> > +{
> > +	if (test_bit(NVMEQ_CHECK_CQ, &nvmeq->flags) &&
> > +			nvme_cqe_pending(nvmeq)) {
> > +		unsigned long flags;
> > +
> > +		if (spin_trylock_irqsave(&nvmeq->cq_lock, flags)) {
> 
> What's with the irqsave? This isn't called from an irq disabled context.

Because .cq_lock is also acquired in the nvme irq handler.

> 
> Not saying I'm on board with this approach, though. Checking the cq

I am open to another approach if it solves the issue.

> during submission was cheap when submission and completion sides shared
> the same lock, but this quite different than what we previously had.

Previously we had a single .q_lock covering both completion and
submission. Now that lock has been split in two, which is why this patch
adds .cq_lock for this purpose.


Thanks, 
Ming



* Re: [PATCH V2 2/2] nvme-pci: check CQ after batch submission for Microsoft device
  2019-11-13 23:11   ` Nadolski, Edmund
@ 2019-11-14  1:34     ` Ming Lei
  0 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2019-11-14  1:34 UTC (permalink / raw)
  To: Nadolski, Edmund
  Cc: Sagi Grimberg, Long Li, linux-nvme, Jens Axboe, Keith Busch,
	Christoph Hellwig

On Wed, Nov 13, 2019 at 04:11:42PM -0700, Nadolski, Edmund wrote:
> On 11/13/2019 6:42 AM, Ming Lei wrote:
> >   	spinlock_t sq_lock;
> >   	void *sq_cmds;
> >   	 /* only used for poll queues: */
> > -	spinlock_t cq_poll_lock ____cacheline_aligned_in_smp;
> > +	union {
> > +		spinlock_t cq_poll_lock;
> > +		spinlock_t cq_lock;
> > +	}____cacheline_aligned_in_smp;
> >   	volatile struct nvme_completion *cqes;
> >   	struct blk_mq_tags **tags;
> >   	dma_addr_t sq_dma_addr;
> 
> Still not sure I follow the case for another lock (seems to me like just a
> semantic distinction, as they both basically bracket nvme_process_cq()
> calls).  What have I overlooked?

.cq_poll_lock syncs poll queues between nvme_poll_irqdisable() and
nvme_poll(), and it is never acquired in irq context.

.cq_lock syncs submission against completion, and it can be acquired in
irq context.

Their usage isn't the same, their lock contexts aren't the same, and
they have to be initialized under different names (lock classes) to keep
lockdep happy.
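
For illustration, a sketch of the init path from the patch; since the
two locks share storage in a union, lockdep can only tell them apart by
their spin_lock_init() call sites:

	if (test_bit(NVMEQ_CHECK_CQ, &nvmeq->flags))
		spin_lock_init(&nvmeq->cq_lock);	/* may be taken in irq context */
	else
		spin_lock_init(&nvmeq->cq_poll_lock);	/* process context only */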

Thanks,
Ming



* Re: [PATCH V2 2/2] nvme-pci: check CQ after batch submission for Microsoft device
  2019-11-14  1:30     ` Ming Lei
@ 2019-11-14  1:39       ` Ming Lei
  0 siblings, 0 replies; 8+ messages in thread
From: Ming Lei @ 2019-11-14  1:39 UTC (permalink / raw)
  To: Keith Busch
  Cc: Jens Axboe, Long Li, Christoph Hellwig, linux-nvme, Sagi Grimberg

On Thu, Nov 14, 2019 at 09:30:19AM +0800, Ming Lei wrote:
> On Thu, Nov 14, 2019 at 12:53:06AM +0900, Keith Busch wrote:
> > On Wed, Nov 13, 2019 at 09:42:48PM +0800, Ming Lei wrote:
> > > +static void nvme_check_cq(struct nvme_queue *nvmeq)
> > > +{
> > > +	if (test_bit(NVMEQ_CHECK_CQ, &nvmeq->flags) &&
> > > +			nvme_cqe_pending(nvmeq)) {
> > > +		unsigned long flags;
> > > +
> > > +		if (spin_trylock_irqsave(&nvmeq->cq_lock, flags)) {
> > 
> > What's with the irqsave? This isn't called from an irq disabled context.
> 
> Because .cq_lock is also acquired in the nvme irq handler.

However, it can be changed to spin_trylock_irq().
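
Something like this minimal sketch of the hunk above (untested), since
nvme_check_cq() runs in process context with interrupts enabled:

	if (spin_trylock_irq(&nvmeq->cq_lock)) {
		u16 start, end;

		nvme_update_cq(nvmeq, &start, &end);
		spin_unlock_irq(&nvmeq->cq_lock);

		if (start != end)
			nvme_complete_cqes(nvmeq, start, end);
	}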

thanks,
Ming


