* [PATCH V2 0/6] blk-mq: fix blk_mq_alloc_request_hctx
From: Ming Lei @ 2021-07-02 15:05 UTC
  To: Jens Axboe, Christoph Hellwig, Martin K. Petersen, linux-block,
	linux-nvme, linux-scsi
  Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei

Hi,

blk_mq_alloc_request_hctx() is used by NVMe fc/rdma/tcp/loop to connect
io queues, and the sw ctx is chosen as the first online CPU in
hctx->cpumask. However, all CPUs in hctx->cpumask may be offline.

This usage model isn't well supported by blk-mq, which assumes that
allocation is always done on an online CPU in hctx->cpumask. That
assumption is tied to managed irqs, which also require blk-mq to drain
in-flight requests in a hctx when the last CPU in hctx->cpumask goes
offline.

However, NVMe fc/rdma/tcp/loop don't use managed irqs, so we should
allow them to allocate a request on the specified hctx even when it is
inactive (all CPUs in hctx->cpumask are offline).
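
Below is a rough sketch (not from this series; the helper name, opcode
and 'qid - 1' hctx index are illustrative assumptions) of how such a
transport allocates its connect request on one specific hw queue, which
is exactly the path that breaks once every CPU in that hctx's cpumask
is offline:

	/* hypothetical transport-side helper for io queue 'qid' */
	static struct request *connect_rq_for_queue(struct request_queue *q,
						    unsigned int qid)
	{
		return blk_mq_alloc_request_hctx(q, REQ_OP_DRV_OUT,
				BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED,
				qid - 1);
	}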

Fix blk_mq_alloc_request_hctx() by adding a new flag,
BLK_MQ_F_MANAGED_IRQ, and passing it from drivers to blk-mq.

Meanwhile, optimize blk-mq's cpu hotplug handling for the non-managed
irq case.

V2:
	- use flag of BLK_MQ_F_MANAGED_IRQ
	- pass BLK_MQ_F_MANAGED_IRQ from driver explicitly
	- kill BLK_MQ_F_STACKING

Ming Lei (6):
  blk-mq: prepare for not deactivating hctx if managed irq isn't used
  nvme: pci: pass BLK_MQ_F_MANAGED_IRQ to blk-mq
  scsi: add flag of .use_managed_irq to 'struct Scsi_Host'
  scsi: set shost->use_managed_irq if driver uses managed irq
  virtio: add one field into virtio_device for recording if device uses
    managed irq
  blk-mq: don't deactivate hctx if managed irq isn't used

 block/blk-mq-debugfs.c                    |  2 +-
 block/blk-mq.c                            | 27 +++++++++++++----------
 drivers/block/loop.c                      |  2 +-
 drivers/block/virtio_blk.c                |  2 ++
 drivers/md/dm-rq.c                        |  2 +-
 drivers/nvme/host/pci.c                   |  3 ++-
 drivers/scsi/aacraid/linit.c              |  3 +++
 drivers/scsi/be2iscsi/be_main.c           |  3 +++
 drivers/scsi/csiostor/csio_init.c         |  3 +++
 drivers/scsi/hisi_sas/hisi_sas_v3_hw.c    |  1 +
 drivers/scsi/hpsa.c                       |  3 +++
 drivers/scsi/lpfc/lpfc.h                  |  1 +
 drivers/scsi/lpfc/lpfc_init.c             |  4 ++++
 drivers/scsi/megaraid/megaraid_sas_base.c |  3 +++
 drivers/scsi/mpt3sas/mpt3sas_scsih.c      |  3 +++
 drivers/scsi/qla2xxx/qla_isr.c            |  5 ++++-
 drivers/scsi/scsi_lib.c                   | 12 +++++-----
 drivers/scsi/smartpqi/smartpqi_init.c     |  3 +++
 drivers/scsi/virtio_scsi.c                |  1 +
 drivers/virtio/virtio_pci_common.c        |  1 +
 include/linux/blk-mq.h                    |  6 +----
 include/linux/virtio.h                    |  1 +
 include/scsi/scsi_host.h                  |  3 +++
 23 files changed, 67 insertions(+), 27 deletions(-)

-- 
2.31.1


* [PATCH V2 1/6] blk-mq: prepare for not deactivating hctx if managed irq isn't used
From: Ming Lei @ 2021-07-02 15:05 UTC
  To: Jens Axboe, Christoph Hellwig, Martin K. Petersen, linux-block,
	linux-nvme, linux-scsi
  Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei

blk-mq deactivates a hctx when the last CPU in hctx->cpumask goes
offline: it drains all requests originating from this hctx and moves
new allocations to other active hctxs. This avoids leaving IO in flight
when a managed irq is shut down, which happens once the last CPU in the
irq's affinity mask goes offline.

However, lots of drivers (nvme fc, rdma, tcp, loop, ...) don't use
managed irqs, so they don't need to deactivate a hctx when its last CPU
goes offline. Also, some of them are the only users of
blk_mq_alloc_request_hctx(), which is used for connecting io queues,
and their requirement is that the connect request must be submitted
successfully via one specified hctx even though all CPUs in that
hctx->cpumask have gone offline.

Prepare for addressing this requirement for nvme fc/rdma/loop by adding
BLK_MQ_F_MANAGED_IRQ, so that hctxs are not deactivated when managed
irqs aren't used. In turn, any driver that does use managed irqs has to
tell blk-mq via BLK_MQ_F_MANAGED_IRQ.

Meanwhile, blk-mq's cpu hotplug handling can be optimized a bit when
managed irqs aren't used.

Given that blk_mq_alloc_request_hctx() is only called by drivers that
don't set BLK_MQ_F_MANAGED_IRQ, it is safe to pick an offline CPU for
the sw context.
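
A minimal recap of the resulting invariant (condensed from the diff
below, for illustration): a driver that declared managed irqs must
never allocate on one specific hctx, because that hctx may legitimately
be drained and deactivated; for everyone else, an offline CPU's sw ctx
is fine:

	/* inside blk_mq_alloc_request_hctx() */
	WARN_ON_ONCE(data.hctx->flags & BLK_MQ_F_MANAGED_IRQ);
	cpu = blk_mq_first_mapped_cpu(data.hctx);	/* may be offline */
	data.ctx = __blk_mq_get_ctx(q, cpu);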

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-debugfs.c |  1 +
 block/blk-mq.c         | 23 +++++++++++++----------
 include/linux/blk-mq.h |  1 +
 3 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 4b66d2776eda..17f57af3a4d6 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -247,6 +247,7 @@ static const char *const hctx_flag_name[] = {
 	HCTX_FLAG_NAME(NO_SCHED),
 	HCTX_FLAG_NAME(STACKING),
 	HCTX_FLAG_NAME(TAG_HCTX_SHARED),
+	HCTX_FLAG_NAME(MANAGED_IRQ),
 };
 #undef HCTX_FLAG_NAME
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2e9fd0ec63d7..1d45d2922ca7 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -427,6 +427,15 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 }
 EXPORT_SYMBOL(blk_mq_alloc_request);
 
+static inline int blk_mq_first_mapped_cpu(struct blk_mq_hw_ctx *hctx)
+{
+	int cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);
+
+	if (cpu >= nr_cpu_ids)
+		cpu = cpumask_first(hctx->cpumask);
+	return cpu;
+}
+
 struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 	unsigned int op, blk_mq_req_flags_t flags, unsigned int hctx_idx)
 {
@@ -468,7 +477,10 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 	data.hctx = q->queue_hw_ctx[hctx_idx];
 	if (!blk_mq_hw_queue_mapped(data.hctx))
 		goto out_queue_exit;
-	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
+
+	WARN_ON_ONCE(data.hctx->flags & BLK_MQ_F_MANAGED_IRQ);
+
+	cpu = blk_mq_first_mapped_cpu(data.hctx);
 	data.ctx = __blk_mq_get_ctx(q, cpu);
 
 	if (!q->elevator)
@@ -1501,15 +1513,6 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
 	hctx_unlock(hctx, srcu_idx);
 }
 
-static inline int blk_mq_first_mapped_cpu(struct blk_mq_hw_ctx *hctx)
-{
-	int cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);
-
-	if (cpu >= nr_cpu_ids)
-		cpu = cpumask_first(hctx->cpumask);
-	return cpu;
-}
-
 /*
  * It'd be great if the workqueue API had a way to pass
  * in a mask and had some smarts for more clever placement.
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index fd2de2b422ed..62fc0393cc3a 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -403,6 +403,7 @@ enum {
 	 */
 	BLK_MQ_F_STACKING	= 1 << 2,
 	BLK_MQ_F_TAG_HCTX_SHARED = 1 << 3,
+	BLK_MQ_F_MANAGED_IRQ	= 1 << 4,
 	BLK_MQ_F_BLOCKING	= 1 << 5,
 	BLK_MQ_F_NO_SCHED	= 1 << 6,
 	BLK_MQ_F_ALLOC_POLICY_START_BIT = 8,
-- 
2.31.1


* [PATCH V2 2/6] nvme: pci: pass BLK_MQ_F_MANAGED_IRQ to blk-mq
From: Ming Lei @ 2021-07-02 15:05 UTC
  To: Jens Axboe, Christoph Hellwig, Martin K. Petersen, linux-block,
	linux-nvme, linux-scsi
  Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei

blk-mq needs to know whether the controller allocates managed irq
vectors, so pass this information to blk-mq.

The rule is that the driver has to tell blk-mq whether managed irqs are
used.
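
For reference, a simplified sketch of why nvme-pci qualifies: its io
queue vectors come from the affinity-managed PCI API (condensed from
nvme_setup_irqs(); most details omitted):

	struct irq_affinity affd = {
		.pre_vectors	= 1,	/* admin queue vector */
	};

	result = pci_alloc_irq_vectors_affinity(pdev, 1, irq_queues,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);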

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/nvme/host/pci.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index d3c5086673bc..093c56e1428e 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2317,7 +2317,8 @@ static void nvme_dev_add(struct nvme_dev *dev)
 		dev->tagset.queue_depth = min_t(unsigned int, dev->q_depth,
 						BLK_MQ_MAX_DEPTH) - 1;
 		dev->tagset.cmd_size = sizeof(struct nvme_iod);
-		dev->tagset.flags = BLK_MQ_F_SHOULD_MERGE;
+		dev->tagset.flags = BLK_MQ_F_SHOULD_MERGE |
+			BLK_MQ_F_MANAGED_IRQ;
 		dev->tagset.driver_data = dev;
 
 		/*
-- 
2.31.1


* [PATCH V2 3/6] scsi: add flag of .use_managed_irq to 'struct Scsi_Host'
From: Ming Lei @ 2021-07-02 15:05 UTC
  To: Jens Axboe, Christoph Hellwig, Martin K. Petersen, linux-block,
	linux-nvme, linux-scsi
  Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei

blk-mq needs to know whether managed irqs are used in order to improve
how it deactivates hctxs, so add such a flag to 'struct Scsi_Host';
drivers can then pass it on to blk-mq via scsi_mq_setup_tags().

The rule is that the driver has to tell blk-mq whether managed irqs are
used.
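
The resulting propagation chain looks roughly like this (an
illustrative summary of the diff below together with the next patch):

	shost->use_managed_irq = 1;		/* set by the LLD */
	...
	if (shost->use_managed_irq)		/* scsi_mq_setup_tags() */
		flags |= BLK_MQ_F_MANAGED_IRQ;	/* consumed by blk-mq */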

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/scsi/scsi_lib.c  | 12 +++++++-----
 include/scsi/scsi_host.h |  3 +++
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 532304d42f00..743df8e824b9 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1915,6 +1915,8 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
 {
 	unsigned int cmd_size, sgl_size;
 	struct blk_mq_tag_set *tag_set = &shost->tag_set;
+	unsigned long flags = BLK_MQ_F_SHOULD_MERGE |
+		BLK_ALLOC_POLICY_TO_MQ_FLAG(shost->hostt->tag_alloc_policy);
 
 	sgl_size = max_t(unsigned int, sizeof(struct scatterlist),
 				scsi_mq_inline_sgl_size(shost));
@@ -1933,12 +1935,12 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
 	tag_set->queue_depth = shost->can_queue;
 	tag_set->cmd_size = cmd_size;
 	tag_set->numa_node = NUMA_NO_NODE;
-	tag_set->flags = BLK_MQ_F_SHOULD_MERGE;
-	tag_set->flags |=
-		BLK_ALLOC_POLICY_TO_MQ_FLAG(shost->hostt->tag_alloc_policy);
-	tag_set->driver_data = shost;
 	if (shost->host_tagset)
-		tag_set->flags |= BLK_MQ_F_TAG_HCTX_SHARED;
+		flags |= BLK_MQ_F_TAG_HCTX_SHARED;
+	if (shost->use_managed_irq)
+		flags |= BLK_MQ_F_MANAGED_IRQ;
+	tag_set->flags = flags;
+	tag_set->driver_data = shost;
 
 	return blk_mq_alloc_tag_set(tag_set);
 }
diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
index d0bf88d77f02..3ac589ae9592 100644
--- a/include/scsi/scsi_host.h
+++ b/include/scsi/scsi_host.h
@@ -657,6 +657,9 @@ struct Scsi_Host {
 	/* True if the host uses host-wide tagspace */
 	unsigned host_tagset:1;
 
+	/* True if the host uses managed irq */
+	unsigned use_managed_irq:1;
+
 	/* Host responded with short (<36 bytes) INQUIRY result */
 	unsigned short_inquiry:1;
 
-- 
2.31.1


* [PATCH V2 4/6] scsi: set shost->use_managed_irq if driver uses managed irq
From: Ming Lei @ 2021-07-02 15:05 UTC
  To: Jens Axboe, Christoph Hellwig, Martin K. Petersen, linux-block,
	linux-nvme, linux-scsi
  Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei,
	Adaptec OEM Raid Solutions, Subbu Seetharaman, Don Brace,
	James Smart, Kashyap Desai, Sathya Prakash, Nilesh Javali

Set shost->use_managed_irq if irq vectors are allocated via
pci_alloc_irq_vectors_affinity(PCI_IRQ_AFFINITY) or
pci_alloc_irq_vectors(PCI_IRQ_AFFINITY).

The rule is that the driver has to tell the SCSI core whether managed
irqs are used.
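
A rule of thumb for LLD authors (variable names illustrative): the flag
should track whether the PCI_IRQ_AFFINITY allocation actually
succeeded, as in the qla2xxx hunk below:

	ret = pci_alloc_irq_vectors_affinity(pdev, min_vecs, max_vecs,
					     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					     &desc);
	/* only a successful PCI_IRQ_AFFINITY allocation yields managed irqs */
	if (ret > 0)
		shost->use_managed_irq = 1;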

Cc: Adaptec OEM Raid Solutions <aacraid@microsemi.com>
Cc: Subbu Seetharaman <subbu.seetharaman@broadcom.com>
Cc: John Garry <john.garry@huawei.com>
Cc: Don Brace <don.brace@microchip.com>
Cc: James Smart <james.smart@broadcom.com>
Cc: Kashyap Desai <kashyap.desai@broadcom.com>
Cc: Sathya Prakash <sathya.prakash@broadcom.com>
Cc: Nilesh Javali <njavali@marvell.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/scsi/aacraid/linit.c              | 3 +++
 drivers/scsi/be2iscsi/be_main.c           | 3 +++
 drivers/scsi/csiostor/csio_init.c         | 3 +++
 drivers/scsi/hisi_sas/hisi_sas_v3_hw.c    | 1 +
 drivers/scsi/hpsa.c                       | 3 +++
 drivers/scsi/lpfc/lpfc.h                  | 1 +
 drivers/scsi/lpfc/lpfc_init.c             | 4 ++++
 drivers/scsi/megaraid/megaraid_sas_base.c | 3 +++
 drivers/scsi/mpt3sas/mpt3sas_scsih.c      | 3 +++
 drivers/scsi/qla2xxx/qla_isr.c            | 5 ++++-
 drivers/scsi/smartpqi/smartpqi_init.c     | 3 +++
 11 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
index 3168915adaa7..6d0abad77ef8 100644
--- a/drivers/scsi/aacraid/linit.c
+++ b/drivers/scsi/aacraid/linit.c
@@ -1761,6 +1761,9 @@ static int aac_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	else
 		shost->this_id = shost->max_id;
 
+	if (aac->msi_enabled)
+		shost->use_managed_irq = 1;
+
 	if (!aac->sa_firmware && aac_drivers[index].quirks & AAC_QUIRK_SRC)
 		aac_intr_normal(aac, 0, 2, 0, NULL);
 
diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
index 22cf7f4b8d8c..2de2a3f78170 100644
--- a/drivers/scsi/be2iscsi/be_main.c
+++ b/drivers/scsi/be2iscsi/be_main.c
@@ -5684,6 +5684,9 @@ static int beiscsi_dev_probe(struct pci_dev *pcidev,
 	}
 	hwi_enable_intr(phba);
 
+	if (phba->num_cpus > 1)
+		phba->shost->use_managed_irq = 1;
+
 	ret = iscsi_host_add(phba->shost, &phba->pcidev->dev);
 	if (ret)
 		goto free_irqs;
diff --git a/drivers/scsi/csiostor/csio_init.c b/drivers/scsi/csiostor/csio_init.c
index 390b07bf92b9..3bc713f3237d 100644
--- a/drivers/scsi/csiostor/csio_init.c
+++ b/drivers/scsi/csiostor/csio_init.c
@@ -633,6 +633,9 @@ csio_shost_init(struct csio_hw *hw, struct device *dev,
 	else
 		shost->transportt = csio_fcoe_transport_vport;
 
+	if (hw->num_pports <= IRQ_AFFINITY_MAX_SETS)
+		shost->use_managed_irq = 1;
+
 	/* root lnode */
 	if (!hw->rln)
 		hw->rln = ln;
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index e95408314078..a8bf567b8cfe 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
@@ -4760,6 +4760,7 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (rc)
 		goto err_out_debugfs;
 	dev_err(dev, "%d hw queues\n", shost->nr_hw_queues);
+	shost->use_managed_irq = 1;
 	rc = scsi_add_host(shost, dev);
 	if (rc)
 		goto err_out_free_irq_vectors;
diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index f135a10f582b..a84c80746001 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -5877,6 +5877,9 @@ static int hpsa_scsi_add_host(struct ctlr_info *h)
 {
 	int rv;
 
+	if (h->msix_vectors > 0)
+		h->scsi_host->use_managed_irq = 1;
+
 	rv = scsi_add_host(h->scsi_host, &h->pdev->dev);
 	if (rv) {
 		dev_err(&h->pdev->dev, "scsi_add_host failed\n");
diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
index f8de0d10620b..f0595c670fe8 100644
--- a/drivers/scsi/lpfc/lpfc.h
+++ b/drivers/scsi/lpfc/lpfc.h
@@ -783,6 +783,7 @@ struct lpfc_hba {
 #define HBA_HBEAT_INP		0x4000000 /* mbox HBEAT is in progress */
 #define HBA_HBEAT_TMO		0x8000000 /* HBEAT initiated after timeout */
 #define HBA_FLOGI_OUTSTANDING	0x10000000 /* FLOGI is outstanding */
+#define HBA_USE_MANAGED_IRQ	0x20000000 /* Use managed irq */
 
 	uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/
 	struct lpfc_dmabuf slim2p;
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 5f018d02bf56..df5ec095d813 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -4460,6 +4460,9 @@ lpfc_create_port(struct lpfc_hba *phba, int instance, struct device *dev)
 		vport->port_type = LPFC_PHYSICAL_PORT;
 	}
 
+	if (phba->hba_flag & HBA_USE_MANAGED_IRQ)
+		shost->use_managed_irq = 1;
+
 	lpfc_printf_log(phba, KERN_INFO, LOG_INIT | LOG_FCP,
 			"9081 CreatePort TMPLATE type %x TBLsize %d "
 			"SEGcnt %d/%d\n",
@@ -11563,6 +11566,7 @@ lpfc_sli4_enable_msix(struct lpfc_hba *phba)
 		goto vec_fail_out;
 	}
 	vectors = rc;
+	phba->hba_flag |= HBA_USE_MANAGED_IRQ;
 
 	/* Assign MSI-X vectors to interrupt handlers */
 	for (index = 0; index < vectors; index++) {
diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
index 4d4e9dbe5193..20747f8e7bc8 100644
--- a/drivers/scsi/megaraid/megaraid_sas_base.c
+++ b/drivers/scsi/megaraid/megaraid_sas_base.c
@@ -6909,6 +6909,9 @@ static int megasas_io_attach(struct megasas_instance *instance)
 		instance->iopoll_q_count = 0;
 	}
 
+	if (instance->smp_affinity_enable)
+		host->use_managed_irq = 1;
+
 	dev_info(&instance->pdev->dev,
 		"Max firmware commands: %d shared with default "
 		"hw_queues = %d poll_queues %d\n", instance->max_fw_cmds,
diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
index d00aca3c77ce..de9de343e819 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
@@ -12089,6 +12089,9 @@ _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		    shost->can_queue, shost->nr_hw_queues);
 	}
 
+	if (ioc->smp_affinity_enable)
+		shost->use_managed_irq = 1;
+
 	rv = scsi_add_host(shost, &pdev->dev);
 	if (rv) {
 		ioc_err(ioc, "failure at %s:%d/%s()!\n",
diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
index 6e8f737a4af3..036b5f178a7f 100644
--- a/drivers/scsi/qla2xxx/qla_isr.c
+++ b/drivers/scsi/qla2xxx/qla_isr.c
@@ -4000,11 +4000,14 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
 		ret = pci_alloc_irq_vectors(ha->pdev, min_vecs,
 		    min((u16)ha->msix_count, (u16)(num_online_cpus() + min_vecs)),
 		    PCI_IRQ_MSIX);
-	} else
+	} else {
 		ret = pci_alloc_irq_vectors_affinity(ha->pdev, min_vecs,
 		    min((u16)ha->msix_count, (u16)(num_online_cpus() + min_vecs)),
 		    PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
 		    &desc);
+		if (ret > 0)
+			vha->host->use_managed_irq = 1;
+	}
 
 	if (ret < 0) {
 		ql_log(ql_log_fatal, vha, 0x00c7,
diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
index 5db16509b6e1..b2f28ed7e568 100644
--- a/drivers/scsi/smartpqi/smartpqi_init.c
+++ b/drivers/scsi/smartpqi/smartpqi_init.c
@@ -6972,6 +6972,9 @@ static int pqi_register_scsi(struct pqi_ctrl_info *ctrl_info)
 	shost->host_tagset = 1;
 	shost->hostdata[0] = (unsigned long)ctrl_info;
 
+	if (ctrl_info->num_msix_vectors_enabled)
+		shost->use_managed_irq = 1;
+
 	rc = scsi_add_host(shost, &ctrl_info->pci_dev->dev);
 	if (rc) {
 		dev_err(&ctrl_info->pci_dev->dev, "scsi_add_host failed\n");
-- 
2.31.1


* [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device uses managed irq
From: Ming Lei @ 2021-07-02 15:05 UTC
  To: Jens Axboe, Christoph Hellwig, Martin K. Petersen, linux-block,
	linux-nvme, linux-scsi
  Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei,
	Michael S. Tsirkin, Jason Wang, virtualization

blk-mq needs to know whether the device uses managed irqs, so add one
field to virtio_device that records this.

If the driver uses managed irqs, this flag has to be set so the
information can be passed on to blk-mq.
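
A condensed view of when the new flag gets set (summary of the
virtio_pci_common.c hunk below): only the attempt that passes a
non-NULL irq_affinity descriptor yields managed vectors, so the flag
stays false on the shared-vector and INTx fallback paths:

	if (desc) {				/* affinity descriptor provided */
		flags |= PCI_IRQ_AFFINITY;	/* => managed vectors */
		vdev->use_managed_irq = true;
	}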

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/block/virtio_blk.c         | 2 ++
 drivers/scsi/virtio_scsi.c         | 1 +
 drivers/virtio/virtio_pci_common.c | 1 +
 include/linux/virtio.h             | 1 +
 4 files changed, 5 insertions(+)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index e4bd3b1fc3c2..33b9c80ac475 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -764,6 +764,8 @@ static int virtblk_probe(struct virtio_device *vdev)
 	vblk->tag_set.queue_depth = queue_depth;
 	vblk->tag_set.numa_node = NUMA_NO_NODE;
 	vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+	if (vdev->use_managed_irq)
+		vblk->tag_set.flags |= BLK_MQ_F_MANAGED_IRQ;
 	vblk->tag_set.cmd_size =
 		sizeof(struct virtblk_req) +
 		sizeof(struct scatterlist) * sg_elems;
diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index b9c86a7e3b97..f301917abc84 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -891,6 +891,7 @@ static int virtscsi_probe(struct virtio_device *vdev)
 	shost->max_channel = 0;
 	shost->max_cmd_len = VIRTIO_SCSI_CDB_SIZE;
 	shost->nr_hw_queues = num_queues;
+	shost->use_managed_irq = vdev->use_managed_irq;
 
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 	if (virtio_has_feature(vdev, VIRTIO_SCSI_F_T10_PI)) {
diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
index 222d630c41fc..f2ac48fb477b 100644
--- a/drivers/virtio/virtio_pci_common.c
+++ b/drivers/virtio/virtio_pci_common.c
@@ -128,6 +128,7 @@ static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
 	if (desc) {
 		flags |= PCI_IRQ_AFFINITY;
 		desc->pre_vectors++; /* virtio config vector */
+		vdev->use_managed_irq = true;
 	}
 
 	err = pci_alloc_irq_vectors_affinity(vp_dev->pci_dev, nvectors,
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index b1894e0323fa..85cc773b2dc7 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -109,6 +109,7 @@ struct virtio_device {
 	bool failed;
 	bool config_enabled;
 	bool config_change_pending;
+	bool use_managed_irq;
 	spinlock_t config_lock;
 	struct device dev;
 	struct virtio_device_id id;
-- 
2.31.1


* [PATCH V2 6/6] blk-mq: don't deactivate hctx if managed irq isn't used
From: Ming Lei @ 2021-07-02 15:05 UTC
  To: Jens Axboe, Christoph Hellwig, Martin K. Petersen, linux-block,
	linux-nvme, linux-scsi
  Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei

There is no need to deactivate a hctx when managed irqs aren't used,
because a non-managed irq can simply be migrated to an online CPU.

So don't register the cpu hotplug handler for CPUHP_AP_BLK_MQ_ONLINE
when managed irqs aren't used.

Given that BLK_MQ_F_MANAGED_IRQ is more generic and straightforward,
and that managed irqs are only used by underlying (non-stacking)
controllers, it fully covers the !BLK_MQ_F_STACKING case, so remove
BLK_MQ_F_STACKING.
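
A hedged summary of the two hotplug outcomes (the irq-core side is
paraphrased here, not part of this series):

	/*
	 * Last CPU in hctx->cpumask goes offline:
	 *  - non-managed irq: the irq core migrates it to a surviving
	 *    online CPU, completions keep arriving, no drain needed;
	 *  - managed irq: the vector is shut down instead, so blk-mq
	 *    must first drain the hctx via CPUHP_AP_BLK_MQ_ONLINE.
	 */
	if (hctx->flags & BLK_MQ_F_MANAGED_IRQ)
		cpuhp_state_add_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
						 &hctx->cpuhp_online);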

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-debugfs.c | 1 -
 block/blk-mq.c         | 4 ++--
 drivers/block/loop.c   | 2 +-
 drivers/md/dm-rq.c     | 2 +-
 include/linux/blk-mq.h | 5 -----
 5 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 17f57af3a4d6..3641a16c2910 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -245,7 +245,6 @@ static const char *const hctx_flag_name[] = {
 	HCTX_FLAG_NAME(TAG_QUEUE_SHARED),
 	HCTX_FLAG_NAME(BLOCKING),
 	HCTX_FLAG_NAME(NO_SCHED),
-	HCTX_FLAG_NAME(STACKING),
 	HCTX_FLAG_NAME(TAG_HCTX_SHARED),
 	HCTX_FLAG_NAME(MANAGED_IRQ),
 };
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 1d45d2922ca7..e672ebd93342 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2636,7 +2636,7 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
 
 static void blk_mq_remove_cpuhp(struct blk_mq_hw_ctx *hctx)
 {
-	if (!(hctx->flags & BLK_MQ_F_STACKING))
+	if (hctx->flags & BLK_MQ_F_MANAGED_IRQ)
 		cpuhp_state_remove_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
 						    &hctx->cpuhp_online);
 	cpuhp_state_remove_instance_nocalls(CPUHP_BLK_MQ_DEAD,
@@ -2731,7 +2731,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
 {
 	hctx->queue_num = hctx_idx;
 
-	if (!(hctx->flags & BLK_MQ_F_STACKING))
+	if (hctx->flags & BLK_MQ_F_MANAGED_IRQ)
 		cpuhp_state_add_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
 				&hctx->cpuhp_online);
 	cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD, &hctx->cpuhp_dead);
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index cc0e8c39a48b..02509bc54242 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -2268,7 +2268,7 @@ static int loop_add(struct loop_device **l, int i)
 	lo->tag_set.queue_depth = 128;
 	lo->tag_set.numa_node = NUMA_NO_NODE;
 	lo->tag_set.cmd_size = sizeof(struct loop_cmd);
-	lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_STACKING;
+	lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
 	lo->tag_set.driver_data = lo;
 
 	err = blk_mq_alloc_tag_set(&lo->tag_set);
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 0dbd48cbdff9..294d4858c067 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -540,7 +540,7 @@ int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
 	md->tag_set->ops = &dm_mq_ops;
 	md->tag_set->queue_depth = dm_get_blk_mq_queue_depth();
 	md->tag_set->numa_node = md->numa_node_id;
-	md->tag_set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_STACKING;
+	md->tag_set->flags = BLK_MQ_F_SHOULD_MERGE;
 	md->tag_set->nr_hw_queues = dm_get_blk_mq_nr_hw_queues();
 	md->tag_set->driver_data = md;
 
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 62fc0393cc3a..d75ab9fd64fc 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -397,11 +397,6 @@ struct blk_mq_ops {
 enum {
 	BLK_MQ_F_SHOULD_MERGE	= 1 << 0,
 	BLK_MQ_F_TAG_QUEUE_SHARED = 1 << 1,
-	/*
-	 * Set when this device requires underlying blk-mq device for
-	 * completing IO:
-	 */
-	BLK_MQ_F_STACKING	= 1 << 2,
 	BLK_MQ_F_TAG_HCTX_SHARED = 1 << 3,
 	BLK_MQ_F_MANAGED_IRQ	= 1 << 4,
 	BLK_MQ_F_BLOCKING	= 1 << 5,
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH V2 6/6] blk-mq: don't deactivate hctx if managed irq isn't used
@ 2021-07-02 15:05   ` Ming Lei
  0 siblings, 0 replies; 58+ messages in thread
From: Ming Lei @ 2021-07-02 15:05 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig, Martin K . Petersen, linux-block,
	linux-nvme, linux-scsi
  Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Ming Lei

No need to deactivate one hctx if managed irq isn't used because
non-managed irq can be migrated to online cpus.

So we don't need to register cpu hotplug handler for
CPUHP_AP_BLK_MQ_ONLINE if managed irq isn't used.

Given BLK_MQ_F_MANAGED_IRQ is more generic and straightforward, it covers
!BLK_MQ_F_STACKING perfectly because managed irq is only used in
underlying controller, so remove BLK_MQ_F_STACKING.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-debugfs.c | 1 -
 block/blk-mq.c         | 4 ++--
 drivers/block/loop.c   | 2 +-
 drivers/md/dm-rq.c     | 2 +-
 include/linux/blk-mq.h | 5 -----
 5 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 17f57af3a4d6..3641a16c2910 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -245,7 +245,6 @@ static const char *const hctx_flag_name[] = {
 	HCTX_FLAG_NAME(TAG_QUEUE_SHARED),
 	HCTX_FLAG_NAME(BLOCKING),
 	HCTX_FLAG_NAME(NO_SCHED),
-	HCTX_FLAG_NAME(STACKING),
 	HCTX_FLAG_NAME(TAG_HCTX_SHARED),
 	HCTX_FLAG_NAME(MANAGED_IRQ),
 };
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 1d45d2922ca7..e672ebd93342 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2636,7 +2636,7 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
 
 static void blk_mq_remove_cpuhp(struct blk_mq_hw_ctx *hctx)
 {
-	if (!(hctx->flags & BLK_MQ_F_STACKING))
+	if (hctx->flags & BLK_MQ_F_MANAGED_IRQ)
 		cpuhp_state_remove_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
 						    &hctx->cpuhp_online);
 	cpuhp_state_remove_instance_nocalls(CPUHP_BLK_MQ_DEAD,
@@ -2731,7 +2731,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
 {
 	hctx->queue_num = hctx_idx;
 
-	if (!(hctx->flags & BLK_MQ_F_STACKING))
+	if (hctx->flags & BLK_MQ_F_MANAGED_IRQ)
 		cpuhp_state_add_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
 				&hctx->cpuhp_online);
 	cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD, &hctx->cpuhp_dead);
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index cc0e8c39a48b..02509bc54242 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -2268,7 +2268,7 @@ static int loop_add(struct loop_device **l, int i)
 	lo->tag_set.queue_depth = 128;
 	lo->tag_set.numa_node = NUMA_NO_NODE;
 	lo->tag_set.cmd_size = sizeof(struct loop_cmd);
-	lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_STACKING;
+	lo->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
 	lo->tag_set.driver_data = lo;
 
 	err = blk_mq_alloc_tag_set(&lo->tag_set);
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 0dbd48cbdff9..294d4858c067 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -540,7 +540,7 @@ int dm_mq_init_request_queue(struct mapped_device *md, struct dm_table *t)
 	md->tag_set->ops = &dm_mq_ops;
 	md->tag_set->queue_depth = dm_get_blk_mq_queue_depth();
 	md->tag_set->numa_node = md->numa_node_id;
-	md->tag_set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_STACKING;
+	md->tag_set->flags = BLK_MQ_F_SHOULD_MERGE;
 	md->tag_set->nr_hw_queues = dm_get_blk_mq_nr_hw_queues();
 	md->tag_set->driver_data = md;
 
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 62fc0393cc3a..d75ab9fd64fc 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -397,11 +397,6 @@ struct blk_mq_ops {
 enum {
 	BLK_MQ_F_SHOULD_MERGE	= 1 << 0,
 	BLK_MQ_F_TAG_QUEUE_SHARED = 1 << 1,
-	/*
-	 * Set when this device requires underlying blk-mq device for
-	 * completing IO:
-	 */
-	BLK_MQ_F_STACKING	= 1 << 2,
 	BLK_MQ_F_TAG_HCTX_SHARED = 1 << 3,
 	BLK_MQ_F_MANAGED_IRQ	= 1 << 4,
 	BLK_MQ_F_BLOCKING	= 1 << 5,
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device uses managed irq
  2021-07-02 15:05   ` Ming Lei
@ 2021-07-02 15:55     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 58+ messages in thread
From: Michael S. Tsirkin @ 2021-07-02 15:55 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Christoph Hellwig, Martin K . Petersen, linux-block,
	linux-nvme, linux-scsi, Sagi Grimberg, Daniel Wagner, Wen Xiong,
	John Garry, Hannes Reinecke, Keith Busch, Damien Le Moal,
	Jason Wang, virtualization

On Fri, Jul 02, 2021 at 11:05:54PM +0800, Ming Lei wrote:
> blk-mq needs to know if the device uses managed irq, so add one field
> to virtio_device for recording if device uses managed irq.
> 
> If the driver uses managed irq, this flag has to be set so it can be
> passed to blk-mq.
> 
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Jason Wang <jasowang@redhat.com>
> Cc: virtualization@lists.linux-foundation.org
> Signed-off-by: Ming Lei <ming.lei@redhat.com>


The API seems somewhat confusing. virtio does not request
a managed irq as such, does it? I think it's a decision taken
by the irq core.

Any way to query the irq to find out if it's managed?


> ---
>  drivers/block/virtio_blk.c         | 2 ++
>  drivers/scsi/virtio_scsi.c         | 1 +
>  drivers/virtio/virtio_pci_common.c | 1 +
>  include/linux/virtio.h             | 1 +
>  4 files changed, 5 insertions(+)
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index e4bd3b1fc3c2..33b9c80ac475 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -764,6 +764,8 @@ static int virtblk_probe(struct virtio_device *vdev)
>  	vblk->tag_set.queue_depth = queue_depth;
>  	vblk->tag_set.numa_node = NUMA_NO_NODE;
>  	vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
> +	if (vdev->use_managed_irq)
> +		vblk->tag_set.flags |= BLK_MQ_F_MANAGED_IRQ;
>  	vblk->tag_set.cmd_size =
>  		sizeof(struct virtblk_req) +
>  		sizeof(struct scatterlist) * sg_elems;
> diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
> index b9c86a7e3b97..f301917abc84 100644
> --- a/drivers/scsi/virtio_scsi.c
> +++ b/drivers/scsi/virtio_scsi.c
> @@ -891,6 +891,7 @@ static int virtscsi_probe(struct virtio_device *vdev)
>  	shost->max_channel = 0;
>  	shost->max_cmd_len = VIRTIO_SCSI_CDB_SIZE;
>  	shost->nr_hw_queues = num_queues;
> +	shost->use_managed_irq = vdev->use_managed_irq;
>  
>  #ifdef CONFIG_BLK_DEV_INTEGRITY
>  	if (virtio_has_feature(vdev, VIRTIO_SCSI_F_T10_PI)) {
> diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
> index 222d630c41fc..f2ac48fb477b 100644
> --- a/drivers/virtio/virtio_pci_common.c
> +++ b/drivers/virtio/virtio_pci_common.c
> @@ -128,6 +128,7 @@ static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
>  	if (desc) {
>  		flags |= PCI_IRQ_AFFINITY;
>  		desc->pre_vectors++; /* virtio config vector */
> +		vdev->use_managed_irq = true;
>  	}
>  
>  	err = pci_alloc_irq_vectors_affinity(vp_dev->pci_dev, nvectors,
> diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> index b1894e0323fa..85cc773b2dc7 100644
> --- a/include/linux/virtio.h
> +++ b/include/linux/virtio.h
> @@ -109,6 +109,7 @@ struct virtio_device {
>  	bool failed;
>  	bool config_enabled;
>  	bool config_change_pending;
> +	bool use_managed_irq;
>  	spinlock_t config_lock;
>  	struct device dev;
>  	struct virtio_device_id id;
> -- 
> 2.31.1


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device uses managed irq
  2021-07-02 15:55     ` Michael S. Tsirkin
@ 2021-07-05  2:48       ` Ming Lei
  -1 siblings, 0 replies; 58+ messages in thread
From: Ming Lei @ 2021-07-05  2:48 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jens Axboe, Christoph Hellwig, Martin K . Petersen, linux-block,
	linux-nvme, linux-scsi, Sagi Grimberg, Daniel Wagner, Wen Xiong,
	John Garry, Hannes Reinecke, Keith Busch, Damien Le Moal,
	Jason Wang, virtualization

On Fri, Jul 02, 2021 at 11:55:40AM -0400, Michael S. Tsirkin wrote:
> On Fri, Jul 02, 2021 at 11:05:54PM +0800, Ming Lei wrote:
> > blk-mq needs to know if the device uses managed irq, so add one field
> > to virtio_device for recording if device uses managed irq.
> > 
> > If the driver uses managed irq, this flag has to be set so it can be
> > passed to blk-mq.
> > 
> > Cc: "Michael S. Tsirkin" <mst@redhat.com>
> > Cc: Jason Wang <jasowang@redhat.com>
> > Cc: virtualization@lists.linux-foundation.org
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> 
> 
> The API seems somewhat confusing. virtio does not request
> a managed irq as such, does it? I think it's a decision taken
> by the irq core.

vp_request_msix_vectors():

        if (desc) {
                flags |= PCI_IRQ_AFFINITY;
                desc->pre_vectors++; /* virtio config vector */
                vdev->use_managed_irq = true;
        }

        err = pci_alloc_irq_vectors_affinity(vp_dev->pci_dev, nvectors,
                                             nvectors, flags, desc);

Managed irq is used once PCI_IRQ_AFFINITY is passed to
pci_alloc_irq_vectors_affinity().

> 
> Any way to query the irq to find out if it's managed?

So far the managed property isn't exported via /proc/irq, but if an irq is
managed, its smp_affinity & smp_affinity_list attributes can't be changed
from userspace.

If irq debugfs is enabled, this info can be retrieved there.
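
For kernel code, a minimal sketch of such a check, using the existing irq
core helpers irq_get_irq_data() and irqd_affinity_is_managed() from
<linux/irq.h>:

	#include <linux/irq.h>

	/* true if the irq core manages this irq's affinity */
	static bool irq_is_affinity_managed(unsigned int irq)
	{
		struct irq_data *data = irq_get_irq_data(irq);

		return data && irqd_affinity_is_managed(data);
	}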


Thanks,
Ming


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device uses managed irq
@ 2021-07-05  2:48       ` Ming Lei
  0 siblings, 0 replies; 58+ messages in thread
From: Ming Lei @ 2021-07-05  2:48 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jens Axboe, Christoph Hellwig, Martin K . Petersen, linux-block,
	linux-nvme, linux-scsi, Sagi Grimberg, Daniel Wagner, Wen Xiong,
	John Garry, Hannes Reinecke, Keith Busch, Damien Le Moal,
	Jason Wang, virtualization

On Fri, Jul 02, 2021 at 11:55:40AM -0400, Michael S. Tsirkin wrote:
> On Fri, Jul 02, 2021 at 11:05:54PM +0800, Ming Lei wrote:
> > blk-mq needs to know if the device uses managed irq, so add one field
> > to virtio_device for recording if device uses managed irq.
> > 
> > If the driver use managed irq, this flag has to be set so it can be
> > passed to blk-mq.
> > 
> > Cc: "Michael S. Tsirkin" <mst@redhat.com>
> > Cc: Jason Wang <jasowang@redhat.com>
> > Cc: virtualization@lists.linux-foundation.org
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> 
> 
> The API seems somewhat confusing. virtio does not request
> a managed irq as such, does it? I think it's a decision taken
> by the irq core.

vp_request_msix_vectors():

        if (desc) {
                flags |= PCI_IRQ_AFFINITY;
                desc->pre_vectors++; /* virtio config vector */
                vdev->use_managed_irq = true;
        }

        err = pci_alloc_irq_vectors_affinity(vp_dev->pci_dev, nvectors,
                                             nvectors, flags, desc);

Managed irq is used once PCI_IRQ_AFFINITY is passed to
pci_alloc_irq_vectors_affinity().

> 
> Any way to query the irq to find out if it's managed?

So far the managed info isn't exported via /proc/irq, but if one irq is
managed, its smp_affinity & smp_affinity_list attributes can't be
changed successfully.

If irq's debugfs is enabled, this info can be retrieved.


Thanks,
Ming


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device uses managed irq
@ 2021-07-05  2:48       ` Ming Lei
  0 siblings, 0 replies; 58+ messages in thread
From: Ming Lei @ 2021-07-05  2:48 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jens Axboe, Damien Le Moal, John Garry, linux-scsi,
	Martin K . Petersen, Daniel Wagner, linux-nvme, virtualization,
	linux-block, Keith Busch, Wen Xiong, Christoph Hellwig,
	Sagi Grimberg

On Fri, Jul 02, 2021 at 11:55:40AM -0400, Michael S. Tsirkin wrote:
> On Fri, Jul 02, 2021 at 11:05:54PM +0800, Ming Lei wrote:
> > blk-mq needs to know if the device uses managed irq, so add one field
> > to virtio_device for recording if device uses managed irq.
> > 
> > If the driver use managed irq, this flag has to be set so it can be
> > passed to blk-mq.
> > 
> > Cc: "Michael S. Tsirkin" <mst@redhat.com>
> > Cc: Jason Wang <jasowang@redhat.com>
> > Cc: virtualization@lists.linux-foundation.org
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> 
> 
> The API seems somewhat confusing. virtio does not request
> a managed irq as such, does it? I think it's a decision taken
> by the irq core.

vp_request_msix_vectors():

        if (desc) {
                flags |= PCI_IRQ_AFFINITY;
                desc->pre_vectors++; /* virtio config vector */
                vdev->use_managed_irq = true;
        }

        err = pci_alloc_irq_vectors_affinity(vp_dev->pci_dev, nvectors,
                                             nvectors, flags, desc);

Managed irq is used once PCI_IRQ_AFFINITY is passed to
pci_alloc_irq_vectors_affinity().

> 
> Any way to query the irq to find out if it's managed?

So far the managed info isn't exported via /proc/irq, but if one irq is
managed, its smp_affinity & smp_affinity_list attributes can't be
changed successfully.

If irq's debugfs is enabled, this info can be retrieved.


Thanks,
Ming

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device uses managed irq
  2021-07-02 15:05   ` Ming Lei
@ 2021-07-05  3:59     ` Jason Wang
  -1 siblings, 0 replies; 58+ messages in thread
From: Jason Wang @ 2021-07-05  3:59 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, Christoph Hellwig, Martin K . Petersen,
	linux-block, linux-nvme, linux-scsi
  Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Michael S. Tsirkin,
	virtualization


On 2021/7/2 at 11:05 PM, Ming Lei wrote:
> blk-mq needs to know if the device uses managed irq, so add one field
> to virtio_device for recording if device uses managed irq.
>
> If the driver uses managed irq, this flag has to be set so it can be
> passed to blk-mq.
>
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Jason Wang <jasowang@redhat.com>
> Cc: virtualization@lists.linux-foundation.org
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>   drivers/block/virtio_blk.c         | 2 ++
>   drivers/scsi/virtio_scsi.c         | 1 +
>   drivers/virtio/virtio_pci_common.c | 1 +
>   include/linux/virtio.h             | 1 +
>   4 files changed, 5 insertions(+)
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index e4bd3b1fc3c2..33b9c80ac475 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -764,6 +764,8 @@ static int virtblk_probe(struct virtio_device *vdev)
>   	vblk->tag_set.queue_depth = queue_depth;
>   	vblk->tag_set.numa_node = NUMA_NO_NODE;
>   	vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
> +	if (vdev->use_managed_irq)
> +		vblk->tag_set.flags |= BLK_MQ_F_MANAGED_IRQ;


I'm not familiar with blk-mq, but the name is kind of confusing; I guess
"BLK_MQ_F_AFFINITY_MANAGED_IRQ" is better (consider that we already have
"IRQD_AFFINITY_MANAGED").

This helps distinguish it from devres (device-managed IRQ) at least.

Thanks



^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 4/6] scsi: set shost->use_managed_irq if driver uses managed irq
  2021-07-02 15:05   ` Ming Lei
@ 2021-07-05  7:35     ` John Garry
  -1 siblings, 0 replies; 58+ messages in thread
From: John Garry @ 2021-07-05  7:35 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, Christoph Hellwig, Martin K . Petersen,
	linux-block, linux-nvme, linux-scsi
  Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, Hannes Reinecke,
	Keith Busch, Damien Le Moal, Adaptec OEM Raid Solutions,
	Subbu Seetharaman, Don Brace, James Smart, Kashyap Desai,
	Sathya Prakash, Nilesh Javali

On 02/07/2021 16:05, Ming Lei wrote:
> Set shost->use_managed_irq if irq vectors are allocated via
> pci_alloc_irq_vectors_affinity(PCI_IRQ_AFFINITY) or
> pci_alloc_irq_vectors(PCI_IRQ_AFFINITY).
> 
> The rule is that the driver has to tell the scsi core if managed irqs are used.
> 
> Cc: Adaptec OEM Raid Solutions <aacraid@microsemi.com>
> Cc: Subbu Seetharaman <subbu.seetharaman@broadcom.com>
> Cc: John Garry <john.garry@huawei.com>
> Cc: Don Brace <don.brace@microchip.com>
> Cc: James Smart <james.smart@broadcom.com>
> Cc: Kashyap Desai <kashyap.desai@broadcom.com>
> Cc: Sathya Prakash <sathya.prakash@broadcom.com>
> Cc: Nilesh Javali <njavali@marvell.com>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>   drivers/scsi/aacraid/linit.c              | 3 +++
>   drivers/scsi/be2iscsi/be_main.c           | 3 +++
>   drivers/scsi/csiostor/csio_init.c         | 3 +++
>   drivers/scsi/hisi_sas/hisi_sas_v3_hw.c    | 1 +
>   drivers/scsi/hpsa.c                       | 3 +++
>   drivers/scsi/lpfc/lpfc.h                  | 1 +
>   drivers/scsi/lpfc/lpfc_init.c             | 4 ++++
>   drivers/scsi/megaraid/megaraid_sas_base.c | 3 +++
>   drivers/scsi/mpt3sas/mpt3sas_scsih.c      | 3 +++
>   drivers/scsi/qla2xxx/qla_isr.c            | 5 ++++-
>   drivers/scsi/smartpqi/smartpqi_init.c     | 3 +++
>   11 files changed, 31 insertions(+), 1 deletion(-)

Hi Ming,

hisi sas v2 hw is missing - it supports managed interrupts as a platform
device. Setting that flag in hisi_sas_v2_interrupt_preinit() might be best.
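
Something like this rough sketch, maybe (use_managed_irq being the flag
added by this series; the existing interrupt setup in that function is
elided):

	static int hisi_sas_v2_interrupt_preinit(struct hisi_hba *hisi_hba)
	{
		/* existing platform irq setup elided */
		hisi_hba->shost->use_managed_irq = 1;
		return 0;
	}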

Thanks,
John

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 3/6] scsi: add flag of .use_managed_irq to 'struct Scsi_Host'
  2021-07-02 15:05   ` Ming Lei
@ 2021-07-05  9:25     ` John Garry
  -1 siblings, 0 replies; 58+ messages in thread
From: John Garry @ 2021-07-05  9:25 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, Christoph Hellwig, Martin K . Petersen,
	linux-block, linux-nvme, linux-scsi
  Cc: Sagi Grimberg, Daniel Wagner, Wen Xiong, Hannes Reinecke,
	Keith Busch, Damien Le Moal

On 02/07/2021 16:05, Ming Lei wrote:
> blk-mq needs this information about managed irq usage for improving
> hctx deactivation, so add such a flag to 'struct Scsi_Host', then
> drivers can pass it to blk-mq via scsi_mq_setup_tags().
> 
> The rule is that the driver has to tell blk-mq if managed irqs are used.
> 
> Signed-off-by: Ming Lei <ming.lei@redhat.com>

As was said before, can we have something like this instead of relying 
on the LLDs to do the setting:

--------->8------------

diff --git a/block/blk-mq-pci.c b/block/blk-mq-pci.c
index b595a94c4d16..2037a5b69fe1 100644
--- a/block/blk-mq-pci.c
+++ b/block/blk-mq-pci.c
@@ -37,7 +37,7 @@ int blk_mq_pci_map_queues(struct blk_mq_queue_map *qmap, struct pci_dev *pdev,
  		for_each_cpu(cpu, mask)
  			qmap->mq_map[cpu] = qmap->queue_offset + queue;
  	}
-
+	qmap->drain_hwq = 1;
  	return 0;

  fallback:
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 46fe5b45a8b8..7b610e680e08 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3408,7 +3408,8 @@ static int blk_mq_update_queue_map(struct blk_mq_tag_set *set)
  		set->map[HCTX_TYPE_DEFAULT].nr_queues = set->nr_hw_queues;

  	if (set->ops->map_queues && !is_kdump_kernel()) {
-		int i;
+		struct blk_mq_queue_map *qmap = &set->map[HCTX_TYPE_DEFAULT];
+		int i, ret;

  		/*
  		 * transport .map_queues is usually done in the following
@@ -3427,7 +3428,12 @@ static int blk_mq_update_queue_map(struct blk_mq_tag_set *set)
  		for (i = 0; i < set->nr_maps; i++)
  			blk_mq_clear_mq_map(&set->map[i]);

-		return set->ops->map_queues(set);
+		ret = set->ops->map_queues(set);
+		if (ret)
+			return ret;
+		if (qmap->drain_hwq)
+			set->flags |= BLK_MQ_F_MANAGED_IRQ;
+		return 0;
  	} else {
  		BUG_ON(set->nr_maps > 1);
  		return blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 65ed99b3f2f0..2b2e71ff43e0 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -193,6 +193,7 @@ struct blk_mq_queue_map {
  	unsigned int *mq_map;
  	unsigned int nr_queues;
  	unsigned int queue_offset;
+	bool drain_hwq;
  };





^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 3/6] scsi: add flag of .use_managed_irq to 'struct Scsi_Host'
  2021-07-05  9:25     ` John Garry
@ 2021-07-05  9:55       ` Ming Lei
  -1 siblings, 0 replies; 58+ messages in thread
From: Ming Lei @ 2021-07-05  9:55 UTC (permalink / raw)
  To: John Garry
  Cc: Jens Axboe, Christoph Hellwig, Martin K . Petersen, linux-block,
	linux-nvme, linux-scsi, Sagi Grimberg, Daniel Wagner, Wen Xiong,
	Hannes Reinecke, Keith Busch, Damien Le Moal

On Mon, Jul 05, 2021 at 10:25:38AM +0100, John Garry wrote:
> On 02/07/2021 16:05, Ming Lei wrote:
> > blk-mq needs this information about managed irq usage for improving
> > hctx deactivation, so add such a flag to 'struct Scsi_Host', then
> > drivers can pass it to blk-mq via scsi_mq_setup_tags().
> > 
> > The rule is that the driver has to tell blk-mq if managed irqs are used.
> > 
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> 
> As was said before, can we have something like this instead of relying on
> the LLDs to do the setting:
> 
> --------->8------------
> 
> diff --git a/block/blk-mq-pci.c b/block/blk-mq-pci.c
> index b595a94c4d16..2037a5b69fe1 100644
> --- a/block/blk-mq-pci.c
> +++ b/block/blk-mq-pci.c
> @@ -37,7 +37,7 @@ int blk_mq_pci_map_queues(struct blk_mq_queue_map *qmap, struct pci_dev *pdev,
>  		for_each_cpu(cpu, mask)
>  			qmap->mq_map[cpu] = qmap->queue_offset + queue;
>  	}
> -
> +	qmap->drain_hwq = 1;

The thing is that blk_mq_pci_map_queues() is allowed to be called for
non-managed irqs, and some managed irq consumers don't use
blk_mq_pci_map_queues().

So this way only provides a hint about managed irq usage, but we really
need the flag to be set whenever the driver uses managed irqs.


Thanks,
Ming


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 3/6] scsi: add flag of .use_managed_irq to 'struct Scsi_Host'
  2021-07-05  9:55       ` Ming Lei
@ 2021-07-06  5:37         ` Christoph Hellwig
  -1 siblings, 0 replies; 58+ messages in thread
From: Christoph Hellwig @ 2021-07-06  5:37 UTC (permalink / raw)
  To: Ming Lei
  Cc: John Garry, Jens Axboe, Christoph Hellwig, Martin K . Petersen,
	linux-block, linux-nvme, linux-scsi, Sagi Grimberg,
	Daniel Wagner, Wen Xiong, Hannes Reinecke, Keith Busch,
	Damien Le Moal

On Mon, Jul 05, 2021 at 05:55:49PM +0800, Ming Lei wrote:
> The thing is that blk_mq_pci_map_queues() is allowed to be called for
> non-managed irqs, and some managed irq consumers don't use
> blk_mq_pci_map_queues().
> 
> So this way only provides a hint about managed irq usage, but we really
> need the flag to be set whenever the driver uses managed irqs.

blk_mq_pci_map_queues is absolutely intended to only be used by
managed irqs.  I wonder if we can enforce that somehow?
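
Something like the following rough sketch, maybe - warn when a vector
mapped by blk_mq_pci_map_queues() isn't affinity-managed, using the
existing pci_irq_vector(), irq_get_irq_data() and
irqd_affinity_is_managed() helpers:

	/* inside blk_mq_pci_map_queues(), for each mapped queue: */
	for (queue = 0; queue < qmap->nr_queues; queue++) {
		int irq = pci_irq_vector(pdev, queue + offset);
		struct irq_data *data = irq >= 0 ? irq_get_irq_data(irq) : NULL;

		WARN_ON_ONCE(!data || !irqd_affinity_is_managed(data));
	}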

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 4/6] scsi: set shost->use_managed_irq if driver uses managed irq
  2021-07-02 15:05   ` Ming Lei
@ 2021-07-06  5:38     ` Christoph Hellwig
  -1 siblings, 0 replies; 58+ messages in thread
From: Christoph Hellwig @ 2021-07-06  5:38 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Christoph Hellwig, Martin K . Petersen, linux-block,
	linux-nvme, linux-scsi, Sagi Grimberg, Daniel Wagner, Wen Xiong,
	John Garry, Hannes Reinecke, Keith Busch, Damien Le Moal,
	Adaptec OEM Raid Solutions, Subbu Seetharaman, Don Brace,
	James Smart, Kashyap Desai, Sathya Prakash, Nilesh Javali

Only megaraid_sas actually ever uses more than a single blk-mq
queue, so this looks weird?

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device uses managed irq
  2021-07-02 15:05   ` Ming Lei
@ 2021-07-06  5:42     ` Christoph Hellwig
  -1 siblings, 0 replies; 58+ messages in thread
From: Christoph Hellwig @ 2021-07-06  5:42 UTC (permalink / raw)
  To: Ming Lei
  Cc: Jens Axboe, Christoph Hellwig, Martin K . Petersen, linux-block,
	linux-nvme, linux-scsi, Sagi Grimberg, Daniel Wagner, Wen Xiong,
	John Garry, Hannes Reinecke, Keith Busch, Damien Le Moal,
	Michael S. Tsirkin, Jason Wang, virtualization, Thomas Gleixner

On Fri, Jul 02, 2021 at 11:05:54PM +0800, Ming Lei wrote:
> blk-mq needs to know if the device uses managed irq, so add one field
> to virtio_device for recording if device uses managed irq.
> 
> If the driver uses managed irq, this flag has to be set so it can be
> passed to blk-mq.

I don't think all this boilerplate code makes a whole lot of sense.
I think we need to record this information deep down in the irq code by
setting a flag in struct device only if pci_alloc_irq_vectors_affinity
actually managed to allocate multiple vectors and the PCI_IRQ_AFFINITY
flag was set.  Then blk-mq can look at that flag, check that more than
one queue is in use, and work based on that.
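
Roughly like this, as a sketch (the irq_affinity_managed field is
hypothetical, and how blk-mq gets at the right struct device is left
open here):

	/* set deep down in the PCI irq code after allocation: */
	if (nvecs > 1 && (flags & PCI_IRQ_AFFINITY))
		dev->dev.irq_affinity_managed = true;

	/* consumed by blk-mq, given the device behind the tag set: */
	static bool blk_mq_use_managed_irq(struct blk_mq_tag_set *set,
					   struct device *dev)
	{
		return dev->irq_affinity_managed && set->nr_hw_queues > 1;
	}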

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 3/6] scsi: add flag of .use_managed_irq to 'struct Scsi_Host'
  2021-07-06  5:37         ` Christoph Hellwig
@ 2021-07-06  7:41           ` Ming Lei
  -1 siblings, 0 replies; 58+ messages in thread
From: Ming Lei @ 2021-07-06  7:41 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: John Garry, Jens Axboe, Martin K . Petersen, linux-block,
	linux-nvme, linux-scsi, Sagi Grimberg, Daniel Wagner, Wen Xiong,
	Hannes Reinecke, Keith Busch, Damien Le Moal

On Tue, Jul 06, 2021 at 07:37:19AM +0200, Christoph Hellwig wrote:
> On Mon, Jul 05, 2021 at 05:55:49PM +0800, Ming Lei wrote:
> > The thing is that blk_mq_pci_map_queues() is allowed to be called for
> > non-managed irqs, and some managed irq consumers don't use
> > blk_mq_pci_map_queues().
> > 
> > So this way only provides a hint about managed irq usage, but we really
> > need the flag to be set whenever the driver uses managed irqs.
> 
> blk_mq_pci_map_queues is absolutely intended to only be used by
> managed irqs.  I wonder if we can enforce that somehow?

It may break some scsi drivers.

Also, blk_mq_pci_map_queues() just calls pci_irq_get_affinity() to
retrieve the irq's affinity, and the irq can be a non-managed irq whose
affinity is set via either irq_set_affinity_hint() from the kernel or
/proc/irq/.
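
For example, a typical non-managed setup only hints the affinity, and it
stays user-changeable via /proc/irq/<n>/smp_affinity:

	/* affinity is only a hint here, not managed by the irq core */
	irq_set_affinity_hint(irq, cpumask_of(cpu));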


Thanks,
Ming


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device uses managed irq
  2021-07-06  5:42     ` Christoph Hellwig
@ 2021-07-06  7:53       ` Ming Lei
  -1 siblings, 0 replies; 58+ messages in thread
From: Ming Lei @ 2021-07-06  7:53 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jens Axboe, Martin K . Petersen, linux-block, linux-nvme,
	linux-scsi, Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Michael S. Tsirkin,
	Jason Wang, virtualization, Thomas Gleixner

On Tue, Jul 06, 2021 at 07:42:03AM +0200, Christoph Hellwig wrote:
> On Fri, Jul 02, 2021 at 11:05:54PM +0800, Ming Lei wrote:
> > blk-mq needs to know if the device uses managed irq, so add one field
> > to virtio_device for recording if device uses managed irq.
> > 
> > If the driver uses managed irq, this flag has to be set so it can be
> > passed to blk-mq.
> 
> I don't think all this boilerplate code makes a whole lot of sense.
> I think we need to record this information deep down in the irq code by
> setting a flag in struct device only if pci_alloc_irq_vectors_affinity
> actually managed to allocate multiple vectors and the PCI_IRQ_AFFINITY
> flag was set.  Then blk-mq can look at that flag, check that more than
> one queue is in use, and work based on that.

How can blk-mq look at that flag? Usually blk-mq doesn't deal with the
physical device (HBA) directly.

So far almost all physical properties (segments, max_hw_sectors, ...) are
not passed to blk-mq directly, but through the queue limits abstraction.
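
For example, the usual pattern looks like this (illustrative values;
blk_queue_max_hw_sectors() and blk_queue_max_segments() are the existing
queue-limits helpers):

	/* drivers feed device capabilities to blk-mq via queue limits */
	blk_queue_max_hw_sectors(q, hba_max_hw_sectors);
	blk_queue_max_segments(q, hba_max_segments);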

Thanks,
Ming


^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 3/6] scsi: add flag of .use_managed_irq to 'struct Scsi_Host'
  2021-07-06  7:41           ` Ming Lei
@ 2021-07-06 10:32             ` Hannes Reinecke
  -1 siblings, 0 replies; 58+ messages in thread
From: Hannes Reinecke @ 2021-07-06 10:32 UTC (permalink / raw)
  To: Ming Lei, Christoph Hellwig
  Cc: John Garry, Jens Axboe, Martin K . Petersen, linux-block,
	linux-nvme, linux-scsi, Sagi Grimberg, Daniel Wagner, Wen Xiong,
	Keith Busch, Damien Le Moal

On 7/6/21 9:41 AM, Ming Lei wrote:
> On Tue, Jul 06, 2021 at 07:37:19AM +0200, Christoph Hellwig wrote:
>> On Mon, Jul 05, 2021 at 05:55:49PM +0800, Ming Lei wrote:
>>> The thing is that blk_mq_pci_map_queues() is allowed to be called for
>>> non-managed irqs, and some managed irq consumers don't use
>>> blk_mq_pci_map_queues().
>>>
>>> So this way only provides a hint about managed irq usage, but we really
>>> need the flag to be set whenever the driver uses managed irqs.
>>
>> blk_mq_pci_map_queues is absolutely intended to only be used by
>> managed irqs.  I wonder if we can enforce that somehow?
> 
> It may break some scsi drivers.
> 
> Also, blk_mq_pci_map_queues() just calls pci_irq_get_affinity() to
> retrieve the irq's affinity, and the irq can be a non-managed irq whose
> affinity is set via either irq_set_affinity_hint() from the kernel or
> /proc/irq/.
> 
But that's static, right? I.e. blk_mq_pci_map_queues() will be called once 
during module init; so what happens if the user changes the mapping 
later on? How will that be transferred to the driver?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device uses managed irq
  2021-07-06  5:42     ` Christoph Hellwig
@ 2021-07-07  9:06       ` Thomas Gleixner
  -1 siblings, 0 replies; 58+ messages in thread
From: Thomas Gleixner @ 2021-07-07  9:06 UTC (permalink / raw)
  To: Christoph Hellwig, Ming Lei
  Cc: Jens Axboe, Christoph Hellwig, Martin K . Petersen, linux-block,
	linux-nvme, linux-scsi, Sagi Grimberg, Daniel Wagner, Wen Xiong,
	John Garry, Hannes Reinecke, Keith Busch, Damien Le Moal,
	Michael S. Tsirkin, Jason Wang, virtualization

On Tue, Jul 06 2021 at 07:42, Christoph Hellwig wrote:
> On Fri, Jul 02, 2021 at 11:05:54PM +0800, Ming Lei wrote:
>> blk-mq needs to know if the device uses managed irqs, so add one field
>> to virtio_device to record whether the device uses managed irqs.
>> 
>> If the driver uses managed irqs, this flag has to be set so it can be
>> passed to blk-mq.
>
> I don't think all this boilerplate code makes a whole lot of sense.
> I think we need to record this information deep down in the irq code by
> setting a flag in struct device only if pci_alloc_irq_vectors_affinity
> actually managed to allocate multiple vectors and the PCI_IRQ_AFFINITY
> flag was set.  Then blk-mq can look at that flag, and also check that
> more than one queue is in use and work based on that.

Ack.
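
A minimal sketch of what Christoph proposes, assuming an invented field
name (irq_affinity_managed is not an existing struct device member):

	/* Hypothetical: irq_affinity_managed is an invented struct device
	 * member used only for this sketch. */
	static void pci_record_managed_irq(struct pci_dev *pdev, int nvecs,
					   unsigned int flags)
	{
		/* Record deep down in the irq code: only when affinity
		 * spreading was requested and more than one vector was
		 * actually allocated. */
		if (nvecs > 1 && (flags & PCI_IRQ_AFFINITY))
			pdev->dev.irq_affinity_managed = true;
	}

	/* Called from the tail of pci_alloc_irq_vectors_affinity():
	 *
	 *	nvecs = <existing MSI-X/MSI/legacy allocation>;
	 *	pci_record_managed_irq(dev, nvecs, flags);
	 *	return nvecs;
	 */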

^ permalink raw reply	[flat|nested] 58+ messages in thread


* Re: [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device uses managed irq
  2021-07-07  9:06       ` Thomas Gleixner
@ 2021-07-07  9:42         ` Ming Lei
  -1 siblings, 0 replies; 58+ messages in thread
From: Ming Lei @ 2021-07-07  9:42 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Christoph Hellwig, Jens Axboe, Martin K . Petersen, linux-block,
	linux-nvme, linux-scsi, Sagi Grimberg, Daniel Wagner, Wen Xiong,
	John Garry, Hannes Reinecke, Keith Busch, Damien Le Moal,
	Michael S. Tsirkin, Jason Wang, virtualization

On Wed, Jul 07, 2021 at 11:06:25AM +0200, Thomas Gleixner wrote:
> On Tue, Jul 06 2021 at 07:42, Christoph Hellwig wrote:
> > On Fri, Jul 02, 2021 at 11:05:54PM +0800, Ming Lei wrote:
> >> blk-mq needs to know if the device uses managed irqs, so add one field
> >> to virtio_device to record whether the device uses managed irqs.
> >> 
> >> If the driver uses managed irqs, this flag has to be set so it can be
> >> passed to blk-mq.
> >
> > I don't think all this boilerplate code makes a whole lot of sense.
> > I think we need to record this information deep down in the irq code by
> > setting a flag in struct device only if pci_alloc_irq_vectors_affinity
> > actually managed to allocate multiple vectors and the PCI_IRQ_AFFINITY
> > flag was set.  Then blk-mq can look at that flag, and also check that
> > more than one queue is in use and work based on that.
> 
> Ack.

The problem is how blk-mq can look at that flag, since the device
representing the controller which allocates the irq vectors isn't
visible to blk-mq.

Usually blk-mq (the block layer) provides a queue-limits abstraction for
drivers to pass any physical property (dma segment size, max sectors,
...) to the block layer; that is why this patch takes a very similar
approach to tell blk-mq whether the HBA uses managed irqs.


Thanks, 
Ming
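
For comparison, the shape of the driver-visible approach Ming describes,
sketched from the naming used in this series (BLK_MQ_F_MANAGED_IRQ and
shost->use_managed_irq); this is a paraphrase, not the literal patches:

	/* SCSI LLD (patch 3/6 style): the host advertises the property,
	 * just like a queue limit. */
	shost->use_managed_irq = 1;

	/* scsi_lib.c forwards it when building the tag set: */
	if (shost->use_managed_irq)
		shost->tag_set.flags |= BLK_MQ_F_MANAGED_IRQ;

	/* blk-mq then only drains/deactivates an hctx on cpu offline when
	 * the flag is set; without managed irqs the remaining online CPUs
	 * can still receive completions, so nothing needs draining. */
	if (!(hctx->flags & BLK_MQ_F_MANAGED_IRQ))
		return 0;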


^ permalink raw reply	[flat|nested] 58+ messages in thread


* Re: [PATCH V2 3/6] scsi: add flag of .use_managed_irq to 'struct Scsi_Host'
  2021-07-06 10:32             ` Hannes Reinecke
@ 2021-07-07 10:53               ` Ming Lei
  -1 siblings, 0 replies; 58+ messages in thread
From: Ming Lei @ 2021-07-07 10:53 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Christoph Hellwig, John Garry, Jens Axboe, Martin K . Petersen,
	linux-block, linux-nvme, linux-scsi, Sagi Grimberg,
	Daniel Wagner, Wen Xiong, Keith Busch, Damien Le Moal

On Tue, Jul 06, 2021 at 12:32:27PM +0200, Hannes Reinecke wrote:
> On 7/6/21 9:41 AM, Ming Lei wrote:
> > On Tue, Jul 06, 2021 at 07:37:19AM +0200, Christoph Hellwig wrote:
> > > On Mon, Jul 05, 2021 at 05:55:49PM +0800, Ming Lei wrote:
> > > > The thing is that blk_mq_pci_map_queues() is allowed to be called for
> > > > non-managed irqs. Also some managed irq consumers don't use blk_mq_pci_map_queues().
> > > > 
> > > > So this way just provides a hint about managed irq usage, but we really
> > > > need to get this flag set if the driver uses managed irqs.
> > > 
> > > blk_mq_pci_map_queues is absolutely intended to only be used by
> > > managed irqs.  I wonder if we can enforce that somehow?
> > 
> > It may break some scsi drivers.
> > 
> > And blk_mq_pci_map_queues() just calls pci_irq_get_affinity() to
> > retrieve the irq's affinity, and the irq can be a non-managed irq,
> > whose affinity is set via either irq_set_affinity_hint() from the
> > kernel or /proc/irq/.
> > 
> But that's static, right? I.e. blk_mq_pci_map_queues() will be called
> once during module init; so what happens if the user changes the mapping
> later on? How will that be transferred to the driver?

Yeah, that may not work well enough, but it still works since
non-managed irqs support migration.

And there are several SCSI drivers which provide a module parameter to
enable/disable managed irqs, while blk_mq_pci_map_queues() is always
called for mapping queues.


Thanks,
Ming
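
A sketch of the module-parameter pattern Ming mentions (modeled on
megaraid_sas's smp_affinity_enable parameter, simplified here): managed
irqs become optional, yet blk_mq_pci_map_queues() is used for the queue
mapping either way.

	static int smp_affinity_enable = 1;
	module_param(smp_affinity_enable, int, 0444);

	static int example_setup_irqs(struct pci_dev *pdev, int nvec)
	{
		unsigned int flags = PCI_IRQ_MSIX;

		/* Managed irqs only when the user asked for affinity
		 * spreading; otherwise plain MSI-X vectors. */
		if (smp_affinity_enable)
			flags |= PCI_IRQ_AFFINITY;

		return pci_alloc_irq_vectors(pdev, 1, nvec, flags);
	}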


^ permalink raw reply	[flat|nested] 58+ messages in thread


* Re: [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device uses managed irq
  2021-07-07  9:42         ` Ming Lei
@ 2021-07-07 14:05           ` Christoph Hellwig
  -1 siblings, 0 replies; 58+ messages in thread
From: Christoph Hellwig @ 2021-07-07 14:05 UTC (permalink / raw)
  To: Ming Lei
  Cc: Thomas Gleixner, Christoph Hellwig, Jens Axboe,
	Martin K . Petersen, linux-block, linux-nvme, linux-scsi,
	Sagi Grimberg, Daniel Wagner, Wen Xiong, John Garry,
	Hannes Reinecke, Keith Busch, Damien Le Moal, Michael S. Tsirkin,
	Jason Wang, virtualization

On Wed, Jul 07, 2021 at 05:42:54PM +0800, Ming Lei wrote:
> The problem is how blk-mq can look at that flag, since the device
> representing the controller which allocates the irq vectors isn't
> visible to blk-mq.

In blk_mq_pci_map_queues and similar helpers.
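
A sketch of how that could look, with both field names invented for
illustration (neither exists in the tree):

	int blk_mq_pci_map_queues(struct blk_mq_queue_map *qmap,
				  struct pci_dev *pdev, int offset)
	{
		/* Hypothetical: pick up the flag recorded by the irq code
		 * and propagate it into blk-mq. */
		if (pdev->dev.irq_affinity_managed)
			qmap->managed_irq = true;

		/* ... existing mapping loop unchanged ... */
		return 0;
	}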

^ permalink raw reply	[flat|nested] 58+ messages in thread


* Re: [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device uses managed irq
  2021-07-07 14:05           ` Christoph Hellwig
@ 2021-07-08  6:34             ` Ming Lei
  -1 siblings, 0 replies; 58+ messages in thread
From: Ming Lei @ 2021-07-08  6:34 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Thomas Gleixner, Jens Axboe, Martin K . Petersen, linux-block,
	linux-nvme, linux-scsi, Sagi Grimberg, Daniel Wagner, Wen Xiong,
	John Garry, Hannes Reinecke, Keith Busch, Damien Le Moal,
	Michael S. Tsirkin, Jason Wang, virtualization

On Wed, Jul 07, 2021 at 04:05:29PM +0200, Christoph Hellwig wrote:
> On Wed, Jul 07, 2021 at 05:42:54PM +0800, Ming Lei wrote:
> > The problem is how blk-mq can look at that flag, since the device
> > representing the controller which allocates the irq vectors isn't
> > visible to blk-mq.
> 
> In blk_mq_pci_map_queues and similar helpers.

Firstly, it depends on whether drivers call into these helpers, so this
way is fragile.

Secondly, I don't think it is good to expose specific physical devices
to blk-mq, which shouldn't deal with physical devices directly; also,
all three helpers just duplicate the same logic except for retrieving
each vector's affinity from the specific physical device.

I will think about how to clean them up.


Thanks,
Ming
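
One possible shape of such a cleanup, a hypothetical consolidation of
the pci/virtio/rdma helpers into a single mapper parameterized by an
affinity getter (all names invented for this sketch):

	typedef const struct cpumask *(*irq_get_affinity_fn)(void *data,
							     unsigned int vecidx);

	static void blk_mq_map_queues_by_affinity(struct blk_mq_queue_map *qmap,
						  irq_get_affinity_fn get_affinity,
						  void *data, int offset)
	{
		unsigned int queue, cpu;

		for (queue = 0; queue < qmap->nr_queues; queue++) {
			const struct cpumask *mask;

			/* The only bus-specific part: fetching the mask. */
			mask = get_affinity(data, queue + offset);
			if (!mask) {
				blk_mq_map_queues(qmap);	/* fallback */
				return;
			}
			for_each_cpu(cpu, mask)
				qmap->mq_map[cpu] = qmap->queue_offset + queue;
		}
	}

Each bus helper would then shrink to a thin wrapper passing
pci_irq_get_affinity() or its virtio/rdma counterpart.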


^ permalink raw reply	[flat|nested] 58+ messages in thread


end of thread, other threads:[~2021-07-08  6:35 UTC | newest]

Thread overview: 58+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-02 15:05 [PATCH V2 0/6] blk-mq: fix blk_mq_alloc_request_hctx Ming Lei
2021-07-02 15:05 ` [PATCH V2 1/6] blk-mq: prepare for not deactivating hctx if managed irq isn't used Ming Lei
2021-07-02 15:05 ` [PATCH V2 2/6] nvme: pci: pass BLK_MQ_F_MANAGED_IRQ to blk-mq Ming Lei
2021-07-02 15:05 ` [PATCH V2 3/6] scsi: add flag of .use_managed_irq to 'struct Scsi_Host' Ming Lei
2021-07-05  9:25   ` John Garry
2021-07-05  9:55     ` Ming Lei
2021-07-06  5:37       ` Christoph Hellwig
2021-07-06  7:41         ` Ming Lei
2021-07-06 10:32           ` Hannes Reinecke
2021-07-07 10:53             ` Ming Lei
2021-07-02 15:05 ` [PATCH V2 4/6] scsi: set shost->use_managed_irq if driver uses managed irq Ming Lei
2021-07-05  7:35   ` John Garry
2021-07-06  5:38   ` Christoph Hellwig
2021-07-02 15:05 ` [PATCH V2 5/6] virtio: add one field into virtio_device for recording if device " Ming Lei
2021-07-02 15:55   ` Michael S. Tsirkin
2021-07-05  2:48     ` Ming Lei
2021-07-05  3:59   ` Jason Wang
2021-07-06  5:42   ` Christoph Hellwig
2021-07-06  7:53     ` Ming Lei
2021-07-07  9:06     ` Thomas Gleixner
2021-07-07  9:42       ` Ming Lei
2021-07-07 14:05         ` Christoph Hellwig
2021-07-08  6:34           ` Ming Lei
2021-07-02 15:05 ` [PATCH V2 6/6] blk-mq: don't deactivate hctx if managed irq isn't used Ming Lei
