linux-block.vger.kernel.org archive mirror
* [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs
@ 2020-03-10 16:25 John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 01/24] scsi: add 'nr_reserved_cmds' field to the SCSI host template John Garry
                   ` (23 more replies)
  0 siblings, 24 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, John Garry

This is v2 of the original series from Hannes, which I'm hijacking for
the moment:
https://patchwork.kernel.org/cover/10967071/

And the background:

Quite a few drivers use internal commands for various purposes, most
commonly sending TMFs or querying the HBA status.
While these commands use the same submission mechanism as normal
I/O commands, they are not counted as outstanding commands, so those
drivers have to implement their own mechanism to track them.
This patchset enables the use of reserved tags in the SCSI midlayer,
allowing LLDDs to rely on the block layer for tracking outstanding
commands.
More importantly, it allows an LLDD to request a valid tag from the
block layer without having to implement its own tracking mechanism
within the driver. This removes quite a few hacks which were required
for some drivers (e.g. fnic or snic).
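
To illustrate the intended LLDD usage, a minimal sketch only
(my_lld_issue_lun_reset() and my_lld_send_tmf() are made-up placeholders;
scsi_get_reserved_cmd()/scsi_put_reserved_cmd() are the helpers added
later in this series):

#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

/* Sketch only: my_lld_send_tmf() stands in for the driver's own TMF path. */
static int my_lld_issue_lun_reset(struct Scsi_Host *shost,
				  struct scsi_device *sdev)
{
	struct scsi_cmnd *scmd;
	int err;

	/* Take a tag from the pool set aside via nr_reserved_cmds. */
	scmd = scsi_get_reserved_cmd(shost);
	if (!scmd)
		return FAILED;

	/* The reserved command carries a valid block layer tag. */
	err = my_lld_send_tmf(sdev, scmd->request->tag);

	scsi_put_reserved_cmd(scmd);
	return err ? FAILED : SUCCESS;
}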

Finally:

As far as I can see, this series has no outstanding dependencies and
could go into the kernel now. And we need it for [0].

Note: [0] also depends on [1]

[0] https://lore.kernel.org/linux-block/1583409280-158604-1-git-send-email-john.garry@huawei.com/T/#t
[1] https://lore.kernel.org/linux-block/20191014015043.25029-1-ming.lei@redhat.com/

Differences to v1:
- Make scsi_{get, put}_reserved_cmd() operate on the Scsi_Host
	- Previously we had separate scsi_{get, put}_reserved_cmd() for the
	  sdev and scsi_host_get_reserved_cmd() for the host
- Fix how Scsi_Host.can_queue is set in the virtio-scsi change
- Drop Scsi_Host.use_reserved_cmd_q
- Drop scsi_is_reserved_cmd()
	- It was unused
	- But keep blk_mq_rq_is_reserved()
- Add support in libsas and associated HBA drivers
	- Allocate reserved command in slow task
	- Switch hisi_sas to use reserved Scsi command
- Reorder the series a little
- Some tidying

Hannes Reinecke (22):
  scsi: add 'nr_reserved_cmds' field to the SCSI host template
  scsi: allocate separate queue for reserved commands
  blk-mq: Implement blk_mq_rq_is_reserved()
  scsi: Add scsi_{get, put}_reserved_cmd()
  csiostor: use reserved command for LUN reset
  scsi: add scsi_cmd_from_priv()
  virtio_scsi: use reserved commands for TMF
  scsi: add host tagset busy iterator
  fnic: use reserved commands
  fnic: use scsi_host_tagset_busy_iter() to traverse commands
  hpsa: move hpsa_hba_inquiry after scsi_add_host()
  hpsa: use reserved commands
  hpsa: use blk_mq_tagset_busy_iter() to traverse outstanding commands
  hpsa: drop refcount field from CommandList
  snic: use reserved commands
  snic: use tagset iter for traversing commands
  aacraid: move scsi_add_host()
  aacraid: use private commands
  aacraid: replace cmd_list with scsi_host_tagset_busy_iter()
  aacraid: use scsi_host_tagset_busy_iter() to traverse outstanding
    commands
  dpt_i2o: drop cmd_list usage
  scsi: drop scsi command list

John Garry (2):
  scsi: libsas: aic94xx: hisi_sas: mvsas: pm8001: Allocate Scsi_cmd for
    slow task
  scsi: hisi_sas: Use libsas slow task SCSI command

 block/blk-mq.c                         |  13 +
 drivers/scsi/aacraid/aachba.c          | 125 ++--
 drivers/scsi/aacraid/aacraid.h         |   6 +-
 drivers/scsi/aacraid/comminit.c        |  28 +-
 drivers/scsi/aacraid/commsup.c         | 134 ++--
 drivers/scsi/aacraid/linit.c           | 300 ++++----
 drivers/scsi/csiostor/csio_init.c      |   3 +-
 drivers/scsi/csiostor/csio_scsi.c      |  49 +-
 drivers/scsi/dpt_i2o.c                 |  23 +-
 drivers/scsi/fnic/fnic_scsi.c          | 917 +++++++++++--------------
 drivers/scsi/hisi_sas/hisi_sas_main.c  |  13 +-
 drivers/scsi/hisi_sas/hisi_sas_v3_hw.c |   5 +-
 drivers/scsi/hosts.c                   |  27 +
 drivers/scsi/hpsa.c                    | 313 ++++-----
 drivers/scsi/hpsa.h                    |   1 -
 drivers/scsi/hpsa_cmd.h                |   1 -
 drivers/scsi/libsas/sas_expander.c     |   3 +-
 drivers/scsi/libsas/sas_init.c         |  36 +-
 drivers/scsi/mvsas/mv_sas.c            |   2 +-
 drivers/scsi/pm8001/pm8001_hwi.c       |   4 +-
 drivers/scsi/pm8001/pm8001_sas.c       |   4 +-
 drivers/scsi/pm8001/pm80xx_hwi.c       |   4 +-
 drivers/scsi/scsi.c                    |   1 -
 drivers/scsi/scsi_lib.c                |  80 ++-
 drivers/scsi/scsi_scan.c               |   1 -
 drivers/scsi/snic/snic.h               |   2 +-
 drivers/scsi/snic/snic_main.c          |   2 +
 drivers/scsi/snic/snic_scsi.c          | 502 +++++++-------
 drivers/scsi/virtio_scsi.c             | 107 ++-
 include/linux/blk-mq.h                 |   2 +
 include/scsi/libsas.h                  |   4 +-
 include/scsi/scsi_cmnd.h               |  14 +-
 include/scsi/scsi_device.h             |   1 -
 include/scsi/scsi_host.h               |  14 +-
 34 files changed, 1346 insertions(+), 1395 deletions(-)

-- 
2.17.1



* [PATCH RFC v2 01/24] scsi: add 'nr_reserved_cmds' field to the SCSI host template
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 23:08   ` Ming Lei
  2020-03-10 16:25 ` [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands John Garry
                   ` (22 subsequent siblings)
  23 siblings, 1 reply; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Add a new field 'nr_reserved_cmds' to the SCSI host template to
instruct the block layer to set aside a tag space for reserved
commands.
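
As a rough sketch of how a driver would use this at probe time (the
values and my_lld_configure_host() are placeholders; the csiostor and
virtio_scsi conversions later in the series follow the same pattern of
carving the reserved commands out of can_queue before scsi_add_host()):

#include <scsi/scsi_host.h>

#define MY_HBA_QUEUE_DEPTH	256	/* placeholder HBA queue depth */
#define MY_HBA_NR_RESERVED	2	/* placeholder reserved command count */

static int my_lld_configure_host(struct Scsi_Host *shost, struct device *dev)
{
	/* Leave can_queue as the depth available for regular I/O ... */
	shost->can_queue = MY_HBA_QUEUE_DEPTH - MY_HBA_NR_RESERVED;
	/* ... and tell the midlayer how many tags to set aside. */
	shost->nr_reserved_cmds = MY_HBA_NR_RESERVED;

	return scsi_add_host(shost, dev);
}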

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/scsi_lib.c  | 1 +
 include/scsi/scsi_host.h | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 610ee41fa54c..2967325df7a0 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1896,6 +1896,7 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
 		shost->tag_set.ops = &scsi_mq_ops_no_commit;
 	shost->tag_set.nr_hw_queues = shost->nr_hw_queues ? : 1;
 	shost->tag_set.queue_depth = shost->can_queue;
+	shost->tag_set.reserved_tags = shost->nr_reserved_cmds;
 	shost->tag_set.cmd_size = cmd_size;
 	shost->tag_set.numa_node = NUMA_NO_NODE;
 	shost->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
index f577647bf5f2..3f860c8ad623 100644
--- a/include/scsi/scsi_host.h
+++ b/include/scsi/scsi_host.h
@@ -599,6 +599,12 @@ struct Scsi_Host {
 	 * is nr_hw_queues * can_queue.
 	 */
 	unsigned nr_hw_queues;
+
+	/*
+	 * Number of reserved commands, if any.
+	 */
+	unsigned nr_reserved_cmds;
+
 	unsigned active_mode:2;
 	unsigned unchecked_isa_dma:1;
 
-- 
2.17.1



* [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 01/24] scsi: add 'nr_reserved_cmds' field to the SCSI host template John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 18:32   ` Christoph Hellwig
  2020-03-10 16:25 ` [PATCH RFC v2 03/24] blk-mq: Implement blk_mq_rq_is_reserved() John Garry
                   ` (21 subsequent siblings)
  23 siblings, 1 reply; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Allocate a separate 'reserved_cmd_q' for sending reserved commands.

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/scsi_lib.c  | 17 ++++++++++++++++-
 include/scsi/scsi_host.h |  1 +
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 2967325df7a0..e809b0e30a11 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1881,6 +1881,7 @@ struct request_queue *scsi_mq_alloc_queue(struct scsi_device *sdev)
 int scsi_mq_setup_tags(struct Scsi_Host *shost)
 {
 	unsigned int cmd_size, sgl_size;
+	int ret;
 
 	sgl_size = max_t(unsigned int, sizeof(struct scatterlist),
 				scsi_mq_inline_sgl_size(shost));
@@ -1904,11 +1905,25 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
 		BLK_ALLOC_POLICY_TO_MQ_FLAG(shost->hostt->tag_alloc_policy);
 	shost->tag_set.driver_data = shost;
 
-	return blk_mq_alloc_tag_set(&shost->tag_set);
+	ret = blk_mq_alloc_tag_set(&shost->tag_set);
+	if (ret)
+		return ret;
+
+	if (shost->nr_reserved_cmds) {
+		shost->reserved_cmd_q = blk_mq_init_queue(&shost->tag_set);
+		if (IS_ERR(shost->reserved_cmd_q)) {
+			blk_mq_free_tag_set(&shost->tag_set);
+			ret = PTR_ERR(shost->reserved_cmd_q);
+			shost->reserved_cmd_q = NULL;
+		}
+	}
+	return ret;
 }
 
 void scsi_mq_destroy_tags(struct Scsi_Host *shost)
 {
+	if (shost->reserved_cmd_q)
+		blk_cleanup_queue(shost->reserved_cmd_q);
 	blk_mq_free_tag_set(&shost->tag_set);
 }
 
diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
index 3f860c8ad623..2258a4f7b4d8 100644
--- a/include/scsi/scsi_host.h
+++ b/include/scsi/scsi_host.h
@@ -604,6 +604,7 @@ struct Scsi_Host {
 	 * Number of reserved commands, if any.
 	 */
 	unsigned nr_reserved_cmds;
+	struct request_queue *reserved_cmd_q;
 
 	unsigned active_mode:2;
 	unsigned unchecked_isa_dma:1;
-- 
2.17.1



* [PATCH RFC v2 03/24] blk-mq: Implement blk_mq_rq_is_reserved()
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 01/24] scsi: add 'nr_reserved_cmds' field to the SCSI host template John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 04/24] scsi: Add scsi_{get, put}_reserved_cmd() John Garry
                   ` (20 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Add a function to check whether a blk-mq request was allocated from the
reserved tag pool.
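
A short sketch of one possible use in a completion path (assumption
only; my_lld_complete_internal() is a placeholder for however a driver
finishes its internal commands):

#include <linux/blk-mq.h>

static void my_lld_complete_rq(struct request *rq)
{
	/* Requests from the reserved pool are the driver's internal commands. */
	if (blk_mq_rq_is_reserved(rq)) {
		my_lld_complete_internal(blk_mq_rq_to_pdu(rq));
		return;
	}

	/* Everything else is regular I/O. */
	blk_mq_complete_request(rq);
}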

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 block/blk-mq.c         | 13 +++++++++++++
 include/linux/blk-mq.h |  2 ++
 2 files changed, 15 insertions(+)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index d92088dec6c3..603c60a1956c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -269,6 +269,19 @@ static inline bool blk_mq_need_time_stamp(struct request *rq)
 	return (rq->rq_flags & (RQF_IO_STAT | RQF_STATS)) || rq->q->elevator;
 }
 
+/**
+ * blk_mq_rq_is_reserved - Check if a request has a reserved tag
+ *
+ * @rq: Request to check
+ */
+bool blk_mq_rq_is_reserved(struct request *rq)
+{
+	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
+
+	return blk_mq_tag_is_reserved(hctx->tags, rq->tag);
+}
+EXPORT_SYMBOL_GPL(blk_mq_rq_is_reserved);
+
 static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 		unsigned int tag, unsigned int op, u64 alloc_time_ns)
 {
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 11cfd6470b1a..8bdc35ac4a63 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -548,6 +548,8 @@ static inline void *blk_mq_rq_to_pdu(struct request *rq)
 	return rq + 1;
 }
 
+bool blk_mq_rq_is_reserved(struct request *rq);
+
 #define queue_for_each_hw_ctx(q, hctx, i)				\
 	for ((i) = 0; (i) < (q)->nr_hw_queues &&			\
 	     ({ hctx = (q)->queue_hw_ctx[i]; 1; }); (i)++)
-- 
2.17.1



* [PATCH RFC v2 04/24] scsi: Add scsi_{get, put}_reserved_cmd()
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (2 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 03/24] blk-mq: Implement blk_mq_rq_is_reserved() John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 05/24] csiostor: use reserved command for LUN reset John Garry
                   ` (19 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Implement functions to allocate and free a SCSI command from the
reserved tag pool, using the host-wide reserved command queue.

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/scsi_lib.c  | 30 ++++++++++++++++++++++++++++++
 include/scsi/scsi_cmnd.h |  3 +++
 2 files changed, 33 insertions(+)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index e809b0e30a11..af4f56cd0093 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1927,6 +1927,36 @@ void scsi_mq_destroy_tags(struct Scsi_Host *shost)
 	blk_mq_free_tag_set(&shost->tag_set);
 }
 
+struct scsi_cmnd *scsi_get_reserved_cmd(struct Scsi_Host *shost)
+{
+	struct scsi_cmnd *scmd;
+	struct request *rq;
+
+	if (WARN_ON(!shost->reserved_cmd_q))
+		return NULL;
+
+	rq = blk_mq_alloc_request(shost->reserved_cmd_q,
+				  REQ_OP_DRV_OUT | REQ_NOWAIT,
+				  BLK_MQ_REQ_RESERVED);
+	if (IS_ERR(rq))
+		return NULL;
+	WARN_ON(rq->tag == -1);
+	scmd = blk_mq_rq_to_pdu(rq);
+	scmd->request = rq;
+
+	return scmd;
+}
+EXPORT_SYMBOL_GPL(scsi_get_reserved_cmd);
+
+void scsi_put_reserved_cmd(struct scsi_cmnd *scmd)
+{
+	struct request *rq = blk_mq_rq_from_pdu(scmd);
+
+	if (blk_mq_rq_is_reserved(rq))
+		blk_mq_free_request(rq);
+}
+EXPORT_SYMBOL_GPL(scsi_put_reserved_cmd);
+
 /**
  * scsi_device_from_queue - return sdev associated with a request_queue
  * @q: The request queue to return the sdev from
diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
index a2849bb9cd19..82a4499539b3 100644
--- a/include/scsi/scsi_cmnd.h
+++ b/include/scsi/scsi_cmnd.h
@@ -159,6 +159,9 @@ static inline struct scsi_driver *scsi_cmd_to_driver(struct scsi_cmnd *cmd)
 	return *(struct scsi_driver **)cmd->request->rq_disk->private_data;
 }
 
+struct scsi_cmnd *scsi_get_reserved_cmd(struct Scsi_Host *shost);
+void scsi_put_reserved_cmd(struct scsi_cmnd *);
+
 extern void scsi_put_command(struct scsi_cmnd *);
 extern void scsi_finish_command(struct scsi_cmnd *cmd);
 
-- 
2.17.1



* [PATCH RFC v2 05/24] csiostor: use reserved command for LUN reset
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (3 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 04/24] scsi: Add scsi_{get, put}_reserved_cmd() John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 06/24] scsi: add scsi_cmd_from_priv() John Garry
                   ` (18 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.de>

When issuing a LUN reset, we should use a reserved command to avoid
overwriting the original command.

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/csiostor/csio_init.c |  3 +-
 drivers/scsi/csiostor/csio_scsi.c | 49 +++++++++++++++++++------------
 2 files changed, 32 insertions(+), 20 deletions(-)

diff --git a/drivers/scsi/csiostor/csio_init.c b/drivers/scsi/csiostor/csio_init.c
index 8dea7d53788a..3848289485c9 100644
--- a/drivers/scsi/csiostor/csio_init.c
+++ b/drivers/scsi/csiostor/csio_init.c
@@ -621,7 +621,8 @@ csio_shost_init(struct csio_hw *hw, struct device *dev,
 	/* Link common lnode to this lnode */
 	ln->dev_num = (shost->host_no << 16);
 
-	shost->can_queue = CSIO_MAX_QUEUE;
+	shost->can_queue = CSIO_MAX_QUEUE - 1;
+	shost->nr_reserved_cmds = 1;
 	shost->this_id = -1;
 	shost->unique_id = shost->host_no;
 	shost->max_cmd_len = 16; /* Max CDB length supported */
diff --git a/drivers/scsi/csiostor/csio_scsi.c b/drivers/scsi/csiostor/csio_scsi.c
index 00cf33573136..9eacdb3d4f07 100644
--- a/drivers/scsi/csiostor/csio_scsi.c
+++ b/drivers/scsi/csiostor/csio_scsi.c
@@ -2057,10 +2057,13 @@ csio_tm_cbfn(struct csio_hw *hw, struct csio_ioreq *req)
 static int
 csio_eh_lun_reset_handler(struct scsi_cmnd *cmnd)
 {
-	struct csio_lnode *ln = shost_priv(cmnd->device->host);
+	struct scsi_cmnd *reset_cmnd;
+	struct scsi_device *sdev = cmnd->device;
+	struct Scsi_Host *shost = sdev->host;
+	struct csio_lnode *ln = shost_priv(sdev->host);
 	struct csio_hw *hw = csio_lnode_to_hw(ln);
 	struct csio_scsim *scsim = csio_hw_to_scsim(hw);
-	struct csio_rnode *rn = (struct csio_rnode *)(cmnd->device->hostdata);
+	struct csio_rnode *rn = (struct csio_rnode *)(sdev->hostdata);
 	struct csio_ioreq *ioreq = NULL;
 	struct csio_scsi_qset *sqset;
 	unsigned long flags;
@@ -2073,13 +2076,13 @@ csio_eh_lun_reset_handler(struct scsi_cmnd *cmnd)
 		goto fail;
 
 	csio_dbg(hw, "Request to reset LUN:%llu (ssni:0x%x tgtid:%d)\n",
-		      cmnd->device->lun, rn->flowid, rn->scsi_id);
+		      sdev->lun, rn->flowid, rn->scsi_id);
 
 	if (!csio_is_lnode_ready(ln)) {
 		csio_err(hw,
 			 "LUN reset cannot be issued on non-ready"
 			 " local node vnpi:0x%x (LUN:%llu)\n",
-			 ln->vnp_flowid, cmnd->device->lun);
+			 ln->vnp_flowid, sdev->lun);
 		goto fail;
 	}
 
@@ -2099,17 +2102,22 @@ csio_eh_lun_reset_handler(struct scsi_cmnd *cmnd)
 		csio_err(hw,
 			 "LUN reset cannot be issued on non-ready"
 			 " remote node ssni:0x%x (LUN:%llu)\n",
-			 rn->flowid, cmnd->device->lun);
+			 rn->flowid, sdev->lun);
 		goto fail;
 	}
 
+	reset_cmnd = scsi_get_reserved_cmd(shost);
+	if (!reset_cmnd) {
+		csio_err(hw, "No free TMF request\n");
+		goto fail;
+	}
 	/* Get a free ioreq structure - SM is already set to uninit */
 	ioreq = csio_get_scsi_ioreq_lock(hw, scsim);
 
 	if (!ioreq) {
 		csio_err(hw, "Out of IO request elements. Active # :%d\n",
 			 scsim->stats.n_active);
-		goto fail;
+		goto fail_ret_cmnd;
 	}
 
 	sqset			= &hw->sqset[ln->portid][smp_processor_id()];
@@ -2119,11 +2127,11 @@ csio_eh_lun_reset_handler(struct scsi_cmnd *cmnd)
 	ioreq->iq_idx		= sqset->iq_idx;
 	ioreq->eq_idx		= sqset->eq_idx;
 
-	csio_scsi_cmnd(ioreq)	= cmnd;
-	cmnd->host_scribble	= (unsigned char *)ioreq;
-	cmnd->SCp.Status	= 0;
+	csio_scsi_cmnd(ioreq)	= reset_cmnd;
+	reset_cmnd->host_scribble	= (unsigned char *)ioreq;
+	reset_cmnd->SCp.Status	= 0;
 
-	cmnd->SCp.Message	= FCP_TMF_LUN_RESET;
+	reset_cmnd->SCp.Message	= FCP_TMF_LUN_RESET;
 	ioreq->tmo		= CSIO_SCSI_LUNRST_TMO_MS / 1000;
 
 	/*
@@ -2140,7 +2148,7 @@ csio_eh_lun_reset_handler(struct scsi_cmnd *cmnd)
 	sld.level = CSIO_LEV_LUN;
 	sld.lnode = ioreq->lnode;
 	sld.rnode = ioreq->rnode;
-	sld.oslun = cmnd->device->lun;
+	sld.oslun = sdev->lun;
 
 	spin_lock_irqsave(&hw->lock, flags);
 	/* Kick off TM SM on the ioreq */
@@ -2156,14 +2164,14 @@ csio_eh_lun_reset_handler(struct scsi_cmnd *cmnd)
 	csio_dbg(hw, "Waiting max %d secs for LUN reset completion\n",
 		    count * (CSIO_SCSI_TM_POLL_MS / 1000));
 	/* Wait for completion */
-	while ((((struct scsi_cmnd *)csio_scsi_cmnd(ioreq)) == cmnd)
+	while ((((struct scsi_cmnd *)csio_scsi_cmnd(ioreq)) == reset_cmnd)
 								&& count--)
 		msleep(CSIO_SCSI_TM_POLL_MS);
 
 	/* LUN reset timed-out */
-	if (((struct scsi_cmnd *)csio_scsi_cmnd(ioreq)) == cmnd) {
+	if (((struct scsi_cmnd *)csio_scsi_cmnd(ioreq)) == reset_cmnd) {
 		csio_err(hw, "LUN reset (%d:%llu) timed out\n",
-			 cmnd->device->id, cmnd->device->lun);
+			 sdev->id, sdev->lun);
 
 		spin_lock_irq(&hw->lock);
 		csio_scsi_drvcleanup(ioreq);
@@ -2174,11 +2182,12 @@ csio_eh_lun_reset_handler(struct scsi_cmnd *cmnd)
 	}
 
 	/* LUN reset returned, check cached status */
-	if (cmnd->SCp.Status != FW_SUCCESS) {
+	if (reset_cmnd->SCp.Status != FW_SUCCESS) {
 		csio_err(hw, "LUN reset failed (%d:%llu), status: %d\n",
-			 cmnd->device->id, cmnd->device->lun, cmnd->SCp.Status);
-		goto fail;
+			 sdev->id, sdev->lun, reset_cmnd->SCp.Status);
+		goto fail_ret_cmnd;
 	}
+	scsi_put_reserved_cmd(reset_cmnd);
 
 	/* LUN reset succeeded, Start aborting affected I/Os */
 	/*
@@ -2196,7 +2205,7 @@ csio_eh_lun_reset_handler(struct scsi_cmnd *cmnd)
 	if (retval != 0) {
 		csio_err(hw,
 			 "Attempt to abort I/Os during LUN reset of %llu"
-			 " returned %d\n", cmnd->device->lun, retval);
+			 " returned %d\n", sdev->lun, retval);
 		/* Return I/Os back to active_q */
 		spin_lock_irq(&hw->lock);
 		list_splice_tail_init(&local_q, &scsim->active_q);
@@ -2207,12 +2216,14 @@ csio_eh_lun_reset_handler(struct scsi_cmnd *cmnd)
 	CSIO_INC_STATS(rn, n_lun_rst);
 
 	csio_info(hw, "LUN reset occurred (%d:%llu)\n",
-		  cmnd->device->id, cmnd->device->lun);
+		  sdev->id, sdev->lun);
 
 	return SUCCESS;
 
 fail_ret_ioreq:
 	csio_put_scsi_ioreq_lock(hw, scsim, ioreq);
+fail_ret_cmnd:
+	scsi_put_reserved_cmd(reset_cmnd);
 fail:
 	CSIO_INC_STATS(rn, n_lun_rst_fail);
 	return FAILED;
-- 
2.17.1



* [PATCH RFC v2 06/24] scsi: add scsi_cmd_from_priv()
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (4 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 05/24] csiostor: use reserved command for LUN reset John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 07/24] virtio_scsi: use reserved commands for TMF John Garry
                   ` (17 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Add a function to retrieve the scsi_cmnd structure from the
driver-private allocation data.
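
A minimal sketch of the round trip this enables (struct my_lld_cmd is a
placeholder for the per-command private data an LLDD declares via the
host template's cmd_size; the virtio_scsi conversion later in the series
relies on the same conversion):

#include <scsi/scsi_cmnd.h>

struct my_lld_cmd {
	int status;		/* placeholder driver state */
};

static void my_lld_finish(struct my_lld_cmd *cmd)
{
	/* Recover the owning scsi_cmnd from the private data ... */
	struct scsi_cmnd *sc = scsi_cmd_from_priv(cmd);

	/* ... which is the inverse of the existing scsi_cmd_priv(sc). */
	sc->scsi_done(sc);
}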

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 include/scsi/scsi_cmnd.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
index 82a4499539b3..c73280b3706d 100644
--- a/include/scsi/scsi_cmnd.h
+++ b/include/scsi/scsi_cmnd.h
@@ -153,6 +153,16 @@ static inline void *scsi_cmd_priv(struct scsi_cmnd *cmd)
 	return cmd + 1;
 }
 
+/*
+ * Return the scsi_cmnd structure located before the driver
+ * private allocation. Only works if cmd_size is set in the
+ * host template.
+ */
+static inline struct scsi_cmnd *scsi_cmd_from_priv(void *priv)
+{
+	return priv - sizeof(struct scsi_cmnd);
+}
+
 /* make sure not to use it with passthrough commands */
 static inline struct scsi_driver *scsi_cmd_to_driver(struct scsi_cmnd *cmd)
 {
-- 
2.17.1



* [PATCH RFC v2 07/24] virtio_scsi: use reserved commands for TMF
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (5 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 06/24] scsi: add scsi_cmd_from_priv() John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 08/24] scsi: add host tagset busy iterator John Garry
                   ` (16 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Set two commands aside for TMFs, and use reserved commands to issue
them. With that we can drop the TMF memory pool.

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/virtio_scsi.c | 107 +++++++++++++++----------------------
 1 file changed, 43 insertions(+), 64 deletions(-)

diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index bfec84aacd90..05c772be2c9c 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -35,10 +35,10 @@
 #define VIRTIO_SCSI_MEMPOOL_SZ 64
 #define VIRTIO_SCSI_EVENT_LEN 8
 #define VIRTIO_SCSI_VQ_BASE 2
+#define VIRTIO_SCSI_RESERVED_CMDS 2
 
 /* Command queue element */
 struct virtio_scsi_cmd {
-	struct scsi_cmnd *sc;
 	struct completion *comp;
 	union {
 		struct virtio_scsi_cmd_req       cmd;
@@ -86,9 +86,6 @@ struct virtio_scsi {
 	struct virtio_scsi_vq req_vqs[];
 };
 
-static struct kmem_cache *virtscsi_cmd_cache;
-static mempool_t *virtscsi_cmd_pool;
-
 static inline struct Scsi_Host *virtio_scsi_host(struct virtio_device *vdev)
 {
 	return vdev->priv;
@@ -108,7 +105,7 @@ static void virtscsi_compute_resid(struct scsi_cmnd *sc, u32 resid)
 static void virtscsi_complete_cmd(struct virtio_scsi *vscsi, void *buf)
 {
 	struct virtio_scsi_cmd *cmd = buf;
-	struct scsi_cmnd *sc = cmd->sc;
+	struct scsi_cmnd *sc = scsi_cmd_from_priv(cmd);
 	struct virtio_scsi_cmd_resp *resp = &cmd->resp.cmd;
 
 	dev_dbg(&sc->device->sdev_gendev,
@@ -406,7 +403,7 @@ static int __virtscsi_add_cmd(struct virtqueue *vq,
 			    struct virtio_scsi_cmd *cmd,
 			    size_t req_size, size_t resp_size)
 {
-	struct scsi_cmnd *sc = cmd->sc;
+	struct scsi_cmnd *sc = scsi_cmd_from_priv(cmd);
 	struct scatterlist *sgs[6], req, resp;
 	struct sg_table *out, *in;
 	unsigned out_num = 0, in_num = 0;
@@ -557,8 +554,6 @@ static int virtscsi_queuecommand(struct Scsi_Host *shost,
 	dev_dbg(&sc->device->sdev_gendev,
 		"cmd %p CDB: %#02x\n", sc, sc->cmnd[0]);
 
-	cmd->sc = sc;
-
 	BUG_ON(sc->cmd_len > VIRTIO_SCSI_CDB_SIZE);
 
 #ifdef CONFIG_BLK_DEV_INTEGRITY
@@ -590,17 +585,17 @@ static int virtscsi_queuecommand(struct Scsi_Host *shost,
 static int virtscsi_tmf(struct virtio_scsi *vscsi, struct virtio_scsi_cmd *cmd)
 {
 	DECLARE_COMPLETION_ONSTACK(comp);
-	int ret = FAILED;
 
 	cmd->comp = &comp;
+
 	if (virtscsi_add_cmd(&vscsi->ctrl_vq, cmd,
 			      sizeof cmd->req.tmf, sizeof cmd->resp.tmf, true) < 0)
-		goto out;
+		return FAILED;
 
 	wait_for_completion(&comp);
 	if (cmd->resp.tmf.response == VIRTIO_SCSI_S_OK ||
 	    cmd->resp.tmf.response == VIRTIO_SCSI_S_FUNCTION_SUCCEEDED)
-		ret = SUCCESS;
+		return SUCCESS;
 
 	/*
 	 * The spec guarantees that all requests related to the TMF have
@@ -613,33 +608,37 @@ static int virtscsi_tmf(struct virtio_scsi *vscsi, struct virtio_scsi_cmd *cmd)
 	 * REQ_ATOM_COMPLETE has been set.
 	 */
 	virtscsi_poll_requests(vscsi);
-
-out:
-	mempool_free(cmd, virtscsi_cmd_pool);
-	return ret;
+	return FAILED;
 }
 
 static int virtscsi_device_reset(struct scsi_cmnd *sc)
 {
+	struct scsi_device *sdev = sc->device;
+	struct Scsi_Host *shost = sdev->host;
+	struct scsi_cmnd *reset_sc;
 	struct virtio_scsi *vscsi = shost_priv(sc->device->host);
 	struct virtio_scsi_cmd *cmd;
+	int rc;
 
-	sdev_printk(KERN_INFO, sc->device, "device reset\n");
-	cmd = mempool_alloc(virtscsi_cmd_pool, GFP_NOIO);
-	if (!cmd)
+	sdev_printk(KERN_INFO, sdev, "device reset\n");
+	reset_sc = scsi_get_reserved_cmd(shost);
+	if (!reset_sc)
 		return FAILED;
-
+	cmd = scsi_cmd_priv(reset_sc);
 	memset(cmd, 0, sizeof(*cmd));
 	cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){
 		.type = VIRTIO_SCSI_T_TMF,
 		.subtype = cpu_to_virtio32(vscsi->vdev,
 					     VIRTIO_SCSI_T_TMF_LOGICAL_UNIT_RESET),
 		.lun[0] = 1,
-		.lun[1] = sc->device->id,
-		.lun[2] = (sc->device->lun >> 8) | 0x40,
-		.lun[3] = sc->device->lun & 0xff,
+		.lun[1] = sdev->id,
+		.lun[2] = (sdev->lun >> 8) | 0x40,
+		.lun[3] = sdev->lun & 0xff,
 	};
-	return virtscsi_tmf(vscsi, cmd);
+	rc = virtscsi_tmf(vscsi, cmd);
+	scsi_put_reserved_cmd(reset_sc);
+
+	return rc;
 }
 
 static int virtscsi_device_alloc(struct scsi_device *sdevice)
@@ -679,25 +678,32 @@ static int virtscsi_change_queue_depth(struct scsi_device *sdev, int qdepth)
 
 static int virtscsi_abort(struct scsi_cmnd *sc)
 {
-	struct virtio_scsi *vscsi = shost_priv(sc->device->host);
+	struct scsi_device *sdev = sc->device;
+	struct Scsi_Host *shost = sdev->host;
+	struct scsi_cmnd *reset_sc;
+	struct virtio_scsi *vscsi = shost_priv(sdev->host);
 	struct virtio_scsi_cmd *cmd;
+	int rc;
 
 	scmd_printk(KERN_INFO, sc, "abort\n");
-	cmd = mempool_alloc(virtscsi_cmd_pool, GFP_NOIO);
-	if (!cmd)
+	reset_sc = scsi_get_reserved_cmd(shost);
+	if (!reset_sc)
 		return FAILED;
+	cmd = scsi_cmd_priv(reset_sc);
 
 	memset(cmd, 0, sizeof(*cmd));
 	cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){
 		.type = VIRTIO_SCSI_T_TMF,
 		.subtype = VIRTIO_SCSI_T_TMF_ABORT_TASK,
 		.lun[0] = 1,
-		.lun[1] = sc->device->id,
-		.lun[2] = (sc->device->lun >> 8) | 0x40,
-		.lun[3] = sc->device->lun & 0xff,
+		.lun[1] = sdev->id,
+		.lun[2] = (sdev->lun >> 8) | 0x40,
+		.lun[3] = sdev->lun & 0xff,
 		.tag = cpu_to_virtio64(vscsi->vdev, (unsigned long)sc),
 	};
-	return virtscsi_tmf(vscsi, cmd);
+	rc = virtscsi_tmf(vscsi, cmd);
+	scsi_put_reserved_cmd(reset_sc);
+	return rc;
 }
 
 static int virtscsi_map_queues(struct Scsi_Host *shost)
@@ -866,6 +872,11 @@ static int virtscsi_probe(struct virtio_device *vdev)
 		goto virtscsi_init_failed;
 
 	shost->can_queue = virtqueue_get_vring_size(vscsi->req_vqs[0].vq);
+	shost->can_queue -= VIRTIO_SCSI_RESERVED_CMDS;
+	if (shost->can_queue <= 0) {
+		err = -ENOMEM;
+		goto scsi_add_host_failed;
+	}
 
 	cmd_per_lun = virtscsi_config_get(vdev, cmd_per_lun) ?: 1;
 	shost->cmd_per_lun = min_t(u32, cmd_per_lun, shost->can_queue);
@@ -879,6 +890,7 @@ static int virtscsi_probe(struct virtio_device *vdev)
 	shost->max_channel = 0;
 	shost->max_cmd_len = VIRTIO_SCSI_CDB_SIZE;
 	shost->nr_hw_queues = num_queues;
+	shost->nr_reserved_cmds = VIRTIO_SCSI_RESERVED_CMDS;
 
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 	if (virtio_has_feature(vdev, VIRTIO_SCSI_F_T10_PI)) {
@@ -980,45 +992,12 @@ static struct virtio_driver virtio_scsi_driver = {
 
 static int __init init(void)
 {
-	int ret = -ENOMEM;
-
-	virtscsi_cmd_cache = KMEM_CACHE(virtio_scsi_cmd, 0);
-	if (!virtscsi_cmd_cache) {
-		pr_err("kmem_cache_create() for virtscsi_cmd_cache failed\n");
-		goto error;
-	}
-
-
-	virtscsi_cmd_pool =
-		mempool_create_slab_pool(VIRTIO_SCSI_MEMPOOL_SZ,
-					 virtscsi_cmd_cache);
-	if (!virtscsi_cmd_pool) {
-		pr_err("mempool_create() for virtscsi_cmd_pool failed\n");
-		goto error;
-	}
-	ret = register_virtio_driver(&virtio_scsi_driver);
-	if (ret < 0)
-		goto error;
-
-	return 0;
-
-error:
-	if (virtscsi_cmd_pool) {
-		mempool_destroy(virtscsi_cmd_pool);
-		virtscsi_cmd_pool = NULL;
-	}
-	if (virtscsi_cmd_cache) {
-		kmem_cache_destroy(virtscsi_cmd_cache);
-		virtscsi_cmd_cache = NULL;
-	}
-	return ret;
+	return register_virtio_driver(&virtio_scsi_driver);
 }
 
 static void __exit fini(void)
 {
 	unregister_virtio_driver(&virtio_scsi_driver);
-	mempool_destroy(virtscsi_cmd_pool);
-	kmem_cache_destroy(virtscsi_cmd_cache);
 }
 module_init(init);
 module_exit(fini);
-- 
2.17.1



* [PATCH RFC v2 08/24] scsi: add host tagset busy iterator
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (6 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 07/24] virtio_scsi: use reserved commands for TMF John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 09/24] fnic: use reserved commands John Garry
                   ` (15 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.de>

Add an iterator scsi_host_tagset_busy_iter() to easily traverse all
busy commands. No locking is done here; this needs to be ensured by
the caller.
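
A minimal sketch of the calling convention (my_lld_* names are
placeholders; the fnic and hpsa conversions later in the series follow
the same shape):

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

/* Count busy commands on a host, skipping the reserved ones. */
static bool my_lld_count_iter(struct scsi_cmnd *sc, void *priv, bool reserved)
{
	unsigned int *busy = priv;

	if (!reserved)
		(*busy)++;

	/* Returning false would stop the walk early. */
	return true;
}

static unsigned int my_lld_count_busy(struct Scsi_Host *shost)
{
	unsigned int busy = 0;

	scsi_host_tagset_busy_iter(shost, my_lld_count_iter, &busy);
	return busy;
}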

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/hosts.c     | 27 +++++++++++++++++++++++++++
 include/scsi/scsi_host.h |  5 +++++
 2 files changed, 32 insertions(+)

diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
index 1d669e47b692..97704a630f8e 100644
--- a/drivers/scsi/hosts.c
+++ b/drivers/scsi/hosts.c
@@ -650,3 +650,30 @@ void scsi_flush_work(struct Scsi_Host *shost)
 	flush_workqueue(shost->work_q);
 }
 EXPORT_SYMBOL_GPL(scsi_flush_work);
+
+struct scsi_host_tagset_busy_iter_data {
+	scsi_host_busy_iter_fn *fn;
+	void *priv;
+};
+
+static bool scsi_host_tagset_busy_iter_fn(struct request *req, void *priv,
+					  bool reserved)
+{
+	struct scsi_host_tagset_busy_iter_data *iter_data = priv;
+	struct scsi_cmnd *sc = blk_mq_rq_to_pdu(req);
+
+	return iter_data->fn(sc, iter_data->priv, reserved);
+}
+
+void scsi_host_tagset_busy_iter(struct Scsi_Host *shost,
+				scsi_host_busy_iter_fn *fn, void *priv)
+{
+	struct scsi_host_tagset_busy_iter_data iter_data = {
+		.fn = fn,
+		.priv = priv,
+	};
+
+	blk_mq_tagset_busy_iter(&shost->tag_set, scsi_host_tagset_busy_iter_fn,
+				&iter_data);
+}
+EXPORT_SYMBOL_GPL(scsi_host_tagset_busy_iter);
diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
index 2258a4f7b4d8..663aef22437a 100644
--- a/include/scsi/scsi_host.h
+++ b/include/scsi/scsi_host.h
@@ -767,6 +767,11 @@ static inline int scsi_host_scan_allowed(struct Scsi_Host *shost)
 extern void scsi_unblock_requests(struct Scsi_Host *);
 extern void scsi_block_requests(struct Scsi_Host *);
 
+typedef bool (scsi_host_busy_iter_fn)(struct scsi_cmnd *, void *, bool);
+
+void scsi_host_tagset_busy_iter(struct Scsi_Host *,
+				scsi_host_busy_iter_fn *fn, void *priv);
+
 struct class_container;
 
 /*
-- 
2.17.1



* [PATCH RFC v2 09/24] fnic: use reserved commands
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (7 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 08/24] scsi: add host tagset busy iterator John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 10/24] fnic: use scsi_host_tagset_busy_iter() to traverse commands John Garry
                   ` (14 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Remove the hack for getting a tag for the reset command by using
reserved commands instead.

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/fnic/fnic_scsi.c | 377 ++++++++++++++++------------------
 1 file changed, 172 insertions(+), 205 deletions(-)

diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c
index b60795893994..e96c4bbf2374 100644
--- a/drivers/scsi/fnic/fnic_scsi.c
+++ b/drivers/scsi/fnic/fnic_scsi.c
@@ -2118,165 +2118,174 @@ static inline int fnic_queue_dr_io_req(struct fnic *fnic,
 	return ret;
 }
 
-/*
- * Clean up any pending aborts on the lun
- * For each outstanding IO on this lun, whose abort is not completed by fw,
- * issue a local abort. Wait for abort to complete. Return 0 if all commands
- * successfully aborted, 1 otherwise
- */
-static int fnic_clean_pending_aborts(struct fnic *fnic,
-				     struct scsi_cmnd *lr_sc,
-					 bool new_sc)
+struct fnic_pending_aborts_iter_data {
+	struct fnic *fnic;
+	struct scsi_device *lun_dev;
+	int ret;
+};
 
+static bool fnic_pending_aborts_iter(struct scsi_cmnd *sc,
+				     void *data, bool reserved)
 {
-	int tag, abt_tag;
+	struct fnic_pending_aborts_iter_data *iter_data = data;
+	struct fnic *fnic = iter_data->fnic;
+	struct scsi_device *lun_dev = iter_data->lun_dev;
+	int abt_tag = sc->request->tag;
 	struct fnic_io_req *io_req;
 	spinlock_t *io_lock;
 	unsigned long flags;
-	int ret = 0;
-	struct scsi_cmnd *sc;
 	struct scsi_lun fc_lun;
-	struct scsi_device *lun_dev = lr_sc->device;
 	DECLARE_COMPLETION_ONSTACK(tm_done);
 	enum fnic_ioreq_state old_ioreq_state;
 
-	for (tag = 0; tag < fnic->fnic_max_tag_id; tag++) {
-		io_lock = fnic_io_lock_tag(fnic, tag);
-		spin_lock_irqsave(io_lock, flags);
-		sc = scsi_host_find_tag(fnic->lport->host, tag);
-		/*
-		 * ignore this lun reset cmd if issued using new SC
-		 * or cmds that do not belong to this lun
-		 */
-		if (!sc || ((sc == lr_sc) && new_sc) || sc->device != lun_dev) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
-
-		io_req = (struct fnic_io_req *)CMD_SP(sc);
+	if (sc->device != lun_dev)
+		return true;
+	if (reserved)
+		return true;
 
-		if (!io_req || sc->device != lun_dev) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
-
-		/*
-		 * Found IO that is still pending with firmware and
-		 * belongs to the LUN that we are resetting
-		 */
-		FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
-			      "Found IO in %s on lun\n",
-			      fnic_ioreq_state_to_str(CMD_STATE(sc)));
-
-		if (CMD_STATE(sc) == FNIC_IOREQ_ABTS_PENDING) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
-		if ((CMD_FLAGS(sc) & FNIC_DEVICE_RESET) &&
-			(!(CMD_FLAGS(sc) & FNIC_DEV_RST_ISSUED))) {
-			FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
-				"%s dev rst not pending sc 0x%p\n", __func__,
-				sc);
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
+	io_lock = fnic_io_lock_tag(fnic, abt_tag);
+	spin_lock_irqsave(io_lock, flags);
+	io_req = (struct fnic_io_req *)CMD_SP(sc);
+	if (!io_req) {
+		spin_unlock_irqrestore(io_lock, flags);
+		return true;
+	}
 
-		if (io_req->abts_done)
-			shost_printk(KERN_ERR, fnic->lport->host,
-			  "%s: io_req->abts_done is set state is %s\n",
-			  __func__, fnic_ioreq_state_to_str(CMD_STATE(sc)));
-		old_ioreq_state = CMD_STATE(sc);
-		/*
-		 * Any pending IO issued prior to reset is expected to be
-		 * in abts pending state, if not we need to set
-		 * FNIC_IOREQ_ABTS_PENDING to indicate the IO is abort pending.
-		 * When IO is completed, the IO will be handed over and
-		 * handled in this function.
-		 */
-		CMD_STATE(sc) = FNIC_IOREQ_ABTS_PENDING;
+	/*
+	 * Found IO that is still pending with firmware and
+	 * belongs to the LUN that we are resetting
+	 */
+	FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
+		      "Found IO in %s on lun\n",
+		      fnic_ioreq_state_to_str(CMD_STATE(sc)));
 
-		BUG_ON(io_req->abts_done);
+	if (CMD_STATE(sc) == FNIC_IOREQ_ABTS_PENDING) {
+		spin_unlock_irqrestore(io_lock, flags);
+		return true;
+	}
+	if ((CMD_FLAGS(sc) & FNIC_DEVICE_RESET) &&
+	    (!(CMD_FLAGS(sc) & FNIC_DEV_RST_ISSUED))) {
+		FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
+			      "%s dev rst not pending sc 0x%p\n", __func__,
+			      sc);
+		spin_unlock_irqrestore(io_lock, flags);
+		return true;
+	}
 
-		abt_tag = tag;
-		if (CMD_FLAGS(sc) & FNIC_DEVICE_RESET) {
-			abt_tag |= FNIC_TAG_DEV_RST;
-			FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
-				  "%s: dev rst sc 0x%p\n", __func__, sc);
-		}
+	if (io_req->abts_done)
+		shost_printk(KERN_ERR, fnic->lport->host,
+			     "%s: io_req->abts_done is set state is %s\n",
+			     __func__, fnic_ioreq_state_to_str(CMD_STATE(sc)));
+	old_ioreq_state = CMD_STATE(sc);
+	/*
+	 * Any pending IO issued prior to reset is expected to be
+	 * in abts pending state, if not we need to set
+	 * FNIC_IOREQ_ABTS_PENDING to indicate the IO is abort pending.
+	 * When IO is completed, the IO will be handed over and
+	 * handled in this function.
+	 */
+	CMD_STATE(sc) = FNIC_IOREQ_ABTS_PENDING;
 
-		CMD_ABTS_STATUS(sc) = FCPIO_INVALID_CODE;
-		io_req->abts_done = &tm_done;
-		spin_unlock_irqrestore(io_lock, flags);
+	BUG_ON(io_req->abts_done);
 
-		/* Now queue the abort command to firmware */
-		int_to_scsilun(sc->device->lun, &fc_lun);
+	if (CMD_FLAGS(sc) & FNIC_DEVICE_RESET) {
+		abt_tag |= FNIC_TAG_DEV_RST;
+		FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
+			      "%s: dev rst sc 0x%p\n", __func__, sc);
+	}
 
-		if (fnic_queue_abort_io_req(fnic, abt_tag,
-					    FCPIO_ITMF_ABT_TASK_TERM,
-					    fc_lun.scsi_lun, io_req)) {
-			spin_lock_irqsave(io_lock, flags);
-			io_req = (struct fnic_io_req *)CMD_SP(sc);
-			if (io_req)
-				io_req->abts_done = NULL;
-			if (CMD_STATE(sc) == FNIC_IOREQ_ABTS_PENDING)
-				CMD_STATE(sc) = old_ioreq_state;
-			spin_unlock_irqrestore(io_lock, flags);
-			ret = 1;
-			goto clean_pending_aborts_end;
-		} else {
-			spin_lock_irqsave(io_lock, flags);
-			if (CMD_FLAGS(sc) & FNIC_DEVICE_RESET)
-				CMD_FLAGS(sc) |= FNIC_DEV_RST_TERM_ISSUED;
-			spin_unlock_irqrestore(io_lock, flags);
-		}
-		CMD_FLAGS(sc) |= FNIC_IO_INTERNAL_TERM_ISSUED;
+	CMD_ABTS_STATUS(sc) = FCPIO_INVALID_CODE;
+	io_req->abts_done = &tm_done;
+	spin_unlock_irqrestore(io_lock, flags);
 
-		wait_for_completion_timeout(&tm_done,
-					    msecs_to_jiffies
-					    (fnic->config.ed_tov));
+	/* Now queue the abort command to firmware */
+	int_to_scsilun(sc->device->lun, &fc_lun);
 
-		/* Recheck cmd state to check if it is now aborted */
+	if (fnic_queue_abort_io_req(fnic, abt_tag,
+				    FCPIO_ITMF_ABT_TASK_TERM,
+				    fc_lun.scsi_lun, io_req)) {
 		spin_lock_irqsave(io_lock, flags);
 		io_req = (struct fnic_io_req *)CMD_SP(sc);
-		if (!io_req) {
-			spin_unlock_irqrestore(io_lock, flags);
-			CMD_FLAGS(sc) |= FNIC_IO_ABT_TERM_REQ_NULL;
-			continue;
-		}
+		if (io_req)
+			io_req->abts_done = NULL;
+		if (CMD_STATE(sc) == FNIC_IOREQ_ABTS_PENDING)
+			CMD_STATE(sc) = old_ioreq_state;
+		spin_unlock_irqrestore(io_lock, flags);
+		return false;
+	} else {
+		spin_lock_irqsave(io_lock, flags);
+		if (CMD_FLAGS(sc) & FNIC_DEVICE_RESET)
+			CMD_FLAGS(sc) |= FNIC_DEV_RST_TERM_ISSUED;
+		spin_unlock_irqrestore(io_lock, flags);
+	}
+	CMD_FLAGS(sc) |= FNIC_IO_INTERNAL_TERM_ISSUED;
 
-		io_req->abts_done = NULL;
+	wait_for_completion_timeout(&tm_done, msecs_to_jiffies
+				    (fnic->config.ed_tov));
 
-		/* if abort is still pending with fw, fail */
-		if (CMD_ABTS_STATUS(sc) == FCPIO_INVALID_CODE) {
-			spin_unlock_irqrestore(io_lock, flags);
-			CMD_FLAGS(sc) |= FNIC_IO_ABT_TERM_DONE;
-			ret = 1;
-			goto clean_pending_aborts_end;
-		}
-		CMD_STATE(sc) = FNIC_IOREQ_ABTS_COMPLETE;
+	/* Recheck cmd state to check if it is now aborted */
+	spin_lock_irqsave(io_lock, flags);
+	io_req = (struct fnic_io_req *)CMD_SP(sc);
+	if (!io_req) {
+		spin_unlock_irqrestore(io_lock, flags);
+		CMD_FLAGS(sc) |= FNIC_IO_ABT_TERM_REQ_NULL;
+		return true;
+	}
 
-		/* original sc used for lr is handled by dev reset code */
-		if (sc != lr_sc)
-			CMD_SP(sc) = NULL;
+	io_req->abts_done = NULL;
+
+	/* if abort is still pending with fw, fail */
+	if (CMD_ABTS_STATUS(sc) == FCPIO_INVALID_CODE) {
 		spin_unlock_irqrestore(io_lock, flags);
+		CMD_FLAGS(sc) |= FNIC_IO_ABT_TERM_DONE;
+		iter_data->ret = FAILED;
+		return false;
+	}
+	CMD_STATE(sc) = FNIC_IOREQ_ABTS_COMPLETE;
 
-		/* original sc used for lr is handled by dev reset code */
-		if (sc != lr_sc) {
-			fnic_release_ioreq_buf(fnic, io_req, sc);
-			mempool_free(io_req, fnic->io_req_pool);
-		}
+	/* original sc used for lr is handled by dev reset code */
+	CMD_SP(sc) = NULL;
+	spin_unlock_irqrestore(io_lock, flags);
 
-		/*
-		 * Any IO is returned during reset, it needs to call scsi_done
-		 * to return the scsi_cmnd to upper layer.
-		 */
-		if (sc->scsi_done) {
-			/* Set result to let upper SCSI layer retry */
-			sc->result = DID_RESET << 16;
-			sc->scsi_done(sc);
-		}
+	/* original sc used for lr is handled by dev reset code */
+	fnic_release_ioreq_buf(fnic, io_req, sc);
+	mempool_free(io_req, fnic->io_req_pool);
+
+	/*
+	 * Any IO is returned during reset, it needs to call scsi_done
+	 * to return the scsi_cmnd to upper layer.
+	 */
+	if (sc->scsi_done) {
+		/* Set result to let upper SCSI layer retry */
+		sc->result = DID_RESET << 16;
+		sc->scsi_done(sc);
 	}
+	return true;
+}
+
+/*
+ * Clean up any pending aborts on the lun
+ * For each outstanding IO on this lun, whose abort is not completed by fw,
+ * issue a local abort. Wait for abort to complete. Return 0 if all commands
+ * successfully aborted, 1 otherwise
+ */
+static int fnic_clean_pending_aborts(struct fnic *fnic,
+				     struct scsi_cmnd *lr_sc)
 
+{
+	int ret = SUCCESS;
+	struct fnic_pending_aborts_iter_data iter_data = {
+		.fnic = fnic,
+		.lun_dev = lr_sc->device,
+		.ret = SUCCESS,
+	};
+
+	scsi_host_tagset_busy_iter(fnic->lport->host,
+				fnic_pending_aborts_iter, &iter_data);
+	if (iter_data.ret == FAILED) {
+		ret = iter_data.ret;
+		goto clean_pending_aborts_end;
+	}
 	schedule_timeout(msecs_to_jiffies(2 * fnic->config.ed_tov));
 
 	/* walk again to check, if IOs are still pending in fw */
@@ -2287,38 +2296,6 @@ static int fnic_clean_pending_aborts(struct fnic *fnic,
 	return ret;
 }
 
-/**
- * fnic_scsi_host_start_tag
- * Allocates tagid from host's tag list
- **/
-static inline int
-fnic_scsi_host_start_tag(struct fnic *fnic, struct scsi_cmnd *sc)
-{
-	struct request_queue *q = sc->request->q;
-	struct request *dummy;
-
-	dummy = blk_mq_alloc_request(q, REQ_OP_WRITE, BLK_MQ_REQ_NOWAIT);
-	if (IS_ERR(dummy))
-		return SCSI_NO_TAG;
-
-	sc->tag = sc->request->tag = dummy->tag;
-	sc->host_scribble = (unsigned char *)dummy;
-
-	return dummy->tag;
-}
-
-/**
- * fnic_scsi_host_end_tag
- * frees tag allocated by fnic_scsi_host_start_tag.
- **/
-static inline void
-fnic_scsi_host_end_tag(struct fnic *fnic, struct scsi_cmnd *sc)
-{
-	struct request *dummy = (struct request *)sc->host_scribble;
-
-	blk_mq_free_request(dummy);
-}
-
 /*
  * SCSI Eh thread issues a Lun Reset when one or more commands on a LUN
  * fail to get aborted. It calls driver's eh_device_reset with a SCSI command
@@ -2340,8 +2317,8 @@ int fnic_device_reset(struct scsi_cmnd *sc)
 	struct reset_stats *reset_stats;
 	int tag = 0;
 	DECLARE_COMPLETION_ONSTACK(tm_done);
-	int tag_gen_flag = 0;   /*to track tags allocated by fnic driver*/
-	bool new_sc = 0;
+	struct scsi_cmnd *reset_sc = NULL;
+	struct Scsi_Host *shost = sc->device->host;
 
 	/* Wait for rport to unblock */
 	fc_block_scsi_eh(sc);
@@ -2369,24 +2346,15 @@ int fnic_device_reset(struct scsi_cmnd *sc)
 		goto fnic_device_reset_end;
 	}
 
-	CMD_FLAGS(sc) = FNIC_DEVICE_RESET;
-	/* Allocate tag if not present */
+	reset_sc = scsi_get_reserved_cmd(shost);
+	if (unlikely(!reset_sc))
+		goto fnic_device_reset_end;
 
-	tag = sc->request->tag;
-	if (unlikely(tag < 0)) {
-		/*
-		 * Really should fix the midlayer to pass in a proper
-		 * request for ioctls...
-		 */
-		tag = fnic_scsi_host_start_tag(fnic, sc);
-		if (unlikely(tag == SCSI_NO_TAG))
-			goto fnic_device_reset_end;
-		tag_gen_flag = 1;
-		new_sc = 1;
-	}
-	io_lock = fnic_io_lock_hash(fnic, sc);
+	CMD_FLAGS(reset_sc) = FNIC_DEVICE_RESET;
+	tag = reset_sc->request->tag;
+	io_lock = fnic_io_lock_hash(fnic, reset_sc);
 	spin_lock_irqsave(io_lock, flags);
-	io_req = (struct fnic_io_req *)CMD_SP(sc);
+	io_req = (struct fnic_io_req *)CMD_SP(reset_sc);
 
 	/*
 	 * If there is a io_req attached to this command, then use it,
@@ -2400,11 +2368,11 @@ int fnic_device_reset(struct scsi_cmnd *sc)
 		}
 		memset(io_req, 0, sizeof(*io_req));
 		io_req->port_id = rport->port_id;
-		CMD_SP(sc) = (char *)io_req;
+		CMD_SP(reset_sc) = (char *)io_req;
 	}
 	io_req->dr_done = &tm_done;
-	CMD_STATE(sc) = FNIC_IOREQ_CMD_PENDING;
-	CMD_LR_STATUS(sc) = FCPIO_INVALID_CODE;
+	CMD_STATE(reset_sc) = FNIC_IOREQ_CMD_PENDING;
+	CMD_LR_STATUS(reset_sc) = FCPIO_INVALID_CODE;
 	spin_unlock_irqrestore(io_lock, flags);
 
 	FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, "TAG %x\n", tag);
@@ -2413,15 +2381,15 @@ int fnic_device_reset(struct scsi_cmnd *sc)
 	 * issue the device reset, if enqueue failed, clean up the ioreq
 	 * and break assoc with scsi cmd
 	 */
-	if (fnic_queue_dr_io_req(fnic, sc, io_req)) {
+	if (fnic_queue_dr_io_req(fnic, reset_sc, io_req)) {
 		spin_lock_irqsave(io_lock, flags);
-		io_req = (struct fnic_io_req *)CMD_SP(sc);
+		io_req = (struct fnic_io_req *)CMD_SP(reset_sc);
 		if (io_req)
 			io_req->dr_done = NULL;
 		goto fnic_device_reset_clean;
 	}
 	spin_lock_irqsave(io_lock, flags);
-	CMD_FLAGS(sc) |= FNIC_DEV_RST_ISSUED;
+	CMD_FLAGS(reset_sc) |= FNIC_DEV_RST_ISSUED;
 	spin_unlock_irqrestore(io_lock, flags);
 
 	/*
@@ -2432,16 +2400,16 @@ int fnic_device_reset(struct scsi_cmnd *sc)
 				    msecs_to_jiffies(FNIC_LUN_RESET_TIMEOUT));
 
 	spin_lock_irqsave(io_lock, flags);
-	io_req = (struct fnic_io_req *)CMD_SP(sc);
+	io_req = (struct fnic_io_req *)CMD_SP(reset_sc);
 	if (!io_req) {
 		spin_unlock_irqrestore(io_lock, flags);
 		FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
-				"io_req is null tag 0x%x sc 0x%p\n", tag, sc);
+				"io_req is null tag 0x%x sc 0x%p\n", tag, reset_sc);
 		goto fnic_device_reset_end;
 	}
 	io_req->dr_done = NULL;
 
-	status = CMD_LR_STATUS(sc);
+	status = CMD_LR_STATUS(reset_sc);
 
 	/*
 	 * If lun reset not completed, bail out with failed. io_req
@@ -2451,7 +2419,7 @@ int fnic_device_reset(struct scsi_cmnd *sc)
 		atomic64_inc(&reset_stats->device_reset_timeouts);
 		FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
 			      "Device reset timed out\n");
-		CMD_FLAGS(sc) |= FNIC_DEV_RST_TIMED_OUT;
+		CMD_FLAGS(reset_sc) |= FNIC_DEV_RST_TIMED_OUT;
 		spin_unlock_irqrestore(io_lock, flags);
 		int_to_scsilun(sc->device->lun, &fc_lun);
 		/*
@@ -2460,7 +2428,7 @@ int fnic_device_reset(struct scsi_cmnd *sc)
 		 */
 		while (1) {
 			spin_lock_irqsave(io_lock, flags);
-			if (CMD_FLAGS(sc) & FNIC_DEV_RST_TERM_ISSUED) {
+			if (CMD_FLAGS(reset_sc) & FNIC_DEV_RST_TERM_ISSUED) {
 				spin_unlock_irqrestore(io_lock, flags);
 				break;
 			}
@@ -2473,8 +2441,8 @@ int fnic_device_reset(struct scsi_cmnd *sc)
 				msecs_to_jiffies(FNIC_ABT_TERM_DELAY_TIMEOUT));
 			} else {
 				spin_lock_irqsave(io_lock, flags);
-				CMD_FLAGS(sc) |= FNIC_DEV_RST_TERM_ISSUED;
-				CMD_STATE(sc) = FNIC_IOREQ_ABTS_PENDING;
+				CMD_FLAGS(reset_sc) |= FNIC_DEV_RST_TERM_ISSUED;
+				CMD_STATE(reset_sc) = FNIC_IOREQ_ABTS_PENDING;
 				io_req->abts_done = &tm_done;
 				spin_unlock_irqrestore(io_lock, flags);
 				FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
@@ -2491,7 +2459,7 @@ int fnic_device_reset(struct scsi_cmnd *sc)
 				msecs_to_jiffies(FNIC_LUN_RESET_TIMEOUT));
 				break;
 			} else {
-				io_req = (struct fnic_io_req *)CMD_SP(sc);
+				io_req = (struct fnic_io_req *)CMD_SP(reset_sc);
 				io_req->abts_done = NULL;
 				goto fnic_device_reset_clean;
 			}
@@ -2506,7 +2474,7 @@ int fnic_device_reset(struct scsi_cmnd *sc)
 		FNIC_SCSI_DBG(KERN_DEBUG,
 			      fnic->lport->host,
 			      "Device reset completed - failed\n");
-		io_req = (struct fnic_io_req *)CMD_SP(sc);
+		io_req = (struct fnic_io_req *)CMD_SP(reset_sc);
 		goto fnic_device_reset_clean;
 	}
 
@@ -2517,7 +2485,7 @@ int fnic_device_reset(struct scsi_cmnd *sc)
 	 * the lun reset cmd. If all cmds get cleaned, the lun reset
 	 * succeeds
 	 */
-	if (fnic_clean_pending_aborts(fnic, sc, new_sc)) {
+	if (fnic_clean_pending_aborts(fnic, reset_sc)) {
 		spin_lock_irqsave(io_lock, flags);
 		io_req = (struct fnic_io_req *)CMD_SP(sc);
 		FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
@@ -2528,35 +2496,34 @@ int fnic_device_reset(struct scsi_cmnd *sc)
 
 	/* Clean lun reset command */
 	spin_lock_irqsave(io_lock, flags);
-	io_req = (struct fnic_io_req *)CMD_SP(sc);
+	io_req = (struct fnic_io_req *)CMD_SP(reset_sc);
 	if (io_req)
 		/* Completed, and successful */
 		ret = SUCCESS;
 
 fnic_device_reset_clean:
 	if (io_req)
-		CMD_SP(sc) = NULL;
+		CMD_SP(reset_sc) = NULL;
 
 	spin_unlock_irqrestore(io_lock, flags);
 
 	if (io_req) {
 		start_time = io_req->start_time;
-		fnic_release_ioreq_buf(fnic, io_req, sc);
+		fnic_release_ioreq_buf(fnic, io_req, reset_sc);
 		mempool_free(io_req, fnic->io_req_pool);
 	}
 
 fnic_device_reset_end:
-	FNIC_TRACE(fnic_device_reset, sc->device->host->host_no,
-		  sc->request->tag, sc,
+	FNIC_TRACE(fnic_device_reset, reset_sc->device->host->host_no,
+		   reset_sc->request->tag, reset_sc,
 		  jiffies_to_msecs(jiffies - start_time),
-		  0, ((u64)sc->cmnd[0] << 32 |
+		  0, ((u64)reset_sc->cmnd[0] << 32 |
 		  (u64)sc->cmnd[2] << 24 | (u64)sc->cmnd[3] << 16 |
 		  (u64)sc->cmnd[4] << 8 | sc->cmnd[5]),
 		  (((u64)CMD_FLAGS(sc) << 32) | CMD_STATE(sc)));
 
 	/* free tag if it is allocated */
-	if (unlikely(tag_gen_flag))
-		fnic_scsi_host_end_tag(fnic, sc);
+	scsi_put_reserved_cmd(reset_sc);
 
 	FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
 		      "Returning from device reset %s\n",
-- 
2.17.1



* [PATCH RFC v2 10/24] fnic: use scsi_host_tagset_busy_iter() to traverse commands
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (8 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 09/24] fnic: use reserved commands John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 11/24] hpsa: move hpsa_hba_inquiry after scsi_add_host() John Garry
                   ` (13 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Use scsi_host_tagset_busy_iter() to traverse commands instead of a
hand-crafted routine walking the tag space.

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/fnic/fnic_scsi.c | 540 ++++++++++++++--------------------
 1 file changed, 220 insertions(+), 320 deletions(-)

diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c
index e96c4bbf2374..54a11510e93e 100644
--- a/drivers/scsi/fnic/fnic_scsi.c
+++ b/drivers/scsi/fnic/fnic_scsi.c
@@ -101,7 +101,7 @@ static const char *fnic_fcpio_status_to_str(unsigned int status)
 	return fcpio_status_str[status];
 }
 
-static void fnic_cleanup_io(struct fnic *fnic, int exclude_id);
+static void fnic_cleanup_io(struct fnic *fnic);
 
 static inline spinlock_t *fnic_io_lock_hash(struct fnic *fnic,
 					    struct scsi_cmnd *sc)
@@ -637,7 +637,7 @@ static int fnic_fcpio_fw_reset_cmpl_handler(struct fnic *fnic,
 	atomic64_inc(&reset_stats->fw_reset_completions);
 
 	/* Clean up all outstanding io requests */
-	fnic_cleanup_io(fnic, SCSI_NO_TAG);
+	fnic_cleanup_io(fnic);
 
 	atomic64_set(&fnic->fnic_stats.fw_stats.active_fw_reqs, 0);
 	atomic64_set(&fnic->fnic_stats.io_stats.active_ios, 0);
@@ -1359,94 +1359,90 @@ int fnic_wq_copy_cmpl_handler(struct fnic *fnic, int copy_work_to_do)
 	return wq_work_done;
 }
 
-static void fnic_cleanup_io(struct fnic *fnic, int exclude_id)
+static bool fnic_cleanup_io_iter(struct scsi_cmnd *sc, void *data,
+				 bool reserved)
 {
-	int i;
+	struct fnic *fnic = data;
 	struct fnic_io_req *io_req;
 	unsigned long flags = 0;
-	struct scsi_cmnd *sc;
 	spinlock_t *io_lock;
 	unsigned long start_time = 0;
 	struct fnic_stats *fnic_stats = &fnic->fnic_stats;
 
-	for (i = 0; i < fnic->fnic_max_tag_id; i++) {
-		if (i == exclude_id)
-			continue;
-
-		io_lock = fnic_io_lock_tag(fnic, i);
-		spin_lock_irqsave(io_lock, flags);
-		sc = scsi_host_find_tag(fnic->lport->host, i);
-		if (!sc) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
+	io_lock = fnic_io_lock_tag(fnic, sc->request->tag);
+	spin_lock_irqsave(io_lock, flags);
 
-		io_req = (struct fnic_io_req *)CMD_SP(sc);
-		if ((CMD_FLAGS(sc) & FNIC_DEVICE_RESET) &&
-			!(CMD_FLAGS(sc) & FNIC_DEV_RST_DONE)) {
-			/*
-			 * We will be here only when FW completes reset
-			 * without sending completions for outstanding ios.
-			 */
-			CMD_FLAGS(sc) |= FNIC_DEV_RST_DONE;
-			if (io_req && io_req->dr_done)
-				complete(io_req->dr_done);
-			else if (io_req && io_req->abts_done)
-				complete(io_req->abts_done);
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		} else if (CMD_FLAGS(sc) & FNIC_DEVICE_RESET) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
-		if (!io_req) {
-			spin_unlock_irqrestore(io_lock, flags);
-			goto cleanup_scsi_cmd;
-		}
+	io_req = (struct fnic_io_req *)CMD_SP(sc);
+	if ((CMD_FLAGS(sc) & FNIC_DEVICE_RESET) &&
+	    !(CMD_FLAGS(sc) & FNIC_DEV_RST_DONE)) {
+		/*
+		 * We will be here only when FW completes reset
+		 * without sending completions for outstanding ios.
+		 */
+		CMD_FLAGS(sc) |= FNIC_DEV_RST_DONE;
+		if (io_req && io_req->dr_done)
+			complete(io_req->dr_done);
+		else if (io_req && io_req->abts_done)
+			complete(io_req->abts_done);
+		spin_unlock_irqrestore(io_lock, flags);
+		return true;
+	} else if (CMD_FLAGS(sc) & FNIC_DEVICE_RESET) {
+		spin_unlock_irqrestore(io_lock, flags);
+		return true;
+	}
+	if (!io_req) {
+		spin_unlock_irqrestore(io_lock, flags);
+		goto cleanup_scsi_cmd;
+	}
 
-		CMD_SP(sc) = NULL;
+	CMD_SP(sc) = NULL;
 
-		spin_unlock_irqrestore(io_lock, flags);
+	spin_unlock_irqrestore(io_lock, flags);
 
-		/*
-		 * If there is a scsi_cmnd associated with this io_req, then
-		 * free the corresponding state
-		 */
-		start_time = io_req->start_time;
-		fnic_release_ioreq_buf(fnic, io_req, sc);
-		mempool_free(io_req, fnic->io_req_pool);
+	/*
+	 * If there is a scsi_cmnd associated with this io_req, then
+	 * free the corresponding state
+	 */
+	start_time = io_req->start_time;
+	fnic_release_ioreq_buf(fnic, io_req, sc);
+	mempool_free(io_req, fnic->io_req_pool);
 
 cleanup_scsi_cmd:
-		sc->result = DID_TRANSPORT_DISRUPTED << 16;
-		FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
-			      "%s: tag:0x%x : sc:0x%p duration = %lu DID_TRANSPORT_DISRUPTED\n",
-			      __func__, sc->request->tag, sc,
-			      (jiffies - start_time));
-
-		if (atomic64_read(&fnic->io_cmpl_skip))
-			atomic64_dec(&fnic->io_cmpl_skip);
-		else
-			atomic64_inc(&fnic_stats->io_stats.io_completions);
+	sc->result = DID_TRANSPORT_DISRUPTED << 16;
+	FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
+		      "fnic_cleanup_io: tag:0x%x : sc:0x%p duration = %lu DID_TRANSPORT_DISRUPTED\n",
+		      sc->request->tag, sc, (jiffies - start_time));
 
-		/* Complete the command to SCSI */
-		if (sc->scsi_done) {
-			if (!(CMD_FLAGS(sc) & FNIC_IO_ISSUED))
-				shost_printk(KERN_ERR, fnic->lport->host,
-				"Calling done for IO not issued to fw: tag:0x%x sc:0x%p\n",
-				 sc->request->tag, sc);
+	if (atomic64_read(&fnic->io_cmpl_skip))
+		atomic64_dec(&fnic->io_cmpl_skip);
+	else
+		atomic64_inc(&fnic_stats->io_stats.io_completions);
 
-			FNIC_TRACE(fnic_cleanup_io,
-				  sc->device->host->host_no, i, sc,
-				  jiffies_to_msecs(jiffies - start_time),
-				  0, ((u64)sc->cmnd[0] << 32 |
-				  (u64)sc->cmnd[2] << 24 |
-				  (u64)sc->cmnd[3] << 16 |
-				  (u64)sc->cmnd[4] << 8 | sc->cmnd[5]),
-				  (((u64)CMD_FLAGS(sc) << 32) | CMD_STATE(sc)));
+	/* Complete the command to SCSI */
+	if (sc->scsi_done) {
+		if (!(CMD_FLAGS(sc) & FNIC_IO_ISSUED))
+			shost_printk(KERN_ERR, fnic->lport->host,
+				     "Calling done for IO not issued to fw: tag:0x%x sc:0x%p\n",
+				     sc->request->tag, sc);
+
+		FNIC_TRACE(fnic_cleanup_io,
+			   sc->device->host->host_no, sc->request->tag, sc,
+			   jiffies_to_msecs(jiffies - start_time),
+			   0, ((u64)sc->cmnd[0] << 32 |
+			       (u64)sc->cmnd[2] << 24 |
+			       (u64)sc->cmnd[3] << 16 |
+			       (u64)sc->cmnd[4] << 8 | sc->cmnd[5]),
+			   (((u64)CMD_FLAGS(sc) << 32) | CMD_STATE(sc)));
 
-			sc->scsi_done(sc);
-		}
+		sc->scsi_done(sc);
 	}
+	return true;
+}
+
+static void fnic_cleanup_io(struct fnic *fnic)
+{
+	scsi_host_tagset_busy_iter(fnic->lport->host,
+				   fnic_cleanup_io_iter, fnic);
 }
 
 void fnic_wq_copy_cleanup_handler(struct vnic_wq_copy *wq,
@@ -1557,143 +1553,140 @@ static inline int fnic_queue_abort_io_req(struct fnic *fnic, int tag,
 	return 0;
 }
 
-static void fnic_rport_exch_reset(struct fnic *fnic, u32 port_id)
+struct fnic_rport_abort_io_iter_data {
+	struct fnic *fnic;
+	u32 port_id;
+	int term_cnt;
+};
+
+static bool fnic_rport_abort_io(struct scsi_cmnd *sc, void *data, bool reserved)
 {
-	int tag;
-	int abt_tag;
-	int term_cnt = 0;
+	struct fnic_rport_abort_io_iter_data *iter_data = data;
+	struct fnic *fnic = iter_data->fnic;
+	int abt_tag = sc->request->tag;
 	struct fnic_io_req *io_req;
 	spinlock_t *io_lock;
 	unsigned long flags;
-	struct scsi_cmnd *sc;
 	struct reset_stats *reset_stats = &fnic->fnic_stats.reset_stats;
 	struct terminate_stats *term_stats = &fnic->fnic_stats.term_stats;
 	struct scsi_lun fc_lun;
 	enum fnic_ioreq_state old_ioreq_state;
 
-	FNIC_SCSI_DBG(KERN_DEBUG,
-		      fnic->lport->host,
-		      "fnic_rport_exch_reset called portid 0x%06x\n",
-		      port_id);
-
-	if (fnic->in_remove)
-		return;
-
-	for (tag = 0; tag < fnic->fnic_max_tag_id; tag++) {
-		abt_tag = tag;
-		io_lock = fnic_io_lock_tag(fnic, tag);
-		spin_lock_irqsave(io_lock, flags);
-		sc = scsi_host_find_tag(fnic->lport->host, tag);
-		if (!sc) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
+	io_lock = fnic_io_lock_tag(fnic, abt_tag);
+	spin_lock_irqsave(io_lock, flags);
 
-		io_req = (struct fnic_io_req *)CMD_SP(sc);
+	io_req = (struct fnic_io_req *)CMD_SP(sc);
 
-		if (!io_req || io_req->port_id != port_id) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
+	if (!io_req || io_req->port_id != iter_data->port_id) {
+		spin_unlock_irqrestore(io_lock, flags);
+		return true;
+	}
 
-		if ((CMD_FLAGS(sc) & FNIC_DEVICE_RESET) &&
-			(!(CMD_FLAGS(sc) & FNIC_DEV_RST_ISSUED))) {
-			FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
+	if ((CMD_FLAGS(sc) & FNIC_DEVICE_RESET) &&
+	    (!(CMD_FLAGS(sc) & FNIC_DEV_RST_ISSUED))) {
+		FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
 			"fnic_rport_exch_reset dev rst not pending sc 0x%p\n",
 			sc);
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
+		spin_unlock_irqrestore(io_lock, flags);
+		return true;
+	}
 
-		/*
-		 * Found IO that is still pending with firmware and
-		 * belongs to rport that went away
-		 */
-		if (CMD_STATE(sc) == FNIC_IOREQ_ABTS_PENDING) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
-		if (io_req->abts_done) {
-			shost_printk(KERN_ERR, fnic->lport->host,
+	/*
+	 * Found IO that is still pending with firmware and
+	 * belongs to rport that went away
+	 */
+	if (CMD_STATE(sc) == FNIC_IOREQ_ABTS_PENDING) {
+		spin_unlock_irqrestore(io_lock, flags);
+		return true;
+	}
+	if (io_req->abts_done) {
+		shost_printk(KERN_ERR, fnic->lport->host,
 			"fnic_rport_exch_reset: io_req->abts_done is set "
 			"state is %s\n",
 			fnic_ioreq_state_to_str(CMD_STATE(sc)));
-		}
+	}
 
-		if (!(CMD_FLAGS(sc) & FNIC_IO_ISSUED)) {
-			shost_printk(KERN_ERR, fnic->lport->host,
-				  "rport_exch_reset "
-				  "IO not yet issued %p tag 0x%x flags "
-				  "%x state %d\n",
-				  sc, tag, CMD_FLAGS(sc), CMD_STATE(sc));
-		}
-		old_ioreq_state = CMD_STATE(sc);
-		CMD_STATE(sc) = FNIC_IOREQ_ABTS_PENDING;
-		CMD_ABTS_STATUS(sc) = FCPIO_INVALID_CODE;
-		if (CMD_FLAGS(sc) & FNIC_DEVICE_RESET) {
-			atomic64_inc(&reset_stats->device_reset_terminates);
-			abt_tag = (tag | FNIC_TAG_DEV_RST);
-			FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
-			"fnic_rport_exch_reset dev rst sc 0x%p\n",
-			sc);
-		}
+	if (!(CMD_FLAGS(sc) & FNIC_IO_ISSUED)) {
+		shost_printk(KERN_ERR, fnic->lport->host,
+			     "rport_exch_reset "
+			     "IO not yet issued %p tag 0x%x flags "
+			     "%x state %d\n",
+			     sc, abt_tag, CMD_FLAGS(sc), CMD_STATE(sc));
+	}
+	old_ioreq_state = CMD_STATE(sc);
+	CMD_STATE(sc) = FNIC_IOREQ_ABTS_PENDING;
+	CMD_ABTS_STATUS(sc) = FCPIO_INVALID_CODE;
+	if (CMD_FLAGS(sc) & FNIC_DEVICE_RESET) {
+		atomic64_inc(&reset_stats->device_reset_terminates);
+		abt_tag |= FNIC_TAG_DEV_RST;
+	}
+	FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
+		      "fnic_rport_exch_reset dev rst sc 0x%p\n", sc);
+	BUG_ON(io_req->abts_done);
 
-		BUG_ON(io_req->abts_done);
+	FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
+		      "fnic_rport_reset_exch: Issuing abts\n");
 
-		FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
-			      "fnic_rport_reset_exch: Issuing abts\n");
+	spin_unlock_irqrestore(io_lock, flags);
+
+	/* Now queue the abort command to firmware */
+	int_to_scsilun(sc->device->lun, &fc_lun);
 
+	if (fnic_queue_abort_io_req(fnic, abt_tag,
+				    FCPIO_ITMF_ABT_TASK_TERM,
+				    fc_lun.scsi_lun, io_req)) {
+		/*
+		 * Revert the cmd state back to old state, if
+		 * it hasn't changed in between. This cmd will get
+		 * aborted later by scsi_eh, or cleaned up during
+		 * lun reset
+		 */
+		spin_lock_irqsave(io_lock, flags);
+		if (CMD_STATE(sc) == FNIC_IOREQ_ABTS_PENDING)
+			CMD_STATE(sc) = old_ioreq_state;
 		spin_unlock_irqrestore(io_lock, flags);
+	} else {
+		spin_lock_irqsave(io_lock, flags);
+		if (CMD_FLAGS(sc) & FNIC_DEVICE_RESET)
+			CMD_FLAGS(sc) |= FNIC_DEV_RST_TERM_ISSUED;
+		else
+			CMD_FLAGS(sc) |= FNIC_IO_INTERNAL_TERM_ISSUED;
+		spin_unlock_irqrestore(io_lock, flags);
+		atomic64_inc(&term_stats->terminates);
+		iter_data->term_cnt++;
+	}
+	return true;
+}
 
-		/* Now queue the abort command to firmware */
-		int_to_scsilun(sc->device->lun, &fc_lun);
+static void fnic_rport_exch_reset(struct fnic *fnic, u32 port_id)
+{
+	struct terminate_stats *term_stats = &fnic->fnic_stats.term_stats;
+	struct fnic_rport_abort_io_iter_data iter_data = {
+		.fnic = fnic,
+		.port_id = port_id,
+		.term_cnt = 0,
+	};
 
-		if (fnic_queue_abort_io_req(fnic, abt_tag,
-					    FCPIO_ITMF_ABT_TASK_TERM,
-					    fc_lun.scsi_lun, io_req)) {
-			/*
-			 * Revert the cmd state back to old state, if
-			 * it hasn't changed in between. This cmd will get
-			 * aborted later by scsi_eh, or cleaned up during
-			 * lun reset
-			 */
-			spin_lock_irqsave(io_lock, flags);
-			if (CMD_STATE(sc) == FNIC_IOREQ_ABTS_PENDING)
-				CMD_STATE(sc) = old_ioreq_state;
-			spin_unlock_irqrestore(io_lock, flags);
-		} else {
-			spin_lock_irqsave(io_lock, flags);
-			if (CMD_FLAGS(sc) & FNIC_DEVICE_RESET)
-				CMD_FLAGS(sc) |= FNIC_DEV_RST_TERM_ISSUED;
-			else
-				CMD_FLAGS(sc) |= FNIC_IO_INTERNAL_TERM_ISSUED;
-			spin_unlock_irqrestore(io_lock, flags);
-			atomic64_inc(&term_stats->terminates);
-			term_cnt++;
-		}
-	}
-	if (term_cnt > atomic64_read(&term_stats->max_terminates))
-		atomic64_set(&term_stats->max_terminates, term_cnt);
+	FNIC_SCSI_DBG(KERN_DEBUG,
+		      fnic->lport->host,
+		      "fnic_rport_exch_reset called portid 0x%06x\n",
+		      port_id);
+
+	if (fnic->in_remove)
+		return;
+
+	scsi_host_tagset_busy_iter(fnic->lport->host, fnic_rport_abort_io,
+				   &iter_data);
+	if (iter_data.term_cnt > atomic64_read(&term_stats->max_terminates))
+		atomic64_set(&term_stats->max_terminates, iter_data.term_cnt);
 
 }
 
 void fnic_terminate_rport_io(struct fc_rport *rport)
 {
-	int tag;
-	int abt_tag;
-	int term_cnt = 0;
-	struct fnic_io_req *io_req;
-	spinlock_t *io_lock;
-	unsigned long flags;
-	struct scsi_cmnd *sc;
-	struct scsi_lun fc_lun;
 	struct fc_rport_libfc_priv *rdata;
 	struct fc_lport *lport;
 	struct fnic *fnic;
-	struct fc_rport *cmd_rport;
-	struct reset_stats *reset_stats;
-	struct terminate_stats *term_stats;
-	enum fnic_ioreq_state old_ioreq_state;
 
 	if (!rport) {
 		printk(KERN_ERR "fnic_terminate_rport_io: rport is NULL\n");
@@ -1721,109 +1714,7 @@ void fnic_terminate_rport_io(struct fc_rport *rport)
 	if (fnic->in_remove)
 		return;
 
-	reset_stats = &fnic->fnic_stats.reset_stats;
-	term_stats = &fnic->fnic_stats.term_stats;
-
-	for (tag = 0; tag < fnic->fnic_max_tag_id; tag++) {
-		abt_tag = tag;
-		io_lock = fnic_io_lock_tag(fnic, tag);
-		spin_lock_irqsave(io_lock, flags);
-		sc = scsi_host_find_tag(fnic->lport->host, tag);
-		if (!sc) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
-
-		cmd_rport = starget_to_rport(scsi_target(sc->device));
-		if (rport != cmd_rport) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
-
-		io_req = (struct fnic_io_req *)CMD_SP(sc);
-
-		if (!io_req || rport != cmd_rport) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
-
-		if ((CMD_FLAGS(sc) & FNIC_DEVICE_RESET) &&
-			(!(CMD_FLAGS(sc) & FNIC_DEV_RST_ISSUED))) {
-			FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
-			"fnic_terminate_rport_io dev rst not pending sc 0x%p\n",
-			sc);
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
-		/*
-		 * Found IO that is still pending with firmware and
-		 * belongs to rport that went away
-		 */
-		if (CMD_STATE(sc) == FNIC_IOREQ_ABTS_PENDING) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
-		if (io_req->abts_done) {
-			shost_printk(KERN_ERR, fnic->lport->host,
-			"fnic_terminate_rport_io: io_req->abts_done is set "
-			"state is %s\n",
-			fnic_ioreq_state_to_str(CMD_STATE(sc)));
-		}
-		if (!(CMD_FLAGS(sc) & FNIC_IO_ISSUED)) {
-			FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
-				  "fnic_terminate_rport_io "
-				  "IO not yet issued %p tag 0x%x flags "
-				  "%x state %d\n",
-				  sc, tag, CMD_FLAGS(sc), CMD_STATE(sc));
-		}
-		old_ioreq_state = CMD_STATE(sc);
-		CMD_STATE(sc) = FNIC_IOREQ_ABTS_PENDING;
-		CMD_ABTS_STATUS(sc) = FCPIO_INVALID_CODE;
-		if (CMD_FLAGS(sc) & FNIC_DEVICE_RESET) {
-			atomic64_inc(&reset_stats->device_reset_terminates);
-			abt_tag = (tag | FNIC_TAG_DEV_RST);
-			FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
-			"fnic_terminate_rport_io dev rst sc 0x%p\n", sc);
-		}
-
-		BUG_ON(io_req->abts_done);
-
-		FNIC_SCSI_DBG(KERN_DEBUG,
-			      fnic->lport->host,
-			      "fnic_terminate_rport_io: Issuing abts\n");
-
-		spin_unlock_irqrestore(io_lock, flags);
-
-		/* Now queue the abort command to firmware */
-		int_to_scsilun(sc->device->lun, &fc_lun);
-
-		if (fnic_queue_abort_io_req(fnic, abt_tag,
-					    FCPIO_ITMF_ABT_TASK_TERM,
-					    fc_lun.scsi_lun, io_req)) {
-			/*
-			 * Revert the cmd state back to old state, if
-			 * it hasn't changed in between. This cmd will get
-			 * aborted later by scsi_eh, or cleaned up during
-			 * lun reset
-			 */
-			spin_lock_irqsave(io_lock, flags);
-			if (CMD_STATE(sc) == FNIC_IOREQ_ABTS_PENDING)
-				CMD_STATE(sc) = old_ioreq_state;
-			spin_unlock_irqrestore(io_lock, flags);
-		} else {
-			spin_lock_irqsave(io_lock, flags);
-			if (CMD_FLAGS(sc) & FNIC_DEVICE_RESET)
-				CMD_FLAGS(sc) |= FNIC_DEV_RST_TERM_ISSUED;
-			else
-				CMD_FLAGS(sc) |= FNIC_IO_INTERNAL_TERM_ISSUED;
-			spin_unlock_irqrestore(io_lock, flags);
-			atomic64_inc(&term_stats->terminates);
-			term_cnt++;
-		}
-	}
-	if (term_cnt > atomic64_read(&term_stats->max_terminates))
-		atomic64_set(&term_stats->max_terminates, term_cnt);
-
+	fnic_rport_exch_reset(fnic, rport->port_id);
 }
 
 /*
@@ -2741,6 +2632,46 @@ void fnic_exch_mgr_reset(struct fc_lport *lp, u32 sid, u32 did)
 
 }
 
+static bool fnic_abts_pending_iter(struct scsi_cmnd *sc, void *data,
+				   bool reserved)
+{
+	struct fnic_pending_aborts_iter_data *iter_data = data;
+	struct fnic *fnic = iter_data->fnic;
+	int cmd_state;
+	struct fnic_io_req *io_req;
+	spinlock_t *io_lock;
+	unsigned long flags;
+
+	if (reserved)
+		return true;
+
+	if (iter_data->lun_dev && sc->device != iter_data->lun_dev)
+		return true;
+
+	io_lock = fnic_io_lock_hash(fnic, sc);
+	spin_lock_irqsave(io_lock, flags);
+
+	io_req = (struct fnic_io_req *)CMD_SP(sc);
+	if (!io_req) {
+		spin_unlock_irqrestore(io_lock, flags);
+		return true;
+	}
+
+	/*
+	 * Found IO that is still pending with firmware and
+	 * belongs to the LUN that we are resetting
+	 */
+	FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
+		      "Found IO in %s on lun\n",
+		      fnic_ioreq_state_to_str(CMD_STATE(sc)));
+	cmd_state = CMD_STATE(sc);
+	spin_unlock_irqrestore(io_lock, flags);
+	if (cmd_state == FNIC_IOREQ_ABTS_PENDING)
+		iter_data->ret = 1;
+
+	return iter_data->ret ? false : true;
+}
+
 /*
  * fnic_is_abts_pending() is a helper function that
  * walks through tag map to check if there is any IOs pending,if there is one,
@@ -2750,49 +2681,18 @@ void fnic_exch_mgr_reset(struct fc_lport *lp, u32 sid, u32 did)
  */
 int fnic_is_abts_pending(struct fnic *fnic, struct scsi_cmnd *lr_sc)
 {
-	int tag;
-	struct fnic_io_req *io_req;
-	spinlock_t *io_lock;
-	unsigned long flags;
-	int ret = 0;
-	struct scsi_cmnd *sc;
-	struct scsi_device *lun_dev = NULL;
+	struct fnic_pending_aborts_iter_data iter_data = {
+		.fnic = fnic,
+		.lun_dev = NULL,
+		.ret = 0,
+	};
 
 	if (lr_sc)
-		lun_dev = lr_sc->device;
+		iter_data.lun_dev = lr_sc->device;
 
 	/* walk again to check, if IOs are still pending in fw */
-	for (tag = 0; tag < fnic->fnic_max_tag_id; tag++) {
-		sc = scsi_host_find_tag(fnic->lport->host, tag);
-		/*
-		 * ignore this lun reset cmd or cmds that do not belong to
-		 * this lun
-		 */
-		if (!sc || (lr_sc && (sc->device != lun_dev || sc == lr_sc)))
-			continue;
-
-		io_lock = fnic_io_lock_hash(fnic, sc);
-		spin_lock_irqsave(io_lock, flags);
-
-		io_req = (struct fnic_io_req *)CMD_SP(sc);
-
-		if (!io_req || sc->device != lun_dev) {
-			spin_unlock_irqrestore(io_lock, flags);
-			continue;
-		}
-
-		/*
-		 * Found IO that is still pending with firmware and
-		 * belongs to the LUN that we are resetting
-		 */
-		FNIC_SCSI_DBG(KERN_INFO, fnic->lport->host,
-			      "Found IO in %s on lun\n",
-			      fnic_ioreq_state_to_str(CMD_STATE(sc)));
-
-		if (CMD_STATE(sc) == FNIC_IOREQ_ABTS_PENDING)
-			ret = 1;
-		spin_unlock_irqrestore(io_lock, flags);
-	}
+	scsi_host_tagset_busy_iter(fnic->lport->host,
+				fnic_abts_pending_iter, &iter_data);
 
-	return ret;
+	return iter_data.ret;
 }
-- 
2.17.1



* [PATCH RFC v2 11/24] hpsa: move hpsa_hba_inquiry after scsi_add_host()
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (9 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 10/24] fnic: use scsi_host_tagset_busy_iter() to traverse commands John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 12/24] hpsa: use reserved commands John Garry
                   ` (12 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.de>

Move hpsa_hba_inquiry to after scsi_add_host() so that the host
is fully initialized.

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/hpsa.c | 37 +++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index 1a4ddfacb458..703f824584fe 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -5814,6 +5814,22 @@ static int hpsa_scsi_host_alloc(struct ctlr_info *h)
 	return 0;
 }
 
+static void hpsa_hba_inquiry(struct ctlr_info *h)
+{
+	int rc;
+
+#define HBA_INQUIRY_BYTE_COUNT 64
+	h->hba_inquiry_data = kmalloc(HBA_INQUIRY_BYTE_COUNT, GFP_KERNEL);
+	if (!h->hba_inquiry_data)
+		return;
+	rc = hpsa_scsi_do_inquiry(h, RAID_CTLR_LUNID, 0,
+		h->hba_inquiry_data, HBA_INQUIRY_BYTE_COUNT);
+	if (rc != 0) {
+		kfree(h->hba_inquiry_data);
+		h->hba_inquiry_data = NULL;
+	}
+}
+
 static int hpsa_scsi_add_host(struct ctlr_info *h)
 {
 	int rv;
@@ -5823,6 +5839,9 @@ static int hpsa_scsi_add_host(struct ctlr_info *h)
 		dev_err(&h->pdev->dev, "scsi_add_host failed\n");
 		return rv;
 	}
+
+	hpsa_hba_inquiry(h);
+
 	scsi_scan_host(h->scsi_host);
 	return 0;
 }
@@ -7899,22 +7918,6 @@ static int hpsa_pci_init(struct ctlr_info *h)
 	return err;
 }
 
-static void hpsa_hba_inquiry(struct ctlr_info *h)
-{
-	int rc;
-
-#define HBA_INQUIRY_BYTE_COUNT 64
-	h->hba_inquiry_data = kmalloc(HBA_INQUIRY_BYTE_COUNT, GFP_KERNEL);
-	if (!h->hba_inquiry_data)
-		return;
-	rc = hpsa_scsi_do_inquiry(h, RAID_CTLR_LUNID, 0,
-		h->hba_inquiry_data, HBA_INQUIRY_BYTE_COUNT);
-	if (rc != 0) {
-		kfree(h->hba_inquiry_data);
-		h->hba_inquiry_data = NULL;
-	}
-}
-
 static int hpsa_init_reset_devices(struct pci_dev *pdev, u32 board_id)
 {
 	int rc, i;
@@ -8810,8 +8813,6 @@ static int hpsa_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	/* Turn the interrupts on so we can service requests */
 	h->access.set_intr_mask(h, HPSA_INTR_ON);
 
-	hpsa_hba_inquiry(h);
-
 	h->lastlogicals = kzalloc(sizeof(*(h->lastlogicals)), GFP_KERNEL);
 	if (!h->lastlogicals)
 		dev_info(&h->pdev->dev,
-- 
2.17.1



* [PATCH RFC v2 12/24] hpsa: use reserved commands
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (10 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 11/24] hpsa: move hpsa_hba_inquiry after scsi_add_host() John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-11  8:10   ` Ming Lei
  2020-03-10 16:25 ` [PATCH RFC v2 13/24] hpsa: use blk_mq_tagset_busy_iter() to traverse outstanding commands John Garry
                   ` (11 subsequent siblings)
  23 siblings, 1 reply; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Enable the use of reserved commands, and drop the hand-crafted
command allocation.
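
For illustration, a minimal sketch of the allocation pattern this
converts to, assuming scsi_{get,put}_reserved_cmd() behave as used
throughout this series (struct my_hba, struct my_cmd and
my_send_internal_cmd() are invented names):

#include <linux/errno.h>
#include <linux/blkdev.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

struct my_cmd {
	struct scsi_cmnd *scsi_cmd;
	/* ... hardware command descriptor ... */
};

struct my_hba {
	struct Scsi_Host *scsi_host;
	struct my_cmd *cmd_pool;	/* indexed by block-layer tag */
};

static int my_send_internal_cmd(struct my_hba *h)
{
	struct scsi_cmnd *scmd;
	struct my_cmd *c;

	scmd = scsi_get_reserved_cmd(h->scsi_host);
	if (!scmd)
		return -EBUSY;		/* all reserved tags are in use */

	c = &h->cmd_pool[scmd->request->tag];	/* tag selects the descriptor */
	c->scsi_cmd = scmd;

	/* ... build the internal command in c and issue it to the HBA ... */

	c->scsi_cmd = NULL;
	scsi_put_reserved_cmd(scmd);		/* hand the tag back */
	return 0;
}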

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/hpsa.c | 147 ++++++++++++++------------------------------
 drivers/scsi/hpsa.h |   1 -
 2 files changed, 45 insertions(+), 103 deletions(-)

diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index 703f824584fe..c14dd4b6e598 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -244,10 +244,6 @@ static struct hpsa_scsi_dev_t
 	*hpsa_find_device_by_sas_rphy(struct ctlr_info *h,
 		struct sas_rphy *rphy);
 
-#define SCSI_CMD_BUSY ((struct scsi_cmnd *)&hpsa_cmd_busy)
-static const struct scsi_cmnd hpsa_cmd_busy;
-#define SCSI_CMD_IDLE ((struct scsi_cmnd *)&hpsa_cmd_idle)
-static const struct scsi_cmnd hpsa_cmd_idle;
 static int number_of_controllers;
 
 static irqreturn_t do_hpsa_intr_intx(int irq, void *dev_id);
@@ -342,7 +338,7 @@ static inline struct ctlr_info *shost_to_hba(struct Scsi_Host *sh)
 
 static inline bool hpsa_is_cmd_idle(struct CommandList *c)
 {
-	return c->scsi_cmd == SCSI_CMD_IDLE;
+	return c->scsi_cmd == NULL;
 }
 
 /* extract sense key, asc, and ascq from sense data.  -1 means invalid. */
@@ -2445,7 +2441,12 @@ static void hpsa_cmd_resolve_events(struct ctlr_info *h,
 	 * this command has completed.  Then, check to see if the handler is
 	 * waiting for this command, and, if so, wake it.
 	 */
-	c->scsi_cmd = SCSI_CMD_IDLE;
+	if (c->scsi_cmd && c->cmd_type == CMD_IOCTL_PEND) {
+		struct scsi_cmnd *scmd = c->scsi_cmd;
+
+		scsi_put_reserved_cmd(scmd);
+	}
+	c->scsi_cmd = NULL;
 	mb();	/* Declare command idle before checking for pending events. */
 	if (dev) {
 		atomic_dec(&dev->commands_outstanding);
@@ -5502,7 +5503,6 @@ static void hpsa_cmd_init(struct ctlr_info *h, int index,
 	c->ErrDesc.Addr = cpu_to_le64((u64) err_dma_handle);
 	c->ErrDesc.Len = cpu_to_le32((u32) sizeof(*c->err_info));
 	c->h = h;
-	c->scsi_cmd = SCSI_CMD_IDLE;
 }
 
 static void hpsa_preinitialize_commands(struct ctlr_info *h)
@@ -5803,6 +5803,7 @@ static int hpsa_scsi_host_alloc(struct ctlr_info *h)
 	sh->max_lun = HPSA_MAX_LUN;
 	sh->max_id = HPSA_MAX_LUN;
 	sh->can_queue = h->nr_cmds - HPSA_NRESERVED_CMDS;
+	sh->nr_reserved_cmds = HPSA_NRESERVED_CMDS;
 	sh->cmd_per_lun = sh->can_queue;
 	sh->sg_tablesize = h->maxsgentries;
 	sh->transportt = hpsa_sas_transport_template;
@@ -5846,23 +5847,6 @@ static int hpsa_scsi_add_host(struct ctlr_info *h)
 	return 0;
 }
 
-/*
- * The block layer has already gone to the trouble of picking out a unique,
- * small-integer tag for this request.  We use an offset from that value as
- * an index to select our command block.  (The offset allows us to reserve the
- * low-numbered entries for our own uses.)
- */
-static int hpsa_get_cmd_index(struct scsi_cmnd *scmd)
-{
-	int idx = scmd->request->tag;
-
-	if (idx < 0)
-		return idx;
-
-	/* Offset to leave space for internal cmds. */
-	return idx += HPSA_NRESERVED_CMDS;
-}
-
 /*
  * Send a TEST_UNIT_READY command to the specified LUN using the specified
  * reply queue; returns zero if the unit is ready, and non-zero otherwise.
@@ -6019,7 +6003,7 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
 	if (lockup_detected(h)) {
 		snprintf(msg, sizeof(msg),
 			 "cmd %d RESET FAILED, lockup detected",
-			 hpsa_get_cmd_index(scsicmd));
+			 scsicmd->request->tag);
 		hpsa_show_dev_msg(KERN_WARNING, h, dev, msg);
 		rc = FAILED;
 		goto return_reset_status;
@@ -6029,7 +6013,7 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
 	if (detect_controller_lockup(h)) {
 		snprintf(msg, sizeof(msg),
 			 "cmd %d RESET FAILED, new lockup detected",
-			 hpsa_get_cmd_index(scsicmd));
+			  scsicmd->request->tag);
 		hpsa_show_dev_msg(KERN_WARNING, h, dev, msg);
 		rc = FAILED;
 		goto return_reset_status;
@@ -6091,12 +6075,12 @@ static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
 static struct CommandList *cmd_tagged_alloc(struct ctlr_info *h,
 					    struct scsi_cmnd *scmd)
 {
-	int idx = hpsa_get_cmd_index(scmd);
+	int idx = scmd->request->tag;
 	struct CommandList *c = h->cmd_pool + idx;
 
-	if (idx < HPSA_NRESERVED_CMDS || idx >= h->nr_cmds) {
+	if (idx < 0 || idx >= h->nr_cmds) {
 		dev_err(&h->pdev->dev, "Bad block tag: %d not in [%d..%d]\n",
-			idx, HPSA_NRESERVED_CMDS, h->nr_cmds - 1);
+			idx, 0, h->nr_cmds - 1);
 		/* The index value comes from the block layer, so if it's out of
 		 * bounds, it's probably not our bug.
 		 */
@@ -6133,81 +6117,49 @@ static void cmd_tagged_free(struct ctlr_info *h, struct CommandList *c)
 	 * else to free it, because it is accessed by index.
 	 */
 	(void)atomic_dec(&c->refcount);
+	c->scsi_cmd = NULL;
 }
 
-/*
- * For operations that cannot sleep, a command block is allocated at init,
- * and managed by cmd_alloc() and cmd_free() using a simple bitmap to track
- * which ones are free or in use.  Lock must be held when calling this.
- * cmd_free() is the complement.
- * This function never gives up and returns NULL.  If it hangs,
- * another thread must call cmd_free() to free some tags.
- */
-
 static struct CommandList *cmd_alloc(struct ctlr_info *h)
 {
+	struct scsi_cmnd *scmd;
 	struct CommandList *c;
-	int refcount, i;
-	int offset = 0;
-
-	/*
-	 * There is some *extremely* small but non-zero chance that that
-	 * multiple threads could get in here, and one thread could
-	 * be scanning through the list of bits looking for a free
-	 * one, but the free ones are always behind him, and other
-	 * threads sneak in behind him and eat them before he can
-	 * get to them, so that while there is always a free one, a
-	 * very unlucky thread might be starved anyway, never able to
-	 * beat the other threads.  In reality, this happens so
-	 * infrequently as to be indistinguishable from never.
-	 *
-	 * Note that we start allocating commands before the SCSI host structure
-	 * is initialized.  Since the search starts at bit zero, this
-	 * all works, since we have at least one command structure available;
-	 * however, it means that the structures with the low indexes have to be
-	 * reserved for driver-initiated requests, while requests from the block
-	 * layer will use the higher indexes.
-	 */
+	int idx;
 
-	for (;;) {
-		i = find_next_zero_bit(h->cmd_pool_bits,
-					HPSA_NRESERVED_CMDS,
-					offset);
-		if (unlikely(i >= HPSA_NRESERVED_CMDS)) {
-			offset = 0;
-			continue;
-		}
-		c = h->cmd_pool + i;
-		refcount = atomic_inc_return(&c->refcount);
-		if (unlikely(refcount > 1)) {
-			cmd_free(h, c); /* already in use */
-			offset = (i + 1) % HPSA_NRESERVED_CMDS;
-			continue;
-		}
-		set_bit(i & (BITS_PER_LONG - 1),
-			h->cmd_pool_bits + (i / BITS_PER_LONG));
-		break; /* it's ours now. */
+	scmd = scsi_get_reserved_cmd(h->scsi_host);
+	if (!scmd) {
+		dev_warn(&h->pdev->dev, "failed to allocate reserved cmd\n");
+		return NULL;
+	}
+	idx = scmd->request->tag;
+	c = cmd_tagged_alloc(h, scmd);
+	if (!c) {
+		dev_warn(&h->pdev->dev, "failed to allocate reserved cmd %u\n",
+			 idx);
+		scsi_put_reserved_cmd(scmd);
+		return NULL;
 	}
-	hpsa_cmd_partial_init(h, i, c);
-	c->device = NULL;
-	return c;
+	hpsa_cmd_partial_init(h, idx, c);
+	c->scsi_cmd = scmd;
+	c->device = NULL;
+	dev_dbg(&h->pdev->dev, "using reserved cmd %u\n", idx);
+	return c;
 }
 
-/*
- * This is the complementary operation to cmd_alloc().  Note, however, in some
- * corner cases it may also be used to free blocks allocated by
- * cmd_tagged_alloc() in which case the ref-count decrement does the trick and
- * the clear-bit is harmless.
- */
 static void cmd_free(struct ctlr_info *h, struct CommandList *c)
 {
-	if (atomic_dec_and_test(&c->refcount)) {
-		int i;
-
-		i = c - h->cmd_pool;
-		clear_bit(i & (BITS_PER_LONG - 1),
-			  h->cmd_pool_bits + (i / BITS_PER_LONG));
+	struct scsi_cmnd *scmd = c->scsi_cmd;
+
+	if (!scmd) {
+		dev_warn(&h->pdev->dev, "freeing idle cmd\n");
+		return;
 	}
+	cmd_tagged_free(h, c);
+	if (c->cmd_type == CMD_IOCTL_PEND && scmd) {
+		dev_dbg(&h->pdev->dev, "returning reserved cmd %u\n",
+			scmd->request->tag);
+		scsi_put_reserved_cmd(scmd);
+	}
 }
 
 #ifdef CONFIG_COMPAT
@@ -6393,7 +6345,6 @@ static int hpsa_passthru_ioctl(struct ctlr_info *h, void __user *argp)
 
 	/* Fill in the command type */
 	c->cmd_type = CMD_IOCTL_PEND;
-	c->scsi_cmd = SCSI_CMD_BUSY;
 	/* Fill in Command Header */
 	c->Header.ReplyQueue = 0; /* unused in simple mode */
 	if (iocommand.buf_size > 0) {	/* buffer to fill */
@@ -6525,7 +6476,6 @@ static int hpsa_big_passthru_ioctl(struct ctlr_info *h, void __user *argp)
 	c = cmd_alloc(h);
 
 	c->cmd_type = CMD_IOCTL_PEND;
-	c->scsi_cmd = SCSI_CMD_BUSY;
 	c->Header.ReplyQueue = 0;
 	c->Header.SGList = (u8) sg_used;
 	c->Header.SGTotal = cpu_to_le16(sg_used);
@@ -6669,7 +6619,6 @@ static int fill_cmd(struct CommandList *c, u8 cmd, struct ctlr_info *h,
 	enum dma_data_direction dir = DMA_NONE;
 
 	c->cmd_type = CMD_IOCTL_PEND;
-	c->scsi_cmd = SCSI_CMD_BUSY;
 	c->Header.ReplyQueue = 0;
 	if (buff != NULL && size > 0) {
 		c->Header.SGList = 1;
@@ -7982,8 +7931,6 @@ static int hpsa_init_reset_devices(struct pci_dev *pdev, u32 board_id)
 
 static void hpsa_free_cmd_pool(struct ctlr_info *h)
 {
-	kfree(h->cmd_pool_bits);
-	h->cmd_pool_bits = NULL;
 	if (h->cmd_pool) {
 		dma_free_coherent(&h->pdev->dev,
 				h->nr_cmds * sizeof(struct CommandList),
@@ -8004,17 +7951,13 @@ static void hpsa_free_cmd_pool(struct ctlr_info *h)
 
 static int hpsa_alloc_cmd_pool(struct ctlr_info *h)
 {
-	h->cmd_pool_bits = kcalloc(DIV_ROUND_UP(h->nr_cmds, BITS_PER_LONG),
-				   sizeof(unsigned long),
-				   GFP_KERNEL);
 	h->cmd_pool = dma_alloc_coherent(&h->pdev->dev,
 		    h->nr_cmds * sizeof(*h->cmd_pool),
 		    &h->cmd_pool_dhandle, GFP_KERNEL);
 	h->errinfo_pool = dma_alloc_coherent(&h->pdev->dev,
 		    h->nr_cmds * sizeof(*h->errinfo_pool),
 		    &h->errinfo_pool_dhandle, GFP_KERNEL);
-	if ((h->cmd_pool_bits == NULL)
-	    || (h->cmd_pool == NULL)
+	if ((h->cmd_pool == NULL)
 	    || (h->errinfo_pool == NULL)) {
 		dev_err(&h->pdev->dev, "out of memory in %s", __func__);
 		goto clean_up;
diff --git a/drivers/scsi/hpsa.h b/drivers/scsi/hpsa.h
index f8c88fc7b80a..70addb024ebb 100644
--- a/drivers/scsi/hpsa.h
+++ b/drivers/scsi/hpsa.h
@@ -205,7 +205,6 @@ struct ctlr_info {
 	dma_addr_t		ioaccel2_cmd_pool_dhandle;
 	struct ErrorInfo 	*errinfo_pool;
 	dma_addr_t		errinfo_pool_dhandle;
-	unsigned long  		*cmd_pool_bits;
 	int			scan_finished;
 	u8			scan_waiting : 1;
 	spinlock_t		scan_lock;
-- 
2.17.1



* [PATCH RFC v2 13/24] hpsa: use blk_mq_tagset_busy_iter() to traverse outstanding commands
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (11 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 12/24] hpsa: use reserved commands John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 14/24] hpsa: drop refcount field from CommandList John Garry
                   ` (10 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Replace all hand-crafted command iterations with
blk_mq_tagset_busy_iter().
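
For illustration, a minimal sketch of the context-struct pattern the new
iterators use: package the controller state and any filter in a small
structure, accumulate the result there, and return false from the
callback once the answer is known (struct my_iter_data, my_match_iter()
and my_dev_busy() are invented names):

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

struct my_iter_data {
	struct scsi_device *sdev;	/* only count commands on this LUN */
	int count;
};

static bool my_match_iter(struct scsi_cmnd *sc, void *data, bool reserved)
{
	struct my_iter_data *iter = data;

	if (sc->device != iter->sdev)
		return true;		/* not ours, keep walking */

	iter->count++;
	return false;			/* one hit is enough, stop the walk */
}

static bool my_dev_busy(struct Scsi_Host *shost, struct scsi_device *sdev)
{
	struct my_iter_data iter = { .sdev = sdev, .count = 0 };

	scsi_host_tagset_busy_iter(shost, my_match_iter, &iter);
	return iter.count > 0;
}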

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/hpsa.c | 117 +++++++++++++++++++++++++-------------------
 1 file changed, 67 insertions(+), 50 deletions(-)

diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index c14dd4b6e598..64ec0b2c647a 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -1807,30 +1807,26 @@ static int hpsa_add_device(struct ctlr_info *h, struct hpsa_scsi_dev_t *device)
 	return rc;
 }
 
-static int hpsa_find_outstanding_commands_for_dev(struct ctlr_info *h,
-						struct hpsa_scsi_dev_t *dev)
-{
-	int i;
-	int count = 0;
-
-	for (i = 0; i < h->nr_cmds; i++) {
-		struct CommandList *c = h->cmd_pool + i;
-		int refcount = atomic_inc_return(&c->refcount);
-
-		if (refcount > 1 && hpsa_cmd_dev_match(h, c, dev,
-				dev->scsi3addr)) {
-			unsigned long flags;
+struct hpsa_command_iter_data {
+	struct ctlr_info *h;
+	struct hpsa_scsi_dev_t *dev;
+	unsigned char *scsi3addr;
+	int count;
+};
 
-			spin_lock_irqsave(&h->lock, flags);	/* Implied MB */
-			if (!hpsa_is_cmd_idle(c))
-				++count;
-			spin_unlock_irqrestore(&h->lock, flags);
-		}
+static bool hpsa_find_outstanding_commands_iter(struct scsi_cmnd *sc,
+						void *data, bool reserved)
+{
+	struct hpsa_command_iter_data *iter_data = data;
+	struct ctlr_info *h = iter_data->h;
+	struct hpsa_scsi_dev_t *dev = iter_data->dev;
+	struct CommandList *c = h->cmd_pool + sc->request->tag;
 
-		cmd_free(h, c);
+	if (hpsa_cmd_dev_match(h, c, dev, dev->scsi3addr)) {
+		iter_data->count++;
+		return false;
 	}
-
-	return count;
+	return true;
 }
 
 #define NUM_WAIT 20
@@ -1840,13 +1836,20 @@ static void hpsa_wait_for_outstanding_commands_for_dev(struct ctlr_info *h,
 	int cmds = 0;
 	int waits = 0;
 	int num_wait = NUM_WAIT;
+	struct hpsa_command_iter_data iter_data = {
+		.h = h,
+		.dev = device,
+	};
 
 	if (device->external)
 		num_wait = HPSA_EH_PTRAID_TIMEOUT;
 
 	while (1) {
-		cmds = hpsa_find_outstanding_commands_for_dev(h, device);
-		if (cmds == 0)
+		iter_data.count = 0;
+		scsi_host_tagset_busy_iter(h->scsi_host,
+					   hpsa_find_outstanding_commands_iter,
+					   &iter_data);
+		if (iter_data.count == 0)
 			break;
 		if (++waits > num_wait)
 			break;
@@ -8128,27 +8131,34 @@ static void hpsa_undo_allocations_after_kdump_soft_reset(struct ctlr_info *h)
 	kfree(h);				/* init_one 1 */
 }
 
+static bool fail_all_outstanding_cmds_iter(struct scsi_cmnd *sc, void *data,
+					   bool reserved)
+{
+	struct hpsa_command_iter_data *iter_data = data;
+	struct ctlr_info *h = iter_data->h;
+	struct CommandList *c = h->cmd_pool + sc->request->tag;
+
+	c->err_info->CommandStatus = CMD_CTLR_LOCKUP;
+	finish_cmd(c);
+	atomic_dec(&h->commands_outstanding);
+	iter_data->count++;
+
+	return true;
+}
+
 /* Called when controller lockup detected. */
 static void fail_all_outstanding_cmds(struct ctlr_info *h)
 {
-	int i, refcount;
-	struct CommandList *c;
-	int failcount = 0;
+	struct hpsa_command_iter_data iter_data = {
+		.h = h,
+		.count = 0,
+	};
 
 	flush_workqueue(h->resubmit_wq); /* ensure all cmds are fully built */
-	for (i = 0; i < h->nr_cmds; i++) {
-		c = h->cmd_pool + i;
-		refcount = atomic_inc_return(&c->refcount);
-		if (refcount > 1) {
-			c->err_info->CommandStatus = CMD_CTLR_LOCKUP;
-			finish_cmd(c);
-			atomic_dec(&h->commands_outstanding);
-			failcount++;
-		}
-		cmd_free(h, c);
-	}
+	scsi_host_tagset_busy_iter(h->scsi_host,
+				   fail_all_outstanding_cmds_iter, &iter_data);
 	dev_warn(&h->pdev->dev,
-		"failed %d commands in fail_all\n", failcount);
+		"failed %d commands in fail_all\n", iter_data.count);
 }
 
 static void set_lockup_detected_for_all_cpus(struct ctlr_info *h, u32 value)
@@ -9434,22 +9444,29 @@ static int is_accelerated_cmd(struct CommandList *c)
 	return c->cmd_type == CMD_IOACCEL1 || c->cmd_type == CMD_IOACCEL2;
 }
 
+static bool hpsa_drain_accel_commands_iter(struct scsi_cmnd *sc, void *data,
+					   bool reserved)
+{
+	struct hpsa_command_iter_data *iter_data = data;
+	struct ctlr_info *h = iter_data->h;
+	struct CommandList *c = h->cmd_pool + sc->request->tag;
+
+	iter_data->count += is_accelerated_cmd(c);
+	return true;
+}
+
 static void hpsa_drain_accel_commands(struct ctlr_info *h)
 {
-	struct CommandList *c = NULL;
-	int i, accel_cmds_out;
-	int refcount;
+	struct hpsa_command_iter_data iter_data = {
+		.h = h,
+	};
 
 	do { /* wait for all outstanding ioaccel commands to drain out */
-		accel_cmds_out = 0;
-		for (i = 0; i < h->nr_cmds; i++) {
-			c = h->cmd_pool + i;
-			refcount = atomic_inc_return(&c->refcount);
-			if (refcount > 1) /* Command is allocated */
-				accel_cmds_out += is_accelerated_cmd(c);
-			cmd_free(h, c);
-		}
-		if (accel_cmds_out <= 0)
+		iter_data.count = 0;
+		scsi_host_tagset_busy_iter(h->scsi_host,
+					   hpsa_drain_accel_commands_iter,
+					   &iter_data);
+		if (iter_data.count <= 0)
 			break;
 		msleep(100);
 	} while (1);
-- 
2.17.1



* [PATCH RFC v2 14/24] hpsa: drop refcount field from CommandList
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (12 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 13/24] hpsa: use blk_mq_tagset_busy_iter() to traverse outstanding commands John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 15/24] snic: use reserved commands John Garry
                   ` (9 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.de>

Field is now unused, so drop it.

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/hpsa.c     | 12 ++----------
 drivers/scsi/hpsa_cmd.h |  1 -
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index 64ec0b2c647a..f6de0f2beee2 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -5493,8 +5493,8 @@ static void hpsa_cmd_init(struct ctlr_info *h, int index,
 {
 	dma_addr_t cmd_dma_handle, err_dma_handle;
 
-	/* Zero out all of commandlist except the last field, refcount */
-	memset(c, 0, offsetof(struct CommandList, refcount));
+	/* Zero out all of commandlist */
+	memset(c, 0, sizeof(struct CommandList));
 	c->Header.tag = cpu_to_le64((u64) (index << DIRECT_LOOKUP_SHIFT));
 	cmd_dma_handle = h->cmd_pool_dhandle + index * sizeof(*c);
 	c->err_info = h->errinfo_pool + index;
@@ -5516,7 +5516,6 @@ static void hpsa_preinitialize_commands(struct ctlr_info *h)
 		struct CommandList *c = h->cmd_pool + i;
 
 		hpsa_cmd_init(h, i, c);
-		atomic_set(&c->refcount, 0);
 	}
 }
 
@@ -6107,19 +6106,12 @@ static struct CommandList *cmd_tagged_alloc(struct ctlr_info *h,
 		return NULL;
 	}
 
-	atomic_inc(&c->refcount);
-
 	hpsa_cmd_partial_init(h, idx, c);
 	return c;
 }
 
 static void cmd_tagged_free(struct ctlr_info *h, struct CommandList *c)
 {
-	/*
-	 * Release our reference to the block.  We don't need to do anything
-	 * else to free it, because it is accessed by index.
-	 */
-	(void)atomic_dec(&c->refcount);
 	c->scsi_cmd = NULL;
 }
 
diff --git a/drivers/scsi/hpsa_cmd.h b/drivers/scsi/hpsa_cmd.h
index 7825cbfea4dc..2575a396f1a5 100644
--- a/drivers/scsi/hpsa_cmd.h
+++ b/drivers/scsi/hpsa_cmd.h
@@ -449,7 +449,6 @@ struct CommandList {
 
 	int abort_pending;
 	struct hpsa_scsi_dev_t *device;
-	atomic_t refcount; /* Must be last to avoid memset in hpsa_cmd_init() */
 } __aligned(COMMANDLIST_ALIGNMENT);
 
 /* Max S/G elements in I/O accelerator command */
-- 
2.17.1



* [PATCH RFC v2 15/24] snic: use reserved commands
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (13 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 14/24] hpsa: drop refcount field from CommandList John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 16/24] snic: use tagset iter for traversing commands John Garry
                   ` (8 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Use a reserved command for host and device reset.
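
For illustration, a minimal sketch of how an LLDD carves the reserved
slot out of its queue depth at probe time, following the fields used in
this series (MY_NR_RESERVED_CMDS, my_setup_host() and the
hw_queue_depth parameter are invented for the example):

#include <linux/device.h>
#include <scsi/scsi_host.h>

#define MY_NR_RESERVED_CMDS	1	/* e.g. one slot for TMF/host reset */

static int my_setup_host(struct Scsi_Host *shost, struct device *dev,
			 int hw_queue_depth)
{
	/* regular I/O gets whatever is left after the reserved slots */
	shost->can_queue = hw_queue_depth - MY_NR_RESERVED_CMDS;
	shost->nr_reserved_cmds = MY_NR_RESERVED_CMDS;

	return scsi_add_host(shost, dev);
}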

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/snic/snic.h      |  2 +-
 drivers/scsi/snic/snic_main.c |  2 +
 drivers/scsi/snic/snic_scsi.c | 96 +++++++++++++++++++----------------
 3 files changed, 54 insertions(+), 46 deletions(-)

diff --git a/drivers/scsi/snic/snic.h b/drivers/scsi/snic/snic.h
index de0ab5fc8474..0af5918a6bb8 100644
--- a/drivers/scsi/snic/snic.h
+++ b/drivers/scsi/snic/snic.h
@@ -380,7 +380,7 @@ int snic_queuecommand(struct Scsi_Host *, struct scsi_cmnd *);
 int snic_abort_cmd(struct scsi_cmnd *);
 int snic_device_reset(struct scsi_cmnd *);
 int snic_host_reset(struct scsi_cmnd *);
-int snic_reset(struct Scsi_Host *, struct scsi_cmnd *);
+int snic_reset(struct Scsi_Host *);
 void snic_shutdown_scsi_cleanup(struct snic *);
 
 
diff --git a/drivers/scsi/snic/snic_main.c b/drivers/scsi/snic/snic_main.c
index 14f4ce665e58..1ae5f8ce5361 100644
--- a/drivers/scsi/snic/snic_main.c
+++ b/drivers/scsi/snic/snic_main.c
@@ -385,6 +385,8 @@ snic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 
 		goto prob_end;
 	}
+	shost->nr_reserved_cmds = 1;
+	shost->can_queue--;
 	snic = shost_priv(shost);
 	snic->shost = shost;
 
diff --git a/drivers/scsi/snic/snic_scsi.c b/drivers/scsi/snic/snic_scsi.c
index b3650c989ed4..c4aaad945d51 100644
--- a/drivers/scsi/snic/snic_scsi.c
+++ b/drivers/scsi/snic/snic_scsi.c
@@ -2150,7 +2150,7 @@ snic_device_reset(struct scsi_cmnd *sc)
 	struct Scsi_Host *shost = sc->device->host;
 	struct snic *snic = shost_priv(shost);
 	struct snic_req_info *rqi = NULL;
-	int tag = snic_cmd_tag(sc);
+	struct scsi_cmnd *reset_sc = NULL;
 	int start_time = jiffies;
 	int ret = FAILED;
 	int dr_supp = 0;
@@ -2174,42 +2174,43 @@ snic_device_reset(struct scsi_cmnd *sc)
 		goto dev_rst_end;
 	}
 
-	/* There is no tag when lun reset is issue through ioctl. */
-	if (unlikely(tag <= SNIC_NO_TAG)) {
-		SNIC_HOST_INFO(snic->shost,
-			       "Devrst: LUN Reset Recvd thru IOCTL.\n");
+	reset_sc = scsi_get_reserved_cmd(shost);
+	if (!reset_sc)
+		goto dev_rst_end;
 
-		rqi = snic_req_init(snic, 0);
-		if (!rqi)
-			goto dev_rst_end;
+	rqi = snic_req_init(snic, 0);
+	if (!rqi)
+		goto dev_rst_end;
 
-		memset(scsi_cmd_priv(sc), 0,
-			sizeof(struct snic_internal_io_state));
-		CMD_SP(sc) = (char *)rqi;
-		CMD_FLAGS(sc) = SNIC_NO_FLAGS;
+	memset(scsi_cmd_priv(reset_sc), 0,
+	       sizeof(struct snic_internal_io_state));
+	CMD_SP(reset_sc) = (char *)rqi;
+	CMD_FLAGS(reset_sc) = SNIC_NO_FLAGS;
 
-		/* Add special tag for dr coming from user spc */
-		rqi->tm_tag = SNIC_TAG_IOCTL_DEV_RST;
-		rqi->sc = sc;
-	}
+	/* Add special tag for dr coming from user spc */
+	rqi->tm_tag = SNIC_TAG_IOCTL_DEV_RST;
+	rqi->sc = reset_sc;
 
-	ret = snic_send_dr_and_wait(snic, sc);
+	ret = snic_send_dr_and_wait(snic, reset_sc);
 	if (ret) {
 		SNIC_HOST_ERR(snic->shost,
 			      "Devrst: IO w/ Tag %x Failed w/ err = %d\n",
-			      tag, ret);
+			      snic_cmd_tag(reset_sc), ret);
 
-		snic_unlink_and_release_req(snic, sc, 0);
+		snic_unlink_and_release_req(snic, reset_sc, 0);
 
 		goto dev_rst_end;
 	}
 
-	ret = snic_dr_finish(snic, sc);
+	ret = snic_dr_finish(snic, reset_sc);
 
 dev_rst_end:
-	SNIC_TRC(snic->shost->host_no, tag, (ulong) sc,
-		 jiffies_to_msecs(jiffies - start_time),
-		 0, SNIC_TRC_CMD(sc), SNIC_TRC_CMD_STATE_FLAGS(sc));
+	if (reset_sc) {
+		SNIC_TRC(snic->shost->host_no, snic_cmd_tag(reset_sc), (ulong) reset_sc,
+			 jiffies_to_msecs(jiffies - start_time),
+			 0, SNIC_TRC_CMD(reset_sc), SNIC_TRC_CMD_STATE_FLAGS(reset_sc));
+		scsi_put_reserved_cmd(reset_sc);
+	}
 
 	SNIC_SCSI_DBG(snic->shost,
 		      "Devrst: Returning from Device Reset : %s\n",
@@ -2229,10 +2230,11 @@ snic_device_reset(struct scsi_cmnd *sc)
  * snic_issue_hba_reset : Queues FW Reset Request.
  */
 static int
-snic_issue_hba_reset(struct snic *snic, struct scsi_cmnd *sc)
+snic_issue_hba_reset(struct snic *snic)
 {
 	struct snic_req_info *rqi = NULL;
 	struct snic_host_req *req = NULL;
+	struct scsi_cmnd *reset_sc;
 	spinlock_t *io_lock = NULL;
 	DECLARE_COMPLETION_ONSTACK(wait);
 	unsigned long flags;
@@ -2241,30 +2243,30 @@ snic_issue_hba_reset(struct snic *snic, struct scsi_cmnd *sc)
 	rqi = snic_req_init(snic, 0);
 	if (!rqi) {
 		ret = -ENOMEM;
-
 		goto hba_rst_end;
 	}
 
-	if (snic_cmd_tag(sc) == SCSI_NO_TAG) {
-		memset(scsi_cmd_priv(sc), 0,
-			sizeof(struct snic_internal_io_state));
-		SNIC_HOST_INFO(snic->shost, "issu_hr:Host reset thru ioctl.\n");
-		rqi->sc = sc;
+	reset_sc = scsi_get_reserved_cmd(snic->shost);
+	if (!reset_sc) {
+		ret = -EBUSY;
+		goto hba_rst_end_put;
 	}
-
+	memset(scsi_cmd_priv(reset_sc), 0,
+	       sizeof(struct snic_internal_io_state));
+	rqi->sc = reset_sc;
 	req = rqi_to_req(rqi);
 
-	io_lock = snic_io_lock_hash(snic, sc);
+	io_lock = snic_io_lock_hash(snic, reset_sc);
 	spin_lock_irqsave(io_lock, flags);
-	SNIC_BUG_ON(CMD_SP(sc) != NULL);
-	CMD_STATE(sc) = SNIC_IOREQ_PENDING;
-	CMD_SP(sc) = (char *) rqi;
-	CMD_FLAGS(sc) |= SNIC_IO_INITIALIZED;
+	SNIC_BUG_ON(CMD_SP(reset_sc) != NULL);
+	CMD_STATE(reset_sc) = SNIC_IOREQ_PENDING;
+	CMD_SP(reset_sc) = (char *) rqi;
+	CMD_FLAGS(reset_sc) |= SNIC_IO_INITIALIZED;
 	snic->remove_wait = &wait;
 	spin_unlock_irqrestore(io_lock, flags);
 
 	/* Initialize Request */
-	snic_io_hdr_enc(&req->hdr, SNIC_REQ_HBA_RESET, 0, snic_cmd_tag(sc),
+	snic_io_hdr_enc(&req->hdr, SNIC_REQ_HBA_RESET, 0, snic_cmd_tag(reset_sc),
 			snic->config.hid, 0, (ulong) rqi);
 
 	req->u.reset.flags = 0;
@@ -2279,7 +2281,7 @@ snic_issue_hba_reset(struct snic *snic, struct scsi_cmnd *sc)
 	}
 
 	spin_lock_irqsave(io_lock, flags);
-	CMD_FLAGS(sc) |= SNIC_HOST_RESET_ISSUED;
+	CMD_FLAGS(reset_sc) |= SNIC_HOST_RESET_ISSUED;
 	spin_unlock_irqrestore(io_lock, flags);
 	atomic64_inc(&snic->s_stats.reset.hba_resets);
 	SNIC_HOST_INFO(snic->shost, "Queued HBA Reset Successfully.\n");
@@ -2296,13 +2298,14 @@ snic_issue_hba_reset(struct snic *snic, struct scsi_cmnd *sc)
 
 	spin_lock_irqsave(io_lock, flags);
 	snic->remove_wait = NULL;
-	rqi = (struct snic_req_info *) CMD_SP(sc);
-	CMD_SP(sc) = NULL;
+	rqi = (struct snic_req_info *) CMD_SP(reset_sc);
+	CMD_SP(reset_sc) = NULL;
 	spin_unlock_irqrestore(io_lock, flags);
 
 	if (rqi)
 		snic_req_free(snic, rqi);
 
+	scsi_put_reserved_cmd(reset_sc);
 	ret = 0;
 
 	return ret;
@@ -2310,10 +2313,13 @@ snic_issue_hba_reset(struct snic *snic, struct scsi_cmnd *sc)
 hba_rst_err:
 	spin_lock_irqsave(io_lock, flags);
 	snic->remove_wait = NULL;
-	rqi = (struct snic_req_info *) CMD_SP(sc);
-	CMD_SP(sc) = NULL;
+	rqi = (struct snic_req_info *) CMD_SP(reset_sc);
+	CMD_SP(reset_sc) = NULL;
 	spin_unlock_irqrestore(io_lock, flags);
 
+hba_rst_end_put:
+	if (reset_sc)
+		scsi_put_reserved_cmd(reset_sc);
 	if (rqi)
 		snic_req_free(snic, rqi);
 
@@ -2326,7 +2332,7 @@ snic_issue_hba_reset(struct snic *snic, struct scsi_cmnd *sc)
 } /* end of snic_issue_hba_reset */
 
 int
-snic_reset(struct Scsi_Host *shost, struct scsi_cmnd *sc)
+snic_reset(struct Scsi_Host *shost)
 {
 	struct snic *snic = shost_priv(shost);
 	enum snic_state sv_state;
@@ -2355,7 +2361,7 @@ snic_reset(struct Scsi_Host *shost, struct scsi_cmnd *sc)
 	while (atomic_read(&snic->ios_inflight))
 		schedule_timeout(msecs_to_jiffies(1));
 
-	ret = snic_issue_hba_reset(snic, sc);
+	ret = snic_issue_hba_reset(snic);
 	if (ret) {
 		SNIC_HOST_ERR(shost,
 			      "reset:Host Reset Failed w/ err %d.\n",
@@ -2394,7 +2400,7 @@ snic_host_reset(struct scsi_cmnd *sc)
 		      sc, sc->cmnd[0], sc->request,
 		      snic_cmd_tag(sc), CMD_FLAGS(sc));
 
-	ret = snic_reset(shost, sc);
+	ret = snic_reset(shost);
 
 	SNIC_TRC(shost->host_no, snic_cmd_tag(sc), (ulong) sc,
 		 jiffies_to_msecs(jiffies - start_time),
-- 
2.17.1



* [PATCH RFC v2 16/24] snic: use tagset iter for traversing commands
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (14 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 15/24] snic: use reserved commands John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 17/24] aacraid: move scsi_add_host() John Garry
                   ` (7 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Use the tagset iterator to traverse active commands during device and
HBA reset.
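
Note that the converted callbacks use the iterator's reserved argument
to skip the reset command itself, which now occupies a reserved tag,
where the old loops compared against snic_cmd_tag(lr_sc). A minimal
sketch of that check, assuming the callback prototype used elsewhere in
this series (my_clean_iter() is an invented name):

#include <scsi/scsi_cmnd.h>

static bool my_clean_iter(struct scsi_cmnd *sc, void *data, bool reserved)
{
	/*
	 * Reserved commands are the driver's own internal requests, here
	 * the device/host reset that triggered this cleanup, so leave
	 * them alone and only clean up regular I/O.
	 */
	if (reserved)
		return true;

	/* ... abort or complete the regular command sc ... */
	return true;
}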

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/snic/snic_scsi.c | 406 ++++++++++++++++------------------
 1 file changed, 194 insertions(+), 212 deletions(-)

diff --git a/drivers/scsi/snic/snic_scsi.c b/drivers/scsi/snic/snic_scsi.c
index c4aaad945d51..f969d1654526 100644
--- a/drivers/scsi/snic/snic_scsi.c
+++ b/drivers/scsi/snic/snic_scsi.c
@@ -77,7 +77,7 @@ static const char * const snic_io_status_str[] = {
 	[SNIC_STAT_FATAL_ERROR]	= "SNIC_STAT_FATAL_ERROR",
 };
 
-static void snic_scsi_cleanup(struct snic *, int);
+static void snic_scsi_cleanup(struct snic *);
 
 const char *
 snic_state_to_str(unsigned int state)
@@ -974,13 +974,13 @@ snic_itmf_cmpl_handler(struct snic *snic, struct snic_fw_req *fwreq)
 
 
 static void
-snic_hba_reset_scsi_cleanup(struct snic *snic, struct scsi_cmnd *sc)
+snic_hba_reset_scsi_cleanup(struct snic *snic)
 {
 	struct snic_stats *st = &snic->s_stats;
 	long act_ios = 0, act_fwreqs = 0;
 
 	SNIC_SCSI_DBG(snic->shost, "HBA Reset scsi cleanup.\n");
-	snic_scsi_cleanup(snic, snic_cmd_tag(sc));
+	snic_scsi_cleanup(snic);
 
 	/* Update stats on pending IOs */
 	act_ios = atomic64_read(&st->io.active);
@@ -1021,17 +1021,6 @@ snic_hba_reset_cmpl_handler(struct snic *snic, struct snic_fw_req *fwreq)
 		      "reset_cmpl: type = %x, hdr_stat = %x, cmnd_id = %x, hid = %x, ctx = %lx\n",
 		      typ, hdr_stat, cmnd_id, hid, ctx);
 
-	/* spl case, host reset issued through ioctl */
-	if (cmnd_id == SCSI_NO_TAG) {
-		rqi = (struct snic_req_info *) ctx;
-		SNIC_HOST_INFO(snic->shost,
-			       "reset_cmpl:Tag %d ctx %lx cmpl stat %s\n",
-			       cmnd_id, ctx, snic_io_status_to_str(hdr_stat));
-		sc = rqi->sc;
-
-		goto ioctl_hba_rst;
-	}
-
 	if (cmnd_id >= snic->max_tag_id) {
 		SNIC_HOST_ERR(snic->shost,
 			      "reset_cmpl: Tag 0x%x out of Range,HdrStat %s\n",
@@ -1042,7 +1031,6 @@ snic_hba_reset_cmpl_handler(struct snic *snic, struct snic_fw_req *fwreq)
 	}
 
 	sc = scsi_host_find_tag(snic->shost, cmnd_id);
-ioctl_hba_rst:
 	if (!sc) {
 		atomic64_inc(&snic->s_stats.io.sc_null);
 		SNIC_HOST_ERR(snic->shost,
@@ -1089,7 +1077,7 @@ snic_hba_reset_cmpl_handler(struct snic *snic, struct snic_fw_req *fwreq)
 	spin_unlock_irqrestore(io_lock, flags);
 
 	/* scsi cleanup */
-	snic_hba_reset_scsi_cleanup(snic, sc);
+	snic_hba_reset_scsi_cleanup(snic);
 
 	SNIC_BUG_ON(snic_get_state(snic) != SNIC_OFFLINE &&
 		    snic_get_state(snic) != SNIC_FWRESET);
@@ -1359,7 +1347,7 @@ snic_issue_tm_req(struct snic *snic,
 		    int tmf)
 {
 	struct snic_host_req *tmreq = NULL;
-	int req_id = 0, tag = snic_cmd_tag(sc);
+	int tag = snic_cmd_tag(sc);
 	int ret = 0;
 
 	if (snic_get_state(snic) == SNIC_FWRESET)
@@ -1372,13 +1360,10 @@ snic_issue_tm_req(struct snic *snic,
 		      tmf, rqi, tag);
 
 
-	if (tmf == SNIC_ITMF_LUN_RESET) {
+	if (tmf == SNIC_ITMF_LUN_RESET)
 		tmreq = snic_dr_req_init(snic, rqi);
-		req_id = SCSI_NO_TAG;
-	} else {
+	else
 		tmreq = snic_abort_req_init(snic, rqi);
-		req_id = tag;
-	}
 
 	if (!tmreq) {
 		ret = -ENOMEM;
@@ -1386,7 +1371,7 @@ snic_issue_tm_req(struct snic *snic,
 		goto tmreq_err;
 	}
 
-	ret = snic_queue_itmf_req(snic, tmreq, sc, tmf, req_id);
+	ret = snic_queue_itmf_req(snic, tmreq, sc, tmf, tag);
 	if (ret)
 		goto tmreq_err;
 
@@ -1395,12 +1380,12 @@ snic_issue_tm_req(struct snic *snic,
 tmreq_err:
 	if (ret) {
 		SNIC_HOST_ERR(snic->shost,
-			      "issu_tmreq: Queing ITMF(%d) Req, sc %p rqi %p req_id %d tag %x fails err = %d\n",
-			      tmf, sc, rqi, req_id, tag, ret);
+			      "issu_tmreq: Queing ITMF(%d) Req, sc %p rqi %p tag %x fails err = %d\n",
+			      tmf, sc, rqi, tag, ret);
 	} else {
 		SNIC_SCSI_DBG(snic->shost,
-			      "issu_tmreq: Queuing ITMF(%d) Req, sc %p, rqi %p, req_id %d tag %x - Success.\n",
-			      tmf, sc, rqi, req_id, tag);
+			      "issu_tmreq: Queuing ITMF(%d) Req, sc %p, rqi %p, tag %x - Success.\n",
+			      tmf, sc, rqi, tag);
 	}
 
 	atomic_dec(&snic->ios_inflight);
@@ -1671,84 +1656,87 @@ snic_abort_cmd(struct scsi_cmnd *sc)
 	return ret;
 }
 
+struct snic_cmd_pending_iter_data {
+	struct snic *snic;
+	struct scsi_device *sdev;
+	int ret;
+};
 
-
-static int
-snic_is_abts_pending(struct snic *snic, struct scsi_cmnd *lr_sc)
+static bool
+snic_is_abts_pending_iter(struct scsi_cmnd *sc, void *data, bool reserved)
 {
+	struct snic_cmd_pending_iter_data *iter_data = data;
 	struct snic_req_info *rqi = NULL;
-	struct scsi_cmnd *sc = NULL;
-	struct scsi_device *lr_sdev = NULL;
 	spinlock_t *io_lock = NULL;
-	u32 tag;
 	unsigned long flags;
 
-	if (lr_sc)
-		lr_sdev = lr_sc->device;
-
-	/* walk through the tag map, an dcheck if IOs are still pending in fw*/
-	for (tag = 0; tag < snic->max_tag_id; tag++) {
-		io_lock = snic_io_lock_tag(snic, tag);
-
-		spin_lock_irqsave(io_lock, flags);
-		sc = scsi_host_find_tag(snic->shost, tag);
+	if (reserved)
+		return true;
 
-		if (!sc || (lr_sc && (sc->device != lr_sdev || sc == lr_sc))) {
-			spin_unlock_irqrestore(io_lock, flags);
+	if (iter_data->sdev && iter_data->sdev != sc->device)
+		return true;
 
-			continue;
-		}
+	io_lock = snic_io_lock_tag(iter_data->snic, sc->request->tag);
+	spin_lock_irqsave(io_lock, flags);
 
-		rqi = (struct snic_req_info *) CMD_SP(sc);
-		if (!rqi) {
-			spin_unlock_irqrestore(io_lock, flags);
+	rqi = (struct snic_req_info *) CMD_SP(sc);
+	if (!rqi) {
+		spin_unlock_irqrestore(io_lock, flags);
+		return true;
+	}
 
-			continue;
-		}
+	/*
+	 * Found IO that is still pending w/ firmware and belongs to
+	 * the LUN that is under reset, if lr_sc != NULL
+	 */
+	SNIC_SCSI_DBG(iter_data->snic->shost, "Found IO in %s on LUN\n",
+		      snic_ioreq_state_to_str(CMD_STATE(sc)));
 
-		/*
-		 * Found IO that is still pending w/ firmware and belongs to
-		 * the LUN that is under reset, if lr_sc != NULL
-		 */
-		SNIC_SCSI_DBG(snic->shost, "Found IO in %s on LUN\n",
-			      snic_ioreq_state_to_str(CMD_STATE(sc)));
+	if (CMD_STATE(sc) == SNIC_IOREQ_ABTS_PENDING)
+		iter_data->ret = 1;
+	spin_unlock_irqrestore(io_lock, flags);
 
-		if (CMD_STATE(sc) == SNIC_IOREQ_ABTS_PENDING) {
-			spin_unlock_irqrestore(io_lock, flags);
+	return true;
+}
 
-			return 1;
-		}
+static int
+snic_is_abts_pending(struct snic *snic, struct scsi_device *lr_sdev)
+{
+	struct snic_cmd_pending_iter_data iter_data = {
+		.snic = snic,
+		.sdev = lr_sdev,
+		.ret = 0,
+	};
 
-		spin_unlock_irqrestore(io_lock, flags);
-	}
+	/* walk through the tag map, and check if IOs are still pending in fw */
+	scsi_host_tagset_busy_iter(snic->shost,
+				snic_is_abts_pending_iter, &iter_data);
 
-	return 0;
+	return iter_data.ret;
 } /* end of snic_is_abts_pending */
 
-static int
-snic_dr_clean_single_req(struct snic *snic,
-			 u32 tag,
-			 struct scsi_device *lr_sdev)
+static bool
+snic_dr_clean_single_req(struct scsi_cmnd *sc, void *data, bool reserved)
 {
 	struct snic_req_info *rqi = NULL;
 	struct snic_tgt *tgt = NULL;
-	struct scsi_cmnd *sc = NULL;
 	spinlock_t *io_lock = NULL;
 	u32 sv_state = 0, tmf = 0;
 	DECLARE_COMPLETION_ONSTACK(tm_done);
 	unsigned long flags;
 	int ret = 0;
+	struct snic_cmd_pending_iter_data *iter_data = data;
+	struct snic *snic = iter_data->snic;
 
-	io_lock = snic_io_lock_tag(snic, tag);
-	spin_lock_irqsave(io_lock, flags);
-	sc = scsi_host_find_tag(snic->shost, tag);
+	if (reserved)
+		return true;
 
-	/* Ignore Cmd that don't belong to Lun Reset device */
-	if (!sc || sc->device != lr_sdev)
-		goto skip_clean;
+	if (sc->device != iter_data->sdev)
+		return true;
 
+	io_lock = snic_io_lock_tag(snic, sc->request->tag);
+	spin_lock_irqsave(io_lock, flags);
 	rqi = (struct snic_req_info *) CMD_SP(sc);
-
 	if (!rqi)
 		goto skip_clean;
 
@@ -1807,7 +1795,7 @@ snic_dr_clean_single_req(struct snic *snic,
 	if (ret) {
 		SNIC_HOST_ERR(snic->shost,
 			      "clean_single_req_err:sc %p, tag %d abt failed. tm_tag %d flags 0x%llx\n",
-			      sc, tag, rqi->tm_tag, CMD_FLAGS(sc));
+			      sc, sc->request->tag, rqi->tm_tag, CMD_FLAGS(sc));
 
 		spin_lock_irqsave(io_lock, flags);
 		rqi = (struct snic_req_info *) CMD_SP(sc);
@@ -1818,7 +1806,7 @@ snic_dr_clean_single_req(struct snic *snic,
 		if (CMD_STATE(sc) == SNIC_IOREQ_ABTS_PENDING)
 			CMD_STATE(sc) = sv_state;
 
-		ret = 1;
+		iter_data->ret = 1;
 		goto skip_clean;
 	}
 
@@ -1844,56 +1832,49 @@ snic_dr_clean_single_req(struct snic *snic,
 	if (CMD_ABTS_STATUS(sc) == SNIC_INVALID_CODE) {
 		SNIC_HOST_ERR(snic->shost,
 			      "clean_single_req_err:sc %p tag %d abt still pending w/ fw, tm_tag %d flags 0x%llx\n",
-			      sc, tag, rqi->tm_tag, CMD_FLAGS(sc));
+			      sc, sc->request->tag, rqi->tm_tag, CMD_FLAGS(sc));
 
 		CMD_FLAGS(sc) |= SNIC_IO_ABTS_TERM_DONE;
-		ret = 1;
-
-		goto skip_clean;
-	}
-
-	CMD_STATE(sc) = SNIC_IOREQ_ABTS_COMPLETE;
-	CMD_SP(sc) = NULL;
-	spin_unlock_irqrestore(io_lock, flags);
-
-	snic_release_req_buf(snic, rqi, sc);
-
-	sc->result = (DID_ERROR << 16);
-	sc->scsi_done(sc);
+		iter_data->ret = 1;
+	} else {
+		CMD_STATE(sc) = SNIC_IOREQ_ABTS_COMPLETE;
+		CMD_SP(sc) = NULL;
+		spin_unlock_irqrestore(io_lock, flags);
 
-	ret = 0;
+		snic_release_req_buf(snic, rqi, sc);
 
-	return ret;
+		sc->result = (DID_ERROR << 16);
+		sc->scsi_done(sc);
+	}
+	return true;
 
 skip_clean:
 	spin_unlock_irqrestore(io_lock, flags);
-
-	return ret;
+	return true;
 } /* end of snic_dr_clean_single_req */
 
 static int
-snic_dr_clean_pending_req(struct snic *snic, struct scsi_cmnd *lr_sc)
+snic_dr_clean_pending_req(struct snic *snic, struct scsi_device *lr_sdev)
 {
-	struct scsi_device *lr_sdev = lr_sc->device;
-	u32 tag = 0;
 	int ret = FAILED;
+	struct snic_cmd_pending_iter_data iter_data = {
+		.snic = snic,
+		.sdev = lr_sdev,
+		.ret = 0,
+	};
 
-	for (tag = 0; tag < snic->max_tag_id; tag++) {
-		if (tag == snic_cmd_tag(lr_sc))
-			continue;
+	scsi_host_tagset_busy_iter(snic->shost,
+				snic_dr_clean_single_req, &iter_data);
+	if (iter_data.ret) {
+		SNIC_HOST_ERR(snic->shost, "clean_err = %d\n", iter_data.ret);
 
-		ret = snic_dr_clean_single_req(snic, tag, lr_sdev);
-		if (ret) {
-			SNIC_HOST_ERR(snic->shost, "clean_err:tag = %d\n", tag);
-
-			goto clean_err;
-		}
+		goto clean_err;
 	}
 
 	schedule_timeout(msecs_to_jiffies(100));
 
 	/* Walk through all the cmds and check abts status. */
-	if (snic_is_abts_pending(snic, lr_sc)) {
+	if (snic_is_abts_pending(snic, lr_sdev)) {
 		ret = FAILED;
 
 		goto clean_err;
@@ -1980,7 +1961,7 @@ snic_dr_finish(struct snic *snic, struct scsi_cmnd *sc)
 	 * succeeds.
 	 */
 
-	ret = snic_dr_clean_pending_req(snic, sc);
+	ret = snic_dr_clean_pending_req(snic, sc->device);
 	if (ret) {
 		spin_lock_irqsave(io_lock, flags);
 		SNIC_SCSI_DBG(snic->shost,
@@ -2438,87 +2419,83 @@ snic_cmpl_pending_tmreq(struct snic *snic, struct scsi_cmnd *sc)
 		complete(rqi->abts_done);
 }
 
-/*
- * snic_scsi_cleanup: Walks through tag map and releases the reqs
- */
-static void
-snic_scsi_cleanup(struct snic *snic, int ex_tag)
+static bool
+snic_scsi_cleanup_iter(struct scsi_cmnd *sc, void *data, bool reserved)
 {
+	struct snic *snic = data;
 	struct snic_req_info *rqi = NULL;
-	struct scsi_cmnd *sc = NULL;
 	spinlock_t *io_lock = NULL;
 	unsigned long flags;
-	int tag;
 	u64 st_time = 0;
 
-	SNIC_SCSI_DBG(snic->shost, "sc_clean: scsi cleanup.\n");
-
-	for (tag = 0; tag < snic->max_tag_id; tag++) {
-		/* Skip ex_tag */
-		if (tag == ex_tag)
-			continue;
+	if (reserved)
+		return true;
 
-		io_lock = snic_io_lock_tag(snic, tag);
-		spin_lock_irqsave(io_lock, flags);
-		sc = scsi_host_find_tag(snic->shost, tag);
-		if (!sc) {
-			spin_unlock_irqrestore(io_lock, flags);
-
-			continue;
-		}
-
-		if (unlikely(snic_tmreq_pending(sc))) {
-			/*
-			 * When FW Completes reset w/o sending completions
-			 * for outstanding ios.
-			 */
-			snic_cmpl_pending_tmreq(snic, sc);
-			spin_unlock_irqrestore(io_lock, flags);
-
-			continue;
-		}
+	io_lock = snic_io_lock_tag(snic, snic_cmd_tag(sc));
+	spin_lock_irqsave(io_lock, flags);
+	if (unlikely(snic_tmreq_pending(sc))) {
+		/*
+		 * When FW Completes reset w/o sending completions
+		 * for outstanding ios.
+		 */
+		snic_cmpl_pending_tmreq(snic, sc);
+		spin_unlock_irqrestore(io_lock, flags);
 
-		rqi = (struct snic_req_info *) CMD_SP(sc);
-		if (!rqi) {
-			spin_unlock_irqrestore(io_lock, flags);
+		return true;
+	}
 
-			goto cleanup;
-		}
+	rqi = (struct snic_req_info *) CMD_SP(sc);
+	if (!rqi) {
+		spin_unlock_irqrestore(io_lock, flags);
+		goto cleanup;
+	}
 
-		SNIC_SCSI_DBG(snic->shost,
-			      "sc_clean: sc %p, rqi %p, tag %d flags 0x%llx\n",
-			      sc, rqi, tag, CMD_FLAGS(sc));
+	SNIC_SCSI_DBG(snic->shost,
+		      "sc_clean: sc %p, rqi %p, tag %d flags 0x%llx\n",
+		      sc, rqi, snic_cmd_tag(sc), CMD_FLAGS(sc));
 
-		CMD_SP(sc) = NULL;
-		CMD_FLAGS(sc) |= SNIC_SCSI_CLEANUP;
-		spin_unlock_irqrestore(io_lock, flags);
-		st_time = rqi->start_time;
+	CMD_SP(sc) = NULL;
+	CMD_FLAGS(sc) |= SNIC_SCSI_CLEANUP;
+	spin_unlock_irqrestore(io_lock, flags);
+	st_time = rqi->start_time;
 
-		SNIC_HOST_INFO(snic->shost,
-			       "sc_clean: Releasing rqi %p : flags 0x%llx\n",
-			       rqi, CMD_FLAGS(sc));
+	SNIC_HOST_INFO(snic->shost,
+		       "sc_clean: Releasing rqi %p : flags 0x%llx\n",
+		       rqi, CMD_FLAGS(sc));
 
-		snic_release_req_buf(snic, rqi, sc);
+	snic_release_req_buf(snic, rqi, sc);
 
 cleanup:
-		sc->result = DID_TRANSPORT_DISRUPTED << 16;
-		SNIC_HOST_INFO(snic->shost,
-			       "sc_clean: DID_TRANSPORT_DISRUPTED for sc %p, Tag %d flags 0x%llx rqi %p duration %u msecs\n",
-			       sc, sc->request->tag, CMD_FLAGS(sc), rqi,
-			       jiffies_to_msecs(jiffies - st_time));
+	sc->result = DID_TRANSPORT_DISRUPTED << 16;
+	SNIC_HOST_INFO(snic->shost,
+		       "sc_clean: DID_TRANSPORT_DISRUPTED for sc %p, Tag %d flags 0x%llx rqi %p duration %u msecs\n",
+		       sc, snic_cmd_tag(sc), CMD_FLAGS(sc), rqi,
+		       jiffies_to_msecs(jiffies - st_time));
 
-		/* Update IO stats */
-		snic_stats_update_io_cmpl(&snic->s_stats);
+	/* Update IO stats */
+	snic_stats_update_io_cmpl(&snic->s_stats);
 
-		if (sc->scsi_done) {
-			SNIC_TRC(snic->shost->host_no, tag, (ulong) sc,
-				 jiffies_to_msecs(jiffies - st_time), 0,
-				 SNIC_TRC_CMD(sc),
-				 SNIC_TRC_CMD_STATE_FLAGS(sc));
+	if (sc->scsi_done) {
+		SNIC_TRC(snic->shost->host_no, snic_cmd_tag(sc), (ulong) sc,
+			 jiffies_to_msecs(jiffies - st_time), 0,
+			 SNIC_TRC_CMD(sc),
+			 SNIC_TRC_CMD_STATE_FLAGS(sc));
 
-			sc->scsi_done(sc);
-		}
+		sc->scsi_done(sc);
 	}
+	return true;
+}
+
+/*
+ * snic_scsi_cleanup: Walks through tag map and releases the reqs
+ */
+static void
+snic_scsi_cleanup(struct snic *snic)
+{
+	SNIC_SCSI_DBG(snic->shost, "sc_clean: scsi cleanup.\n");
+
+	scsi_host_tagset_busy_iter(snic->shost,
+				   snic_scsi_cleanup_iter, snic);
 } /* end of snic_scsi_cleanup */
 
 void
@@ -2526,7 +2503,7 @@ snic_shutdown_scsi_cleanup(struct snic *snic)
 {
 	SNIC_HOST_INFO(snic->shost, "Shutdown time SCSI Cleanup.\n");
 
-	snic_scsi_cleanup(snic, SCSI_NO_TAG);
+	snic_scsi_cleanup(snic);
 } /* end of snic_shutdown_scsi_cleanup */
 
 /*
@@ -2615,6 +2592,40 @@ snic_internal_abort_io(struct snic *snic, struct scsi_cmnd *sc, int tmf)
 	return ret;
 } /* end of snic_internal_abort_io */
 
+struct snic_tgt_scsi_abort_io_iter_data {
+	struct snic *snic;
+	struct snic_tgt *tgt;
+	int tmf;
+	int abt_cnt;
+};
+
+static bool
+snic_tgt_scsi_abort_io_iter(struct scsi_cmnd *sc, void *data, bool reserved)
+{
+	struct snic_tgt_scsi_abort_io_iter_data *iter_data = data;
+	struct snic_tgt *sc_tgt = NULL;
+	int ret;
+
+	if (reserved)
+		return true;
+
+	sc_tgt = starget_to_tgt(scsi_target(sc->device));
+	if (sc_tgt != iter_data->tgt)
+		return true;
+
+	ret = snic_internal_abort_io(iter_data->snic, sc, iter_data->tmf);
+	if (ret < 0) {
+		SNIC_HOST_ERR(iter_data->snic->shost,
+			      "tgt_abt_io: Tag %x, Failed w err = %d\n",
+			      snic_cmd_tag(sc), ret);
+		return true;
+	}
+
+	if (ret == SUCCESS)
+		iter_data->abt_cnt++;
+	return true;
+}
+
 /*
  * snic_tgt_scsi_abort_io : called by snic_tgt_del
  */
@@ -2622,11 +2633,10 @@ int
 snic_tgt_scsi_abort_io(struct snic_tgt *tgt)
 {
 	struct snic *snic = NULL;
-	struct scsi_cmnd *sc = NULL;
-	struct snic_tgt *sc_tgt = NULL;
-	spinlock_t *io_lock = NULL;
-	unsigned long flags;
-	int ret = 0, tag, abt_cnt = 0, tmf = 0;
+	struct snic_tgt_scsi_abort_io_iter_data iter_data = {
+		.tgt = tgt,
+		.abt_cnt = 0,
+	};
 
 	if (!tgt)
 		return -1;
@@ -2635,43 +2645,15 @@ snic_tgt_scsi_abort_io(struct snic_tgt *tgt)
 	SNIC_SCSI_DBG(snic->shost, "tgt_abt_io: Cleaning Pending IOs.\n");
 
 	if (tgt->tdata.typ == SNIC_TGT_DAS)
-		tmf = SNIC_ITMF_ABTS_TASK;
+		iter_data.tmf = SNIC_ITMF_ABTS_TASK;
 	else
-		tmf = SNIC_ITMF_ABTS_TASK_TERM;
-
-	for (tag = 0; tag < snic->max_tag_id; tag++) {
-		io_lock = snic_io_lock_tag(snic, tag);
-
-		spin_lock_irqsave(io_lock, flags);
-		sc = scsi_host_find_tag(snic->shost, tag);
-		if (!sc) {
-			spin_unlock_irqrestore(io_lock, flags);
-
-			continue;
-		}
-
-		sc_tgt = starget_to_tgt(scsi_target(sc->device));
-		if (sc_tgt != tgt) {
-			spin_unlock_irqrestore(io_lock, flags);
-
-			continue;
-		}
-		spin_unlock_irqrestore(io_lock, flags);
-
-		ret = snic_internal_abort_io(snic, sc, tmf);
-		if (ret < 0) {
-			SNIC_HOST_ERR(snic->shost,
-				      "tgt_abt_io: Tag %x, Failed w err = %d\n",
-				      tag, ret);
+		iter_data.tmf = SNIC_ITMF_ABTS_TASK_TERM;
+	iter_data.snic = snic;
 
-			continue;
-		}
-
-		if (ret == SUCCESS)
-			abt_cnt++;
-	}
+	scsi_host_tagset_busy_iter(snic->shost,
+				   snic_tgt_scsi_abort_io_iter, &iter_data);
 
-	SNIC_SCSI_DBG(snic->shost, "tgt_abt_io: abt_cnt = %d\n", abt_cnt);
+	SNIC_SCSI_DBG(snic->shost, "tgt_abt_io: abt_cnt = %d\n", iter_data.abt_cnt);
 
 	return 0;
 } /* end of snic_tgt_scsi_abort_io */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH RFC v2 17/24] aacraid: move scsi_add_host()
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (15 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 16/24] snic: use tagset iter for traversing commands John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 18/24] aacraid: use private commands John Garry
                   ` (6 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.de>

Move the call to scsi_add_host() so that the Scsi_Host structure
is initialized before any I/O is sent.
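
For illustration, a rough sketch of the probe ordering this aims for
(the my_* names are placeholders, not actual aacraid code):

#include <linux/pci.h>
#include <scsi/scsi_host.h>

struct my_hba { void *regs; };			/* stand-in for driver state */
static struct scsi_host_template my_template;	/* stand-in template */

static int my_hw_init(struct pci_dev *pdev, struct Scsi_Host *shost)
{
	return 0;				/* real HBA bring-up goes here */
}

static int my_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct Scsi_Host *shost;
	int error;

	shost = scsi_host_alloc(&my_template, sizeof(struct my_hba));
	if (!shost)
		return -ENOMEM;

	/* Fill in the host limits before the host is registered ... */
	shost->max_id = 16;			/* placeholder values */
	shost->max_lun = 256;
	shost->sg_tablesize = 64;

	error = my_hw_init(pdev, shost);	/* hypothetical HBA bring-up */
	if (error)
		goto out_put;

	/* ... and register the host before any internal I/O is issued */
	error = scsi_add_host(shost, &pdev->dev);
	if (error)
		goto out_put;

	scsi_scan_host(shost);
	return 0;

out_put:
	scsi_host_put(shost);
	return error;
}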

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/aacraid/linit.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
index ee6bc2f9b80a..0e084b09615d 100644
--- a/drivers/scsi/aacraid/linit.c
+++ b/drivers/scsi/aacraid/linit.c
@@ -1676,6 +1676,9 @@ static int aac_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	shost->unique_id = unique_id;
 	shost->max_cmd_len = 16;
 	shost->use_cmd_list = 1;
+	shost->max_id = MAXIMUM_NUM_CONTAINERS;
+	shost->max_lun = AAC_MAX_LUN;
+	shost->sg_tablesize = HBA_MAX_SG_SEPARATE;
 
 	if (aac_cfg_major == AAC_CHARDEV_NEEDS_REINIT)
 		aac_init_char();
@@ -1740,6 +1743,10 @@ static int aac_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 		goto out_deinit;
 	}
 
+	error = scsi_add_host(shost, &pdev->dev);
+	if (error)
+		goto out_deinit;
+
 	aac->maximum_num_channels = aac_drivers[index].channels;
 	error = aac_get_adapter_info(aac);
 	if (error < 0)
@@ -1798,18 +1805,8 @@ static int aac_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (!aac->sa_firmware && aac_drivers[index].quirks & AAC_QUIRK_SRC)
 		aac_intr_normal(aac, 0, 2, 0, NULL);
 
-	/*
-	 * dmb - we may need to move the setting of these parms somewhere else once
-	 * we get a fib that can report the actual numbers
-	 */
-	shost->max_lun = AAC_MAX_LUN;
-
 	pci_set_drvdata(pdev, shost);
 
-	error = scsi_add_host(shost, &pdev->dev);
-	if (error)
-		goto out_deinit;
-
 	aac_scan_host(aac);
 
 	pci_enable_pcie_error_reporting(pdev);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH RFC v2 18/24] aacraid: use private commands
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (16 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 17/24] aacraid: move scsi_add_host() John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 19/24] aacraid: replace cmd_list with scsi_host_tagset_busy_iter() John Garry
                   ` (5 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Use private (reserved) commands from the block layer to back the driver's
internal commands, instead of maintaining a private fib free list.
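
The pattern, as a minimal sketch: scsi_{get,put}_reserved_cmd() are the
helpers added earlier in this series, while the my_* names and the
tag-indexed context pool are illustrative only, not the actual aacraid
code:

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_tcq.h>

struct my_internal_cmd {
	struct scsi_cmnd *scmd;
	/* driver-specific state (the hardware FIB in aacraid) goes here */
};

static struct my_internal_cmd *my_get_internal(struct Scsi_Host *shost,
					       struct my_internal_cmd *pool)
{
	struct scsi_cmnd *scmd;
	struct my_internal_cmd *cmd;

	/* Takes one of the shost->nr_reserved_cmds reserved tags */
	scmd = scsi_get_reserved_cmd(shost);
	if (!scmd)
		return NULL;

	cmd = &pool[scmd->request->tag];	/* context indexed by tag */
	cmd->scmd = scmd;
	return cmd;
}

static void my_put_internal(struct my_internal_cmd *cmd)
{
	struct scsi_cmnd *scmd = cmd->scmd;

	cmd->scmd = NULL;
	scsi_put_reserved_cmd(scmd);		/* hands the tag back */
}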

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/aacraid/aacraid.h |  6 +++--
 drivers/scsi/aacraid/commsup.c | 47 ++++++++++++----------------------
 drivers/scsi/aacraid/linit.c   |  1 +
 3 files changed, 22 insertions(+), 32 deletions(-)

diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h
index e3e4ecbea726..0bef32a62e41 100644
--- a/drivers/scsi/aacraid/aacraid.h
+++ b/drivers/scsi/aacraid/aacraid.h
@@ -1291,13 +1291,16 @@ struct fsa_dev_info {
 };
 
 struct fib {
-	void			*next;	/* this is used by the allocator */
 	s16			type;
 	s16			size;
 	/*
 	 *	The Adapter that this I/O is destined for.
 	 */
 	struct aac_dev		*dev;
+	/*
+	 * The associated scsi command
+	 */
+	struct scsi_cmnd 	*scmd;
 	/*
 	 *	This is the event the sendfib routine will wait on if the
 	 *	caller did not pass one and this is synch io.
@@ -1552,7 +1555,6 @@ struct aac_dev
 	 */
 	struct fib              *fibs;
 
-	struct fib		*free_fib;
 	spinlock_t		fib_lock;
 
 	struct mutex		ioctl_mutex;
diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
index 5a8a999606ea..225d67b3177a 100644
--- a/drivers/scsi/aacraid/commsup.c
+++ b/drivers/scsi/aacraid/commsup.c
@@ -35,6 +35,7 @@
 #include <scsi/scsi_host.h>
 #include <scsi/scsi_device.h>
 #include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_tcq.h>
 
 #include "aacraid.h"
 
@@ -173,7 +174,6 @@ int aac_fib_setup(struct aac_dev * dev)
 		fibptr->dev = dev;
 		fibptr->hw_fib_va = hw_fib;
 		fibptr->data = (void *) fibptr->hw_fib_va->data;
-		fibptr->next = fibptr+1;	/* Forward chain the fibs */
 		init_completion(&fibptr->event_wait);
 		spin_lock_init(&fibptr->event_lock);
 		hw_fib->header.XferState = cpu_to_le32(0xffffffff);
@@ -200,14 +200,6 @@ int aac_fib_setup(struct aac_dev * dev)
 	 */
 	aac_fib_vector_assign(dev);
 
-	/*
-	 *	Add the fib chain to the free list
-	 */
-	dev->fibs[dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB - 1].next = NULL;
-	/*
-	*	Set 8 fibs aside for management tools
-	*/
-	dev->free_fib = &dev->fibs[dev->scsi_host_ptr->can_queue];
 	return 0;
 }
 
@@ -233,6 +225,7 @@ struct fib *aac_fib_alloc_tag(struct aac_dev *dev, struct scsi_cmnd *scmd)
 	fibptr->callback_data = NULL;
 	fibptr->callback = NULL;
 	fibptr->flags = 0;
+	fibptr->scmd = scmd;
 
 	return fibptr;
 }
@@ -247,29 +240,19 @@ struct fib *aac_fib_alloc_tag(struct aac_dev *dev, struct scsi_cmnd *scmd)
 
 struct fib *aac_fib_alloc(struct aac_dev *dev)
 {
-	struct fib * fibptr;
+	struct scsi_cmnd *scmd;
+	struct fib * fibptr = NULL;
 	unsigned long flags;
+
 	spin_lock_irqsave(&dev->fib_lock, flags);
-	fibptr = dev->free_fib;
-	if(!fibptr){
-		spin_unlock_irqrestore(&dev->fib_lock, flags);
-		return fibptr;
-	}
-	dev->free_fib = fibptr->next;
+	scmd = scsi_get_reserved_cmd(dev->scsi_host_ptr);
+	if (scmd)
+		fibptr = aac_fib_alloc_tag(dev, scmd);
 	spin_unlock_irqrestore(&dev->fib_lock, flags);
-	/*
-	 *	Set the proper node type code and node byte size
-	 */
-	fibptr->type = FSAFS_NTC_FIB_CONTEXT;
+	if (!fibptr)
+		return NULL;
+
 	fibptr->size = sizeof(struct fib);
-	/*
-	 *	Null out fields that depend on being zero at the start of
-	 *	each I/O
-	 */
-	fibptr->hw_fib_va->header.XferState = 0;
-	fibptr->flags = 0;
-	fibptr->callback = NULL;
-	fibptr->callback_data = NULL;
 
 	return fibptr;
 }
@@ -297,8 +280,12 @@ void aac_fib_free(struct fib *fibptr)
 			 (void*)fibptr,
 			 le32_to_cpu(fibptr->hw_fib_va->header.XferState));
 	}
-	fibptr->next = fibptr->dev->free_fib;
-	fibptr->dev->free_fib = fibptr;
+	if (fibptr->scmd) {
+		struct scsi_cmnd *scmd = fibptr->scmd;
+
+		fibptr->scmd = NULL;
+		scsi_put_reserved_cmd(scmd);
+	}
 	spin_unlock_irqrestore(&fibptr->dev->fib_lock, flags);
 }
 
diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
index 0e084b09615d..f027e91656c9 100644
--- a/drivers/scsi/aacraid/linit.c
+++ b/drivers/scsi/aacraid/linit.c
@@ -1678,6 +1678,7 @@ static int aac_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	shost->use_cmd_list = 1;
 	shost->max_id = MAXIMUM_NUM_CONTAINERS;
 	shost->max_lun = AAC_MAX_LUN;
+	shost->nr_reserved_cmds = AAC_NUM_MGT_FIB;
 	shost->sg_tablesize = HBA_MAX_SG_SEPARATE;
 
 	if (aac_cfg_major == AAC_CHARDEV_NEEDS_REINIT)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH RFC v2 19/24] aacraid: replace cmd_list with scsi_host_tagset_busy_iter()
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (17 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 18/24] aacraid: use private commands John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 20/24] aacraid: use scsi_host_tagset_busy_iter() to traverse outstanding commands John Garry
                   ` (4 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

As we're now using a common pool for handling requests, we can use
scsi_host_tagset_busy_iter() to traverse commands and drop the
legacy cmd_list usage.
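
The shape of the conversion, as a sketch: scsi_host_tagset_busy_iter()
is the iterator added earlier in this series, and the my_* names are
illustrative only:

#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

struct my_count_data {
	struct scsi_device *sdev;	/* optional per-device filter */
	int active;
};

/* Called once per outstanding command; return true to keep iterating. */
static bool my_count_iter(struct scsi_cmnd *cmd, void *data, bool reserved)
{
	struct my_count_data *cd = data;

	if (reserved)			/* skip driver-internal commands */
		return true;
	if (cd->sdev && cmd->device != cd->sdev)
		return true;

	cd->active++;
	return true;
}

static int my_count_active(struct Scsi_Host *shost, struct scsi_device *sdev)
{
	struct my_count_data cd = { .sdev = sdev, .active = 0 };

	scsi_host_tagset_busy_iter(shost, my_count_iter, &cd);
	return cd.active;
}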

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/aacraid/aachba.c   | 125 ++++++++++++++++++--------------
 drivers/scsi/aacraid/comminit.c |  28 ++++---
 drivers/scsi/aacraid/commsup.c  |  38 ++++------
 drivers/scsi/aacraid/linit.c    |  98 +++++++++++++------------
 4 files changed, 152 insertions(+), 137 deletions(-)

diff --git a/drivers/scsi/aacraid/aachba.c b/drivers/scsi/aacraid/aachba.c
index 33dbc051bff9..bf2cde874303 100644
--- a/drivers/scsi/aacraid/aachba.c
+++ b/drivers/scsi/aacraid/aachba.c
@@ -809,7 +809,6 @@ int aac_probe_container(struct aac_dev *dev, int cid)
 		kfree(scsidev);
 		return -ENOMEM;
 	}
-	scsicmd->list.next = NULL;
 	scsicmd->scsi_done = (void (*)(struct scsi_cmnd*))aac_probe_container_callback1;
 
 	scsicmd->device = scsidev;
@@ -2639,76 +2638,92 @@ static void synchronize_callback(void *context, struct fib *fibptr)
 	cmd->scsi_done(cmd);
 }
 
+struct aac_synchronize_iter_data {
+	u64 lba;
+	u32 count;
+	int active;
+};
+
+static bool aac_synchronize_iter(struct scsi_cmnd *cmd, void *data,
+				 bool reserved)
+{
+	struct aac_synchronize_iter_data *iter_data = data;
+	u64 cmnd_lba;
+	u32 cmnd_count;
+
+	if (cmd->SCp.phase != AAC_OWNER_FIRMWARE || reserved)
+		return true;
+
+	if (cmd->cmnd[0] == WRITE_6) {
+		cmnd_lba = ((cmd->cmnd[1] & 0x1F) << 16) |
+			(cmd->cmnd[2] << 8) |
+			cmd->cmnd[3];
+		cmnd_count = cmd->cmnd[4];
+		if (cmnd_count == 0)
+			cmnd_count = 256;
+	} else if (cmd->cmnd[0] == WRITE_16) {
+		cmnd_lba = ((u64)cmd->cmnd[2] << 56) |
+			((u64)cmd->cmnd[3] << 48) |
+			((u64)cmd->cmnd[4] << 40) |
+			((u64)cmd->cmnd[5] << 32) |
+			((u64)cmd->cmnd[6] << 24) |
+			(cmd->cmnd[7] << 16) |
+			(cmd->cmnd[8] << 8) |
+			cmd->cmnd[9];
+		cmnd_count = (cmd->cmnd[10] << 24) |
+			(cmd->cmnd[11] << 16) |
+			(cmd->cmnd[12] << 8) |
+			cmd->cmnd[13];
+	} else if (cmd->cmnd[0] == WRITE_12) {
+		cmnd_lba = ((u64)cmd->cmnd[2] << 24) |
+			(cmd->cmnd[3] << 16) |
+			(cmd->cmnd[4] << 8) |
+			cmd->cmnd[5];
+		cmnd_count = (cmd->cmnd[6] << 24) |
+			(cmd->cmnd[7] << 16) |
+			(cmd->cmnd[8] << 8) |
+			cmd->cmnd[9];
+	} else if (cmd->cmnd[0] == WRITE_10) {
+		cmnd_lba = ((u64)cmd->cmnd[2] << 24) |
+			(cmd->cmnd[3] << 16) |
+			(cmd->cmnd[4] << 8) |
+			cmd->cmnd[5];
+		cmnd_count = (cmd->cmnd[7] << 8) |
+			cmd->cmnd[8];
+	} else
+		return true;
+
+	if (((cmnd_lba + cmnd_count) < iter_data->lba) ||
+	    (iter_data->count && ((iter_data->lba + iter_data->count) < cmnd_lba)))
+		return true;
+
+	++iter_data->active;
+	return true;
+}
+
 static int aac_synchronize(struct scsi_cmnd *scsicmd)
 {
 	int status;
 	struct fib *cmd_fibcontext;
 	struct aac_synchronize *synchronizecmd;
-	struct scsi_cmnd *cmd;
 	struct scsi_device *sdev = scsicmd->device;
 	int active = 0;
 	struct aac_dev *aac;
 	u64 lba = ((u64)scsicmd->cmnd[2] << 24) | (scsicmd->cmnd[3] << 16) |
 		(scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
 	u32 count = (scsicmd->cmnd[7] << 8) | scsicmd->cmnd[8];
-	unsigned long flags;
+	struct aac_synchronize_iter_data iter_data = {
+		.lba = lba,
+		.count = count,
+		.active = 0,
+	};
 
 	/*
 	 * Wait for all outstanding queued commands to complete to this
 	 * specific target (block).
 	 */
-	spin_lock_irqsave(&sdev->list_lock, flags);
-	list_for_each_entry(cmd, &sdev->cmd_list, list)
-		if (cmd->SCp.phase == AAC_OWNER_FIRMWARE) {
-			u64 cmnd_lba;
-			u32 cmnd_count;
-
-			if (cmd->cmnd[0] == WRITE_6) {
-				cmnd_lba = ((cmd->cmnd[1] & 0x1F) << 16) |
-					(cmd->cmnd[2] << 8) |
-					cmd->cmnd[3];
-				cmnd_count = cmd->cmnd[4];
-				if (cmnd_count == 0)
-					cmnd_count = 256;
-			} else if (cmd->cmnd[0] == WRITE_16) {
-				cmnd_lba = ((u64)cmd->cmnd[2] << 56) |
-					((u64)cmd->cmnd[3] << 48) |
-					((u64)cmd->cmnd[4] << 40) |
-					((u64)cmd->cmnd[5] << 32) |
-					((u64)cmd->cmnd[6] << 24) |
-					(cmd->cmnd[7] << 16) |
-					(cmd->cmnd[8] << 8) |
-					cmd->cmnd[9];
-				cmnd_count = (cmd->cmnd[10] << 24) |
-					(cmd->cmnd[11] << 16) |
-					(cmd->cmnd[12] << 8) |
-					cmd->cmnd[13];
-			} else if (cmd->cmnd[0] == WRITE_12) {
-				cmnd_lba = ((u64)cmd->cmnd[2] << 24) |
-					(cmd->cmnd[3] << 16) |
-					(cmd->cmnd[4] << 8) |
-					cmd->cmnd[5];
-				cmnd_count = (cmd->cmnd[6] << 24) |
-					(cmd->cmnd[7] << 16) |
-					(cmd->cmnd[8] << 8) |
-					cmd->cmnd[9];
-			} else if (cmd->cmnd[0] == WRITE_10) {
-				cmnd_lba = ((u64)cmd->cmnd[2] << 24) |
-					(cmd->cmnd[3] << 16) |
-					(cmd->cmnd[4] << 8) |
-					cmd->cmnd[5];
-				cmnd_count = (cmd->cmnd[7] << 8) |
-					cmd->cmnd[8];
-			} else
-				continue;
-			if (((cmnd_lba + cmnd_count) < lba) ||
-			  (count && ((lba + count) < cmnd_lba)))
-				continue;
-			++active;
-			break;
-		}
-
-	spin_unlock_irqrestore(&sdev->list_lock, flags);
+	scsi_host_tagset_busy_iter(sdev->host, aac_synchronize_iter,
+				   &iter_data);
 
 	/*
 	 *	Yield the processor (requeue for later)
diff --git a/drivers/scsi/aacraid/comminit.c b/drivers/scsi/aacraid/comminit.c
index f75878d773cf..f84a2ca91e49 100644
--- a/drivers/scsi/aacraid/comminit.c
+++ b/drivers/scsi/aacraid/comminit.c
@@ -272,29 +272,27 @@ static void aac_queue_init(struct aac_dev * dev, struct aac_queue * q, u32 *mem,
 	q->entries = qsize;
 }
 
+static bool aac_io_completion_iter(struct scsi_cmnd *command, void *data,
+				   bool reserved)
+{
+	int *active = data;
+
+	if (!reserved && command->SCp.phase == AAC_OWNER_FIRMWARE)
+		(*active)++;
+	return true;
+}
+
 static void aac_wait_for_io_completion(struct aac_dev *aac)
 {
-	unsigned long flagv = 0;
 	int i = 0;
+	struct Scsi_Host *shost = aac->scsi_host_ptr;
 
 	for (i = 60; i; --i) {
-		struct scsi_device *dev;
-		struct scsi_cmnd *command;
 		int active = 0;
 
-		__shost_for_each_device(dev, aac->scsi_host_ptr) {
-			spin_lock_irqsave(&dev->list_lock, flagv);
-			list_for_each_entry(command, &dev->cmd_list, list) {
-				if (command->SCp.phase == AAC_OWNER_FIRMWARE) {
-					active++;
-					break;
-				}
-			}
-			spin_unlock_irqrestore(&dev->list_lock, flagv);
-			if (active)
-				break;
+		scsi_host_tagset_busy_iter(shost, aac_io_completion_iter,
+					   &active);
 
-		}
 		/*
 		 * We can exit If all the commands are complete
 		 */
diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
index 225d67b3177a..c0c2c87c4ae3 100644
--- a/drivers/scsi/aacraid/commsup.c
+++ b/drivers/scsi/aacraid/commsup.c
@@ -1459,14 +1459,26 @@ static void aac_schedule_bus_scan(struct aac_dev *aac)
 		aac_schedule_src_reinit_aif_worker(aac);
 }
 
+static bool aac_reset_adapter_iter(struct scsi_cmnd *command, void *data,
+				   bool reserved)
+{
+	if (!reserved && command->SCp.phase == AAC_OWNER_FIRMWARE) {
+		command->SCp.buffer = NULL;
+		command->result = DID_OK << 16
+			| COMMAND_COMPLETE << 8
+			| SAM_STAT_TASK_SET_FULL;
+		command->SCp.phase = AAC_OWNER_ERROR_HANDLER;
+		command->scsi_done(command);
+	}
+	return true;
+}
+
 static int _aac_reset_adapter(struct aac_dev *aac, int forced, u8 reset_type)
 {
 	int index, quirks;
 	int retval;
 	struct Scsi_Host *host;
 	struct scsi_device *dev;
-	struct scsi_cmnd *command;
-	struct scsi_cmnd *command_list;
 	int jafo = 0;
 	int bled;
 	u64 dmamask;
@@ -1594,26 +1606,8 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced, u8 reset_type)
 	 * This is where the assumption that the Adapter is quiesced
 	 * is important.
 	 */
-	command_list = NULL;
-	__shost_for_each_device(dev, host) {
-		unsigned long flags;
-		spin_lock_irqsave(&dev->list_lock, flags);
-		list_for_each_entry(command, &dev->cmd_list, list)
-			if (command->SCp.phase == AAC_OWNER_FIRMWARE) {
-				command->SCp.buffer = (struct scatterlist *)command_list;
-				command_list = command;
-			}
-		spin_unlock_irqrestore(&dev->list_lock, flags);
-	}
-	while ((command = command_list)) {
-		command_list = (struct scsi_cmnd *)command->SCp.buffer;
-		command->SCp.buffer = NULL;
-		command->result = DID_OK << 16
-		  | COMMAND_COMPLETE << 8
-		  | SAM_STAT_TASK_SET_FULL;
-		command->SCp.phase = AAC_OWNER_ERROR_HANDLER;
-		command->scsi_done(command);
-	}
+	scsi_host_tagset_busy_iter(host, aac_reset_adapter_iter, NULL);
+
 	/*
 	 * Any Device that was already marked offline needs to be marked
 	 * running
diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
index f027e91656c9..2da2f3bfcdc1 100644
--- a/drivers/scsi/aacraid/linit.c
+++ b/drivers/scsi/aacraid/linit.c
@@ -622,54 +622,63 @@ static int aac_ioctl(struct scsi_device *sdev, unsigned int cmd,
 	return aac_do_ioctl(dev, cmd, arg);
 }
 
-static int get_num_of_incomplete_fibs(struct aac_dev *aac)
-{
+struct incomplete_fib_iter_data {
+	int fwcnt;
+	int ehcnt;
+	int llcnt;
+	int mlcnt;
+	int krlcnt;
+};
 
-	unsigned long flags;
-	struct scsi_device *sdev = NULL;
-	struct Scsi_Host *shost = aac->scsi_host_ptr;
-	struct scsi_cmnd *scmnd = NULL;
-	struct device *ctrl_dev;
-
-	int mlcnt  = 0;
-	int llcnt  = 0;
-	int ehcnt  = 0;
-	int fwcnt  = 0;
-	int krlcnt = 0;
-
-	__shost_for_each_device(sdev, shost) {
-		spin_lock_irqsave(&sdev->list_lock, flags);
-		list_for_each_entry(scmnd, &sdev->cmd_list, list) {
-			switch (scmnd->SCp.phase) {
-			case AAC_OWNER_FIRMWARE:
-				fwcnt++;
-				break;
-			case AAC_OWNER_ERROR_HANDLER:
-				ehcnt++;
-				break;
-			case AAC_OWNER_LOWLEVEL:
-				llcnt++;
-				break;
-			case AAC_OWNER_MIDLEVEL:
-				mlcnt++;
-				break;
-			default:
-				krlcnt++;
-				break;
-			}
-		}
-		spin_unlock_irqrestore(&sdev->list_lock, flags);
-	}
+static bool aac_incomplete_fib_iter(struct scsi_cmnd *scmd, void *data,
+				    bool reserved)
+{
+	struct incomplete_fib_iter_data *iter_data = data;
 
-	ctrl_dev = &aac->pdev->dev;
+	if (reserved)
+		return true;
 
-	dev_info(ctrl_dev, "outstanding cmd: midlevel-%d\n", mlcnt);
-	dev_info(ctrl_dev, "outstanding cmd: lowlevel-%d\n", llcnt);
-	dev_info(ctrl_dev, "outstanding cmd: error handler-%d\n", ehcnt);
-	dev_info(ctrl_dev, "outstanding cmd: firmware-%d\n", fwcnt);
-	dev_info(ctrl_dev, "outstanding cmd: kernel-%d\n", krlcnt);
+	switch (scmd->SCp.phase) {
+	case AAC_OWNER_FIRMWARE:
+		iter_data->fwcnt++;
+		break;
+	case AAC_OWNER_ERROR_HANDLER:
+		iter_data->ehcnt++;
+		break;
+	case AAC_OWNER_LOWLEVEL:
+		iter_data->llcnt++;
+		break;
+	case AAC_OWNER_MIDLEVEL:
+		iter_data->mlcnt++;
+		break;
+	default:
+		iter_data->krlcnt++;
+		break;
+	}
+	return true;
+}
 
-	return mlcnt + llcnt + ehcnt + fwcnt;
+static int get_num_of_incomplete_fibs(struct aac_dev *aac)
+{
+	struct Scsi_Host *shost = aac->scsi_host_ptr;
+	struct device *ctrl_dev = &aac->pdev->dev;
+	struct incomplete_fib_iter_data iter_data = {
+		.mlcnt  = 0,
+		.llcnt  = 0,
+		.ehcnt  = 0,
+		.fwcnt  = 0,
+		.krlcnt = 0,
+	};
+
+	scsi_host_tagset_busy_iter(shost, aac_incomplete_fib_iter, &iter_data);
+
+	dev_info(ctrl_dev, "outstanding cmd: midlevel-%d\n", iter_data.mlcnt);
+	dev_info(ctrl_dev, "outstanding cmd: lowlevel-%d\n", iter_data.llcnt);
+	dev_info(ctrl_dev, "outstanding cmd: error handler-%d\n", iter_data.ehcnt);
+	dev_info(ctrl_dev, "outstanding cmd: firmware-%d\n", iter_data.fwcnt);
+	dev_info(ctrl_dev, "outstanding cmd: kernel-%d\n", iter_data.krlcnt);
+
+	return iter_data.mlcnt + iter_data.llcnt + iter_data.ehcnt + iter_data.fwcnt;
 }
 
 static int aac_eh_abort(struct scsi_cmnd* cmd)
@@ -1675,7 +1684,6 @@ static int aac_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
 	shost->irq = pdev->irq;
 	shost->unique_id = unique_id;
 	shost->max_cmd_len = 16;
-	shost->use_cmd_list = 1;
 	shost->max_id = MAXIMUM_NUM_CONTAINERS;
 	shost->max_lun = AAC_MAX_LUN;
 	shost->nr_reserved_cmds = AAC_NUM_MGT_FIB;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH RFC v2 20/24] aacraid: use scsi_host_tagset_busy_iter() to traverse outstanding commands
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (18 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 19/24] aacraid: replace cmd_list with scsi_host_tagset_busy_iter() John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 21/24] dpt_i2o: drop cmd_list usage John Garry
                   ` (3 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.de>

Instead of walking the array of potential commands and trying to figure out
which ones might be pending, the driver should use
scsi_host_tagset_busy_iter() to traverse all outstanding commands.
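
Two details of the pattern, sketched below: the per-command driver
context (a fib) is recovered from the block layer tag, and returning
false from the callback ends the walk early, which the abort path below
relies on. The my_find_* names are illustrative only:

#include <scsi/scsi_cmnd.h>
#include "aacraid.h"

struct my_find_data {
	struct aac_dev *aac;
	struct scsi_cmnd *cmd;
	bool found;
};

static bool my_find_iter(struct scsi_cmnd *cmd, void *data, bool reserved)
{
	struct my_find_data *fd = data;
	struct fib *fib = &fd->aac->fibs[cmd->request->tag];

	if (cmd != fd->cmd)
		return true;			/* keep scanning */

	fd->found = (fib->callback_data == cmd);
	return false;				/* found it, stop iterating */
}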

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/aacraid/commsup.c |  49 +++++----
 drivers/scsi/aacraid/linit.c   | 188 ++++++++++++++++++++-------------
 2 files changed, 142 insertions(+), 95 deletions(-)

diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
index c0c2c87c4ae3..5e82f4741d82 100644
--- a/drivers/scsi/aacraid/commsup.c
+++ b/drivers/scsi/aacraid/commsup.c
@@ -1459,6 +1459,32 @@ static void aac_schedule_bus_scan(struct aac_dev *aac)
 		aac_schedule_src_reinit_aif_worker(aac);
 }
 
+static bool aac_close_sync_fib_iter(struct scsi_cmnd *command, void *data,
+				    bool reserved)
+{
+	struct Scsi_Host *host = command->device->host;
+	struct aac_dev *aac = (struct aac_dev *)host->hostdata;
+	struct fib *fib = &aac->fibs[command->request->tag];
+	int *retval = data;
+	__le32 XferState = fib->hw_fib_va->header.XferState;
+	bool is_response_expected = false;
+
+	if (!(XferState & cpu_to_le32(NoResponseExpected | Async)) &&
+	    (XferState & cpu_to_le32(ResponseExpected)))
+		is_response_expected = true;
+
+	if (is_response_expected
+	    || fib->flags & FIB_CONTEXT_FLAG_WAIT) {
+		unsigned long flagv;
+		spin_lock_irqsave(&fib->event_lock, flagv);
+		complete(&fib->event_wait);
+		spin_unlock_irqrestore(&fib->event_lock, flagv);
+		schedule();
+		*retval = 0;
+	}
+	return true;
+}
+
 static bool aac_reset_adapter_iter(struct scsi_cmnd *command, void *data,
 				   bool reserved)
 {
@@ -1482,7 +1508,6 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced, u8 reset_type)
 	int jafo = 0;
 	int bled;
 	u64 dmamask;
-	int num_of_fibs = 0;
 
 	/*
 	 * Assumptions:
@@ -1518,27 +1543,7 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced, u8 reset_type)
 	 *	Loop through the fibs, close the synchronous FIBS
 	 */
 	retval = 1;
-	num_of_fibs = aac->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB;
-	for (index = 0; index <  num_of_fibs; index++) {
-
-		struct fib *fib = &aac->fibs[index];
-		__le32 XferState = fib->hw_fib_va->header.XferState;
-		bool is_response_expected = false;
-
-		if (!(XferState & cpu_to_le32(NoResponseExpected | Async)) &&
-		   (XferState & cpu_to_le32(ResponseExpected)))
-			is_response_expected = true;
-
-		if (is_response_expected
-		  || fib->flags & FIB_CONTEXT_FLAG_WAIT) {
-			unsigned long flagv;
-			spin_lock_irqsave(&fib->event_lock, flagv);
-			complete(&fib->event_wait);
-			spin_unlock_irqrestore(&fib->event_lock, flagv);
-			schedule();
-			retval = 0;
-		}
-	}
+	scsi_host_tagset_busy_iter(host, aac_close_sync_fib_iter, &retval);
 	/* Give some extra time for ioctls to complete. */
 	if (retval == 0)
 		ssleep(2);
diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
index 2da2f3bfcdc1..457bf342aa82 100644
--- a/drivers/scsi/aacraid/linit.c
+++ b/drivers/scsi/aacraid/linit.c
@@ -681,14 +681,87 @@ static int get_num_of_incomplete_fibs(struct aac_dev *aac)
 	return iter_data.mlcnt + iter_data.llcnt + iter_data.ehcnt + iter_data.fwcnt;
 }
 
+struct aac_eh_abort_iter_data {
+	struct aac_dev *aac;
+	struct scsi_cmnd *cmd;
+	int ret;
+};
+
+static bool aac_eh_abort_busy_iter(struct scsi_cmnd *cmd, void *data,
+				   bool reserved)
+{
+	struct aac_eh_abort_iter_data *iter_data = data;
+	struct aac_dev *aac = iter_data->aac;
+	struct fib *fib = &aac->fibs[cmd->request->tag];
+
+	if (cmd != iter_data->cmd)
+		return true;
+
+	if (*(u8 *)fib->hw_fib_va != 0 &&
+	    (fib->flags & FIB_CONTEXT_FLAG_NATIVE_HBA) &&
+	    (fib->callback_data == cmd)) {
+		iter_data->ret = SUCCESS;
+		return false;
+	}
+	return true;
+}
+
+static bool aac_eh_abort_cmd_iter(struct scsi_cmnd *cmd, void *data,
+				  bool reserved)
+{
+	struct aac_eh_abort_iter_data *iter_data = data;
+	struct fib *fib = &iter_data->aac->fibs[cmd->request->tag];
+
+	if (cmd != iter_data->cmd)
+		return true;
+
+	if (fib->hw_fib_va->header.XferState &&
+	    (fib->flags & FIB_CONTEXT_FLAG) &&
+	    (fib->callback_data == iter_data->cmd)) {
+		fib->flags |= FIB_CONTEXT_FLAG_TIMED_OUT;
+		cmd->SCp.phase = AAC_OWNER_ERROR_HANDLER;
+		iter_data->ret = SUCCESS;
+	}
+	return true;
+}
+
+static bool aac_eh_abort_tur_iter(struct scsi_cmnd *cmd, void *data,
+				  bool reserved)
+{
+	struct aac_eh_abort_iter_data *iter_data = data;
+	struct fib *fib = &iter_data->aac->fibs[cmd->request->tag];
+	struct scsi_cmnd *command;
+
+	if (cmd != iter_data->cmd)
+		return true;
+
+	command = fib->callback_data;
+	if ((fib->hw_fib_va->header.XferState &
+	     cpu_to_le32(Async | NoResponseExpected)) &&
+	    (fib->flags & FIB_CONTEXT_FLAG) &&
+	    ((command)) && (command->device == cmd->device)) {
+		fib->flags |= FIB_CONTEXT_FLAG_TIMED_OUT;
+		command->SCp.phase = AAC_OWNER_ERROR_HANDLER;
+		if (command == cmd)
+			iter_data->ret = SUCCESS;
+	}
+	return true;
+}
+
 static int aac_eh_abort(struct scsi_cmnd* cmd)
 {
 	struct scsi_device * dev = cmd->device;
 	struct Scsi_Host * host = dev->host;
 	struct aac_dev * aac = (struct aac_dev *)host->hostdata;
-	int count, found;
+	int count;
 	u32 bus, cid;
 	int ret = FAILED;
+	struct aac_eh_abort_iter_data iter_data = {
+		.aac = aac,
+		.cmd = cmd,
+		.ret = FAILED,
+	};
+
 
 	if (aac_adapter_check_health(aac))
 		return ret;
@@ -705,17 +778,9 @@ static int aac_eh_abort(struct scsi_cmnd* cmd)
 		 AAC_DRIVERNAME,
 		 host->host_no, sdev_channel(dev), sdev_id(dev), (int)dev->lun);
 
-		found = 0;
-		for (count = 0; count < (host->can_queue + AAC_NUM_MGT_FIB); ++count) {
-			fib = &aac->fibs[count];
-			if (*(u8 *)fib->hw_fib_va != 0 &&
-				(fib->flags & FIB_CONTEXT_FLAG_NATIVE_HBA) &&
-				(fib->callback_data == cmd)) {
-				found = 1;
-				break;
-			}
-		}
-		if (!found)
+		scsi_host_tagset_busy_iter(host, aac_eh_abort_busy_iter,
+					   &iter_data);
+		if (iter_data.ret == FAILED)
 			return ret;
 
 		/* start a HBA_TMF_ABORT_TASK TMF request */
@@ -773,49 +838,18 @@ static int aac_eh_abort(struct scsi_cmnd* cmd)
 			 * Mark associated FIB to not complete,
 			 * eh handler does this
 			 */
-			for (count = 0;
-				count < (host->can_queue + AAC_NUM_MGT_FIB);
-				++count) {
-				struct fib *fib = &aac->fibs[count];
-
-				if (fib->hw_fib_va->header.XferState &&
-				(fib->flags & FIB_CONTEXT_FLAG) &&
-				(fib->callback_data == cmd)) {
-					fib->flags |=
-						FIB_CONTEXT_FLAG_TIMED_OUT;
-					cmd->SCp.phase =
-						AAC_OWNER_ERROR_HANDLER;
-					ret = SUCCESS;
-				}
-			}
+			scsi_host_tagset_busy_iter(host, aac_eh_abort_cmd_iter,
+						   &iter_data);
+			ret = iter_data.ret;
 			break;
 		case TEST_UNIT_READY:
 			/*
 			 * Mark associated FIB to not complete,
 			 * eh handler does this
 			 */
-			for (count = 0;
-				count < (host->can_queue + AAC_NUM_MGT_FIB);
-				++count) {
-				struct scsi_cmnd *command;
-				struct fib *fib = &aac->fibs[count];
-
-				command = fib->callback_data;
-
-				if ((fib->hw_fib_va->header.XferState &
-					cpu_to_le32
-					(Async | NoResponseExpected)) &&
-					(fib->flags & FIB_CONTEXT_FLAG) &&
-					((command)) &&
-					(command->device == cmd->device)) {
-					fib->flags |=
-						FIB_CONTEXT_FLAG_TIMED_OUT;
-					command->SCp.phase =
-						AAC_OWNER_ERROR_HANDLER;
-					if (command == cmd)
-						ret = SUCCESS;
-				}
-			}
+			scsi_host_tagset_busy_iter(host, aac_eh_abort_tur_iter,
+						   &iter_data);
+			ret = iter_data.ret;
 			break;
 		}
 	}
@@ -1010,6 +1044,36 @@ static int aac_eh_target_reset(struct scsi_cmnd *cmd)
 	return ret;
 }
 
+static bool aac_eh_bus_reset_iter(struct scsi_cmnd *cmd, void *data,
+				  bool reserved)
+{
+	struct Scsi_Host *host = cmd->device->host;
+	struct aac_dev *aac = (struct aac_dev *)host->hostdata;
+	struct fib *fib = &aac->fibs[cmd->request->tag];
+	int *cmd_bus = data;
+
+	if (fib->hw_fib_va->header.XferState &&
+	    (fib->flags & FIB_CONTEXT_FLAG) &&
+	    (fib->flags & FIB_CONTEXT_FLAG_SCSI_CMD)) {
+		struct aac_hba_map_info *info;
+		u32 bus, cid;
+
+		if (cmd != (struct scsi_cmnd *)fib->callback_data)
+			return true;
+		bus = aac_logical_to_phys(scmd_channel(cmd));
+		if (bus != *cmd_bus)
+			return true;
+		cid = scmd_id(cmd);
+		info = &aac->hba_map[bus][cid];
+		if (bus >= AAC_MAX_BUSES || cid >= AAC_MAX_TARGETS ||
+		    info->devtype != AAC_DEVTYPE_NATIVE_RAW) {
+			fib->flags |= FIB_CONTEXT_FLAG_EH_RESET;
+			cmd->SCp.phase = AAC_OWNER_ERROR_HANDLER;
+		}
+	}
+	return true;
+}
+
 /*
  *	aac_eh_bus_reset	- Bus reset command handling
  *	@scsi_cmd:	SCSI command block causing the reset
@@ -1024,32 +1088,10 @@ static int aac_eh_bus_reset(struct scsi_cmnd* cmd)
 	u32 cmd_bus;
 	int status = 0;
 
-
 	cmd_bus = aac_logical_to_phys(scmd_channel(cmd));
-	/* Mark the assoc. FIB to not complete, eh handler does this */
-	for (count = 0; count < (host->can_queue + AAC_NUM_MGT_FIB); ++count) {
-		struct fib *fib = &aac->fibs[count];
-
-		if (fib->hw_fib_va->header.XferState &&
-		    (fib->flags & FIB_CONTEXT_FLAG) &&
-		    (fib->flags & FIB_CONTEXT_FLAG_SCSI_CMD)) {
-			struct aac_hba_map_info *info;
-			u32 bus, cid;
-
-			cmd = (struct scsi_cmnd *)fib->callback_data;
-			bus = aac_logical_to_phys(scmd_channel(cmd));
-			if (bus != cmd_bus)
-				continue;
-			cid = scmd_id(cmd);
-			info = &aac->hba_map[bus][cid];
-			if (bus >= AAC_MAX_BUSES || cid >= AAC_MAX_TARGETS ||
-			    info->devtype != AAC_DEVTYPE_NATIVE_RAW) {
-				fib->flags |= FIB_CONTEXT_FLAG_EH_RESET;
-				cmd->SCp.phase = AAC_OWNER_ERROR_HANDLER;
-			}
-		}
-	}
 
+	/* Mark the assoc. FIB to not complete, eh handler does this */
+	scsi_host_tagset_busy_iter(host, aac_eh_bus_reset_iter, &cmd_bus);
 	pr_err("%s: Host adapter reset request. SCSI hang ?\n", AAC_DRIVERNAME);
 
 	/*
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH RFC v2 21/24] dpt_i2o: drop cmd_list usage
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (19 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 20/24] aacraid: use scsi_host_tagset_busy_iter() to traverse outstanding commands John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 22/24] scsi: drop scsi command list John Garry
                   ` (2 subsequent siblings)
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

Now each command has a unique tag, so we can drop the legacy 'cmd_list'
usage and rely on the tag to identify the command.
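
dpt_i2o iterates the tag set directly at the blk-mq level rather than
through the SCSI wrapper, so the callback sees struct request and
recovers the SCSI command from the request PDU. A sketch, with
illustrative fail_* names:

#include <linux/blk-mq.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

static bool fail_one_rq(struct request *req, void *data, bool reserved)
{
	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req);

	cmd->result = (DID_OK << 16) | (QUEUE_FULL << 1);
	cmd->scsi_done(cmd);
	return true;				/* keep iterating */
}

static void fail_all_posted(struct Scsi_Host *shost)
{
	blk_mq_tagset_busy_iter(&shost->tag_set, fail_one_rq, NULL);
}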

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/dpt_i2o.c | 23 ++++++++++-------------
 1 file changed, 10 insertions(+), 13 deletions(-)

diff --git a/drivers/scsi/dpt_i2o.c b/drivers/scsi/dpt_i2o.c
index abc74fd474dc..4c7305bf6b57 100644
--- a/drivers/scsi/dpt_i2o.c
+++ b/drivers/scsi/dpt_i2o.c
@@ -2335,7 +2335,6 @@ static s32 adpt_scsi_host_alloc(adpt_hba* pHba, struct scsi_host_template *sht)
 	host->unique_id = (u32)sys_tbl_pa + pHba->unit;
 	host->sg_tablesize = pHba->sg_tablesize;
 	host->can_queue = pHba->post_fifo_size;
-	host->use_cmd_list = 1;
 
 	return 0;
 }
@@ -2647,20 +2646,18 @@ static s32 adpt_i2o_reparse_lct(adpt_hba* pHba)
 	return 0;
 }
 
-static void adpt_fail_posted_scbs(adpt_hba* pHba)
+static bool adpt_fail_posted_scbs_iter(struct request *req, void *data, bool reserved)
 {
-	struct scsi_cmnd* 	cmd = NULL;
-	struct scsi_device* 	d = NULL;
+	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req);
 
-	shost_for_each_device(d, pHba->host) {
-		unsigned long flags;
-		spin_lock_irqsave(&d->list_lock, flags);
-		list_for_each_entry(cmd, &d->cmd_list, list) {
-			cmd->result = (DID_OK << 16) | (QUEUE_FULL <<1);
-			cmd->scsi_done(cmd);
-		}
-		spin_unlock_irqrestore(&d->list_lock, flags);
-	}
+	cmd->result = (DID_OK << 16) | (QUEUE_FULL <<1);
+	cmd->scsi_done(cmd);
+	return true;
+}
+
+static void adpt_fail_posted_scbs(adpt_hba* pHba)
+{
+	blk_mq_tagset_busy_iter(&pHba->host->tag_set, adpt_fail_posted_scbs_iter, NULL);
 }
 
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH RFC v2 22/24] scsi: drop scsi command list
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (20 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 21/24] dpt_i2o: drop cmd_list usage John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 18:35   ` Christoph Hellwig
  2020-03-10 16:25 ` [PATCH RFC v2 23/24] scsi: libsas: aic94xx: hisi_sas: mvsas: pm8001: Allocate Scsi_cmd for slow task John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 24/24] scsi: hisi_sas: Use libsas slow task SCSI command John Garry
  23 siblings, 1 reply; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

From: Hannes Reinecke <hare@suse.com>

No users left, kill it.

Signed-off-by: Hannes Reinecke <hare@suse.com>
---
 drivers/scsi/scsi.c        |  1 -
 drivers/scsi/scsi_lib.c    | 32 --------------------------------
 drivers/scsi/scsi_scan.c   |  1 -
 include/scsi/scsi_cmnd.h   |  1 -
 include/scsi/scsi_device.h |  1 -
 include/scsi/scsi_host.h   |  2 --
 6 files changed, 38 deletions(-)

diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
index 930e4803d888..8ca36d778bcf 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -104,7 +104,6 @@ EXPORT_SYMBOL(scsi_sd_pm_domain);
  */
 void scsi_put_command(struct scsi_cmnd *cmd)
 {
-	scsi_del_cmd_from_list(cmd);
 	BUG_ON(delayed_work_pending(&cmd->abort_work));
 }
 
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index af4f56cd0093..1253893007d3 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -562,7 +562,6 @@ static void scsi_mq_uninit_cmd(struct scsi_cmnd *cmd)
 {
 	scsi_mq_free_sgtables(cmd);
 	scsi_uninit_cmd(cmd);
-	scsi_del_cmd_from_list(cmd);
 }
 
 /* Returns false when no more bytes to process, true if there are more */
@@ -1098,35 +1097,6 @@ static void scsi_cleanup_rq(struct request *rq)
 	}
 }
 
-/* Add a command to the list used by the aacraid and dpt_i2o drivers */
-void scsi_add_cmd_to_list(struct scsi_cmnd *cmd)
-{
-	struct scsi_device *sdev = cmd->device;
-	struct Scsi_Host *shost = sdev->host;
-	unsigned long flags;
-
-	if (shost->use_cmd_list) {
-		spin_lock_irqsave(&sdev->list_lock, flags);
-		list_add_tail(&cmd->list, &sdev->cmd_list);
-		spin_unlock_irqrestore(&sdev->list_lock, flags);
-	}
-}
-
-/* Remove a command from the list used by the aacraid and dpt_i2o drivers */
-void scsi_del_cmd_from_list(struct scsi_cmnd *cmd)
-{
-	struct scsi_device *sdev = cmd->device;
-	struct Scsi_Host *shost = sdev->host;
-	unsigned long flags;
-
-	if (shost->use_cmd_list) {
-		spin_lock_irqsave(&sdev->list_lock, flags);
-		BUG_ON(list_empty(&cmd->list));
-		list_del_init(&cmd->list);
-		spin_unlock_irqrestore(&sdev->list_lock, flags);
-	}
-}
-
 /* Called after a request has been started. */
 void scsi_init_command(struct scsi_device *dev, struct scsi_cmnd *cmd)
 {
@@ -1159,8 +1129,6 @@ void scsi_init_command(struct scsi_device *dev, struct scsi_cmnd *cmd)
 	cmd->retries = retries;
 	if (in_flight)
 		__set_bit(SCMD_STATE_INFLIGHT, &cmd->state);
-
-	scsi_add_cmd_to_list(cmd);
 }
 
 static blk_status_t scsi_setup_scsi_cmnd(struct scsi_device *sdev,
diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
index 058079f915f1..f2437a7570ce 100644
--- a/drivers/scsi/scsi_scan.c
+++ b/drivers/scsi/scsi_scan.c
@@ -236,7 +236,6 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
 	sdev->sdev_state = SDEV_CREATED;
 	INIT_LIST_HEAD(&sdev->siblings);
 	INIT_LIST_HEAD(&sdev->same_target_siblings);
-	INIT_LIST_HEAD(&sdev->cmd_list);
 	INIT_LIST_HEAD(&sdev->starved_entry);
 	INIT_LIST_HEAD(&sdev->event_list);
 	spin_lock_init(&sdev->list_lock);
diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
index c73280b3706d..50addc112212 100644
--- a/include/scsi/scsi_cmnd.h
+++ b/include/scsi/scsi_cmnd.h
@@ -68,7 +68,6 @@ struct scsi_pointer {
 struct scsi_cmnd {
 	struct scsi_request req;
 	struct scsi_device *device;
-	struct list_head list;  /* scsi_cmnd participates in queue lists */
 	struct list_head eh_entry; /* entry for the host eh_cmd_q */
 	struct delayed_work abort_work;
 
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index f8312a3e5b42..f146a4557787 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -110,7 +110,6 @@ struct scsi_device {
 	atomic_t device_blocked;	/* Device returned QUEUE_FULL. */
 
 	spinlock_t list_lock;
-	struct list_head cmd_list;	/* queue of in use SCSI Command structures */
 	struct list_head starved_entry;
 	unsigned short queue_depth;	/* How deep of a queue we want */
 	unsigned short max_queue_depth;	/* max queue depth */
diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
index 663aef22437a..dc62794f4cb5 100644
--- a/include/scsi/scsi_host.h
+++ b/include/scsi/scsi_host.h
@@ -634,8 +634,6 @@ struct Scsi_Host {
 	/* The controller does not support WRITE SAME */
 	unsigned no_write_same:1;
 
-	unsigned use_cmd_list:1;
-
 	/* Host responded with short (<36 bytes) INQUIRY result */
 	unsigned short_inquiry:1;
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH RFC v2 23/24] scsi: libsas: aic94xx: hisi_sas: mvsas: pm8001: Allocate Scsi_cmd for slow task
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (21 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 22/24] scsi: drop scsi command list John Garry
@ 2020-03-10 16:25 ` John Garry
  2020-03-10 16:25 ` [PATCH RFC v2 24/24] scsi: hisi_sas: Use libsas slow task SCSI command John Garry
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, John Garry

Allocate a Scsi_cmd for each SAS slow task, so that it can be accounted for
in the blk-mq layer.
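
A minimal sketch of the updated interface: callers now pass the
sas_ha_struct so that the slow task can carry a reserved SCSI command,
and sas_free_task() puts that command again. my_send_tmf() is
illustrative, not a real libsas function:

#include <scsi/libsas.h>

static int my_send_tmf(struct domain_device *dev)
{
	struct sas_ha_struct *sha = dev->port->ha;
	struct sas_task *task;

	task = sas_alloc_slow_task(GFP_KERNEL, sha);
	if (!task)
		return -ENOMEM;

	task->dev = dev;
	task->task_proto = dev->tproto;
	/* fill in the TMF and hand the task to the LLDD here */

	sas_free_task(task);	/* also releases the reserved command */
	return 0;
}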

Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/scsi/hisi_sas/hisi_sas_main.c |  8 +++---
 drivers/scsi/libsas/sas_expander.c    |  3 ++-
 drivers/scsi/libsas/sas_init.c        | 36 +++++++++++++++++++++------
 drivers/scsi/mvsas/mv_sas.c           |  2 +-
 drivers/scsi/pm8001/pm8001_hwi.c      |  4 +--
 drivers/scsi/pm8001/pm8001_sas.c      |  4 +--
 drivers/scsi/pm8001/pm80xx_hwi.c      |  4 +--
 include/scsi/libsas.h                 |  4 ++-
 8 files changed, 46 insertions(+), 19 deletions(-)

diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
index 9a6deb21fe4d..c7951ac8b075 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
@@ -1182,12 +1182,13 @@ static int hisi_sas_exec_internal_tmf_task(struct domain_device *device,
 {
 	struct hisi_sas_device *sas_dev = device->lldd_dev;
 	struct hisi_hba *hisi_hba = sas_dev->hisi_hba;
+	struct sas_ha_struct *sha = &hisi_hba->sha;
 	struct device *dev = hisi_hba->dev;
 	struct sas_task *task;
 	int res, retry;
 
 	for (retry = 0; retry < TASK_RETRY; retry++) {
-		task = sas_alloc_slow_task(GFP_KERNEL);
+		task = sas_alloc_slow_task(GFP_KERNEL, sha);
 		if (!task)
 			return -ENOMEM;
 
@@ -2022,9 +2023,10 @@ _hisi_sas_internal_task_abort(struct hisi_hba *hisi_hba,
 			      struct domain_device *device, int abort_flag,
 			      int tag, struct hisi_sas_dq *dq)
 {
-	struct sas_task *task;
 	struct hisi_sas_device *sas_dev = device->lldd_dev;
+	struct sas_ha_struct *sha = &hisi_hba->sha;
 	struct device *dev = hisi_hba->dev;
+	struct sas_task *task;
 	int res;
 
 	/*
@@ -2036,7 +2038,7 @@ _hisi_sas_internal_task_abort(struct hisi_hba *hisi_hba,
 	if (!hisi_hba->hw->prep_abort)
 		return TMF_RESP_FUNC_FAILED;
 
-	task = sas_alloc_slow_task(GFP_KERNEL);
+	task = sas_alloc_slow_task(GFP_KERNEL, sha);
 	if (!task)
 		return -ENOMEM;
 
diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
index ab671cdd4cfb..24626acc6a11 100644
--- a/drivers/scsi/libsas/sas_expander.c
+++ b/drivers/scsi/libsas/sas_expander.c
@@ -56,6 +56,7 @@ static int smp_execute_task_sg(struct domain_device *dev,
 {
 	int res, retry;
 	struct sas_task *task = NULL;
+	struct sas_ha_struct *sha = dev->port->ha;
 	struct sas_internal *i =
 		to_sas_internal(dev->port->ha->core.shost->transportt);
 
@@ -66,7 +67,7 @@ static int smp_execute_task_sg(struct domain_device *dev,
 			break;
 		}
 
-		task = sas_alloc_slow_task(GFP_KERNEL);
+		task = sas_alloc_slow_task(GFP_KERNEL, sha);
 		if (!task) {
 			res = -ENOMEM;
 			break;
diff --git a/drivers/scsi/libsas/sas_init.c b/drivers/scsi/libsas/sas_init.c
index 21c43b18d5d5..493caaf50a12 100644
--- a/drivers/scsi/libsas/sas_init.c
+++ b/drivers/scsi/libsas/sas_init.c
@@ -14,6 +14,7 @@
 #include <scsi/sas_ata.h>
 #include <scsi/scsi_host.h>
 #include <scsi/scsi_device.h>
+#include <scsi/scsi_tcq.h>
 #include <scsi/scsi_transport.h>
 #include <scsi/scsi_transport_sas.h>
 
@@ -37,16 +38,23 @@ struct sas_task *sas_alloc_task(gfp_t flags)
 }
 EXPORT_SYMBOL_GPL(sas_alloc_task);
 
-struct sas_task *sas_alloc_slow_task(gfp_t flags)
+struct sas_task *sas_alloc_slow_task(gfp_t flags, struct sas_ha_struct *sha)
 {
 	struct sas_task *task = sas_alloc_task(flags);
-	struct sas_task_slow *slow = kmalloc(sizeof(*slow), flags);
+	struct Scsi_Host *shost = sha->core.shost;
+	struct sas_task_slow *slow;
 
-	if (!task || !slow) {
-		if (task)
-			kmem_cache_free(sas_task_cache, task);
-		kfree(slow);
+	if (!task)
 		return NULL;
+
+	slow = kzalloc(sizeof(*slow), flags);
+	if (!slow)
+		goto out_err_slow;
+
+	if (shost->reserved_cmd_q) {
+		slow->scmd = scsi_get_reserved_cmd(shost);
+		if (!slow->scmd)
+			goto out_err_scmd;
 	}
 
 	task->slow_task = slow;
@@ -55,13 +63,27 @@ struct sas_task *sas_alloc_slow_task(gfp_t flags)
 	init_completion(&slow->completion);
 
 	return task;
+
+out_err_scmd:
+	kfree(slow);
+out_err_slow:
+	kmem_cache_free(sas_task_cache, task);
+	return NULL;
 }
 EXPORT_SYMBOL_GPL(sas_alloc_slow_task);
 
 void sas_free_task(struct sas_task *task)
 {
 	if (task) {
-		kfree(task->slow_task);
+		/*
+		 * It could be good to introduce a separate sas_free_slow_task()
+		 * to avoid the following check in the fastpath.
+		 */
+		if (task->slow_task) {
+			if (task->slow_task->scmd)
+				scsi_put_reserved_cmd(task->slow_task->scmd);
+			kfree(task->slow_task);
+		}
 		kmem_cache_free(sas_task_cache, task);
 	}
 }
diff --git a/drivers/scsi/mvsas/mv_sas.c b/drivers/scsi/mvsas/mv_sas.c
index a920eced92ec..2ef37d634c22 100644
--- a/drivers/scsi/mvsas/mv_sas.c
+++ b/drivers/scsi/mvsas/mv_sas.c
@@ -1283,7 +1283,7 @@ static int mvs_exec_internal_tmf_task(struct domain_device *dev,
 	struct sas_task *task = NULL;
 
 	for (retry = 0; retry < 3; retry++) {
-		task = sas_alloc_slow_task(GFP_KERNEL);
+		task = sas_alloc_slow_task(GFP_KERNEL, dev->port->ha);
 		if (!task)
 			return -ENOMEM;
 
diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c
index 2328ff1349ac..e9d759f64b3b 100644
--- a/drivers/scsi/pm8001/pm8001_hwi.c
+++ b/drivers/scsi/pm8001/pm8001_hwi.c
@@ -1743,7 +1743,7 @@ static void pm8001_send_abort_all(struct pm8001_hba_info *pm8001_ha,
 		return;
 	}
 
-	task = sas_alloc_slow_task(GFP_ATOMIC);
+	task = sas_alloc_slow_task(GFP_ATOMIC, pm8001_ha->sas);
 
 	if (!task) {
 		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("cannot "
@@ -1789,7 +1789,7 @@ static void pm8001_send_read_log(struct pm8001_hba_info *pm8001_ha,
 	struct inbound_queue_table *circularQ;
 	u32 opc = OPC_INB_SATA_HOST_OPSTART;
 
-	task = sas_alloc_slow_task(GFP_ATOMIC);
+	task = sas_alloc_slow_task(GFP_ATOMIC, pm8001_ha->sas);
 
 	if (!task) {
 		PM8001_FAIL_DBG(pm8001_ha,
diff --git a/drivers/scsi/pm8001/pm8001_sas.c b/drivers/scsi/pm8001/pm8001_sas.c
index b7cbc312843e..ac41a907447d 100644
--- a/drivers/scsi/pm8001/pm8001_sas.c
+++ b/drivers/scsi/pm8001/pm8001_sas.c
@@ -718,7 +718,7 @@ static int pm8001_exec_internal_tmf_task(struct domain_device *dev,
 	DECLARE_COMPLETION_ONSTACK(completion_setstate);
 
 	for (retry = 0; retry < 3; retry++) {
-		task = sas_alloc_slow_task(GFP_KERNEL);
+		task = sas_alloc_slow_task(GFP_KERNEL, pm8001_ha->sas);
 		if (!task)
 			return -ENOMEM;
 
@@ -805,7 +805,7 @@ pm8001_exec_internal_task_abort(struct pm8001_hba_info *pm8001_ha,
 	struct sas_task *task = NULL;
 
 	for (retry = 0; retry < 3; retry++) {
-		task = sas_alloc_slow_task(GFP_KERNEL);
+		task = sas_alloc_slow_task(GFP_KERNEL, pm8001_ha->sas);
 		if (!task)
 			return -ENOMEM;
 
diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c
index d1d95f1a2c6a..e09e8d54f4f1 100644
--- a/drivers/scsi/pm8001/pm80xx_hwi.c
+++ b/drivers/scsi/pm8001/pm80xx_hwi.c
@@ -1625,7 +1625,7 @@ static void pm80xx_send_abort_all(struct pm8001_hba_info *pm8001_ha,
 		return;
 	}
 
-	task = sas_alloc_slow_task(GFP_ATOMIC);
+	task = sas_alloc_slow_task(GFP_ATOMIC, pm8001_ha->sas);
 
 	if (!task) {
 		PM8001_FAIL_DBG(pm8001_ha, pm8001_printk("cannot "
@@ -1676,7 +1676,7 @@ static void pm80xx_send_read_log(struct pm8001_hba_info *pm8001_ha,
 	struct inbound_queue_table *circularQ;
 	u32 opc = OPC_INB_SATA_HOST_OPSTART;
 
-	task = sas_alloc_slow_task(GFP_ATOMIC);
+	task = sas_alloc_slow_task(GFP_ATOMIC, pm8001_ha->sas);
 
 	if (!task) {
 		PM8001_FAIL_DBG(pm8001_ha,
diff --git a/include/scsi/libsas.h b/include/scsi/libsas.h
index 4e2d61e8fb1e..25c0f0b94d27 100644
--- a/include/scsi/libsas.h
+++ b/include/scsi/libsas.h
@@ -608,6 +608,7 @@ struct sas_task_slow {
 	struct timer_list     timer;
 	struct completion     completion;
 	struct sas_task       *task;
+	struct scsi_cmnd      *scmd;
 };
 
 #define SAS_TASK_STATE_PENDING      1
@@ -617,7 +618,8 @@ struct sas_task_slow {
 #define SAS_TASK_AT_INITIATOR       16
 
 extern struct sas_task *sas_alloc_task(gfp_t flags);
-extern struct sas_task *sas_alloc_slow_task(gfp_t flags);
+extern struct sas_task *sas_alloc_slow_task(gfp_t flags,
+					    struct sas_ha_struct *sha);
 extern void sas_free_task(struct sas_task *task);
 
 struct sas_domain_function_template {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH RFC v2 24/24] scsi: hisi_sas: Use libsas slow task SCSI command
  2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
                   ` (22 preceding siblings ...)
  2020-03-10 16:25 ` [PATCH RFC v2 23/24] scsi: libsas: aic94xx: hisi_sas: mvsas: pm8001: Allocate Scsi_cmd for slow task John Garry
@ 2020-03-10 16:25 ` John Garry
  23 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 16:25 UTC (permalink / raw)
  To: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch
  Cc: linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, John Garry

Now that a SCSI command can be allocated for a libsas slow task, make the
task prep code use it.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/scsi/hisi_sas/hisi_sas_main.c  | 5 ++++-
 drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 5 +++--
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
index c7951ac8b075..61039f3608c8 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
@@ -476,8 +476,10 @@ static int hisi_sas_task_prep(struct sas_task *task,
 			} else {
 				scsi_cmnd = task->uldd_task;
 			}
+		} else if (task->slow_task) {
+			scsi_cmnd = task->slow_task->scmd;
 		}
-		rc  = hisi_sas_slot_index_alloc(hisi_hba, scsi_cmnd);
+		rc = hisi_sas_slot_index_alloc(hisi_hba, scsi_cmnd);
 	}
 	if (rc < 0)
 		goto err_out_dif_dma_unmap;
@@ -2650,6 +2652,7 @@ int hisi_sas_probe(struct platform_device *pdev,
 	} else {
 		shost->can_queue = HISI_SAS_UNRESERVED_IPTT;
 		shost->cmd_per_lun = HISI_SAS_UNRESERVED_IPTT;
+		shost->nr_reserved_cmds = HISI_SAS_RESERVED_IPTT;
 	}
 
 	sha->sas_ha_name = DRV_NAME;
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index a2debe0c8185..44530f8c0319 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
@@ -3237,8 +3237,9 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	shost->max_lun = ~0;
 	shost->max_channel = 1;
 	shost->max_cmd_len = 16;
-	shost->can_queue = HISI_SAS_UNRESERVED_IPTT;
-	shost->cmd_per_lun = HISI_SAS_UNRESERVED_IPTT;
+	shost->can_queue = HISI_SAS_MAX_COMMANDS;
+	shost->cmd_per_lun = HISI_SAS_MAX_COMMANDS;
+	shost->nr_reserved_cmds = HISI_SAS_RESERVED_IPTT;
 
 	sha->sas_ha_name = DRV_NAME;
 	sha->dev = dev;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-03-10 16:25 ` [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands John Garry
@ 2020-03-10 18:32   ` Christoph Hellwig
  2020-03-10 21:08     ` John Garry
  0 siblings, 1 reply; 49+ messages in thread
From: Christoph Hellwig @ 2020-03-10 18:32 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch,
	linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

On Wed, Mar 11, 2020 at 12:25:28AM +0800, John Garry wrote:
> From: Hannes Reinecke <hare@suse.com>
> 
> Allocate a separate 'reserved_cmd_q' for sending reserved commands.

Why?  Reserved commands specifically are not in any way tied to queues.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 22/24] scsi: drop scsi command list
  2020-03-10 16:25 ` [PATCH RFC v2 22/24] scsi: drop scsi command list John Garry
@ 2020-03-10 18:35   ` Christoph Hellwig
  2020-03-10 20:47     ` John Garry
  0 siblings, 1 reply; 49+ messages in thread
From: Christoph Hellwig @ 2020-03-10 18:35 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche, hch,
	linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

On Wed, Mar 11, 2020 at 12:25:48AM +0800, John Garry wrote:
> From: Hannes Reinecke <hare@suse.com>
> 
> No users left, kill it.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.com>

Wasn't this part of a series from Hannes that already got merged?

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 22/24] scsi: drop scsi command list
  2020-03-10 18:35   ` Christoph Hellwig
@ 2020-03-10 20:47     ` John Garry
  0 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-10 20:47 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche,
	linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

On 10/03/2020 18:35, Christoph Hellwig wrote:
> On Wed, Mar 11, 2020 at 12:25:48AM +0800, John Garry wrote:
>> From: Hannes Reinecke <hare@suse.com>
>>
>> No users left, kill it.
>>
>> Signed-off-by: Hannes Reinecke <hare@suse.com>
> 
> Wasn't this part of a series from Hannes that already got merged?


Ah, yes. I'll check what can be dropped now.

cheers

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-03-10 18:32   ` Christoph Hellwig
@ 2020-03-10 21:08     ` John Garry
  2020-03-11  6:22       ` Christoph Hellwig
  0 siblings, 1 reply; 49+ messages in thread
From: John Garry @ 2020-03-10 21:08 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: axboe, jejb, martin.petersen, hare, ming.lei, bvanassche,
	linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

On 10/03/2020 18:32, Christoph Hellwig wrote:
> On Wed, Mar 11, 2020 at 12:25:28AM +0800, John Garry wrote:
>> From: Hannes Reinecke <hare@suse.com>
>>
>> Allocate a separate 'reserved_cmd_q' for sending reserved commands.
> 
> Why?  Reserved command specifically are not in any way tied to queues.
> .
> 

So the v1 series used a combination of the sdev queue and the per-host 
reserved_cmd_q. Back then you questioned using the sdev queue for virtio 
scsi, and the unconfirmed conclusion was to use a common per-host q. 
This is the best link I can find now:

https://www.mail-archive.com/linux-scsi@vger.kernel.org/msg83177.html

"

 >> My implementation actually allows for per-device reserved tags (eg for
 >> virtio). But some drivers require to use internal commands prior to any
 >> device setup, so they have to use a separate reserved command queue 
just to
 >> be able to allocate tags.
 >
 > Why would virtio-scsi need per-device reserved commands?  It 
currently uses
 > a global mempool to allocate the reset commands.
 >
Oh, I'm perfectly fine with dropping the per-device reserved commands,
and use the host-wide queue in general.
It turns out most of the drivers use it that way already.
Will be doing so for the next iteration.

"

Cheers

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 01/24] scsi: add 'nr_reserved_cmds' field to the SCSI host template
  2020-03-10 16:25 ` [PATCH RFC v2 01/24] scsi: add 'nr_reserved_cmds' field to the SCSI host template John Garry
@ 2020-03-10 23:08   ` Ming Lei
  2020-03-11  6:55     ` Hannes Reinecke
  0 siblings, 1 reply; 49+ messages in thread
From: Ming Lei @ 2020-03-10 23:08 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, jejb, martin.petersen, hare, bvanassche, hch, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On Wed, Mar 11, 2020 at 12:25:27AM +0800, John Garry wrote:
> From: Hannes Reinecke <hare@suse.com>
> 
> Add a new field 'nr_reserved_cmds' to the SCSI host template to
> instruct the block layer to set aside a tag space for reserved
> commands.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.com>
> ---
>  drivers/scsi/scsi_lib.c  | 1 +
>  include/scsi/scsi_host.h | 6 ++++++
>  2 files changed, 7 insertions(+)
> 
> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> index 610ee41fa54c..2967325df7a0 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -1896,6 +1896,7 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
>  		shost->tag_set.ops = &scsi_mq_ops_no_commit;
>  	shost->tag_set.nr_hw_queues = shost->nr_hw_queues ? : 1;
>  	shost->tag_set.queue_depth = shost->can_queue;
> +	shost->tag_set.reserved_tags = shost->nr_reserved_cmds;

You reserve tags for special usage, but the whole queue depth isn't
increased; that means the tags available for IO requests are decreased,
given that reserved tags can't be used for IO, so IO performance may be
affected.

If that is not the case, please explain a bit in the commit log.
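
As a rough illustration (numbers are made up, not taken from any
driver), the split inside a single tag set looks like:

	struct blk_mq_tag_set set = {
		.queue_depth	= 256,	/* total tags the HBA exposes */
		.reserved_tags	= 8,	/* from shost->nr_reserved_cmds */
	};
	/* tags left for regular IO: 256 - 8 = 248 */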

Thanks,
Ming


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-03-10 21:08     ` John Garry
@ 2020-03-11  6:22       ` Christoph Hellwig
  2020-03-11  6:58         ` Hannes Reinecke
  2020-04-06  9:05         ` Hannes Reinecke
  0 siblings, 2 replies; 49+ messages in thread
From: Christoph Hellwig @ 2020-03-11  6:22 UTC (permalink / raw)
  To: John Garry
  Cc: Christoph Hellwig, axboe, jejb, martin.petersen, hare, ming.lei,
	bvanassche, linux-block, linux-kernel, linux-scsi,
	virtualization, esc.storagedev, chenxiang66, Hannes Reinecke

On Tue, Mar 10, 2020 at 09:08:56PM +0000, John Garry wrote:
> On 10/03/2020 18:32, Christoph Hellwig wrote:
> > On Wed, Mar 11, 2020 at 12:25:28AM +0800, John Garry wrote:
> > > From: Hannes Reinecke <hare@suse.com>
> > > 
> > > Allocate a separate 'reserved_cmd_q' for sending reserved commands.
> > 
> > Why?  Reserved command specifically are not in any way tied to queues.
> > .
> > 
> 
> So the v1 series used a combination of the sdev queue and the per-host
> reserved_cmd_q. Back then you questioned using the sdev queue for virtio
> scsi, and the unconfirmed conclusion was to use a common per-host q. This is
> the best link I can find now:
> 
> https://www.mail-archive.com/linux-scsi@vger.kernel.org/msg83177.html

That was just a question on why virtio uses the per-device tags, which
didn't look like it made any sense.  What I'm worried about here is
mixing up the concept of reserved tags in the tagset, and queues to use
them.  Note that we already have the scsi_get_host_dev to allocate
a scsi_device and thus a request_queue for the host itself.  That seems
like the better interface to use a tag for a host wide command vs
introducing a parallel path.
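
I.e. something along these lines (rough sketch only, using the existing
interfaces):

	struct scsi_device *sdev = scsi_get_host_dev(shost);
	struct request *rq = blk_mq_alloc_request(sdev->request_queue,
						  REQ_OP_DRV_OUT,
						  BLK_MQ_REQ_RESERVED);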

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 01/24] scsi: add 'nr_reserved_cmds' field to the SCSI host template
  2020-03-10 23:08   ` Ming Lei
@ 2020-03-11  6:55     ` Hannes Reinecke
  2020-03-11  8:00       ` Ming Lei
  0 siblings, 1 reply; 49+ messages in thread
From: Hannes Reinecke @ 2020-03-11  6:55 UTC (permalink / raw)
  To: Ming Lei, John Garry
  Cc: axboe, jejb, martin.petersen, bvanassche, hch, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 3/11/20 12:08 AM, Ming Lei wrote:
> On Wed, Mar 11, 2020 at 12:25:27AM +0800, John Garry wrote:
>> From: Hannes Reinecke <hare@suse.com>
>>
>> Add a new field 'nr_reserved_cmds' to the SCSI host template to
>> instruct the block layer to set aside a tag space for reserved
>> commands.
>>
>> Signed-off-by: Hannes Reinecke <hare@suse.com>
>> ---
>>  drivers/scsi/scsi_lib.c  | 1 +
>>  include/scsi/scsi_host.h | 6 ++++++
>>  2 files changed, 7 insertions(+)
>>
>> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
>> index 610ee41fa54c..2967325df7a0 100644
>> --- a/drivers/scsi/scsi_lib.c
>> +++ b/drivers/scsi/scsi_lib.c
>> @@ -1896,6 +1896,7 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
>>  		shost->tag_set.ops = &scsi_mq_ops_no_commit;
>>  	shost->tag_set.nr_hw_queues = shost->nr_hw_queues ? : 1;
>>  	shost->tag_set.queue_depth = shost->can_queue;
>> +	shost->tag_set.reserved_tags = shost->nr_reserved_cmds;
> 
> You reserve tags for special usage, meantime the whole queue depth
> isn't increased, that means the tags for IO request is decreased given
> reserved tags can't be used for IO, so IO performance may be effected.
> 
> If not the case, please explain a bit in commit log.
> 
The overall idea of this patchset is to fold _existing_ management
command handling into using the blk-mq bitmap.
Currently, quite a lot of drivers are using management commands
internally, which typically use the same hardware tag pool (i.e. they are
being allocated from the same hardware resources) as the 'normal' I/O
commands.
But as they are using the same tag pool these drivers already decrement
the available number of commands; see e.g. hpsa:

static int hpsa_scsi_host_alloc(struct ctlr_info *h)
{
	struct Scsi_Host *sh;

	sh = scsi_host_alloc(&hpsa_driver_template, sizeof(h));
	if (sh == NULL) {
		dev_err(&h->pdev->dev, "scsi_host_alloc failed\n");
		return -ENOMEM;
	}

	sh->io_port = 0;
	sh->n_io_port = 0;
	sh->this_id = -1;
	sh->max_channel = 3;
	sh->max_cmd_len = MAX_COMMAND_SIZE;
	sh->max_lun = HPSA_MAX_LUN;
	sh->max_id = HPSA_MAX_LUN;
	sh->can_queue = h->nr_cmds - HPSA_NRESERVED_CMDS;
	sh->cmd_per_lun = sh->can_queue;

So the idea of this patchset is to convert existing use-cases; seeing
that they already reserve commands internally, performance will not be
affected.
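
With the series as posted, the same split is simply expressed via the
new field instead (cf. the hpsa patch later in this series):

	sh->can_queue        = h->nr_cmds - HPSA_NRESERVED_CMDS;
	sh->nr_reserved_cmds = HPSA_NRESERVED_CMDS;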

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-03-11  6:22       ` Christoph Hellwig
@ 2020-03-11  6:58         ` Hannes Reinecke
  2020-03-11  7:51           ` John Garry
  2020-04-06  9:05         ` Hannes Reinecke
  1 sibling, 1 reply; 49+ messages in thread
From: Hannes Reinecke @ 2020-03-11  6:58 UTC (permalink / raw)
  To: Christoph Hellwig, John Garry
  Cc: axboe, jejb, martin.petersen, ming.lei, bvanassche, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 3/11/20 7:22 AM, Christoph Hellwig wrote:
> On Tue, Mar 10, 2020 at 09:08:56PM +0000, John Garry wrote:
>> On 10/03/2020 18:32, Christoph Hellwig wrote:
>>> On Wed, Mar 11, 2020 at 12:25:28AM +0800, John Garry wrote:
>>>> From: Hannes Reinecke <hare@suse.com>
>>>>
>>>> Allocate a separate 'reserved_cmd_q' for sending reserved commands.
>>>
>>> Why?  Reserved command specifically are not in any way tied to queues.
>>> .
>>>
>>
>> So the v1 series used a combination of the sdev queue and the per-host
>> reserved_cmd_q. Back then you questioned using the sdev queue for virtio
>> scsi, and the unconfirmed conclusion was to use a common per-host q. This is
>> the best link I can find now:
>>
>> https://www.mail-archive.com/linux-scsi@vger.kernel.org/msg83177.html
> 
> That was just a question on why virtio uses the per-device tags, which
> didn't look like it made any sense.  What I'm worried about here is
> mixing up the concept of reserved tags in the tagset, and queues to use
> them.  Note that we already have the scsi_get_host_dev to allocate
> a scsi_device and thus a request_queue for the host itself.  That seems
> like the better interface to use a tag for a host wide command vs
> introducing a parallel path.
> 
Ah. Right.
Will be looking into that, and convert the patchset over to it.

And the problem of the separate queue is the fact that I'll need a queue
to reserve tags from; trying to allocate a tag directly from the bitmap
turns out to be major surgery in the block layer with no immediate gain.
And I can't use per-device queues as for some drivers the reserved
commands are used to query the HBA itself to figure out how many devices
are present.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-03-11  6:58         ` Hannes Reinecke
@ 2020-03-11  7:51           ` John Garry
  0 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-11  7:51 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig
  Cc: axboe, jejb, martin.petersen, ming.lei, bvanassche, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 11/03/2020 06:58, Hannes Reinecke wrote:
> On 3/11/20 7:22 AM, Christoph Hellwig wrote:
>> On Tue, Mar 10, 2020 at 09:08:56PM +0000, John Garry wrote:
>>> On 10/03/2020 18:32, Christoph Hellwig wrote:
>>>> On Wed, Mar 11, 2020 at 12:25:28AM +0800, John Garry wrote:
>>>>> From: Hannes Reinecke <hare@suse.com>
>>>>>
>>>>> Allocate a separate 'reserved_cmd_q' for sending reserved commands.
>>>>
>>>> Why?  Reserved command specifically are not in any way tied to queues.
>>>> .
>>>>
>>>
>>> So the v1 series used a combination of the sdev queue and the per-host
>>> reserved_cmd_q. Back then you questioned using the sdev queue for virtio
>>> scsi, and the unconfirmed conclusion was to use a common per-host q. This is
>>> the best link I can find now:
>>>
>>> https://www.mail-archive.com/linux-scsi@vger.kernel.org/msg83177.html
>>
>> That was just a question on why virtio uses the per-device tags, which
>> didn't look like it made any sense.  What I'm worried about here is
>> mixing up the concept of reserved tags in the tagset, and queues to use
>> them.  Note that we already have the scsi_get_host_dev to allocate
>> a scsi_device and thus a request_queue for the host itself.  That seems
>> like the better interface to use a tag for a host wide command vs
>> introducing a parallel path.
>>
> Ah. Right.
> Will be looking into that, and convert the patchset over to it.
> 

Hannes,

I assume that you will take back over this series now. Let me know. The 
patches are here, if not in patchwork:
https://github.com/hisilicon/kernel-dev/tree/private-topic-sas-5.6-resv-commands-v2

> And the problem of the separate queue is the fact that I'll need a queue
> to reserve tags from; trying to allocate a tag directly from the bitmap
> turns out to be major surgery in the blocklayer with no immediate gain.
> And I can't use per-device queues as for some drivers the reserved
> commands are used to query the HBA itself to figure out how many devices
> are present.

Right, we still need to be able to allocate from somewhere apart from the sdev queue.

> 


Thanks

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 01/24] scsi: add 'nr_reserved_cmds' field to the SCSI host template
  2020-03-11  6:55     ` Hannes Reinecke
@ 2020-03-11  8:00       ` Ming Lei
  0 siblings, 0 replies; 49+ messages in thread
From: Ming Lei @ 2020-03-11  8:00 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: John Garry, axboe, jejb, martin.petersen, bvanassche, hch,
	linux-block, linux-kernel, linux-scsi, virtualization,
	esc.storagedev, chenxiang66, Hannes Reinecke

On Wed, Mar 11, 2020 at 07:55:46AM +0100, Hannes Reinecke wrote:
> On 3/11/20 12:08 AM, Ming Lei wrote:
> > On Wed, Mar 11, 2020 at 12:25:27AM +0800, John Garry wrote:
> >> From: Hannes Reinecke <hare@suse.com>
> >>
> >> Add a new field 'nr_reserved_cmds' to the SCSI host template to
> >> instruct the block layer to set aside a tag space for reserved
> >> commands.
> >>
> >> Signed-off-by: Hannes Reinecke <hare@suse.com>
> >> ---
> >>  drivers/scsi/scsi_lib.c  | 1 +
> >>  include/scsi/scsi_host.h | 6 ++++++
> >>  2 files changed, 7 insertions(+)
> >>
> >> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> >> index 610ee41fa54c..2967325df7a0 100644
> >> --- a/drivers/scsi/scsi_lib.c
> >> +++ b/drivers/scsi/scsi_lib.c
> >> @@ -1896,6 +1896,7 @@ int scsi_mq_setup_tags(struct Scsi_Host *shost)
> >>  		shost->tag_set.ops = &scsi_mq_ops_no_commit;
> >>  	shost->tag_set.nr_hw_queues = shost->nr_hw_queues ? : 1;
> >>  	shost->tag_set.queue_depth = shost->can_queue;
> >> +	shost->tag_set.reserved_tags = shost->nr_reserved_cmds;
> > 
> > You reserve tags for special usage, meantime the whole queue depth
> > isn't increased, that means the tags for IO request is decreased given
> > reserved tags can't be used for IO, so IO performance may be effected.
> > 
> > If not the case, please explain a bit in commit log.
> > 
> The overall idea of this patchset is to fold _existing_ management
> command handling into using the blk-mq bitmap.
> Currently, quite a lot of drivers are using management commands
> internally, which typically use the same hardware tag pool (ie they are
> being allocated from the same hardware resources) than the 'normal' I/O
> commands.
> But as they are using the same tagpool these drivers already decrement
> the available number of commands; check eg. hpsa:
> 
> static int hpsa_scsi_host_alloc(struct ctlr_info *h)
> {
> 	struct Scsi_Host *sh;
> 
> 	sh = scsi_host_alloc(&hpsa_driver_template, sizeof(h));
> 	if (sh == NULL) {
> 		dev_err(&h->pdev->dev, "scsi_host_alloc failed\n");
> 		return -ENOMEM;
> 	}
> 
> 	sh->io_port = 0;
> 	sh->n_io_port = 0;
> 	sh->this_id = -1;
> 	sh->max_channel = 3;
> 	sh->max_cmd_len = MAX_COMMAND_SIZE;
> 	sh->max_lun = HPSA_MAX_LUN;
> 	sh->max_id = HPSA_MAX_LUN;
> 	sh->can_queue = h->nr_cmds - HPSA_NRESERVED_CMDS;
> 	sh->cmd_per_lun = sh->can_queue;
> 
> So the idea of this patchset is to convert existing use-cases; seeing
> that they already reserve commands internally performance will not be
> affected.
> 

OK, got it.

I'd suggest adding a comment about this idea in the commit log, because
it is a fundamental aspect of this change.

Thanks,
Ming


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 12/24] hpsa: use reserved commands
  2020-03-10 16:25 ` [PATCH RFC v2 12/24] hpsa: use reserved commands John Garry
@ 2020-03-11  8:10   ` Ming Lei
  2020-03-17  9:38     ` John Garry
  0 siblings, 1 reply; 49+ messages in thread
From: Ming Lei @ 2020-03-11  8:10 UTC (permalink / raw)
  To: John Garry
  Cc: axboe, jejb, martin.petersen, hare, bvanassche, hch, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On Wed, Mar 11, 2020 at 12:25:38AM +0800, John Garry wrote:
> From: Hannes Reinecke <hare@suse.com>
> 
> Enable the use of reserved commands, and drop the hand-crafted
> command allocation.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.com>
> ---
>  drivers/scsi/hpsa.c | 147 ++++++++++++++------------------------------
>  drivers/scsi/hpsa.h |   1 -
>  2 files changed, 45 insertions(+), 103 deletions(-)
> 
> diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
> index 703f824584fe..c14dd4b6e598 100644
> --- a/drivers/scsi/hpsa.c
> +++ b/drivers/scsi/hpsa.c
> @@ -244,10 +244,6 @@ static struct hpsa_scsi_dev_t
>  	*hpsa_find_device_by_sas_rphy(struct ctlr_info *h,
>  		struct sas_rphy *rphy);
>  
> -#define SCSI_CMD_BUSY ((struct scsi_cmnd *)&hpsa_cmd_busy)
> -static const struct scsi_cmnd hpsa_cmd_busy;
> -#define SCSI_CMD_IDLE ((struct scsi_cmnd *)&hpsa_cmd_idle)
> -static const struct scsi_cmnd hpsa_cmd_idle;
>  static int number_of_controllers;
>  
>  static irqreturn_t do_hpsa_intr_intx(int irq, void *dev_id);
> @@ -342,7 +338,7 @@ static inline struct ctlr_info *shost_to_hba(struct Scsi_Host *sh)
>  
>  static inline bool hpsa_is_cmd_idle(struct CommandList *c)
>  {
> -	return c->scsi_cmd == SCSI_CMD_IDLE;
> +	return c->scsi_cmd == NULL;
>  }
>  
>  /* extract sense key, asc, and ascq from sense data.  -1 means invalid. */
> @@ -2445,7 +2441,12 @@ static void hpsa_cmd_resolve_events(struct ctlr_info *h,
>  	 * this command has completed.  Then, check to see if the handler is
>  	 * waiting for this command, and, if so, wake it.
>  	 */
> -	c->scsi_cmd = SCSI_CMD_IDLE;
> +	if (c->scsi_cmd && c->cmd_type == CMD_IOCTL_PEND) {
> +		struct scsi_cmnd *scmd = c->scsi_cmd;
> +
> +		scsi_put_reserved_cmd(scmd);
> +	}
> +	c->scsi_cmd = NULL;
>  	mb();	/* Declare command idle before checking for pending events. */
>  	if (dev) {
>  		atomic_dec(&dev->commands_outstanding);
> @@ -5502,7 +5503,6 @@ static void hpsa_cmd_init(struct ctlr_info *h, int index,
>  	c->ErrDesc.Addr = cpu_to_le64((u64) err_dma_handle);
>  	c->ErrDesc.Len = cpu_to_le32((u32) sizeof(*c->err_info));
>  	c->h = h;
> -	c->scsi_cmd = SCSI_CMD_IDLE;
>  }
>  
>  static void hpsa_preinitialize_commands(struct ctlr_info *h)
> @@ -5803,6 +5803,7 @@ static int hpsa_scsi_host_alloc(struct ctlr_info *h)
>  	sh->max_lun = HPSA_MAX_LUN;
>  	sh->max_id = HPSA_MAX_LUN;
>  	sh->can_queue = h->nr_cmds - HPSA_NRESERVED_CMDS;
> +	sh->nr_reserved_cmds = HPSA_NRESERVED_CMDS;

Now that .nr_reserved_cmds is passed to blk-mq, you need to increase
sh->can_queue to h->nr_cmds, because .can_queue is the whole queue depth
(including the reserved tags); otherwise, the number of IO tags will be
decreased.

I haven't looked into the other drivers, but I guess they need such a
change too.

Thanks,
Ming


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 12/24] hpsa: use reserved commands
  2020-03-11  8:10   ` Ming Lei
@ 2020-03-17  9:38     ` John Garry
  2020-03-17  9:48       ` Hannes Reinecke
  0 siblings, 1 reply; 49+ messages in thread
From: John Garry @ 2020-03-17  9:38 UTC (permalink / raw)
  To: Ming Lei
  Cc: axboe, jejb, martin.petersen, hare, bvanassche, hch, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 11/03/2020 08:10, Ming Lei wrote:
>> ands(struct ctlr_info *h)
>> @@ -5803,6 +5803,7 @@ static int hpsa_scsi_host_alloc(struct ctlr_info *h)
>>   	sh->max_lun = HPSA_MAX_LUN;
>>   	sh->max_id = HPSA_MAX_LUN;
>>   	sh->can_queue = h->nr_cmds - HPSA_NRESERVED_CMDS;
>> +	sh->nr_reserved_cmds = HPSA_NRESERVED_CMDS;
> Now .nr_reserved_cmds has been passed to blk-mq, you need to increase
> sh->can_queue to h->nr_cmds, because .can_queue is the whole queue depth
> (include the part of reserved tags), otherwise, IO tags will be
> decreased.
> 

Sounds correct.

Cheers,
John

> Not look into other drivers, I guess they need such change too.
> 
> Thanks,
> Ming
> 
> .


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 12/24] hpsa: use reserved commands
  2020-03-17  9:38     ` John Garry
@ 2020-03-17  9:48       ` Hannes Reinecke
  2020-03-30 13:42         ` John Garry
  0 siblings, 1 reply; 49+ messages in thread
From: Hannes Reinecke @ 2020-03-17  9:48 UTC (permalink / raw)
  To: John Garry, Ming Lei
  Cc: axboe, jejb, martin.petersen, bvanassche, hch, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 3/17/20 10:38 AM, John Garry wrote:
> On 11/03/2020 08:10, Ming Lei wrote:
>>> ands(struct ctlr_info *h)
>>> @@ -5803,6 +5803,7 @@ static int hpsa_scsi_host_alloc(struct 
>>> ctlr_info *h)
>>>       sh->max_lun = HPSA_MAX_LUN;
>>>       sh->max_id = HPSA_MAX_LUN;
>>>       sh->can_queue = h->nr_cmds - HPSA_NRESERVED_CMDS;
>>> +    sh->nr_reserved_cmds = HPSA_NRESERVED_CMDS;
>> Now .nr_reserved_cmds has been passed to blk-mq, you need to increase
>> sh->can_queue to h->nr_cmds, because .can_queue is the whole queue depth
>> (include the part of reserved tags), otherwise, IO tags will be
>> decreased.
>>
> 
> Sounds correct.
> 
I will be having a look at the patchset; I thought I did a patch to 
modify .can_queue so that it would cover only the usable tags, not the 
reserved ones.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 12/24] hpsa: use reserved commands
  2020-03-17  9:48       ` Hannes Reinecke
@ 2020-03-30 13:42         ` John Garry
  0 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-03-30 13:42 UTC (permalink / raw)
  To: Hannes Reinecke, Ming Lei
  Cc: axboe, jejb, martin.petersen, bvanassche, hch, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 17/03/2020 09:48, Hannes Reinecke wrote:
> On 3/17/20 10:38 AM, John Garry wrote:
>> On 11/03/2020 08:10, Ming Lei wrote:
>>>> ands(struct ctlr_info *h)
>>>> @@ -5803,6 +5803,7 @@ static int hpsa_scsi_host_alloc(struct 
>>>> ctlr_info *h)
>>>>       sh->max_lun = HPSA_MAX_LUN;
>>>>       sh->max_id = HPSA_MAX_LUN;
>>>>       sh->can_queue = h->nr_cmds - HPSA_NRESERVED_CMDS;
>>>> +    sh->nr_reserved_cmds = HPSA_NRESERVED_CMDS;
>>> Now .nr_reserved_cmds has been passed to blk-mq, you need to increase
>>> sh->can_queue to h->nr_cmds, because .can_queue is the whole queue depth
>>> (include the part of reserved tags), otherwise, IO tags will be
>>> decreased.
>>>
>>
>> Sounds correct.
>>
> I will have having a look at the patchset; I thought I did a patch to 
> modify .can_queue so that it would cover only the usable tags, not the 
> reserved ones.
> 

To me, it makes sense to leave .can_queue unmodified, carry it down to 
blk-mq and let blk_mq_init_bitmap_tags() find the queue depth:

static struct blk_mq_tags *blk_mq_init_bitmap_tags(struct blk_mq_tags *tags,
   int node, int alloc_policy)
{
	unsigned int depth = tags->nr_tags - tags->nr_reserved_tags;
	bool round_robin = alloc_policy == BLK_TAG_ALLOC_RR;

	if (bt_alloc(&tags->bitmap_tags, depth, round_robin, node))
		goto free_tags;
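
On the driver side that would mean keeping the whole depth in .can_queue
and letting blk-mq carve out the reserved part, e.g. for hpsa (sketch of
the suggestion, untested):

	sh->can_queue        = h->nr_cmds;		/* whole depth, incl. reserved */
	sh->nr_reserved_cmds = HPSA_NRESERVED_CMDS;	/* carved out by blk-mq */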

Cheers,
John

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-03-11  6:22       ` Christoph Hellwig
  2020-03-11  6:58         ` Hannes Reinecke
@ 2020-04-06  9:05         ` Hannes Reinecke
  2020-04-07 11:54           ` John Garry
  1 sibling, 1 reply; 49+ messages in thread
From: Hannes Reinecke @ 2020-04-06  9:05 UTC (permalink / raw)
  To: Christoph Hellwig, John Garry
  Cc: axboe, jejb, martin.petersen, ming.lei, bvanassche, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 3/11/20 7:22 AM, Christoph Hellwig wrote:
> On Tue, Mar 10, 2020 at 09:08:56PM +0000, John Garry wrote:
>> On 10/03/2020 18:32, Christoph Hellwig wrote:
>>> On Wed, Mar 11, 2020 at 12:25:28AM +0800, John Garry wrote:
>>>> From: Hannes Reinecke <hare@suse.com>
>>>>
>>>> Allocate a separate 'reserved_cmd_q' for sending reserved commands.
>>>
>>> Why?  Reserved command specifically are not in any way tied to queues.
>>> .
>>>
>>
>> So the v1 series used a combination of the sdev queue and the per-host
>> reserved_cmd_q. Back then you questioned using the sdev queue for virtio
>> scsi, and the unconfirmed conclusion was to use a common per-host q. This is
>> the best link I can find now:
>>
>> https://www.mail-archive.com/linux-scsi@vger.kernel.org/msg83177.html
> 
> That was just a question on why virtio uses the per-device tags, which
> didn't look like it made any sense.  What I'm worried about here is
> mixing up the concept of reserved tags in the tagset, and queues to use
> them.  Note that we already have the scsi_get_host_dev to allocate
> a scsi_device and thus a request_queue for the host itself.  That seems
> like the better interface to use a tag for a host wide command vs
> introducing a parallel path.
> 
Thinking about it some more, I don't think that scsi_get_host_dev() is
the best way of handling it.
Problem is that it'll create a new scsi_device with <hostno:this_id:0>,
which will then show up via eg 'lsscsi'.
This would be okay if 'this_id' would have been defined by the driver;
sadly, most drivers which are affected here do set 'this_id' to -1.
So we wouldn't have a nice target ID to allocate the device from, let
alone the problem that we would have to emulate a complete scsi device
with all required minimal command support etc.
And I'm not quite sure how well that would play with the exising SCSI
host template; the device we'll be allocating would have basically
nothing in common with the 'normal' SCSI devices.

What we could do, though, is to try it the other way round:
Lift the request queue from scsi_get_host_dev() into the scsi host
itself, so that scsi_get_host_dev() can use that queue, but we also
would be able to use it without a SCSI device attached.
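
Something along these lines (rough sketch; the helper name is made up,
and 'reserved_cmd_q' stands for the host-owned queue from this series):

	static struct request *scsi_host_alloc_reserved_rq(struct Scsi_Host *shost)
	{
		return blk_mq_alloc_request(shost->reserved_cmd_q,
					    REQ_OP_DRV_OUT | REQ_NOWAIT,
					    BLK_MQ_REQ_RESERVED);
	}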

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-04-06  9:05         ` Hannes Reinecke
@ 2020-04-07 11:54           ` John Garry
  2020-04-07 14:00             ` Hannes Reinecke
  0 siblings, 1 reply; 49+ messages in thread
From: John Garry @ 2020-04-07 11:54 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig
  Cc: axboe, jejb, martin.petersen, ming.lei, bvanassche, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 06/04/2020 10:05, Hannes Reinecke wrote:
> On 3/11/20 7:22 AM, Christoph Hellwig wrote:
>> On Tue, Mar 10, 2020 at 09:08:56PM +0000, John Garry wrote:
>>> On 10/03/2020 18:32, Christoph Hellwig wrote:
>>>> On Wed, Mar 11, 2020 at 12:25:28AM +0800, John Garry wrote:
>>>>> From: Hannes Reinecke <hare@suse.com>
>>>>>
>>>>> Allocate a separate 'reserved_cmd_q' for sending reserved commands.
>>>>
>>>> Why?  Reserved command specifically are not in any way tied to queues.
>>>> .
>>>>
>>>
>>> So the v1 series used a combination of the sdev queue and the per-host
>>> reserved_cmd_q. Back then you questioned using the sdev queue for virtio
>>> scsi, and the unconfirmed conclusion was to use a common per-host q. This is
>>> the best link I can find now:
>>>
>>> https://www.mail-archive.com/linux-scsi@vger.kernel.org/msg83177.html
>>
>> That was just a question on why virtio uses the per-device tags, which
>> didn't look like it made any sense.  What I'm worried about here is
>> mixing up the concept of reserved tags in the tagset, and queues to use
>> them.  Note that we already have the scsi_get_host_dev to allocate
>> a scsi_device and thus a request_queue for the host itself.  That seems
>> like the better interface to use a tag for a host wide command vs
>> introducing a parallel path.
>>
> Thinking about it some more, I don't think that scsi_get_host_dev() is
> the best way of handling it.
> Problem is that it'll create a new scsi_device with <hostno:this_id:0>,
> which will then show up via eg 'lsscsi'.

are you sure? Doesn't this function just allocate the sdev, without 
doing anything with it, like probing it?

I bludgeoned it in here for PoC:

https://github.com/hisilicon/kernel-dev/commit/ef0ae8540811e32776f64a5b42bd76cbed17ba47

And then still:

john@ubuntu:~$ lsscsi
[0:0:0:0] disk SEAGATE  ST2000NM0045  N004  /dev/sda
[0:0:1:0] disk SEAGATE  ST2000NM0045  N004  /dev/sdb
[0:0:2:0] disk ATASAMSUNG HM320JI  0_01  /dev/sdc
[0:0:3:0] disk SEAGATE  ST1000NM0023  0006  /dev/sdd
[0:0:4:0] enclosu HUAWEIExpander 12Gx16  128-
john@ubuntu:~$

Some proper plumbing would be needed, though.

> This would be okay if 'this_id' would have been defined by the driver;
> sadly, most drivers which are affected here do set 'this_id' to -1.
> So we wouldn't have a nice target ID to allocate the device from, let
> alone the problem that we would have to emulate a complete scsi device
> with all required minimal command support etc.
> And I'm not quite sure how well that would play with the exising SCSI
> host template; the device we'll be allocating would have basically
> nothing in common with the 'normal' SCSI devices.
> 
> What we could do, though, is to try it the other way round:
> Lift the request queue from scsi_get_host_dev() into the scsi host
> itself, so that scsi_get_host_dev() can use that queue, but we also
> would be able to use it without a SCSI device attached.

wouldn't that limit us to one scsi_device per host? Not that I know if 
any more would ever be required. But it does still seem better to use 
the request queue in the scsi device.

> 

cheers,
John

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-04-07 11:54           ` John Garry
@ 2020-04-07 14:00             ` Hannes Reinecke
  2020-04-07 14:35               ` John Garry
  2020-04-07 16:30               ` Christoph Hellwig
  0 siblings, 2 replies; 49+ messages in thread
From: Hannes Reinecke @ 2020-04-07 14:00 UTC (permalink / raw)
  To: John Garry, Christoph Hellwig
  Cc: axboe, jejb, martin.petersen, ming.lei, bvanassche, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 4/7/20 1:54 PM, John Garry wrote:
> On 06/04/2020 10:05, Hannes Reinecke wrote:
>> On 3/11/20 7:22 AM, Christoph Hellwig wrote:
>>> On Tue, Mar 10, 2020 at 09:08:56PM +0000, John Garry wrote:
>>>> On 10/03/2020 18:32, Christoph Hellwig wrote:
>>>>> On Wed, Mar 11, 2020 at 12:25:28AM +0800, John Garry wrote:
>>>>>> From: Hannes Reinecke <hare@suse.com>
>>>>>>
>>>>>> Allocate a separate 'reserved_cmd_q' for sending reserved commands.
>>>>>
>>>>> Why?  Reserved command specifically are not in any way tied to queues.
>>>>> .
>>>>>
>>>>
>>>> So the v1 series used a combination of the sdev queue and the per-host
>>>> reserved_cmd_q. Back then you questioned using the sdev queue for 
>>>> virtio
>>>> scsi, and the unconfirmed conclusion was to use a common per-host q. 
>>>> This is
>>>> the best link I can find now:
>>>>
>>>> https://www.mail-archive.com/linux-scsi@vger.kernel.org/msg83177.html
>>>
>>> That was just a question on why virtio uses the per-device tags, which
>>> didn't look like it made any sense.  What I'm worried about here is
>>> mixing up the concept of reserved tags in the tagset, and queues to use
>>> them.  Note that we already have the scsi_get_host_dev to allocate
>>> a scsi_device and thus a request_queue for the host itself.  That seems
>>> like the better interface to use a tag for a host wide command vs
>>> introducing a parallel path.
>>>
>> Thinking about it some more, I don't think that scsi_get_host_dev() is
>> the best way of handling it.
>> Problem is that it'll create a new scsi_device with <hostno:this_id:0>,
>> which will then show up via eg 'lsscsi'.
> 
> are you sure? Doesn't this function just allocate the sdev, but do 
> nothing with it, like probing it?
> 
> I bludgeoned it in here for PoC:
> 
> https://github.com/hisilicon/kernel-dev/commit/ef0ae8540811e32776f64a5b42bd76cbed17ba47 
> 
> 
> And then still:
> 
> john@ubuntu:~$ lsscsi
> [0:0:0:0] disk SEAGATE  ST2000NM0045  N004  /dev/sda
> [0:0:1:0] disk SEAGATE  ST2000NM0045  N004  /dev/sdb
> [0:0:2:0] disk ATASAMSUNG HM320JI  0_01  /dev/sdc
> [0:0:3:0] disk SEAGATE  ST1000NM0023  0006  /dev/sdd
> [0:0:4:0] enclosu HUAWEIExpander 12Gx16  128-
> john@ubuntu:~$
> 
> Some proper plumbing would be needed, though.
> 
>> This would be okay if 'this_id' would have been defined by the driver;
>> sadly, most drivers which are affected here do set 'this_id' to -1.
>> So we wouldn't have a nice target ID to allocate the device from, let
>> alone the problem that we would have to emulate a complete scsi device
>> with all required minimal command support etc.
>> And I'm not quite sure how well that would play with the exising SCSI
>> host template; the device we'll be allocating would have basically
>> nothing in common with the 'normal' SCSI devices.
>>
>> What we could do, though, is to try it the other way round:
>> Lift the request queue from scsi_get_host_dev() into the scsi host
>> itself, so that scsi_get_host_dev() can use that queue, but we also
>> would be able to use it without a SCSI device attached.
> 
> wouldn't that limit 1x scsi device per host, not that I know if any more 
> would ever be required? But it does still seem better to use the request 
> queue in the scsi device.
> 
My concern is this:

struct scsi_device *scsi_get_host_dev(struct Scsi_Host *shost)
{
	[ .. ]
	starget = scsi_alloc_target(&shost->shost_gendev, 0, shost->this_id);
	[ .. ]

and we have typically:

drivers/scsi/hisi_sas/hisi_sas_v3_hw.c: .this_id                = -1,

It's _very_ uncommon to have a negative number as the SCSI target 
device; in fact, it _is_ an unsigned int already.

But alright, I'll give it a go; let's see what I'll end up with.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-04-07 14:00             ` Hannes Reinecke
@ 2020-04-07 14:35               ` John Garry
  2020-04-07 14:45                 ` Hannes Reinecke
  2020-04-07 16:30               ` Christoph Hellwig
  1 sibling, 1 reply; 49+ messages in thread
From: John Garry @ 2020-04-07 14:35 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig
  Cc: axboe, jejb, martin.petersen, ming.lei, bvanassche, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 07/04/2020 15:00, Hannes Reinecke wrote:
> On 4/7/20 1:54 PM, John Garry wrote:
>> On 06/04/2020 10:05, Hannes Reinecke wrote:
>>> On 3/11/20 7:22 AM, Christoph Hellwig wrote:
>>>> On Tue, Mar 10, 2020 at 09:08:56PM +0000, John Garry wrote:
>>>>> On 10/03/2020 18:32, Christoph Hellwig wrote:
>>>>>> On Wed, Mar 11, 2020 at 12:25:28AM +0800, John Garry wrote:
>>>>>>> From: Hannes Reinecke <hare@suse.com>
>>>>>>>
>>>>>>> Allocate a separate 'reserved_cmd_q' for sending reserved commands.
>>>>>>
>>>>>> Why?  Reserved command specifically are not in any way tied to 
>>>>>> queues.
>>>>>> .
>>>>>>
>>>>>
>>>>> So the v1 series used a combination of the sdev queue and the per-host
>>>>> reserved_cmd_q. Back then you questioned using the sdev queue for 
>>>>> virtio
>>>>> scsi, and the unconfirmed conclusion was to use a common per-host 
>>>>> q. This is
>>>>> the best link I can find now:
>>>>>
>>>>> https://www.mail-archive.com/linux-scsi@vger.kernel.org/msg83177.html
>>>>
>>>> That was just a question on why virtio uses the per-device tags, which
>>>> didn't look like it made any sense.  What I'm worried about here is
>>>> mixing up the concept of reserved tags in the tagset, and queues to use
>>>> them.  Note that we already have the scsi_get_host_dev to allocate
>>>> a scsi_device and thus a request_queue for the host itself.  That seems
>>>> like the better interface to use a tag for a host wide command vs
>>>> introducing a parallel path.
>>>>
>>> Thinking about it some more, I don't think that scsi_get_host_dev() is
>>> the best way of handling it.
>>> Problem is that it'll create a new scsi_device with <hostno:this_id:0>,
>>> which will then show up via eg 'lsscsi'.
>>
>> are you sure? Doesn't this function just allocate the sdev, but do 
>> nothing with it, like probing it?
>>
>> I bludgeoned it in here for PoC:
>>
>> https://github.com/hisilicon/kernel-dev/commit/ef0ae8540811e32776f64a5b42bd76cbed17ba47 
>>
>>
>> And then still:
>>
>> john@ubuntu:~$ lsscsi
>> [0:0:0:0] disk SEAGATE  ST2000NM0045  N004  /dev/sda
>> [0:0:1:0] disk SEAGATE  ST2000NM0045  N004  /dev/sdb
>> [0:0:2:0] disk ATASAMSUNG HM320JI  0_01  /dev/sdc
>> [0:0:3:0] disk SEAGATE  ST1000NM0023  0006  /dev/sdd
>> [0:0:4:0] enclosu HUAWEIExpander 12Gx16  128-
>> john@ubuntu:~$
>>
>> Some proper plumbing would be needed, though.
>>
>>> This would be okay if 'this_id' would have been defined by the driver;
>>> sadly, most drivers which are affected here do set 'this_id' to -1.
>>> So we wouldn't have a nice target ID to allocate the device from, let
>>> alone the problem that we would have to emulate a complete scsi device
>>> with all required minimal command support etc.
>>> And I'm not quite sure how well that would play with the exising SCSI
>>> host template; the device we'll be allocating would have basically
>>> nothing in common with the 'normal' SCSI devices.
>>>
>>> What we could do, though, is to try it the other way round:
>>> Lift the request queue from scsi_get_host_dev() into the scsi host
>>> itself, so that scsi_get_host_dev() can use that queue, but we also
>>> would be able to use it without a SCSI device attached.
>>
>> wouldn't that limit 1x scsi device per host, not that I know if any 
>> more would ever be required? But it does still seem better to use the 
>> request queue in the scsi device.
>>
> My concern is this:
> 
> struct scsi_device *scsi_get_host_dev(struct Scsi_Host *shost)
> {
>      [ .. ]
>      starget = scsi_alloc_target(&shost->shost_gendev, 0, shost->this_id);
>      [ .. ]
> 
> and we have typically:
> 
> drivers/scsi/hisi_sas/hisi_sas_v3_hw.c: .this_id                = -1,
> 
> It's _very_ uncommon to have a negative number as the SCSI target 
> device; in fact, it _is_ an unsigned int already.
> 

FWIW, the only other driver (gdth) which I see uses this API has this_id 
= -1 in the scsi host template.

> But alright, I'll give it a go; let's see what I'll end up with.

note: If we want a fixed scsi_device per host, calling 
scsi_mq_setup_tags() -> scsi_get_host_dev() will fail as shost state is 
not running. Maybe we need to juggle some things there to provide a 
generic solution.

thanks

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-04-07 14:35               ` John Garry
@ 2020-04-07 14:45                 ` Hannes Reinecke
  2020-04-07 15:19                   ` John Garry
  0 siblings, 1 reply; 49+ messages in thread
From: Hannes Reinecke @ 2020-04-07 14:45 UTC (permalink / raw)
  To: John Garry, Christoph Hellwig
  Cc: axboe, jejb, martin.petersen, ming.lei, bvanassche, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 4/7/20 4:35 PM, John Garry wrote:
> On 07/04/2020 15:00, Hannes Reinecke wrote:
>> On 4/7/20 1:54 PM, John Garry wrote:
>>> On 06/04/2020 10:05, Hannes Reinecke wrote:
[ .. ]
>>>> This would be okay if 'this_id' would have been defined by the driver;
>>>> sadly, most drivers which are affected here do set 'this_id' to -1.
>>>> So we wouldn't have a nice target ID to allocate the device from, let
>>>> alone the problem that we would have to emulate a complete scsi device
>>>> with all required minimal command support etc.
>>>> And I'm not quite sure how well that would play with the exising SCSI
>>>> host template; the device we'll be allocating would have basically
>>>> nothing in common with the 'normal' SCSI devices.
>>>>
>>>> What we could do, though, is to try it the other way round:
>>>> Lift the request queue from scsi_get_host_dev() into the scsi host
>>>> itself, so that scsi_get_host_dev() can use that queue, but we also
>>>> would be able to use it without a SCSI device attached.
>>>
>>> wouldn't that limit 1x scsi device per host, not that I know if any 
>>> more would ever be required? But it does still seem better to use the 
>>> request queue in the scsi device.
>>>
>> My concern is this:
>>
>> struct scsi_device *scsi_get_host_dev(struct Scsi_Host *shost)
>> {
>>      [ .. ]
>>      starget = scsi_alloc_target(&shost->shost_gendev, 0, 
>> shost->this_id);
>>      [ .. ]
>>
>> and we have typically:
>>
>> drivers/scsi/hisi_sas/hisi_sas_v3_hw.c: .this_id                = -1,
>>
>> It's _very_ uncommon to have a negative number as the SCSI target 
>> device; in fact, it _is_ an unsigned int already.
>>
> 
> FWIW, the only other driver (gdth) which I see uses this API has this_id 
> = -1 in the scsi host template.
> 
>> But alright, I'll give it a go; let's see what I'll end up with.
> 
> note: If we want a fixed scsi_device per host, calling 
> scsi_mq_setup_tags() -> scsi_get_host_dev() will fail as shost state is 
> not running. Maybe we need to juggle some things there to provide a 
> generic solution.
> 
It might even get worse, as during device setup things like 
'slave_alloc' etc. are getting called, which have a fair chance of 
getting confused by non-existing devices.
Cf qla2xxx:qla2xx_slave_alloc() is calling starget_to_rport(), which 
will get us a nice oops when accessing a target which is _not_ the child 
of a fc remote port.
And this is why I'm not utterly keen on this approach; auditing all 
these callbacks is _not_ fun.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-04-07 14:45                 ` Hannes Reinecke
@ 2020-04-07 15:19                   ` John Garry
  0 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-04-07 15:19 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig
  Cc: axboe, jejb, martin.petersen, ming.lei, bvanassche, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

>>>
>>
>> FWIW, the only other driver (gdth) which I see uses this API has 
>> this_id = -1 in the scsi host template.
>>
>>> But alright, I'll give it a go; let's see what I'll end up with.
>>
>> note: If we want a fixed scsi_device per host, calling 
>> scsi_mq_setup_tags() -> scsi_get_host_dev() will fail as shost state 
>> is not running. Maybe we need to juggle some things there to provide a 
>> generic solution.
>>
> It might even get worse, as during device setup things like 
> 'slave_alloc' etc is getting called, which has a fair chance of getting 
> confused for non-existing devices.
> Cf qla2xxx:qla2xx_slave_alloc() is calling starget_to_rport(), which 
> will get us a nice oops when accessing a target which is _not_ the child 
> of a fc remote port.

Yes, something similar happens for libsas [hence my hack], where 
sas_alloc_target()->sas_find_dev_by_rphy() fails, as it cannot properly 
handle an rphy that has the scsi host as its parent.

> And this is why I'm not utterly keen on this approach; auditing all 
> these callbacks is _not_ fun.
> 

Understood. And if you can't test them, then a change like this is too 
risky for those drivers.

Cheers,
John

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-04-07 14:00             ` Hannes Reinecke
  2020-04-07 14:35               ` John Garry
@ 2020-04-07 16:30               ` Christoph Hellwig
  2020-04-23 14:13                 ` John Garry
  1 sibling, 1 reply; 49+ messages in thread
From: Christoph Hellwig @ 2020-04-07 16:30 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: John Garry, Christoph Hellwig, axboe, jejb, martin.petersen,
	ming.lei, bvanassche, linux-block, linux-kernel, linux-scsi,
	virtualization, esc.storagedev, chenxiang66, Hannes Reinecke

On Tue, Apr 07, 2020 at 04:00:10PM +0200, Hannes Reinecke wrote:
> My concern is this:
> 
> struct scsi_device *scsi_get_host_dev(struct Scsi_Host *shost)
> {
> 	[ .. ]
> 	starget = scsi_alloc_target(&shost->shost_gendev, 0, shost->this_id);
> 	[ .. ]
> 
> and we have typically:
> 
> drivers/scsi/hisi_sas/hisi_sas_v3_hw.c: .this_id                = -1,
> 
> It's _very_ uncommon to have a negative number as the SCSI target device; in
> fact, it _is_ an unsigned int already.
> 
> But alright, I'll give it a go; let's see what I'll end up with.

But this shouldn't be exposed anywhere.  And I prefer that over having
magic requests/scsi_cmnd that do not have a valid ->device pointer.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-04-07 16:30               ` Christoph Hellwig
@ 2020-04-23 14:13                 ` John Garry
  2020-04-23 14:49                   ` Hannes Reinecke
  0 siblings, 1 reply; 49+ messages in thread
From: John Garry @ 2020-04-23 14:13 UTC (permalink / raw)
  To: Christoph Hellwig, Hannes Reinecke
  Cc: axboe, jejb, martin.petersen, ming.lei, bvanassche, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 07/04/2020 17:30, Christoph Hellwig wrote:
> On Tue, Apr 07, 2020 at 04:00:10PM +0200, Hannes Reinecke wrote:
>> My concern is this:
>>
>> struct scsi_device *scsi_get_host_dev(struct Scsi_Host *shost)
>> {
>> 	[ .. ]
>> 	starget = scsi_alloc_target(&shost->shost_gendev, 0, shost->this_id);
>> 	[ .. ]
>>
>> and we have typically:
>>
>> drivers/scsi/hisi_sas/hisi_sas_v3_hw.c: .this_id                = -1,
>>
>> It's _very_ uncommon to have a negative number as the SCSI target device; in
>> fact, it _is_ an unsigned int already.
>>
>> But alright, I'll give it a go; let's see what I'll end up with.
> 
> But this shouldn't be exposed anywhere.  And I prefer that over having
> magic requests/scsi_cmnd that do not have a valid ->device pointer.
> .
> 

(just looking at this again)

Hi Christoph,

So how would this look added in scsi_lib.c:

struct scsi_cmnd *scsi_get_reserved_cmd(struct Scsi_Host *shost)
{
	struct scsi_cmnd *scmd;
	struct request *rq;
	struct scsi_device *sdev = scsi_get_host_dev(shost);

	if (!sdev)
		return NULL;

	rq = blk_mq_alloc_request(sdev->request_queue,
				  REQ_OP_DRV_OUT | REQ_NOWAIT,
				  BLK_MQ_REQ_RESERVED);
	if (IS_ERR(rq)) {
		scsi_free_host_dev(sdev);
		return NULL;
	}
	WARN_ON(rq->tag == -1);
	scmd = blk_mq_rq_to_pdu(rq);
	scmd->request = rq;
	scmd->device = sdev;

	return scmd;
}
EXPORT_SYMBOL_GPL(scsi_get_reserved_cmd);

void scsi_put_reserved_cmd(struct scsi_cmnd *scmd)
{
	struct request *rq = blk_mq_rq_from_pdu(scmd);

	if (blk_mq_rq_is_reserved(rq)) {
		struct scsi_device *sdev = scmd->device;
		blk_mq_free_request(rq);
		scsi_free_host_dev(sdev);
	}
}
EXPORT_SYMBOL_GPL(scsi_put_reserved_cmd);

Not sure if we want a static scsi_device per host, or to allocate and 
free one dynamically.
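
A caller would then look roughly like this (sketch, mirroring how the
libsas patch uses it):

	scmd = scsi_get_reserved_cmd(shost);
	if (!scmd)
		return -ENOMEM;
	/* ... build and send the internal command ... */
	scsi_put_reserved_cmd(scmd);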

(@Hannes, I also have some proper patches for libsas if you want to add it)

Cheers,
John

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-04-23 14:13                 ` John Garry
@ 2020-04-23 14:49                   ` Hannes Reinecke
  2020-04-23 15:33                     ` John Garry
  0 siblings, 1 reply; 49+ messages in thread
From: Hannes Reinecke @ 2020-04-23 14:49 UTC (permalink / raw)
  To: John Garry, Christoph Hellwig
  Cc: axboe, jejb, martin.petersen, ming.lei, bvanassche, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke

On 4/23/20 4:13 PM, John Garry wrote:
> On 07/04/2020 17:30, Christoph Hellwig wrote:
>> On Tue, Apr 07, 2020 at 04:00:10PM +0200, Hannes Reinecke wrote:
>>> My concern is this:
>>>
>>> struct scsi_device *scsi_get_host_dev(struct Scsi_Host *shost)
>>> {
>>>     [ .. ]
>>>     starget = scsi_alloc_target(&shost->shost_gendev, 0, 
>>> shost->this_id);
>>>     [ .. ]
>>>
>>> and we have typically:
>>>
>>> drivers/scsi/hisi_sas/hisi_sas_v3_hw.c: .this_id                = -1,
>>>
>>> It's _very_ uncommon to have a negative number as the SCSI target 
>>> device; in
>>> fact, it _is_ an unsigned int already.
>>>
>>> But alright, I'll give it a go; let's see what I'll end up with.
>>
>> But this shouldn't be exposed anywhere.  And I prefer that over having
>> magic requests/scsi_cmnd that do not have a valid ->device pointer.
>> .
>>
> 
> (just looking at this again)
> 
> Hi Christoph,
> 
> So how would this look added in scsi_lib.c:
> 
> struct scsi_cmnd *scsi_get_reserved_cmd(struct Scsi_Host *shost)
> {
>      struct scsi_cmnd *scmd;
>      struct request *rq;
>      struct scsi_device *sdev = scsi_get_host_dev(shost);
> 
>      if (!sdev)
>          return NULL;
> 
>      rq = blk_mq_alloc_request(sdev->request_queue,
>                    REQ_OP_DRV_OUT | REQ_NOWAIT,
>                    BLK_MQ_REQ_RESERVED);
>      if (IS_ERR(rq)) {
>          /* tidy-up: don't leak the host dev allocated above */
>          scsi_free_host_dev(sdev);
>          return NULL;
>      }
>      /* a reserved request gets its tag assigned at allocation time */
>      WARN_ON(rq->tag == -1);
>      scmd = blk_mq_rq_to_pdu(rq);
>      scmd->request = rq;
>      scmd->device = sdev;
> 
>      return scmd;
> }
> EXPORT_SYMBOL_GPL(scsi_get_reserved_cmd);
> 
> void scsi_put_reserved_cmd(struct scsi_cmnd *scmd)
> {
>      struct request *rq = blk_mq_rq_from_pdu(scmd);
> 
>      if (blk_mq_rq_is_reserved(rq)) {
>          struct scsi_device *sdev = scmd->device;
>          blk_mq_free_request(rq);
>          scsi_free_host_dev(sdev);
>      }
> }
> EXPORT_SYMBOL_GPL(scsi_put_reserved_cmd);
> 
> Not sure if we want a static scsi_device per host, or alloc and free 
> dynamically.
> 
> (@Hannes, I also have some proper patches for libsas if you want to add it)
> 
Hold your horses.
I'm currently preparing a patchset that implements this by improving the
current scsi_get_host_dev() machinery.

RFC should be ready by the end of the week.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                               +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer


* Re: [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands
  2020-04-23 14:49                   ` Hannes Reinecke
@ 2020-04-23 15:33                     ` John Garry
  0 siblings, 0 replies; 49+ messages in thread
From: John Garry @ 2020-04-23 15:33 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig
  Cc: axboe, jejb, martin.petersen, ming.lei, bvanassche, linux-block,
	linux-kernel, linux-scsi, virtualization, esc.storagedev,
	chenxiang66, Hannes Reinecke


>> Not sure if we want a static scsi_device per host, or alloc and free 
>> dynamically.
>>
>> (@Hannes, I also have some proper patches for libsas if you want to 
>> add it)
>>
> Hold your horses.

I wasn't going to do anything else...

> I'm currently preparing a patchset implementing things by improving the 
> current scsi_get_host_dev() etc.

OK, great, all you have to do is say.

> 
> RFC should be ready by the end of the week.
> 

Thanks,
John


end of thread, other threads:[~2020-04-23 15:34 UTC | newest]

Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-10 16:25 [PATCH RFC v2 00/24] scsi: enable reserved commands for LLDDs John Garry
2020-03-10 16:25 ` [PATCH RFC v2 01/24] scsi: add 'nr_reserved_cmds' field to the SCSI host template John Garry
2020-03-10 23:08   ` Ming Lei
2020-03-11  6:55     ` Hannes Reinecke
2020-03-11  8:00       ` Ming Lei
2020-03-10 16:25 ` [PATCH RFC v2 02/24] scsi: allocate separate queue for reserved commands John Garry
2020-03-10 18:32   ` Christoph Hellwig
2020-03-10 21:08     ` John Garry
2020-03-11  6:22       ` Christoph Hellwig
2020-03-11  6:58         ` Hannes Reinecke
2020-03-11  7:51           ` John Garry
2020-04-06  9:05         ` Hannes Reinecke
2020-04-07 11:54           ` John Garry
2020-04-07 14:00             ` Hannes Reinecke
2020-04-07 14:35               ` John Garry
2020-04-07 14:45                 ` Hannes Reinecke
2020-04-07 15:19                   ` John Garry
2020-04-07 16:30               ` Christoph Hellwig
2020-04-23 14:13                 ` John Garry
2020-04-23 14:49                   ` Hannes Reinecke
2020-04-23 15:33                     ` John Garry
2020-03-10 16:25 ` [PATCH RFC v2 03/24] blk-mq: Implement blk_mq_rq_is_reserved() John Garry
2020-03-10 16:25 ` [PATCH RFC v2 04/24] scsi: Add scsi_{get, put}_reserved_cmd() John Garry
2020-03-10 16:25 ` [PATCH RFC v2 05/24] csiostor: use reserved command for LUN reset John Garry
2020-03-10 16:25 ` [PATCH RFC v2 06/24] scsi: add scsi_cmd_from_priv() John Garry
2020-03-10 16:25 ` [PATCH RFC v2 07/24] virtio_scsi: use reserved commands for TMF John Garry
2020-03-10 16:25 ` [PATCH RFC v2 08/24] scsi: add host tagset busy iterator John Garry
2020-03-10 16:25 ` [PATCH RFC v2 09/24] fnic: use reserved commands John Garry
2020-03-10 16:25 ` [PATCH RFC v2 10/24] fnic: use scsi_host_tagset_busy_iter() to traverse commands John Garry
2020-03-10 16:25 ` [PATCH RFC v2 11/24] hpsa: move hpsa_hba_inquiry after scsi_add_host() John Garry
2020-03-10 16:25 ` [PATCH RFC v2 12/24] hpsa: use reserved commands John Garry
2020-03-11  8:10   ` Ming Lei
2020-03-17  9:38     ` John Garry
2020-03-17  9:48       ` Hannes Reinecke
2020-03-30 13:42         ` John Garry
2020-03-10 16:25 ` [PATCH RFC v2 13/24] hpsa: use blk_mq_tagset_busy_iter() to traverse outstanding commands John Garry
2020-03-10 16:25 ` [PATCH RFC v2 14/24] hpsa: drop refcount field from CommandList John Garry
2020-03-10 16:25 ` [PATCH RFC v2 15/24] snic: use reserved commands John Garry
2020-03-10 16:25 ` [PATCH RFC v2 16/24] snic: use tagset iter for traversing commands John Garry
2020-03-10 16:25 ` [PATCH RFC v2 17/24] aacraid: move scsi_add_host() John Garry
2020-03-10 16:25 ` [PATCH RFC v2 18/24] aacraid: use private commands John Garry
2020-03-10 16:25 ` [PATCH RFC v2 19/24] aacraid: replace cmd_list with scsi_host_tagset_busy_iter() John Garry
2020-03-10 16:25 ` [PATCH RFC v2 20/24] aacraid: use scsi_host_tagset_busy_iter() to traverse outstanding commands John Garry
2020-03-10 16:25 ` [PATCH RFC v2 21/24] dpt_i2o: drop cmd_list usage John Garry
2020-03-10 16:25 ` [PATCH RFC v2 22/24] scsi: drop scsi command list John Garry
2020-03-10 18:35   ` Christoph Hellwig
2020-03-10 20:47     ` John Garry
2020-03-10 16:25 ` [PATCH RFC v2 23/24] scsi: libsas: aic94xx: hisi_sas: mvsas: pm8001: Allocate Scsi_cmd for slow task John Garry
2020-03-10 16:25 ` [PATCH RFC v2 24/24] scsi: hisi_sas: Use libsas slow task SCSI command John Garry
