* [PATCH v2 00/12] More patches for kernel v4.13
@ 2017-05-31 22:52 Bart Van Assche
  2017-05-31 22:52 ` [PATCH v2 01/12] block: Make request operation type argument declarations consistent Bart Van Assche
                   ` (11 more replies)
  0 siblings, 12 replies; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche

Hello Jens,

The changes compared to v1 of this patch series are:
* Addressed Christoph's comment about moving the .initialize_rq_fn() call
  from blk_rq_init() / blk_mq_rq_ctx_init() into blk_get_request().
* Left out patch "scsi: Make scsi_ioctl_reset() pass the request queue pointer
  to blk_rq_init()" since it's no longer needed.
* Restored the scsi_req_init() call in ide_prep_sense().
* Combined the two patches that reduce the blk_mq_hw_ctx size into a single
  patch.
* Modified patch "blk-mq: Initialize a request before assigning a tag" such
  that .tag and .internal_tag are no longer initialized twice.
* Removed WARN_ON_ONCE(q->mq_ops) from blk_queue_bypass_end() because this
  function is used by both blk-sq and blk-mq.
* Added several new patches, e.g. "block: Rename blk_mq_rq_{to,from}_pdu()".

Please consider these patches for kernel v4.13.

Thanks,

Bart.

Bart Van Assche (12):
  block: Make request operation type argument declarations consistent
  block: Introduce request_queue.initialize_rq_fn()
  block: Make most scsi_req_init() calls implicit
  block: Change argument type of scsi_req_init()
  blk-mq: Initialize a request before assigning a tag
  block: Add a comment above queue_lockdep_assert_held()
  block: Check locking assumptions at runtime
  block: Document what queue type each function is intended for
  blk-mq: Document locking assumptions
  block: Constify disk_type
  blk-mq: Warn when attempting to run a hardware queue that is not
    mapped
  block: Rename blk_mq_rq_{to,from}_pdu()

 block/blk-core.c                   | 124 ++++++++++++++++++++++++++++---------
 block/blk-flush.c                  |   8 ++-
 block/blk-merge.c                  |   3 +
 block/blk-mq-sched.c               |   2 +
 block/blk-mq.c                     |  30 +++++----
 block/blk-tag.c                    |  15 ++---
 block/blk-timeout.c                |   4 +-
 block/bsg.c                        |   1 -
 block/genhd.c                      |   4 +-
 block/scsi_ioctl.c                 |  13 ++--
 drivers/block/loop.c               |   8 +--
 drivers/block/mtip32xx/mtip32xx.c  |  28 ++++-----
 drivers/block/nbd.c                |  18 +++---
 drivers/block/null_blk.c           |   4 +-
 drivers/block/pktcdvd.c            |   1 -
 drivers/block/rbd.c                |   6 +-
 drivers/block/virtio_blk.c         |  12 ++--
 drivers/block/xen-blkfront.c       |   2 +-
 drivers/cdrom/cdrom.c              |   1 -
 drivers/ide/ide-atapi.c            |   3 +-
 drivers/ide/ide-cd.c               |   1 -
 drivers/ide/ide-cd_ioctl.c         |   1 -
 drivers/ide/ide-devsets.c          |   1 -
 drivers/ide/ide-disk.c             |   1 -
 drivers/ide/ide-ioctls.c           |   2 -
 drivers/ide/ide-park.c             |   2 -
 drivers/ide/ide-pm.c               |   2 -
 drivers/ide/ide-probe.c            |   8 +--
 drivers/ide/ide-tape.c             |   1 -
 drivers/ide/ide-taskfile.c         |   1 -
 drivers/md/dm-rq.c                 |   6 +-
 drivers/mtd/ubi/block.c            |   8 +--
 drivers/nvme/host/fc.c             |  20 +++---
 drivers/nvme/host/nvme.h           |   2 +-
 drivers/nvme/host/pci.c            |  22 +++----
 drivers/nvme/host/rdma.c           |  18 +++---
 drivers/nvme/target/loop.c         |  10 +--
 drivers/scsi/osd/osd_initiator.c   |   2 -
 drivers/scsi/osst.c                |   1 -
 drivers/scsi/scsi_error.c          |   1 -
 drivers/scsi/scsi_lib.c            |  28 ++++++---
 drivers/scsi/scsi_transport_sas.c  |   6 ++
 drivers/scsi/sg.c                  |   2 -
 drivers/scsi/st.c                  |   1 -
 drivers/target/target_core_pscsi.c |   2 -
 fs/nfsd/blocklayout.c              |   1 -
 include/linux/blk-mq.h             |  19 +-----
 include/linux/blkdev.h             |  27 +++++++-
 include/linux/ide.h                |   2 +-
 include/scsi/scsi_request.h        |   4 +-
 50 files changed, 284 insertions(+), 205 deletions(-)

-- 
2.12.2

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2 01/12] block: Make request operation type argument declarations consistent
  2017-05-31 22:52 [PATCH v2 00/12] More patches for kernel v4.13 Bart Van Assche
@ 2017-05-31 22:52 ` Bart Van Assche
  2017-05-31 22:52 ` [PATCH v2 02/12] block: Introduce request_queue.initialize_rq_fn() Bart Van Assche
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
	Omar Sandoval, Ming Lei

Instead of declaring the second argument of blk_*_get_request()
as int and passing it to functions that expect an unsigned int,
declare that second argument as unsigned int. For consistency,
also rename that second argument from 'rw' to 'op'.
This patch does not change any functionality.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c       | 13 +++++++------
 block/blk-mq.c         | 10 +++++-----
 include/linux/blk-mq.h |  6 +++---
 include/linux/blkdev.h |  3 ++-
 4 files changed, 17 insertions(+), 15 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index a7421b772d0e..3bc431a77309 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1283,8 +1283,8 @@ static struct request *get_request(struct request_queue *q, unsigned int op,
 	goto retry;
 }
 
-static struct request *blk_old_get_request(struct request_queue *q, int rw,
-		gfp_t gfp_mask)
+static struct request *blk_old_get_request(struct request_queue *q,
+					   unsigned int op, gfp_t gfp_mask)
 {
 	struct request *rq;
 
@@ -1292,7 +1292,7 @@ static struct request *blk_old_get_request(struct request_queue *q, int rw,
 	create_io_context(gfp_mask, q->node);
 
 	spin_lock_irq(q->queue_lock);
-	rq = get_request(q, rw, NULL, gfp_mask);
+	rq = get_request(q, op, NULL, gfp_mask);
 	if (IS_ERR(rq)) {
 		spin_unlock_irq(q->queue_lock);
 		return rq;
@@ -1305,14 +1305,15 @@ static struct request *blk_old_get_request(struct request_queue *q, int rw,
 	return rq;
 }
 
-struct request *blk_get_request(struct request_queue *q, int rw, gfp_t gfp_mask)
+struct request *blk_get_request(struct request_queue *q, unsigned int op,
+				gfp_t gfp_mask)
 {
 	if (q->mq_ops)
-		return blk_mq_alloc_request(q, rw,
+		return blk_mq_alloc_request(q, op,
 			(gfp_mask & __GFP_DIRECT_RECLAIM) ?
 				0 : BLK_MQ_REQ_NOWAIT);
 	else
-		return blk_old_get_request(q, rw, gfp_mask);
+		return blk_old_get_request(q, op, gfp_mask);
 }
 EXPORT_SYMBOL(blk_get_request);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index e068a26173fc..9aa1754e938b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -278,7 +278,7 @@ struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data,
 }
 EXPORT_SYMBOL_GPL(__blk_mq_alloc_request);
 
-struct request *blk_mq_alloc_request(struct request_queue *q, int rw,
+struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 		unsigned int flags)
 {
 	struct blk_mq_alloc_data alloc_data = { .flags = flags };
@@ -289,7 +289,7 @@ struct request *blk_mq_alloc_request(struct request_queue *q, int rw,
 	if (ret)
 		return ERR_PTR(ret);
 
-	rq = blk_mq_sched_get_request(q, NULL, rw, &alloc_data);
+	rq = blk_mq_sched_get_request(q, NULL, op, &alloc_data);
 
 	blk_mq_put_ctx(alloc_data.ctx);
 	blk_queue_exit(q);
@@ -304,8 +304,8 @@ struct request *blk_mq_alloc_request(struct request_queue *q, int rw,
 }
 EXPORT_SYMBOL(blk_mq_alloc_request);
 
-struct request *blk_mq_alloc_request_hctx(struct request_queue *q, int rw,
-		unsigned int flags, unsigned int hctx_idx)
+struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
+		unsigned int op, unsigned int flags, unsigned int hctx_idx)
 {
 	struct blk_mq_alloc_data alloc_data = { .flags = flags };
 	struct request *rq;
@@ -340,7 +340,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q, int rw,
 	cpu = cpumask_first(alloc_data.hctx->cpumask);
 	alloc_data.ctx = __blk_mq_get_ctx(q, cpu);
 
-	rq = blk_mq_sched_get_request(q, NULL, rw, &alloc_data);
+	rq = blk_mq_sched_get_request(q, NULL, op, &alloc_data);
 
 	blk_queue_exit(q);
 
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index c534ec64e214..a4759fd34e7e 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -205,10 +205,10 @@ enum {
 	BLK_MQ_REQ_INTERNAL	= (1 << 2), /* allocate internal/sched tag */
 };
 
-struct request *blk_mq_alloc_request(struct request_queue *q, int rw,
+struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
 		unsigned int flags);
-struct request *blk_mq_alloc_request_hctx(struct request_queue *q, int op,
-		unsigned int flags, unsigned int hctx_idx);
+struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
+		unsigned int op, unsigned int flags, unsigned int hctx_idx);
 struct request *blk_mq_tag_to_rq(struct blk_mq_tags *tags, unsigned int tag);
 
 enum {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 019f18c65098..6c4235018b49 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -932,7 +932,8 @@ extern void blk_rq_init(struct request_queue *q, struct request *rq);
 extern void blk_init_request_from_bio(struct request *req, struct bio *bio);
 extern void blk_put_request(struct request *);
 extern void __blk_put_request(struct request_queue *, struct request *);
-extern struct request *blk_get_request(struct request_queue *, int, gfp_t);
+extern struct request *blk_get_request(struct request_queue *, unsigned int op,
+				       gfp_t gfp_mask);
 extern void blk_requeue_request(struct request_queue *, struct request *);
 extern int blk_lld_busy(struct request_queue *q);
 extern int blk_rq_prep_clone(struct request *rq, struct request *rq_src,
-- 
2.12.2


* [PATCH v2 02/12] block: Introduce request_queue.initialize_rq_fn()
  2017-05-31 22:52 [PATCH v2 00/12] More patches for kernel v4.13 Bart Van Assche
  2017-05-31 22:52 ` [PATCH v2 01/12] block: Make request operation type argument declarations consistent Bart Van Assche
@ 2017-05-31 22:52 ` Bart Van Assche
  2017-06-01  6:06   ` Christoph Hellwig
  2017-05-31 22:52 ` [PATCH v2 03/12] block: Make most scsi_req_init() calls implicit Bart Van Assche
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
	Omar Sandoval

Several block drivers need to initialize the driver-private data
after calling blk_get_request() and before .prep_rq_fn() is
called, e.g. when submitting a REQ_OP_SCSI_* request. Avoid
having to repeat that initialization code after every
blk_get_request() call by adding a new callback function to
struct request_queue.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
---
 block/blk-core.c       | 11 +++++++++--
 include/linux/blkdev.h |  4 ++++
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 3bc431a77309..3f68bc1f044c 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1308,12 +1308,19 @@ static struct request *blk_old_get_request(struct request_queue *q,
 struct request *blk_get_request(struct request_queue *q, unsigned int op,
 				gfp_t gfp_mask)
 {
+	struct request *req;
+
 	if (q->mq_ops)
-		return blk_mq_alloc_request(q, op,
+		req = blk_mq_alloc_request(q, op,
 			(gfp_mask & __GFP_DIRECT_RECLAIM) ?
 				0 : BLK_MQ_REQ_NOWAIT);
 	else
-		return blk_old_get_request(q, op, gfp_mask);
+		req = blk_old_get_request(q, op, gfp_mask);
+
+	if (!IS_ERR(req) && q->initialize_rq_fn)
+		q->initialize_rq_fn(req);
+
+	return req;
 }
 EXPORT_SYMBOL(blk_get_request);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 6c4235018b49..cbc0028290e4 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -410,8 +410,12 @@ struct request_queue {
 	rq_timed_out_fn		*rq_timed_out_fn;
 	dma_drain_needed_fn	*dma_drain_needed;
 	lld_busy_fn		*lld_busy_fn;
+	/* Called just after a request is allocated */
 	init_rq_fn		*init_rq_fn;
+	/* Called just before a request is freed */
 	exit_rq_fn		*exit_rq_fn;
+	/* Called from inside blk_get_request() */
+	void (*initialize_rq_fn)(struct request *rq);
 
 	const struct blk_mq_ops	*mq_ops;
 
-- 
2.12.2


* [PATCH v2 03/12] block: Make most scsi_req_init() calls implicit
  2017-05-31 22:52 [PATCH v2 00/12] More patches for kernel v4.13 Bart Van Assche
  2017-05-31 22:52 ` [PATCH v2 01/12] block: Make request operation type argument declarations consistent Bart Van Assche
  2017-05-31 22:52 ` [PATCH v2 02/12] block: Introduce request_queue.initialize_rq_fn() Bart Van Assche
@ 2017-05-31 22:52 ` Bart Van Assche
  2017-06-01  6:08   ` Christoph Hellwig
  2017-05-31 22:52 ` [PATCH v2 04/12] block: Change argument type of scsi_req_init() Bart Van Assche
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
	Omar Sandoval, Nicholas Bellinger

Instead of explicitly calling scsi_req_init() after blk_get_request(),
call that function from inside blk_get_request(). Add an
.initialize_rq_fn() callback function to the block drivers that need
it. Merge the IDE .init_rq_fn() function into .initialize_rq_fn()
because it is too small to keep as a separate function. Keep the
scsi_req_init() call in ide_prep_sense() because it follows a
blk_rq_init() call.

References: commit 82ed4db499b8 ("block: split scsi_request out of struct request")
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Nicholas Bellinger <nab@linux-iscsi.org>
---
 block/bsg.c                        |  1 -
 block/scsi_ioctl.c                 |  3 ---
 drivers/block/pktcdvd.c            |  1 -
 drivers/cdrom/cdrom.c              |  1 -
 drivers/ide/ide-atapi.c            |  1 -
 drivers/ide/ide-cd.c               |  1 -
 drivers/ide/ide-cd_ioctl.c         |  1 -
 drivers/ide/ide-devsets.c          |  1 -
 drivers/ide/ide-disk.c             |  1 -
 drivers/ide/ide-ioctls.c           |  2 --
 drivers/ide/ide-park.c             |  2 --
 drivers/ide/ide-pm.c               |  2 --
 drivers/ide/ide-probe.c            |  6 +++---
 drivers/ide/ide-tape.c             |  1 -
 drivers/ide/ide-taskfile.c         |  1 -
 drivers/scsi/osd/osd_initiator.c   |  2 --
 drivers/scsi/osst.c                |  1 -
 drivers/scsi/scsi_error.c          |  1 -
 drivers/scsi/scsi_lib.c            | 10 +++++++++-
 drivers/scsi/scsi_transport_sas.c  |  6 ++++++
 drivers/scsi/sg.c                  |  2 --
 drivers/scsi/st.c                  |  1 -
 drivers/target/target_core_pscsi.c |  2 --
 fs/nfsd/blocklayout.c              |  1 -
 24 files changed, 18 insertions(+), 33 deletions(-)

diff --git a/block/bsg.c b/block/bsg.c
index 40db8ff4c618..84ec1b19d516 100644
--- a/block/bsg.c
+++ b/block/bsg.c
@@ -236,7 +236,6 @@ bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm)
 	rq = blk_get_request(q, op, GFP_KERNEL);
 	if (IS_ERR(rq))
 		return rq;
-	scsi_req_init(rq);
 
 	ret = blk_fill_sgv4_hdr_rq(q, rq, hdr, bd, has_write_perm);
 	if (ret)
diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
index 4a294a5f7fab..f96c51f5df40 100644
--- a/block/scsi_ioctl.c
+++ b/block/scsi_ioctl.c
@@ -326,7 +326,6 @@ static int sg_io(struct request_queue *q, struct gendisk *bd_disk,
 	if (IS_ERR(rq))
 		return PTR_ERR(rq);
 	req = scsi_req(rq);
-	scsi_req_init(rq);
 
 	if (hdr->cmd_len > BLK_MAX_CDB) {
 		req->cmd = kzalloc(hdr->cmd_len, GFP_KERNEL);
@@ -456,7 +455,6 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode,
 		goto error_free_buffer;
 	}
 	req = scsi_req(rq);
-	scsi_req_init(rq);
 
 	cmdlen = COMMAND_SIZE(opcode);
 
@@ -542,7 +540,6 @@ static int __blk_send_generic(struct request_queue *q, struct gendisk *bd_disk,
 	rq = blk_get_request(q, REQ_OP_SCSI_OUT, __GFP_RECLAIM);
 	if (IS_ERR(rq))
 		return PTR_ERR(rq);
-	scsi_req_init(rq);
 	rq->timeout = BLK_DEFAULT_SG_TIMEOUT;
 	scsi_req(rq)->cmd[0] = cmd;
 	scsi_req(rq)->cmd[4] = data;
diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index 42e3c880a8a5..2ea332c9438a 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -707,7 +707,6 @@ static int pkt_generic_packet(struct pktcdvd_device *pd, struct packet_command *
 			     REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, __GFP_RECLAIM);
 	if (IS_ERR(rq))
 		return PTR_ERR(rq);
-	scsi_req_init(rq);
 
 	if (cgc->buflen) {
 		ret = blk_rq_map_kern(q, rq, cgc->buffer, cgc->buflen,
diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
index ff19cfc587f0..e36d160c458f 100644
--- a/drivers/cdrom/cdrom.c
+++ b/drivers/cdrom/cdrom.c
@@ -2201,7 +2201,6 @@ static int cdrom_read_cdda_bpc(struct cdrom_device_info *cdi, __u8 __user *ubuf,
 			break;
 		}
 		req = scsi_req(rq);
-		scsi_req_init(rq);
 
 		ret = blk_rq_map_user(q, rq, NULL, ubuf, len, GFP_KERNEL);
 		if (ret) {
diff --git a/drivers/ide/ide-atapi.c b/drivers/ide/ide-atapi.c
index 5901937284e7..98e78b520417 100644
--- a/drivers/ide/ide-atapi.c
+++ b/drivers/ide/ide-atapi.c
@@ -93,7 +93,6 @@ int ide_queue_pc_tail(ide_drive_t *drive, struct gendisk *disk,
 	int error;
 
 	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
-	scsi_req_init(rq);
 	ide_req(rq)->type = ATA_PRIV_MISC;
 	rq->special = (char *)pc;
 
diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c
index 07e5ff3a64c3..a14ccb34c923 100644
--- a/drivers/ide/ide-cd.c
+++ b/drivers/ide/ide-cd.c
@@ -438,7 +438,6 @@ int ide_cd_queue_pc(ide_drive_t *drive, const unsigned char *cmd,
 
 		rq = blk_get_request(drive->queue,
 			write ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN,  __GFP_RECLAIM);
-		scsi_req_init(rq);
 		memcpy(scsi_req(rq)->cmd, cmd, BLK_MAX_CDB);
 		ide_req(rq)->type = ATA_PRIV_PC;
 		rq->rq_flags |= rq_flags;
diff --git a/drivers/ide/ide-cd_ioctl.c b/drivers/ide/ide-cd_ioctl.c
index 55cd736c39c6..9d26c9737e21 100644
--- a/drivers/ide/ide-cd_ioctl.c
+++ b/drivers/ide/ide-cd_ioctl.c
@@ -304,7 +304,6 @@ int ide_cdrom_reset(struct cdrom_device_info *cdi)
 	int ret;
 
 	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
-	scsi_req_init(rq);
 	ide_req(rq)->type = ATA_PRIV_MISC;
 	rq->rq_flags = RQF_QUIET;
 	blk_execute_rq(drive->queue, cd->disk, rq, 0);
diff --git a/drivers/ide/ide-devsets.c b/drivers/ide/ide-devsets.c
index 9b69c32ee560..ef7c8c43a380 100644
--- a/drivers/ide/ide-devsets.c
+++ b/drivers/ide/ide-devsets.c
@@ -166,7 +166,6 @@ int ide_devset_execute(ide_drive_t *drive, const struct ide_devset *setting,
 		return setting->set(drive, arg);
 
 	rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
-	scsi_req_init(rq);
 	ide_req(rq)->type = ATA_PRIV_MISC;
 	scsi_req(rq)->cmd_len = 5;
 	scsi_req(rq)->cmd[0] = REQ_DEVSET_EXEC;
diff --git a/drivers/ide/ide-disk.c b/drivers/ide/ide-disk.c
index 7c06237f3479..241983da5fc4 100644
--- a/drivers/ide/ide-disk.c
+++ b/drivers/ide/ide-disk.c
@@ -478,7 +478,6 @@ static int set_multcount(ide_drive_t *drive, int arg)
 		return -EBUSY;
 
 	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
-	scsi_req_init(rq);
 	ide_req(rq)->type = ATA_PRIV_TASKFILE;
 
 	drive->mult_req = arg;
diff --git a/drivers/ide/ide-ioctls.c b/drivers/ide/ide-ioctls.c
index 8c0d17297a7a..3661abb16a5f 100644
--- a/drivers/ide/ide-ioctls.c
+++ b/drivers/ide/ide-ioctls.c
@@ -126,7 +126,6 @@ static int ide_cmd_ioctl(ide_drive_t *drive, unsigned long arg)
 		struct request *rq;
 
 		rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
-		scsi_req_init(rq);
 		ide_req(rq)->type = ATA_PRIV_TASKFILE;
 		blk_execute_rq(drive->queue, NULL, rq, 0);
 		err = scsi_req(rq)->result ? -EIO : 0;
@@ -224,7 +223,6 @@ static int generic_drive_reset(ide_drive_t *drive)
 	int ret = 0;
 
 	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
-	scsi_req_init(rq);
 	ide_req(rq)->type = ATA_PRIV_MISC;
 	scsi_req(rq)->cmd_len = 1;
 	scsi_req(rq)->cmd[0] = REQ_DRIVE_RESET;
diff --git a/drivers/ide/ide-park.c b/drivers/ide/ide-park.c
index 94e3107f59b9..1f264d5d3f3f 100644
--- a/drivers/ide/ide-park.c
+++ b/drivers/ide/ide-park.c
@@ -32,7 +32,6 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout)
 	spin_unlock_irq(&hwif->lock);
 
 	rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM);
-	scsi_req_init(rq);
 	scsi_req(rq)->cmd[0] = REQ_PARK_HEADS;
 	scsi_req(rq)->cmd_len = 1;
 	ide_req(rq)->type = ATA_PRIV_MISC;
@@ -48,7 +47,6 @@ static void issue_park_cmd(ide_drive_t *drive, unsigned long timeout)
 	 * timeout has expired, so power management will be reenabled.
 	 */
 	rq = blk_get_request(q, REQ_OP_DRV_IN, GFP_NOWAIT);
-	scsi_req_init(rq);
 	if (IS_ERR(rq))
 		goto out;
 
diff --git a/drivers/ide/ide-pm.c b/drivers/ide/ide-pm.c
index 0977fc1f40ce..cfe3c2d7db7f 100644
--- a/drivers/ide/ide-pm.c
+++ b/drivers/ide/ide-pm.c
@@ -19,7 +19,6 @@ int generic_ide_suspend(struct device *dev, pm_message_t mesg)
 
 	memset(&rqpm, 0, sizeof(rqpm));
 	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
-	scsi_req_init(rq);
 	ide_req(rq)->type = ATA_PRIV_PM_SUSPEND;
 	rq->special = &rqpm;
 	rqpm.pm_step = IDE_PM_START_SUSPEND;
@@ -91,7 +90,6 @@ int generic_ide_resume(struct device *dev)
 
 	memset(&rqpm, 0, sizeof(rqpm));
 	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
-	scsi_req_init(rq);
 	ide_req(rq)->type = ATA_PRIV_PM_RESUME;
 	rq->rq_flags |= RQF_PREEMPT;
 	rq->special = &rqpm;
diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
index b3f85250dea9..c60e5ffc9231 100644
--- a/drivers/ide/ide-probe.c
+++ b/drivers/ide/ide-probe.c
@@ -741,12 +741,12 @@ static void ide_port_tune_devices(ide_hwif_t *hwif)
 	}
 }
 
-static int ide_init_rq(struct request_queue *q, struct request *rq, gfp_t gfp)
+static void ide_initialize_rq(struct request *rq)
 {
 	struct ide_request *req = blk_mq_rq_to_pdu(rq);
 
+	scsi_req_init(rq);
 	req->sreq.sense = req->sense;
-	return 0;
 }
 
 /*
@@ -771,7 +771,7 @@ static int ide_init_queue(ide_drive_t *drive)
 		return 1;
 
 	q->request_fn = do_ide_request;
-	q->init_rq_fn = ide_init_rq;
+	q->initialize_rq_fn = ide_initialize_rq;
 	q->cmd_size = sizeof(struct ide_request);
 	queue_flag_set_unlocked(QUEUE_FLAG_SCSI_PASSTHROUGH, q);
 	if (blk_init_allocated_queue(q) < 0) {
diff --git a/drivers/ide/ide-tape.c b/drivers/ide/ide-tape.c
index a0651f948b76..370fd39dce94 100644
--- a/drivers/ide/ide-tape.c
+++ b/drivers/ide/ide-tape.c
@@ -855,7 +855,6 @@ static int idetape_queue_rw_tail(ide_drive_t *drive, int cmd, int size)
 	BUG_ON(size < 0 || size % tape->blk_size);
 
 	rq = blk_get_request(drive->queue, REQ_OP_DRV_IN, __GFP_RECLAIM);
-	scsi_req_init(rq);
 	ide_req(rq)->type = ATA_PRIV_MISC;
 	scsi_req(rq)->cmd[13] = cmd;
 	rq->rq_disk = tape->disk;
diff --git a/drivers/ide/ide-taskfile.c b/drivers/ide/ide-taskfile.c
index d71199d23c9e..d915a8eba557 100644
--- a/drivers/ide/ide-taskfile.c
+++ b/drivers/ide/ide-taskfile.c
@@ -433,7 +433,6 @@ int ide_raw_taskfile(ide_drive_t *drive, struct ide_cmd *cmd, u8 *buf,
 	rq = blk_get_request(drive->queue,
 		(cmd->tf_flags & IDE_TFLAG_WRITE) ?
 			REQ_OP_DRV_OUT : REQ_OP_DRV_IN, __GFP_RECLAIM);
-	scsi_req_init(rq);
 	ide_req(rq)->type = ATA_PRIV_TASKFILE;
 
 	/*
diff --git a/drivers/scsi/osd/osd_initiator.c b/drivers/scsi/osd/osd_initiator.c
index 8a1b94816419..d974e7f1d2f1 100644
--- a/drivers/scsi/osd/osd_initiator.c
+++ b/drivers/scsi/osd/osd_initiator.c
@@ -1572,7 +1572,6 @@ static struct request *_make_request(struct request_queue *q, bool has_write,
 			flags);
 	if (IS_ERR(req))
 		return req;
-	scsi_req_init(req);
 
 	for_each_bio(bio) {
 		struct bio *bounce_bio = bio;
@@ -1617,7 +1616,6 @@ static int _init_blk_request(struct osd_request *or,
 				ret = PTR_ERR(req);
 				goto out;
 			}
-			scsi_req_init(req);
 			or->in.req = or->request->next_rq = req;
 		}
 	} else if (has_in)
diff --git a/drivers/scsi/osst.c b/drivers/scsi/osst.c
index 67cbed92f07d..22080148c6a8 100644
--- a/drivers/scsi/osst.c
+++ b/drivers/scsi/osst.c
@@ -373,7 +373,6 @@ static int osst_execute(struct osst_request *SRpnt, const unsigned char *cmd,
 		return DRIVER_ERROR << 24;
 
 	rq = scsi_req(req);
-	scsi_req_init(req);
 	req->rq_flags |= RQF_QUIET;
 
 	SRpnt->bio = NULL;
diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
index ecc07dab893d..b1ff26ac68c1 100644
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -1903,7 +1903,6 @@ static void scsi_eh_lock_door(struct scsi_device *sdev)
 	if (IS_ERR(req))
 		return;
 	rq = scsi_req(req);
-	scsi_req_init(req);
 
 	rq->cmd[0] = ALLOW_MEDIUM_REMOVAL;
 	rq->cmd[1] = 0;
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 884aaa84c2dd..e96ffd187558 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -250,7 +250,6 @@ int scsi_execute(struct scsi_device *sdev, const unsigned char *cmd,
 	if (IS_ERR(req))
 		return ret;
 	rq = scsi_req(req);
-	scsi_req_init(req);
 
 	if (bufflen &&	blk_rq_map_kern(sdev->request_queue, req,
 					buffer, bufflen, __GFP_RECLAIM))
@@ -1134,6 +1133,13 @@ int scsi_init_io(struct scsi_cmnd *cmd)
 }
 EXPORT_SYMBOL(scsi_init_io);
 
+/* Called from inside blk_get_request() */
+static void scsi_initialize_rq(struct request *rq)
+{
+	scsi_req_init(rq);
+}
+
+/* Called after a request has been started. */
 void scsi_init_command(struct scsi_device *dev, struct scsi_cmnd *cmd)
 {
 	void *buf = cmd->sense_buffer;
@@ -2089,6 +2095,8 @@ void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)
 	 * blk_queue_update_dma_alignment() later.
 	 */
 	blk_queue_dma_alignment(q, 0x03);
+
+	q->initialize_rq_fn = scsi_initialize_rq;
 }
 EXPORT_SYMBOL_GPL(__scsi_init_queue);
 
diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
index d16414bfe2ef..f5449da6fcad 100644
--- a/drivers/scsi/scsi_transport_sas.c
+++ b/drivers/scsi/scsi_transport_sas.c
@@ -213,6 +213,11 @@ static void sas_host_release(struct device *dev)
 		blk_cleanup_queue(q);
 }
 
+static void sas_initialize_rq(struct request *rq)
+{
+	scsi_req_init(rq);
+}
+
 static int sas_bsg_initialize(struct Scsi_Host *shost, struct sas_rphy *rphy)
 {
 	struct request_queue *q;
@@ -230,6 +235,7 @@ static int sas_bsg_initialize(struct Scsi_Host *shost, struct sas_rphy *rphy)
 	q = blk_alloc_queue(GFP_KERNEL);
 	if (!q)
 		return -ENOMEM;
+	q->initialize_rq_fn = sas_initialize_rq;
 	q->cmd_size = sizeof(struct scsi_request);
 
 	if (rphy) {
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index 82c33a6edbea..c3215cec0c82 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -1732,8 +1732,6 @@ sg_start_req(Sg_request *srp, unsigned char *cmd)
 	}
 	req = scsi_req(rq);
 
-	scsi_req_init(rq);
-
 	if (hp->cmd_len > BLK_MAX_CDB)
 		req->cmd = long_cmdp;
 	memcpy(req->cmd, cmd, hp->cmd_len);
diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
index 1ea34d6f5437..dc4d2b9e15a0 100644
--- a/drivers/scsi/st.c
+++ b/drivers/scsi/st.c
@@ -549,7 +549,6 @@ static int st_scsi_execute(struct st_request *SRpnt, const unsigned char *cmd,
 	if (IS_ERR(req))
 		return DRIVER_ERROR << 24;
 	rq = scsi_req(req);
-	scsi_req_init(req);
 	req->rq_flags |= RQF_QUIET;
 
 	mdata->null_mapped = 1;
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index 3e4abb13f8ea..d4572639949f 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -992,8 +992,6 @@ pscsi_execute_cmd(struct se_cmd *cmd)
 		goto fail;
 	}
 
-	scsi_req_init(req);
-
 	if (sgl) {
 		ret = pscsi_map_sg(cmd, sgl, sgl_nents, req);
 		if (ret)
diff --git a/fs/nfsd/blocklayout.c b/fs/nfsd/blocklayout.c
index 47ed19c53f2e..c862c2489df0 100644
--- a/fs/nfsd/blocklayout.c
+++ b/fs/nfsd/blocklayout.c
@@ -232,7 +232,6 @@ static int nfsd4_scsi_identify_device(struct block_device *bdev,
 		goto out_free_buf;
 	}
 	req = scsi_req(rq);
-	scsi_req_init(rq);
 
 	error = blk_rq_map_kern(q, rq, buf, bufflen, GFP_KERNEL);
 	if (error)
-- 
2.12.2


* [PATCH v2 04/12] block: Change argument type of scsi_req_init()
  2017-05-31 22:52 [PATCH v2 00/12] More patches for kernel v4.13 Bart Van Assche
                   ` (2 preceding siblings ...)
  2017-05-31 22:52 ` [PATCH v2 03/12] block: Make most scsi_req_init() calls implicit Bart Van Assche
@ 2017-05-31 22:52 ` Bart Van Assche
  2017-05-31 22:52 ` [PATCH v2 05/12] blk-mq: Initialize a request before assigning a tag Bart Van Assche
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Christoph Hellwig, Bart Van Assche

Since scsi_req_init() operates on a struct scsi_request, change its
argument type to struct scsi_request *.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
---
 block/scsi_ioctl.c                | 10 +++++++---
 drivers/ide/ide-atapi.c           |  2 +-
 drivers/ide/ide-probe.c           |  2 +-
 drivers/scsi/scsi_lib.c           |  4 +++-
 drivers/scsi/scsi_transport_sas.c |  2 +-
 include/scsi/scsi_request.h       |  2 +-
 6 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c
index f96c51f5df40..7440de44dd85 100644
--- a/block/scsi_ioctl.c
+++ b/block/scsi_ioctl.c
@@ -741,10 +741,14 @@ int scsi_cmd_blk_ioctl(struct block_device *bd, fmode_t mode,
 }
 EXPORT_SYMBOL(scsi_cmd_blk_ioctl);
 
-void scsi_req_init(struct request *rq)
+/**
+ * scsi_req_init - initialize certain fields of a scsi_request structure
+ * @req: Pointer to a scsi_request structure.
+ * Initializes .__cmd[], .cmd, .cmd_len and .sense_len but no other members
+ * of struct scsi_request.
+ */
+void scsi_req_init(struct scsi_request *req)
 {
-	struct scsi_request *req = scsi_req(rq);
-
 	memset(req->__cmd, 0, sizeof(req->__cmd));
 	req->cmd = req->__cmd;
 	req->cmd_len = BLK_MAX_CDB;
diff --git a/drivers/ide/ide-atapi.c b/drivers/ide/ide-atapi.c
index 98e78b520417..5ffecef8b910 100644
--- a/drivers/ide/ide-atapi.c
+++ b/drivers/ide/ide-atapi.c
@@ -199,7 +199,7 @@ void ide_prep_sense(ide_drive_t *drive, struct request *rq)
 	memset(sense, 0, sizeof(*sense));
 
 	blk_rq_init(rq->q, sense_rq);
-	scsi_req_init(sense_rq);
+	scsi_req_init(req);
 
 	err = blk_rq_map_kern(drive->queue, sense_rq, sense, sense_len,
 			      GFP_NOIO);
diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
index c60e5ffc9231..01b2adfd8226 100644
--- a/drivers/ide/ide-probe.c
+++ b/drivers/ide/ide-probe.c
@@ -745,7 +745,7 @@ static void ide_initialize_rq(struct request *rq)
 {
 	struct ide_request *req = blk_mq_rq_to_pdu(rq);
 
-	scsi_req_init(rq);
+	scsi_req_init(&req->sreq);
 	req->sreq.sense = req->sense;
 }
 
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index e96ffd187558..b629d8cbf0d1 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1136,7 +1136,9 @@ EXPORT_SYMBOL(scsi_init_io);
 /* Called from inside blk_get_request() */
 static void scsi_initialize_rq(struct request *rq)
 {
-	scsi_req_init(rq);
+	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
+
+	scsi_req_init(&cmd->req);
 }
 
 /* Called after a request has been started. */
diff --git a/drivers/scsi/scsi_transport_sas.c b/drivers/scsi/scsi_transport_sas.c
index f5449da6fcad..35598905d785 100644
--- a/drivers/scsi/scsi_transport_sas.c
+++ b/drivers/scsi/scsi_transport_sas.c
@@ -215,7 +215,7 @@ static void sas_host_release(struct device *dev)
 
 static void sas_initialize_rq(struct request *rq)
 {
-	scsi_req_init(rq);
+	scsi_req_init(scsi_req(rq));
 }
 
 static int sas_bsg_initialize(struct Scsi_Host *shost, struct sas_rphy *rphy)
diff --git a/include/scsi/scsi_request.h b/include/scsi/scsi_request.h
index f0c76f9dc285..e0afa445ee4e 100644
--- a/include/scsi/scsi_request.h
+++ b/include/scsi/scsi_request.h
@@ -27,6 +27,6 @@ static inline void scsi_req_free_cmd(struct scsi_request *req)
 		kfree(req->cmd);
 }
 
-void scsi_req_init(struct request *);
+void scsi_req_init(struct scsi_request *req);
 
 #endif /* _SCSI_SCSI_REQUEST_H */
-- 
2.12.2

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v2 05/12] blk-mq: Initialize a request before assigning a tag
  2017-05-31 22:52 [PATCH v2 00/12] More patches for kernel v4.13 Bart Van Assche
                   ` (3 preceding siblings ...)
  2017-05-31 22:52 ` [PATCH v2 04/12] block: Change argument type of scsi_req_init() Bart Van Assche
@ 2017-05-31 22:52 ` Bart Van Assche
  2017-06-01  6:09   ` Christoph Hellwig
  2017-05-31 22:52 ` [PATCH v2 06/12] block: Add a comment above queue_lockdep_assert_held() Bart Van Assche
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
	Omar Sandoval, Ming Lei

Initialization of blk-mq requests is currently inconsistent:
blk_mq_rq_ctx_init() is called only after a value has already been
assigned to .rq_flags, and .rq_flags is cleared in
__blk_mq_finish_request() rather than during initialization. Hence call
blk_mq_rq_ctx_init() before modifying any other struct request member,
and initialize .rq_flags in blk_mq_rq_ctx_init() instead of relying on
__blk_mq_finish_request().
Moving the initialization of .rq_flags is fine because all changes
and tests of .rq_flags occur between blk_get_request() and finishing
a request.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9aa1754e938b..488c6ca2ad91 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -212,6 +212,7 @@ void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
 	rq->q = q;
 	rq->mq_ctx = ctx;
 	rq->cmd_flags = op;
+	rq->rq_flags = 0;
 	if (blk_queue_io_stat(q))
 		rq->rq_flags |= RQF_IO_STAT;
 	/* do not touch atomic flags, it needs atomic ops against the timer */
@@ -231,7 +232,7 @@ void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
 	rq->nr_integrity_segments = 0;
 #endif
 	rq->special = NULL;
-	/* tag was already set */
+	/* tag will be set by caller */
 	rq->extra_len = 0;
 
 	INIT_LIST_HEAD(&rq->timeout_list);
@@ -257,12 +258,14 @@ struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data,
 
 		rq = tags->static_rqs[tag];
 
+		blk_mq_rq_ctx_init(data->q, data->ctx, rq, op);
+
 		if (data->flags & BLK_MQ_REQ_INTERNAL) {
 			rq->tag = -1;
 			rq->internal_tag = tag;
 		} else {
 			if (blk_mq_tag_busy(data->hctx)) {
-				rq->rq_flags = RQF_MQ_INFLIGHT;
+				rq->rq_flags |= RQF_MQ_INFLIGHT;
 				atomic_inc(&data->hctx->nr_active);
 			}
 			rq->tag = tag;
@@ -270,7 +273,6 @@ struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data,
 			data->hctx->tags->rqs[rq->tag] = rq;
 		}
 
-		blk_mq_rq_ctx_init(data->q, data->ctx, rq, op);
 		return rq;
 	}
 
@@ -361,7 +363,6 @@ void __blk_mq_finish_request(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 		atomic_dec(&hctx->nr_active);
 
 	wbt_done(q->rq_wb, &rq->issue_stat);
-	rq->rq_flags = 0;
 
 	clear_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
 	clear_bit(REQ_ATOM_POLL_SLEPT, &rq->atomic_flags);
-- 
2.12.2


* [PATCH v2 06/12] block: Add a comment above queue_lockdep_assert_held()
  2017-05-31 22:52 [PATCH v2 00/12] More patches for kernel v4.13 Bart Van Assche
                   ` (4 preceding siblings ...)
  2017-05-31 22:52 ` [PATCH v2 05/12] blk-mq: Initialize a request before assigning a tag Bart Van Assche
@ 2017-05-31 22:52 ` Bart Van Assche
  2017-05-31 22:52 ` [PATCH v2 07/12] block: Check locking assumptions at runtime Bart Van Assche
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
	Omar Sandoval, Ming Lei

Add a comment above the queue_lockdep_assert_held() macro that
explains the purpose of the q->queue_lock test.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
 include/linux/blkdev.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cbc0028290e4..1e73b4df13a9 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -634,6 +634,13 @@ struct request_queue {
 				 (1 << QUEUE_FLAG_SAME_COMP)	|	\
 				 (1 << QUEUE_FLAG_POLL))
 
+/*
+ * @q->queue_lock is set while a queue is being initialized. Since we know
+ * that no other threads access the queue object before @q->queue_lock has
+ * been set, it is safe to manipulate queue flags without holding the
+ * queue_lock if @q->queue_lock == NULL. See also blk_alloc_queue_node() and
+ * blk_init_allocated_queue().
+ */
 static inline void queue_lockdep_assert_held(struct request_queue *q)
 {
 	if (q->queue_lock)
-- 
2.12.2


* [PATCH v2 07/12] block: Check locking assumptions at runtime
  2017-05-31 22:52 [PATCH v2 00/12] More patches for kernel v4.13 Bart Van Assche
                   ` (5 preceding siblings ...)
  2017-05-31 22:52 ` [PATCH v2 06/12] block: Add a comment above queue_lockdep_assert_held() Bart Van Assche
@ 2017-05-31 22:52 ` Bart Van Assche
  2017-06-01  6:09   ` Christoph Hellwig
  2017-05-31 22:52 ` [PATCH v2 08/12] block: Document what queue type each function is intended for Bart Van Assche
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
	Omar Sandoval, Ming Lei

Instead of documenting the locking assumptions of most block layer
functions in comments, use lockdep_assert_held() to verify these
assumptions at runtime.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c    | 71 +++++++++++++++++++++++++++++++++++------------------
 block/blk-flush.c   |  8 +++---
 block/blk-merge.c   |  3 +++
 block/blk-tag.c     | 15 +++++------
 block/blk-timeout.c |  4 ++-
 5 files changed, 64 insertions(+), 37 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 3f68bc1f044c..f3ad963eccdd 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -177,10 +177,12 @@ static void blk_delay_work(struct work_struct *work)
  * Description:
  *   Sometimes queueing needs to be postponed for a little while, to allow
  *   resources to come back. This function will make sure that queueing is
- *   restarted around the specified time. Queue lock must be held.
+ *   restarted around the specified time.
  */
 void blk_delay_queue(struct request_queue *q, unsigned long msecs)
 {
+	lockdep_assert_held(q->queue_lock);
+
 	if (likely(!blk_queue_dead(q)))
 		queue_delayed_work(kblockd_workqueue, &q->delay_work,
 				   msecs_to_jiffies(msecs));
@@ -198,6 +200,8 @@ EXPORT_SYMBOL(blk_delay_queue);
  **/
 void blk_start_queue_async(struct request_queue *q)
 {
+	lockdep_assert_held(q->queue_lock);
+
 	queue_flag_clear(QUEUE_FLAG_STOPPED, q);
 	blk_run_queue_async(q);
 }
@@ -210,10 +214,11 @@ EXPORT_SYMBOL(blk_start_queue_async);
  * Description:
  *   blk_start_queue() will clear the stop flag on the queue, and call
  *   the request_fn for the queue if it was in a stopped state when
- *   entered. Also see blk_stop_queue(). Queue lock must be held.
+ *   entered. Also see blk_stop_queue().
  **/
 void blk_start_queue(struct request_queue *q)
 {
+	lockdep_assert_held(q->queue_lock);
 	WARN_ON(!irqs_disabled());
 
 	queue_flag_clear(QUEUE_FLAG_STOPPED, q);
@@ -233,10 +238,12 @@ EXPORT_SYMBOL(blk_start_queue);
  *   or if it simply chooses not to queue more I/O at one point, it can
  *   call this function to prevent the request_fn from being called until
  *   the driver has signalled it's ready to go again. This happens by calling
- *   blk_start_queue() to restart queue operations. Queue lock must be held.
+ *   blk_start_queue() to restart queue operations.
  **/
 void blk_stop_queue(struct request_queue *q)
 {
+	lockdep_assert_held(q->queue_lock);
+
 	cancel_delayed_work(&q->delay_work);
 	queue_flag_set(QUEUE_FLAG_STOPPED, q);
 }
@@ -289,6 +296,8 @@ EXPORT_SYMBOL(blk_sync_queue);
  */
 inline void __blk_run_queue_uncond(struct request_queue *q)
 {
+	lockdep_assert_held(q->queue_lock);
+
 	if (unlikely(blk_queue_dead(q)))
 		return;
 
@@ -310,11 +319,12 @@ EXPORT_SYMBOL_GPL(__blk_run_queue_uncond);
  * @q:	The queue to run
  *
  * Description:
- *    See @blk_run_queue. This variant must be called with the queue lock
- *    held and interrupts disabled.
+ *    See @blk_run_queue.
  */
 void __blk_run_queue(struct request_queue *q)
 {
+	lockdep_assert_held(q->queue_lock);
+
 	if (unlikely(blk_queue_stopped(q)))
 		return;
 
@@ -328,10 +338,17 @@ EXPORT_SYMBOL(__blk_run_queue);
  *
  * Description:
  *    Tells kblockd to perform the equivalent of @blk_run_queue on behalf
- *    of us. The caller must hold the queue lock.
+ *    of us.
+ *
+ * Note:
+ *    Since it is not allowed to run q->delay_work after blk_cleanup_queue()
+ *    has canceled q->delay_work, callers must hold the queue lock to avoid
+ *    race conditions between blk_cleanup_queue() and blk_run_queue_async().
  */
 void blk_run_queue_async(struct request_queue *q)
 {
+	lockdep_assert_held(q->queue_lock);
+
 	if (likely(!blk_queue_stopped(q) && !blk_queue_dead(q)))
 		mod_delayed_work(kblockd_workqueue, &q->delay_work, 0);
 }
@@ -1077,6 +1094,8 @@ static struct request *__get_request(struct request_list *rl, unsigned int op,
 	int may_queue;
 	req_flags_t rq_flags = RQF_ALLOCED;
 
+	lockdep_assert_held(q->queue_lock);
+
 	if (unlikely(blk_queue_dying(q)))
 		return ERR_PTR(-ENODEV);
 
@@ -1250,6 +1269,8 @@ static struct request *get_request(struct request_queue *q, unsigned int op,
 	struct request_list *rl;
 	struct request *rq;
 
+	lockdep_assert_held(q->queue_lock);
+
 	rl = blk_get_rl(q, bio);	/* transferred to @rq on success */
 retry:
 	rq = __get_request(rl, op, bio, gfp_mask);
@@ -1336,6 +1357,8 @@ EXPORT_SYMBOL(blk_get_request);
  */
 void blk_requeue_request(struct request_queue *q, struct request *rq)
 {
+	lockdep_assert_held(q->queue_lock);
+
 	blk_delete_timer(rq);
 	blk_clear_rq_complete(rq);
 	trace_block_rq_requeue(q, rq);
@@ -1410,9 +1433,6 @@ static void blk_pm_put_request(struct request *rq)
 static inline void blk_pm_put_request(struct request *rq) {}
 #endif
 
-/*
- * queue lock must be held
- */
 void __blk_put_request(struct request_queue *q, struct request *req)
 {
 	req_flags_t rq_flags = req->rq_flags;
@@ -1425,6 +1445,8 @@ void __blk_put_request(struct request_queue *q, struct request *req)
 		return;
 	}
 
+	lockdep_assert_held(q->queue_lock);
+
 	blk_pm_put_request(req);
 
 	elv_completed_request(q, req);
@@ -2246,9 +2268,6 @@ EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
  *
  * Return:
  *     The number of bytes to fail.
- *
- * Context:
- *     queue_lock must be held.
  */
 unsigned int blk_rq_err_bytes(const struct request *rq)
 {
@@ -2388,15 +2407,14 @@ void blk_account_io_start(struct request *rq, bool new_io)
  * Return:
  *     Pointer to the request at the top of @q if available.  Null
  *     otherwise.
- *
- * Context:
- *     queue_lock must be held.
  */
 struct request *blk_peek_request(struct request_queue *q)
 {
 	struct request *rq;
 	int ret;
 
+	lockdep_assert_held(q->queue_lock);
+
 	while ((rq = __elv_next_request(q)) != NULL) {
 
 		rq = blk_pm_peek_request(q, rq);
@@ -2513,12 +2531,11 @@ void blk_dequeue_request(struct request *rq)
  *
  *     Block internal functions which don't want to start timer should
  *     call blk_dequeue_request().
- *
- * Context:
- *     queue_lock must be held.
  */
 void blk_start_request(struct request *req)
 {
+	lockdep_assert_held(req->q->queue_lock);
+
 	blk_dequeue_request(req);
 
 	if (test_bit(QUEUE_FLAG_STATS, &req->q->queue_flags)) {
@@ -2543,14 +2560,13 @@ EXPORT_SYMBOL(blk_start_request);
  * Return:
  *     Pointer to the request at the top of @q if available.  Null
  *     otherwise.
- *
- * Context:
- *     queue_lock must be held.
  */
 struct request *blk_fetch_request(struct request_queue *q)
 {
 	struct request *rq;
 
+	lockdep_assert_held(q->queue_lock);
+
 	rq = blk_peek_request(q);
 	if (rq)
 		blk_start_request(rq);
@@ -2726,13 +2742,12 @@ void blk_unprep_request(struct request *req)
 }
 EXPORT_SYMBOL_GPL(blk_unprep_request);
 
-/*
- * queue lock must be held
- */
 void blk_finish_request(struct request *req, int error)
 {
 	struct request_queue *q = req->q;
 
+	lockdep_assert_held(req->q->queue_lock);
+
 	if (req->rq_flags & RQF_STATS)
 		blk_stat_add(req);
 
@@ -2814,6 +2829,8 @@ static bool blk_end_bidi_request(struct request *rq, int error,
 static bool __blk_end_bidi_request(struct request *rq, int error,
 				   unsigned int nr_bytes, unsigned int bidi_bytes)
 {
+	lockdep_assert_held(rq->q->queue_lock);
+
 	if (blk_update_bidi_request(rq, error, nr_bytes, bidi_bytes))
 		return true;
 
@@ -2878,6 +2895,8 @@ EXPORT_SYMBOL(blk_end_request_all);
  **/
 bool __blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
 {
+	lockdep_assert_held(rq->q->queue_lock);
+
 	return __blk_end_bidi_request(rq, error, nr_bytes, 0);
 }
 EXPORT_SYMBOL(__blk_end_request);
@@ -2895,6 +2914,8 @@ void __blk_end_request_all(struct request *rq, int error)
 	bool pending;
 	unsigned int bidi_bytes = 0;
 
+	lockdep_assert_held(rq->q->queue_lock);
+
 	if (unlikely(blk_bidi_rq(rq)))
 		bidi_bytes = blk_rq_bytes(rq->next_rq);
 
@@ -3159,6 +3180,8 @@ static void queue_unplugged(struct request_queue *q, unsigned int depth,
 			    bool from_schedule)
 	__releases(q->queue_lock)
 {
+	lockdep_assert_held(q->queue_lock);
+
 	trace_block_unplug(q, depth, !from_schedule);
 
 	if (from_schedule)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index c4e0880b54bb..610c35bd9eeb 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -346,6 +346,8 @@ static void flush_data_end_io(struct request *rq, int error)
 	struct request_queue *q = rq->q;
 	struct blk_flush_queue *fq = blk_get_flush_queue(q, NULL);
 
+	lockdep_assert_held(q->queue_lock);
+
 	/*
 	 * Updating q->in_flight[] here for making this tag usable
 	 * early. Because in blk_queue_start_tag(),
@@ -411,9 +413,6 @@ static void mq_flush_data_end_io(struct request *rq, int error)
  * or __blk_mq_run_hw_queue() to dispatch request.
  * @rq is being submitted.  Analyze what needs to be done and put it on the
  * right queue.
- *
- * CONTEXT:
- * spin_lock_irq(q->queue_lock) in !mq case
  */
 void blk_insert_flush(struct request *rq)
 {
@@ -422,6 +421,9 @@ void blk_insert_flush(struct request *rq)
 	unsigned int policy = blk_flush_policy(fflags, rq);
 	struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
 
+	if (!q->mq_ops)
+		lockdep_assert_held(q->queue_lock);
+
 	/*
 	 * @policy now records what operations need to be done.  Adjust
 	 * REQ_PREFLUSH and FUA for the driver.
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 3990ae406341..573db663d53f 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -671,6 +671,9 @@ static void blk_account_io_merge(struct request *req)
 static struct request *attempt_merge(struct request_queue *q,
 				     struct request *req, struct request *next)
 {
+	if (!q->mq_ops)
+		lockdep_assert_held(q->queue_lock);
+
 	if (!rq_mergeable(req) || !rq_mergeable(next))
 		return NULL;
 
diff --git a/block/blk-tag.c b/block/blk-tag.c
index 07cc329fa4b0..2290f65b9d73 100644
--- a/block/blk-tag.c
+++ b/block/blk-tag.c
@@ -258,15 +258,14 @@ EXPORT_SYMBOL(blk_queue_resize_tags);
  *    all transfers have been done for a request. It's important to call
  *    this function before end_that_request_last(), as that will put the
  *    request back on the free list thus corrupting the internal tag list.
- *
- *  Notes:
- *   queue lock must be held.
  **/
 void blk_queue_end_tag(struct request_queue *q, struct request *rq)
 {
 	struct blk_queue_tag *bqt = q->queue_tags;
 	unsigned tag = rq->tag; /* negative tags invalid */
 
+	lockdep_assert_held(q->queue_lock);
+
 	BUG_ON(tag >= bqt->real_max_depth);
 
 	list_del_init(&rq->queuelist);
@@ -307,9 +306,6 @@ EXPORT_SYMBOL(blk_queue_end_tag);
  *    calling this function.  The request will also be removed from
  *    the request queue, so it's the drivers responsibility to readd
  *    it if it should need to be restarted for some reason.
- *
- *  Notes:
- *   queue lock must be held.
  **/
 int blk_queue_start_tag(struct request_queue *q, struct request *rq)
 {
@@ -317,6 +313,8 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq)
 	unsigned max_depth;
 	int tag;
 
+	lockdep_assert_held(q->queue_lock);
+
 	if (unlikely((rq->rq_flags & RQF_QUEUED))) {
 		printk(KERN_ERR
 		       "%s: request %p for device [%s] already tagged %d",
@@ -389,14 +387,13 @@ EXPORT_SYMBOL(blk_queue_start_tag);
  *   Hardware conditions may dictate a need to stop all pending requests.
  *   In this case, we will safely clear the block side of the tag queue and
  *   readd all requests to the request queue in the right order.
- *
- *  Notes:
- *   queue lock must be held.
  **/
 void blk_queue_invalidate_tags(struct request_queue *q)
 {
 	struct list_head *tmp, *n;
 
+	lockdep_assert_held(q->queue_lock);
+
 	list_for_each_safe(tmp, n, &q->tag_busy_list)
 		blk_requeue_request(q, list_entry_rq(tmp));
 }
diff --git a/block/blk-timeout.c b/block/blk-timeout.c
index cbff183f3d9f..17ec83bb0900 100644
--- a/block/blk-timeout.c
+++ b/block/blk-timeout.c
@@ -189,13 +189,15 @@ unsigned long blk_rq_timeout(unsigned long timeout)
  * Notes:
  *    Each request has its own timer, and as it is added to the queue, we
  *    set up the timer. When the request completes, we cancel the timer.
- *    Queue lock must be held for the non-mq case, mq case doesn't care.
  */
 void blk_add_timer(struct request *req)
 {
 	struct request_queue *q = req->q;
 	unsigned long expiry;
 
+	if (!q->mq_ops)
+		lockdep_assert_held(q->queue_lock);
+
 	/* blk-mq has its own handler, so we don't need ->rq_timed_out_fn */
 	if (!q->mq_ops && !q->rq_timed_out_fn)
 		return;
-- 
2.12.2


* [PATCH v2 08/12] block: Document what queue type each function is intended for
  2017-05-31 22:52 [PATCH v2 00/12] More patches for kernel v4.13 Bart Van Assche
                   ` (6 preceding siblings ...)
  2017-05-31 22:52 ` [PATCH v2 07/12] block: Check locking assumptions at runtime Bart Van Assche
@ 2017-05-31 22:52 ` Bart Van Assche
  2017-06-01  6:10   ` Christoph Hellwig
  2017-05-31 22:52 ` [PATCH v2 09/12] blk-mq: Document locking assumptions Bart Van Assche
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
	Omar Sandoval, Ming Lei

Some functions in block/blk-core.c must only be used on blk-sq queues,
while others are safe to use with any queue type. Document which
functions are intended for blk-sq queues and issue a warning if the
blk-sq API is misused.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/block/blk-core.c b/block/blk-core.c
index f3ad963eccdd..4689c20943fb 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -182,6 +182,7 @@ static void blk_delay_work(struct work_struct *work)
 void blk_delay_queue(struct request_queue *q, unsigned long msecs)
 {
 	lockdep_assert_held(q->queue_lock);
+	WARN_ON_ONCE(q->mq_ops);
 
 	if (likely(!blk_queue_dead(q)))
 		queue_delayed_work(kblockd_workqueue, &q->delay_work,
@@ -201,6 +202,7 @@ EXPORT_SYMBOL(blk_delay_queue);
 void blk_start_queue_async(struct request_queue *q)
 {
 	lockdep_assert_held(q->queue_lock);
+	WARN_ON_ONCE(q->mq_ops);
 
 	queue_flag_clear(QUEUE_FLAG_STOPPED, q);
 	blk_run_queue_async(q);
@@ -220,6 +222,7 @@ void blk_start_queue(struct request_queue *q)
 {
 	lockdep_assert_held(q->queue_lock);
 	WARN_ON(!irqs_disabled());
+	WARN_ON_ONCE(q->mq_ops);
 
 	queue_flag_clear(QUEUE_FLAG_STOPPED, q);
 	__blk_run_queue(q);
@@ -243,6 +246,7 @@ EXPORT_SYMBOL(blk_start_queue);
 void blk_stop_queue(struct request_queue *q)
 {
 	lockdep_assert_held(q->queue_lock);
+	WARN_ON_ONCE(q->mq_ops);
 
 	cancel_delayed_work(&q->delay_work);
 	queue_flag_set(QUEUE_FLAG_STOPPED, q);
@@ -297,6 +301,7 @@ EXPORT_SYMBOL(blk_sync_queue);
 inline void __blk_run_queue_uncond(struct request_queue *q)
 {
 	lockdep_assert_held(q->queue_lock);
+	WARN_ON_ONCE(q->mq_ops);
 
 	if (unlikely(blk_queue_dead(q)))
 		return;
@@ -324,6 +329,7 @@ EXPORT_SYMBOL_GPL(__blk_run_queue_uncond);
 void __blk_run_queue(struct request_queue *q)
 {
 	lockdep_assert_held(q->queue_lock);
+	WARN_ON_ONCE(q->mq_ops);
 
 	if (unlikely(blk_queue_stopped(q)))
 		return;
@@ -348,6 +354,7 @@ EXPORT_SYMBOL(__blk_run_queue);
 void blk_run_queue_async(struct request_queue *q)
 {
 	lockdep_assert_held(q->queue_lock);
+	WARN_ON_ONCE(q->mq_ops);
 
 	if (likely(!blk_queue_stopped(q) && !blk_queue_dead(q)))
 		mod_delayed_work(kblockd_workqueue, &q->delay_work, 0);
@@ -366,6 +373,8 @@ void blk_run_queue(struct request_queue *q)
 {
 	unsigned long flags;
 
+	WARN_ON_ONCE(q->mq_ops);
+
 	spin_lock_irqsave(q->queue_lock, flags);
 	__blk_run_queue(q);
 	spin_unlock_irqrestore(q->queue_lock, flags);
@@ -394,6 +403,7 @@ static void __blk_drain_queue(struct request_queue *q, bool drain_all)
 	int i;
 
 	lockdep_assert_held(q->queue_lock);
+	WARN_ON_ONCE(q->mq_ops);
 
 	while (true) {
 		bool drain = false;
@@ -472,6 +482,8 @@ static void __blk_drain_queue(struct request_queue *q, bool drain_all)
  */
 void blk_queue_bypass_start(struct request_queue *q)
 {
+	WARN_ON_ONCE(q->mq_ops);
+
 	spin_lock_irq(q->queue_lock);
 	q->bypass_depth++;
 	queue_flag_set(QUEUE_FLAG_BYPASS, q);
@@ -498,6 +510,9 @@ EXPORT_SYMBOL_GPL(blk_queue_bypass_start);
  * @q: queue of interest
  *
  * Leave bypass mode and restore the normal queueing behavior.
+ *
+ * Note: although blk_queue_bypass_start() is only called for blk-sq queues,
+ * this function is called for both blk-sq and blk-mq queues.
  */
 void blk_queue_bypass_end(struct request_queue *q)
 {
@@ -895,6 +910,8 @@ static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio);
 
 int blk_init_allocated_queue(struct request_queue *q)
 {
+	WARN_ON_ONCE(q->mq_ops);
+
 	q->fq = blk_alloc_flush_queue(q, NUMA_NO_NODE, q->cmd_size);
 	if (!q->fq)
 		return -ENOMEM;
@@ -1032,6 +1049,8 @@ int blk_update_nr_requests(struct request_queue *q, unsigned int nr)
 	struct request_list *rl;
 	int on_thresh, off_thresh;
 
+	WARN_ON_ONCE(q->mq_ops);
+
 	spin_lock_irq(q->queue_lock);
 	q->nr_requests = nr;
 	blk_queue_congestion_threshold(q);
@@ -1270,6 +1289,7 @@ static struct request *get_request(struct request_queue *q, unsigned int op,
 	struct request *rq;
 
 	lockdep_assert_held(q->queue_lock);
+	WARN_ON_ONCE(q->mq_ops);
 
 	rl = blk_get_rl(q, bio);	/* transferred to @rq on success */
 retry:
@@ -1309,6 +1329,8 @@ static struct request *blk_old_get_request(struct request_queue *q,
 {
 	struct request *rq;
 
+	WARN_ON_ONCE(q->mq_ops);
+
 	/* create ioc upfront */
 	create_io_context(gfp_mask, q->node);
 
@@ -1358,6 +1380,7 @@ EXPORT_SYMBOL(blk_get_request);
 void blk_requeue_request(struct request_queue *q, struct request *rq)
 {
 	lockdep_assert_held(q->queue_lock);
+	WARN_ON_ONCE(q->mq_ops);
 
 	blk_delete_timer(rq);
 	blk_clear_rq_complete(rq);
@@ -2414,6 +2437,7 @@ struct request *blk_peek_request(struct request_queue *q)
 	int ret;
 
 	lockdep_assert_held(q->queue_lock);
+	WARN_ON_ONCE(q->mq_ops);
 
 	while ((rq = __elv_next_request(q)) != NULL) {
 
@@ -2535,6 +2559,7 @@ void blk_dequeue_request(struct request *rq)
 void blk_start_request(struct request *req)
 {
 	lockdep_assert_held(req->q->queue_lock);
+	WARN_ON_ONCE(req->q->mq_ops);
 
 	blk_dequeue_request(req);
 
@@ -2566,6 +2591,7 @@ struct request *blk_fetch_request(struct request_queue *q)
 	struct request *rq;
 
 	lockdep_assert_held(q->queue_lock);
+	WARN_ON_ONCE(q->mq_ops);
 
 	rq = blk_peek_request(q);
 	if (rq)
@@ -2747,6 +2773,7 @@ void blk_finish_request(struct request *req, int error)
 	struct request_queue *q = req->q;
 
 	lockdep_assert_held(req->q->queue_lock);
+	WARN_ON_ONCE(q->mq_ops);
 
 	if (req->rq_flags & RQF_STATS)
 		blk_stat_add(req);
@@ -2801,6 +2828,8 @@ static bool blk_end_bidi_request(struct request *rq, int error,
 	struct request_queue *q = rq->q;
 	unsigned long flags;
 
+	WARN_ON_ONCE(q->mq_ops);
+
 	if (blk_update_bidi_request(rq, error, nr_bytes, bidi_bytes))
 		return true;
 
@@ -2830,6 +2859,7 @@ static bool __blk_end_bidi_request(struct request *rq, int error,
 				   unsigned int nr_bytes, unsigned int bidi_bytes)
 {
 	lockdep_assert_held(rq->q->queue_lock);
+	WARN_ON_ONCE(rq->q->mq_ops);
 
 	if (blk_update_bidi_request(rq, error, nr_bytes, bidi_bytes))
 		return true;
@@ -2855,6 +2885,7 @@ static bool __blk_end_bidi_request(struct request *rq, int error,
  **/
 bool blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
 {
+	WARN_ON_ONCE(rq->q->mq_ops);
 	return blk_end_bidi_request(rq, error, nr_bytes, 0);
 }
 EXPORT_SYMBOL(blk_end_request);
@@ -2896,6 +2927,7 @@ EXPORT_SYMBOL(blk_end_request_all);
 bool __blk_end_request(struct request *rq, int error, unsigned int nr_bytes)
 {
 	lockdep_assert_held(rq->q->queue_lock);
+	WARN_ON_ONCE(rq->q->mq_ops);
 
 	return __blk_end_bidi_request(rq, error, nr_bytes, 0);
 }
@@ -2915,6 +2947,7 @@ void __blk_end_request_all(struct request *rq, int error)
 	unsigned int bidi_bytes = 0;
 
 	lockdep_assert_held(rq->q->queue_lock);
+	WARN_ON_ONCE(rq->q->mq_ops);
 
 	if (unlikely(blk_bidi_rq(rq)))
 		bidi_bytes = blk_rq_bytes(rq->next_rq);
-- 
2.12.2


* [PATCH v2 09/12] blk-mq: Document locking assumptions
  2017-05-31 22:52 [PATCH v2 00/12] More patches for kernel v4.13 Bart Van Assche
                   ` (7 preceding siblings ...)
  2017-05-31 22:52 ` [PATCH v2 08/12] block: Document what queue type each function is intended for Bart Van Assche
@ 2017-05-31 22:52 ` Bart Van Assche
  2017-06-01  6:11   ` Christoph Hellwig
  2017-05-31 22:52 ` [PATCH v2 10/12] block: Constify disk_type Bart Van Assche
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
	Omar Sandoval, Ming Lei

Document the locking assumptions in functions that modify
blk_mq_ctx.rq_list to make it easier for humans to verify
this code.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq-sched.c | 2 ++
 block/blk-mq.c       | 4 ++++
 2 files changed, 6 insertions(+)

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index c4e2afb9d12d..88aa460b2e8a 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -232,6 +232,8 @@ static bool blk_mq_attempt_merge(struct request_queue *q,
 	struct request *rq;
 	int checked = 8;
 
+	lockdep_assert_held(&ctx->lock);
+
 	list_for_each_entry_reverse(rq, &ctx->rq_list, queuelist) {
 		bool merged = false;
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 488c6ca2ad91..b56cb3d9060f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1274,6 +1274,8 @@ static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
 {
 	struct blk_mq_ctx *ctx = rq->mq_ctx;
 
+	lockdep_assert_held(&ctx->lock);
+
 	trace_block_rq_insert(hctx->queue, rq);
 
 	if (at_head)
@@ -1287,6 +1289,8 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 {
 	struct blk_mq_ctx *ctx = rq->mq_ctx;
 
+	lockdep_assert_held(&ctx->lock);
+
 	__blk_mq_insert_req_list(hctx, rq, at_head);
 	blk_mq_hctx_mark_pending(hctx, ctx);
 }
-- 
2.12.2


* [PATCH v2 10/12] block: Constify disk_type
  2017-05-31 22:52 [PATCH v2 00/12] More patches for kernel v4.13 Bart Van Assche
                   ` (8 preceding siblings ...)
  2017-05-31 22:52 ` [PATCH v2 09/12] blk-mq: Document locking assumptions Bart Van Assche
@ 2017-05-31 22:52 ` Bart Van Assche
  2017-05-31 22:52 ` [PATCH v2 11/12] blk-mq: Warn when attempting to run a hardware queue that is not mapped Bart Van Assche
  2017-05-31 22:52 ` [PATCH v2 12/12] block: Rename blk_mq_rq_{to,from}_pdu() Bart Van Assche
  11 siblings, 0 replies; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
	Omar Sandoval, Ming Lei

The variable 'disk_type' is never modified, so constify it.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
 block/genhd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/genhd.c b/block/genhd.c
index d252d29fe837..7f520fa25d16 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -36,7 +36,7 @@ struct kobject *block_depr;
 static DEFINE_SPINLOCK(ext_devt_lock);
 static DEFINE_IDR(ext_devt_idr);
 
-static struct device_type disk_type;
+static const struct device_type disk_type;
 
 static void disk_check_events(struct disk_events *ev,
 			      unsigned int *clearing_ptr);
@@ -1183,7 +1183,7 @@ static char *block_devnode(struct device *dev, umode_t *mode,
 	return NULL;
 }
 
-static struct device_type disk_type = {
+static const struct device_type disk_type = {
 	.name		= "disk",
 	.groups		= disk_attr_groups,
 	.release	= disk_release,
-- 
2.12.2

* [PATCH v2 11/12] blk-mq: Warn when attempting to run a hardware queue that is not mapped
  2017-05-31 22:52 [PATCH v2 00/12] More patches for kernel v4.13 Bart Van Assche
                   ` (9 preceding siblings ...)
  2017-05-31 22:52 ` [PATCH v2 10/12] block: Constify disk_type Bart Van Assche
@ 2017-05-31 22:52 ` Bart Van Assche
  2017-06-01  6:11   ` Christoph Hellwig
  2017-05-31 22:52 ` [PATCH v2 12/12] block: Rename blk_mq_rq_{to,from}_pdu() Bart Van Assche
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
	Omar Sandoval, Ming Lei

A queue must be frozen while the mapped state of a hardware queue
is changed. Additionally, any change of the mapped state is
followed by a call to blk_mq_map_swqueue() (see also
blk_mq_init_allocated_queue() and blk_mq_update_nr_hw_queues()).
Since blk_mq_map_swqueue() does not map any unmapped hardware
queue onto any software queue, no attempt will be made to run
an unmapped hardware queue. Hence issue a warning upon attempts
to run an unmapped hardware queue.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index b56cb3d9060f..12a627743842 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1091,8 +1091,9 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
 static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
 					unsigned long msecs)
 {
-	if (unlikely(blk_mq_hctx_stopped(hctx) ||
-		     !blk_mq_hw_queue_mapped(hctx)))
+	WARN_ON_ONCE(!blk_mq_hw_queue_mapped(hctx));
+
+	if (unlikely(blk_mq_hctx_stopped(hctx)))
 		return;
 
 	if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) {
@@ -1252,7 +1253,7 @@ static void blk_mq_run_work_fn(struct work_struct *work)
 
 void blk_mq_delay_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
 {
-	if (unlikely(!blk_mq_hw_queue_mapped(hctx)))
+	if (WARN_ON_ONCE(!blk_mq_hw_queue_mapped(hctx)))
 		return;
 
 	/*
-- 
2.12.2

* [PATCH v2 12/12] block: Rename blk_mq_rq_{to,from}_pdu()
  2017-05-31 22:52 [PATCH v2 00/12] More patches for kernel v4.13 Bart Van Assche
                   ` (10 preceding siblings ...)
  2017-05-31 22:52 ` [PATCH v2 11/12] blk-mq: Warn when attempting to run a hardware queue that is not mapped Bart Van Assche
@ 2017-05-31 22:52 ` Bart Van Assche
  2017-06-01  6:08   ` Christoph Hellwig
  11 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2017-05-31 22:52 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Christoph Hellwig, Bart Van Assche, Hannes Reinecke,
	Omar Sandoval

Commit 6d247d7f71d1 ("block: allow specifying size for extra command
data") added support for .cmd_size to blk-sq. Due to that patch the
blk_mq_rq_{to,from}_pdu() functions are also useful for single-queue
block drivers. Hence remove "_mq" from the name of these functions.
This patch does not change any functionality. Most of this patch has
been generated by running the following shell command:

    sed -i 's/blk_mq_rq_to_pdu/blk_rq_to_pdu/g;
            s/blk_mq_rq_from_pdu/blk_rq_from_pdu/g' \
        $(git grep -lE 'blk_mq_rq_(to|from)_pdu')

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Omar Sandoval <osandov@fb.com>
---
 drivers/block/loop.c              |  8 ++++----
 drivers/block/mtip32xx/mtip32xx.c | 28 ++++++++++++++--------------
 drivers/block/nbd.c               | 18 +++++++++---------
 drivers/block/null_blk.c          |  4 ++--
 drivers/block/rbd.c               |  6 +++---
 drivers/block/virtio_blk.c        | 12 ++++++------
 drivers/block/xen-blkfront.c      |  2 +-
 drivers/ide/ide-probe.c           |  2 +-
 drivers/md/dm-rq.c                |  6 +++---
 drivers/mtd/ubi/block.c           |  8 ++++----
 drivers/nvme/host/fc.c            | 20 ++++++++++----------
 drivers/nvme/host/nvme.h          |  2 +-
 drivers/nvme/host/pci.c           | 22 +++++++++++-----------
 drivers/nvme/host/rdma.c          | 18 +++++++++---------
 drivers/nvme/target/loop.c        | 10 +++++-----
 drivers/scsi/scsi_lib.c           | 18 +++++++++---------
 include/linux/blk-mq.h            | 13 -------------
 include/linux/blkdev.h            | 13 +++++++++++++
 include/linux/ide.h               |  2 +-
 include/scsi/scsi_request.h       |  2 +-
 20 files changed, 107 insertions(+), 107 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 28d932906f24..42e18601daa2 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -447,7 +447,7 @@ static int lo_req_flush(struct loop_device *lo, struct request *rq)
 
 static void lo_complete_rq(struct request *rq)
 {
-	struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct loop_cmd *cmd = blk_rq_to_pdu(rq);
 
 	if (unlikely(req_op(cmd->rq) == REQ_OP_READ && cmd->use_aio &&
 		     cmd->ret >= 0 && cmd->ret < blk_rq_bytes(cmd->rq))) {
@@ -507,7 +507,7 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
 
 static int do_req_filebacked(struct loop_device *lo, struct request *rq)
 {
-	struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct loop_cmd *cmd = blk_rq_to_pdu(rq);
 	loff_t pos = ((loff_t) blk_rq_pos(rq) << 9) + lo->lo_offset;
 
 	/*
@@ -1645,7 +1645,7 @@ EXPORT_SYMBOL(loop_unregister_transfer);
 static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 		const struct blk_mq_queue_data *bd)
 {
-	struct loop_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);
+	struct loop_cmd *cmd = blk_rq_to_pdu(bd->rq);
 	struct loop_device *lo = cmd->rq->q->queuedata;
 
 	blk_mq_start_request(bd->rq);
@@ -1700,7 +1700,7 @@ static void loop_queue_work(struct kthread_work *work)
 static int loop_init_request(struct blk_mq_tag_set *set, struct request *rq,
 		unsigned int hctx_idx, unsigned int numa_node)
 {
-	struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct loop_cmd *cmd = blk_rq_to_pdu(rq);
 
 	cmd->rq = rq;
 	kthread_init_work(&cmd->work, loop_queue_work);
diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index 3a779a4f5653..7b58a5a16324 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -173,7 +173,7 @@ static bool mtip_check_surprise_removal(struct pci_dev *pdev)
 static void mtip_init_cmd_header(struct request *rq)
 {
 	struct driver_data *dd = rq->q->queuedata;
-	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct mtip_cmd *cmd = blk_rq_to_pdu(rq);
 	u32 host_cap_64 = readl(dd->mmio + HOST_CAP) & HOST_CAP_64;
 
 	/* Point the command headers at the command tables. */
@@ -202,7 +202,7 @@ static struct mtip_cmd *mtip_get_int_command(struct driver_data *dd)
 	/* Internal cmd isn't submitted via .queue_rq */
 	mtip_init_cmd_header(rq);
 
-	return blk_mq_rq_to_pdu(rq);
+	return blk_rq_to_pdu(rq);
 }
 
 static struct mtip_cmd *mtip_cmd_from_tag(struct driver_data *dd,
@@ -210,7 +210,7 @@ static struct mtip_cmd *mtip_cmd_from_tag(struct driver_data *dd,
 {
 	struct blk_mq_hw_ctx *hctx = dd->queue->queue_hw_ctx[0];
 
-	return blk_mq_rq_to_pdu(blk_mq_tag_to_rq(hctx->tags, tag));
+	return blk_rq_to_pdu(blk_mq_tag_to_rq(hctx->tags, tag));
 }
 
 /*
@@ -534,7 +534,7 @@ static int mtip_get_smart_attr(struct mtip_port *port, unsigned int id,
 
 static void mtip_complete_command(struct mtip_cmd *cmd, int status)
 {
-	struct request *req = blk_mq_rq_from_pdu(cmd);
+	struct request *req = blk_rq_from_pdu(cmd);
 
 	cmd->status = status;
 	blk_mq_complete_request(req);
@@ -1033,7 +1033,7 @@ static int mtip_exec_internal_command(struct mtip_port *port,
 		dbg_printk(MTIP_DRV_NAME "Unable to allocate tag for PIO cmd\n");
 		return -EFAULT;
 	}
-	rq = blk_mq_rq_from_pdu(int_cmd);
+	rq = blk_rq_from_pdu(int_cmd);
 	rq->special = &icmd;
 
 	set_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags);
@@ -2731,7 +2731,7 @@ static int mtip_ftl_rebuild_poll(struct driver_data *dd)
 
 static void mtip_softirq_done_fn(struct request *rq)
 {
-	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct mtip_cmd *cmd = blk_rq_to_pdu(rq);
 	struct driver_data *dd = rq->q->queuedata;
 
 	/* Unmap the DMA scatter list entries */
@@ -2747,7 +2747,7 @@ static void mtip_softirq_done_fn(struct request *rq)
 static void mtip_abort_cmd(struct request *req, void *data,
 							bool reserved)
 {
-	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(req);
+	struct mtip_cmd *cmd = blk_rq_to_pdu(req);
 	struct driver_data *dd = data;
 
 	dbg_printk(MTIP_DRV_NAME " Aborting request, tag = %d\n", req->tag);
@@ -3569,7 +3569,7 @@ static inline bool is_se_active(struct driver_data *dd)
 static int mtip_submit_request(struct blk_mq_hw_ctx *hctx, struct request *rq)
 {
 	struct driver_data *dd = hctx->queue->queuedata;
-	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct mtip_cmd *cmd = blk_rq_to_pdu(rq);
 	unsigned int nents;
 
 	if (is_se_active(dd))
@@ -3613,7 +3613,7 @@ static bool mtip_check_unal_depth(struct blk_mq_hw_ctx *hctx,
 				  struct request *rq)
 {
 	struct driver_data *dd = hctx->queue->queuedata;
-	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct mtip_cmd *cmd = blk_rq_to_pdu(rq);
 
 	if (rq_data_dir(rq) == READ || !dd->unal_qdepth)
 		return false;
@@ -3638,7 +3638,7 @@ static int mtip_issue_reserved_cmd(struct blk_mq_hw_ctx *hctx,
 {
 	struct driver_data *dd = hctx->queue->queuedata;
 	struct mtip_int_cmd *icmd = rq->special;
-	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct mtip_cmd *cmd = blk_rq_to_pdu(rq);
 	struct mtip_cmd_sg *command_sg;
 
 	if (mtip_commands_active(dd->port))
@@ -3696,7 +3696,7 @@ static void mtip_free_cmd(struct blk_mq_tag_set *set, struct request *rq,
 			  unsigned int hctx_idx)
 {
 	struct driver_data *dd = set->driver_data;
-	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct mtip_cmd *cmd = blk_rq_to_pdu(rq);
 
 	if (!cmd->command)
 		return;
@@ -3709,7 +3709,7 @@ static int mtip_init_cmd(struct blk_mq_tag_set *set, struct request *rq,
 			 unsigned int hctx_idx, unsigned int numa_node)
 {
 	struct driver_data *dd = set->driver_data;
-	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct mtip_cmd *cmd = blk_rq_to_pdu(rq);
 
 	cmd->command = dmam_alloc_coherent(&dd->pdev->dev, CMD_DMA_ALLOC_SZ,
 			&cmd->command_dma, GFP_KERNEL);
@@ -3728,7 +3728,7 @@ static enum blk_eh_timer_return mtip_cmd_timeout(struct request *req,
 	struct driver_data *dd = req->q->queuedata;
 
 	if (reserved) {
-		struct mtip_cmd *cmd = blk_mq_rq_to_pdu(req);
+		struct mtip_cmd *cmd = blk_rq_to_pdu(req);
 
 		cmd->status = -ETIME;
 		return BLK_EH_HANDLED;
@@ -3959,7 +3959,7 @@ static int mtip_block_initialize(struct driver_data *dd)
 
 static void mtip_no_dev_cleanup(struct request *rq, void *data, bool reserv)
 {
-	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct mtip_cmd *cmd = blk_rq_to_pdu(rq);
 
 	cmd->status = -ENODEV;
 	blk_mq_complete_request(rq);
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index c5e52f66d3d4..271552fe27f1 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -248,7 +248,7 @@ static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
 
 static void nbd_complete_rq(struct request *req)
 {
-	struct nbd_cmd *cmd = blk_mq_rq_to_pdu(req);
+	struct nbd_cmd *cmd = blk_rq_to_pdu(req);
 
 	dev_dbg(nbd_to_dev(cmd->nbd), "request %p: %s\n", cmd,
 		cmd->status ? "failed" : "done");
@@ -281,7 +281,7 @@ static void sock_shutdown(struct nbd_device *nbd)
 static enum blk_eh_timer_return nbd_xmit_timeout(struct request *req,
 						 bool reserved)
 {
-	struct nbd_cmd *cmd = blk_mq_rq_to_pdu(req);
+	struct nbd_cmd *cmd = blk_rq_to_pdu(req);
 	struct nbd_device *nbd = cmd->nbd;
 	struct nbd_config *config;
 
@@ -390,7 +390,7 @@ static int sock_xmit(struct nbd_device *nbd, int index, int send,
 /* always call with the tx_lock held */
 static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index)
 {
-	struct request *req = blk_mq_rq_from_pdu(cmd);
+	struct request *req = blk_rq_from_pdu(cmd);
 	struct nbd_config *config = nbd->config;
 	struct nbd_sock *nsock = config->socks[index];
 	int result;
@@ -574,7 +574,7 @@ static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd, int index)
 			tag, req);
 		return ERR_PTR(-ENOENT);
 	}
-	cmd = blk_mq_rq_to_pdu(req);
+	cmd = blk_rq_to_pdu(req);
 	if (ntohl(reply.error)) {
 		dev_err(disk_to_dev(nbd->disk), "Other side returned error (%d)\n",
 			ntohl(reply.error));
@@ -640,7 +640,7 @@ static void recv_work(struct work_struct *work)
 			break;
 		}
 
-		blk_mq_complete_request(blk_mq_rq_from_pdu(cmd));
+		blk_mq_complete_request(blk_rq_from_pdu(cmd));
 	}
 	atomic_dec(&config->recv_threads);
 	wake_up(&config->recv_wq);
@@ -654,7 +654,7 @@ static void nbd_clear_req(struct request *req, void *data, bool reserved)
 
 	if (!blk_mq_request_started(req))
 		return;
-	cmd = blk_mq_rq_to_pdu(req);
+	cmd = blk_rq_to_pdu(req);
 	cmd->status = -EIO;
 	blk_mq_complete_request(req);
 }
@@ -725,7 +725,7 @@ static int wait_for_reconnect(struct nbd_device *nbd)
 
 static int nbd_handle_cmd(struct nbd_cmd *cmd, int index)
 {
-	struct request *req = blk_mq_rq_from_pdu(cmd);
+	struct request *req = blk_rq_from_pdu(cmd);
 	struct nbd_device *nbd = cmd->nbd;
 	struct nbd_config *config;
 	struct nbd_sock *nsock;
@@ -801,7 +801,7 @@ static int nbd_handle_cmd(struct nbd_cmd *cmd, int index)
 static int nbd_queue_rq(struct blk_mq_hw_ctx *hctx,
 			const struct blk_mq_queue_data *bd)
 {
-	struct nbd_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);
+	struct nbd_cmd *cmd = blk_rq_to_pdu(bd->rq);
 	int ret;
 
 	/*
@@ -1410,7 +1410,7 @@ static void nbd_dbg_close(void)
 static int nbd_init_request(struct blk_mq_tag_set *set, struct request *rq,
 			    unsigned int hctx_idx, unsigned int numa_node)
 {
-	struct nbd_cmd *cmd = blk_mq_rq_to_pdu(rq);
+	struct nbd_cmd *cmd = blk_rq_to_pdu(rq);
 	cmd->nbd = set->driver_data;
 	return 0;
 }
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index d946e1eeac8e..e1819f31c0ed 100644
--- a/drivers/block/null_blk.c
+++ b/drivers/block/null_blk.c
@@ -269,7 +269,7 @@ static void null_cmd_end_timer(struct nullb_cmd *cmd)
 static void null_softirq_done_fn(struct request *rq)
 {
 	if (queue_mode == NULL_Q_MQ)
-		end_cmd(blk_mq_rq_to_pdu(rq));
+		end_cmd(blk_rq_to_pdu(rq));
 	else
 		end_cmd(rq->special);
 }
@@ -359,7 +359,7 @@ static void null_request_fn(struct request_queue *q)
 static int null_queue_rq(struct blk_mq_hw_ctx *hctx,
 			 const struct blk_mq_queue_data *bd)
 {
-	struct nullb_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);
+	struct nullb_cmd *cmd = blk_rq_to_pdu(bd->rq);
 
 	might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING);
 
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 454bf9c34882..c8c1988dff0a 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4010,7 +4010,7 @@ static void rbd_wait_state_locked(struct rbd_device *rbd_dev)
 
 static void rbd_queue_workfn(struct work_struct *work)
 {
-	struct request *rq = blk_mq_rq_from_pdu(work);
+	struct request *rq = blk_rq_from_pdu(work);
 	struct rbd_device *rbd_dev = rq->q->queuedata;
 	struct rbd_img_request *img_request;
 	struct ceph_snap_context *snapc = NULL;
@@ -4156,7 +4156,7 @@ static int rbd_queue_rq(struct blk_mq_hw_ctx *hctx,
 		const struct blk_mq_queue_data *bd)
 {
 	struct request *rq = bd->rq;
-	struct work_struct *work = blk_mq_rq_to_pdu(rq);
+	struct work_struct *work = blk_rq_to_pdu(rq);
 
 	queue_work(rbd_wq, work);
 	return BLK_MQ_RQ_QUEUE_OK;
@@ -4351,7 +4351,7 @@ static int rbd_dev_refresh(struct rbd_device *rbd_dev)
 static int rbd_init_request(struct blk_mq_tag_set *set, struct request *rq,
 		unsigned int hctx_idx, unsigned int numa_node)
 {
-	struct work_struct *work = blk_mq_rq_to_pdu(rq);
+	struct work_struct *work = blk_rq_to_pdu(rq);
 
 	INIT_WORK(work, rbd_queue_workfn);
 	return 0;
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 553cc4c542b4..712831085da0 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -113,7 +113,7 @@ static int virtblk_add_req_scsi(struct virtqueue *vq, struct virtblk_req *vbr,
 
 static inline void virtblk_scsi_request_done(struct request *req)
 {
-	struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
+	struct virtblk_req *vbr = blk_rq_to_pdu(req);
 	struct virtio_blk *vblk = req->q->queuedata;
 	struct scsi_request *sreq = &vbr->sreq;
 
@@ -174,7 +174,7 @@ static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr,
 
 static inline void virtblk_request_done(struct request *req)
 {
-	struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
+	struct virtblk_req *vbr = blk_rq_to_pdu(req);
 
 	switch (req_op(req)) {
 	case REQ_OP_SCSI_IN:
@@ -199,7 +199,7 @@ static void virtblk_done(struct virtqueue *vq)
 	do {
 		virtqueue_disable_cb(vq);
 		while ((vbr = virtqueue_get_buf(vblk->vqs[qid].vq, &len)) != NULL) {
-			struct request *req = blk_mq_rq_from_pdu(vbr);
+			struct request *req = blk_rq_from_pdu(vbr);
 
 			blk_mq_complete_request(req);
 			req_done = true;
@@ -219,7 +219,7 @@ static int virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
 {
 	struct virtio_blk *vblk = hctx->queue->queuedata;
 	struct request *req = bd->rq;
-	struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
+	struct virtblk_req *vbr = blk_rq_to_pdu(req);
 	unsigned long flags;
 	unsigned int num;
 	int qid = hctx->queue_num;
@@ -307,7 +307,7 @@ static int virtblk_get_id(struct gendisk *disk, char *id_str)
 		goto out;
 
 	blk_execute_rq(vblk->disk->queue, vblk->disk, req, false);
-	err = virtblk_result(blk_mq_rq_to_pdu(req));
+	err = virtblk_result(blk_rq_to_pdu(req));
 out:
 	blk_put_request(req);
 	return err;
@@ -576,7 +576,7 @@ static int virtblk_init_request(struct blk_mq_tag_set *set, struct request *rq,
 		unsigned int hctx_idx, unsigned int numa_node)
 {
 	struct virtio_blk *vblk = set->driver_data;
-	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
+	struct virtblk_req *vbr = blk_rq_to_pdu(rq);
 
 #ifdef CONFIG_VIRTIO_BLK_SCSI
 	vbr->sreq.sense = vbr->sense;
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 39459631667c..d7b3b6229976 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -121,7 +121,7 @@ struct blkif_req {
 
 static inline struct blkif_req *blkif_req(struct request *rq)
 {
-	return blk_mq_rq_to_pdu(rq);
+	return blk_rq_to_pdu(rq);
 }
 
 static DEFINE_MUTEX(blkfront_mutex);
diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
index 01b2adfd8226..38b4356639fe 100644
--- a/drivers/ide/ide-probe.c
+++ b/drivers/ide/ide-probe.c
@@ -743,7 +743,7 @@ static void ide_port_tune_devices(ide_hwif_t *hwif)
 
 static void ide_initialize_rq(struct request *rq)
 {
-	struct ide_request *req = blk_mq_rq_to_pdu(rq);
+	struct ide_request *req = blk_rq_to_pdu(rq);
 
 	scsi_req_init(&req->sreq);
 	req->sreq.sense = req->sense;
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index b639fa7246ee..3c0725e414fa 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -163,7 +163,7 @@ static void end_clone_bio(struct bio *clone)
 
 static struct dm_rq_target_io *tio_from_request(struct request *rq)
 {
-	return blk_mq_rq_to_pdu(rq);
+	return blk_rq_to_pdu(rq);
 }
 
 static void rq_end_stats(struct mapped_device *md, struct request *orig)
@@ -551,7 +551,7 @@ static void dm_start_request(struct mapped_device *md, struct request *orig)
 
 static int __dm_rq_init_rq(struct mapped_device *md, struct request *rq)
 {
-	struct dm_rq_target_io *tio = blk_mq_rq_to_pdu(rq);
+	struct dm_rq_target_io *tio = blk_rq_to_pdu(rq);
 
 	/*
 	 * Must initialize md member of tio, otherwise it won't
@@ -731,7 +731,7 @@ static int dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 			  const struct blk_mq_queue_data *bd)
 {
 	struct request *rq = bd->rq;
-	struct dm_rq_target_io *tio = blk_mq_rq_to_pdu(rq);
+	struct dm_rq_target_io *tio = blk_rq_to_pdu(rq);
 	struct mapped_device *md = tio->md;
 	struct dm_target *ti = md->immutable_target;
 
diff --git a/drivers/mtd/ubi/block.c b/drivers/mtd/ubi/block.c
index 5497e65439df..7eacc6fdc39f 100644
--- a/drivers/mtd/ubi/block.c
+++ b/drivers/mtd/ubi/block.c
@@ -191,7 +191,7 @@ static int ubiblock_read(struct ubiblock_pdu *pdu)
 {
 	int ret, leb, offset, bytes_left, to_read;
 	u64 pos;
-	struct request *req = blk_mq_rq_from_pdu(pdu);
+	struct request *req = blk_rq_from_pdu(pdu);
 	struct ubiblock *dev = req->q->queuedata;
 
 	to_read = blk_rq_bytes(req);
@@ -299,7 +299,7 @@ static void ubiblock_do_work(struct work_struct *work)
 {
 	int ret;
 	struct ubiblock_pdu *pdu = container_of(work, struct ubiblock_pdu, work);
-	struct request *req = blk_mq_rq_from_pdu(pdu);
+	struct request *req = blk_rq_from_pdu(pdu);
 
 	blk_mq_start_request(req);
 
@@ -321,7 +321,7 @@ static int ubiblock_queue_rq(struct blk_mq_hw_ctx *hctx,
 {
 	struct request *req = bd->rq;
 	struct ubiblock *dev = hctx->queue->queuedata;
-	struct ubiblock_pdu *pdu = blk_mq_rq_to_pdu(req);
+	struct ubiblock_pdu *pdu = blk_rq_to_pdu(req);
 
 	switch (req_op(req)) {
 	case REQ_OP_READ:
@@ -338,7 +338,7 @@ static int ubiblock_init_request(struct blk_mq_tag_set *set,
 		struct request *req, unsigned int hctx_idx,
 		unsigned int numa_node)
 {
-	struct ubiblock_pdu *pdu = blk_mq_rq_to_pdu(req);
+	struct ubiblock_pdu *pdu = blk_rq_to_pdu(req);
 
 	sg_init_table(pdu->usgl.sg, UBI_MAX_SG_COUNT);
 	INIT_WORK(&pdu->work, ubiblock_do_work);
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 5b14cbefb724..c62cf5626e91 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -1143,7 +1143,7 @@ static void __nvme_fc_final_op_cleanup(struct request *rq);
 static int
 nvme_fc_reinit_request(void *data, struct request *rq)
 {
-	struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
+	struct nvme_fc_fcp_op *op = blk_rq_to_pdu(rq);
 	struct nvme_fc_cmd_iu *cmdiu = &op->cmd_iu;
 
 	memset(cmdiu, 0, sizeof(*cmdiu));
@@ -1171,7 +1171,7 @@ static void
 nvme_fc_exit_request(struct blk_mq_tag_set *set, struct request *rq,
 		unsigned int hctx_idx)
 {
-	struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
+	struct nvme_fc_fcp_op *op = blk_rq_to_pdu(rq);
 
 	return __nvme_fc_exit_request(set->driver_data, op);
 }
@@ -1434,7 +1434,7 @@ nvme_fc_init_request(struct blk_mq_tag_set *set, struct request *rq,
 		unsigned int hctx_idx, unsigned int numa_node)
 {
 	struct nvme_fc_ctrl *ctrl = set->driver_data;
-	struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
+	struct nvme_fc_fcp_op *op = blk_rq_to_pdu(rq);
 	struct nvme_fc_queue *queue = &ctrl->queues[hctx_idx+1];
 
 	return __nvme_fc_init_request(ctrl, queue, op, rq, queue->rqcnt++);
@@ -1445,7 +1445,7 @@ nvme_fc_init_admin_request(struct blk_mq_tag_set *set, struct request *rq,
 		unsigned int hctx_idx, unsigned int numa_node)
 {
 	struct nvme_fc_ctrl *ctrl = set->driver_data;
-	struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
+	struct nvme_fc_fcp_op *op = blk_rq_to_pdu(rq);
 	struct nvme_fc_queue *queue = &ctrl->queues[0];
 
 	return __nvme_fc_init_request(ctrl, queue, op, rq, queue->rqcnt++);
@@ -1770,7 +1770,7 @@ nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
 static enum blk_eh_timer_return
 nvme_fc_timeout(struct request *rq, bool reserved)
 {
-	struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
+	struct nvme_fc_fcp_op *op = blk_rq_to_pdu(rq);
 	struct nvme_fc_ctrl *ctrl = op->ctrl;
 	int ret;
 
@@ -1986,7 +1986,7 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct nvme_fc_queue *queue = hctx->driver_data;
 	struct nvme_fc_ctrl *ctrl = queue->ctrl;
 	struct request *rq = bd->rq;
-	struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
+	struct nvme_fc_fcp_op *op = blk_rq_to_pdu(rq);
 	struct nvme_fc_cmd_iu *cmdiu = &op->cmd_iu;
 	struct nvme_command *sqe = &cmdiu->sqe;
 	enum nvmefc_fcp_datadir	io_dir;
@@ -2029,7 +2029,7 @@ nvme_fc_poll(struct blk_mq_hw_ctx *hctx, unsigned int tag)
 	if (!req)
 		return 0;
 
-	op = blk_mq_rq_to_pdu(req);
+	op = blk_rq_to_pdu(req);
 
 	if ((atomic_read(&op->state) == FCPOP_STATE_ACTIVE) &&
 		 (ctrl->lport->ops->poll_queue))
@@ -2071,7 +2071,7 @@ nvme_fc_submit_async_event(struct nvme_ctrl *arg, int aer_idx)
 static void
 __nvme_fc_final_op_cleanup(struct request *rq)
 {
-	struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
+	struct nvme_fc_fcp_op *op = blk_rq_to_pdu(rq);
 	struct nvme_fc_ctrl *ctrl = op->ctrl;
 
 	atomic_set(&op->state, FCPOP_STATE_IDLE);
@@ -2088,7 +2088,7 @@ __nvme_fc_final_op_cleanup(struct request *rq)
 static void
 nvme_fc_complete_rq(struct request *rq)
 {
-	struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
+	struct nvme_fc_fcp_op *op = blk_rq_to_pdu(rq);
 	struct nvme_fc_ctrl *ctrl = op->ctrl;
 	unsigned long flags;
 	bool completed = false;
@@ -2130,7 +2130,7 @@ nvme_fc_terminate_exchange(struct request *req, void *data, bool reserved)
 {
 	struct nvme_ctrl *nctrl = data;
 	struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl);
-	struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(req);
+	struct nvme_fc_fcp_op *op = blk_rq_to_pdu(req);
 	unsigned long flags;
 	int status;
 
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 9d6a070d4391..575871ca7ef3 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -96,7 +96,7 @@ enum {
 
 static inline struct nvme_request *nvme_req(struct request *req)
 {
-	return blk_mq_rq_to_pdu(req);
+	return blk_rq_to_pdu(req);
 }
 
 /* The below value is the specific amount of delay needed before checking
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index d52701df7245..2011540214a1 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -355,7 +355,7 @@ static int nvme_admin_init_request(struct blk_mq_tag_set *set,
 		unsigned int numa_node)
 {
 	struct nvme_dev *dev = set->driver_data;
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_iod *iod = blk_rq_to_pdu(req);
 	struct nvme_queue *nvmeq = dev->queues[0];
 
 	BUG_ON(!nvmeq);
@@ -381,7 +381,7 @@ static int nvme_init_request(struct blk_mq_tag_set *set, struct request *req,
 		unsigned int hctx_idx, unsigned int numa_node)
 {
 	struct nvme_dev *dev = set->driver_data;
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_iod *iod = blk_rq_to_pdu(req);
 	struct nvme_queue *nvmeq = dev->queues[hctx_idx + 1];
 
 	BUG_ON(!nvmeq);
@@ -423,13 +423,13 @@ static void __nvme_submit_cmd(struct nvme_queue *nvmeq,
 
 static __le64 **iod_list(struct request *req)
 {
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_iod *iod = blk_rq_to_pdu(req);
 	return (__le64 **)(iod->sg + blk_rq_nr_phys_segments(req));
 }
 
 static int nvme_init_iod(struct request *rq, struct nvme_dev *dev)
 {
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(rq);
+	struct nvme_iod *iod = blk_rq_to_pdu(rq);
 	int nseg = blk_rq_nr_phys_segments(rq);
 	unsigned int size = blk_rq_payload_bytes(rq);
 
@@ -451,7 +451,7 @@ static int nvme_init_iod(struct request *rq, struct nvme_dev *dev)
 
 static void nvme_free_iod(struct nvme_dev *dev, struct request *req)
 {
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_iod *iod = blk_rq_to_pdu(req);
 	const int last_prp = dev->ctrl.page_size / 8 - 1;
 	int i;
 	__le64 **list = iod_list(req);
@@ -539,7 +539,7 @@ static void nvme_dif_complete(u32 p, u32 v, struct t10_pi_tuple *pi)
 
 static bool nvme_setup_prps(struct nvme_dev *dev, struct request *req)
 {
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_iod *iod = blk_rq_to_pdu(req);
 	struct dma_pool *pool;
 	int length = blk_rq_payload_bytes(req);
 	struct scatterlist *sg = iod->sg;
@@ -619,7 +619,7 @@ static bool nvme_setup_prps(struct nvme_dev *dev, struct request *req)
 static int nvme_map_data(struct nvme_dev *dev, struct request *req,
 		struct nvme_command *cmnd)
 {
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_iod *iod = blk_rq_to_pdu(req);
 	struct request_queue *q = req->q;
 	enum dma_data_direction dma_dir = rq_data_dir(req) ?
 			DMA_TO_DEVICE : DMA_FROM_DEVICE;
@@ -668,7 +668,7 @@ static int nvme_map_data(struct nvme_dev *dev, struct request *req,
 
 static void nvme_unmap_data(struct nvme_dev *dev, struct request *req)
 {
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_iod *iod = blk_rq_to_pdu(req);
 	enum dma_data_direction dma_dir = rq_data_dir(req) ?
 			DMA_TO_DEVICE : DMA_FROM_DEVICE;
 
@@ -746,7 +746,7 @@ static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 
 static void nvme_pci_complete_rq(struct request *req)
 {
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_iod *iod = blk_rq_to_pdu(req);
 
 	nvme_unmap_data(iod->nvmeq->dev, req);
 	nvme_complete_rq(req);
@@ -941,7 +941,7 @@ static int adapter_delete_sq(struct nvme_dev *dev, u16 sqid)
 
 static void abort_endio(struct request *req, int error)
 {
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_iod *iod = blk_rq_to_pdu(req);
 	struct nvme_queue *nvmeq = iod->nvmeq;
 
 	dev_warn(nvmeq->dev->ctrl.device,
@@ -952,7 +952,7 @@ static void abort_endio(struct request *req, int error)
 
 static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 {
-	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_iod *iod = blk_rq_to_pdu(req);
 	struct nvme_queue *nvmeq = iod->nvmeq;
 	struct nvme_dev *dev = nvmeq->dev;
 	struct request *abort_req;
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 28bd255c144d..ede0e3bdf96d 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -279,7 +279,7 @@ static int nvme_rdma_reinit_request(void *data, struct request *rq)
 {
 	struct nvme_rdma_ctrl *ctrl = data;
 	struct nvme_rdma_device *dev = ctrl->device;
-	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	struct nvme_rdma_request *req = blk_rq_to_pdu(rq);
 	int ret = 0;
 
 	if (!req->mr->need_inval)
@@ -304,7 +304,7 @@ static int nvme_rdma_reinit_request(void *data, struct request *rq)
 static void __nvme_rdma_exit_request(struct nvme_rdma_ctrl *ctrl,
 		struct request *rq, unsigned int queue_idx)
 {
-	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	struct nvme_rdma_request *req = blk_rq_to_pdu(rq);
 	struct nvme_rdma_queue *queue = &ctrl->queues[queue_idx];
 	struct nvme_rdma_device *dev = queue->device;
 
@@ -330,7 +330,7 @@ static void nvme_rdma_exit_admin_request(struct blk_mq_tag_set *set,
 static int __nvme_rdma_init_request(struct nvme_rdma_ctrl *ctrl,
 		struct request *rq, unsigned int queue_idx)
 {
-	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	struct nvme_rdma_request *req = blk_rq_to_pdu(rq);
 	struct nvme_rdma_queue *queue = &ctrl->queues[queue_idx];
 	struct nvme_rdma_device *dev = queue->device;
 	struct ib_device *ibdev = dev->dev;
@@ -881,7 +881,7 @@ static int nvme_rdma_inv_rkey(struct nvme_rdma_queue *queue,
 static void nvme_rdma_unmap_data(struct nvme_rdma_queue *queue,
 		struct request *rq)
 {
-	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	struct nvme_rdma_request *req = blk_rq_to_pdu(rq);
 	struct nvme_rdma_ctrl *ctrl = queue->ctrl;
 	struct nvme_rdma_device *dev = queue->device;
 	struct ib_device *ibdev = dev->dev;
@@ -990,7 +990,7 @@ static int nvme_rdma_map_sg_fr(struct nvme_rdma_queue *queue,
 static int nvme_rdma_map_data(struct nvme_rdma_queue *queue,
 		struct request *rq, struct nvme_command *c)
 {
-	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	struct nvme_rdma_request *req = blk_rq_to_pdu(rq);
 	struct nvme_rdma_device *dev = queue->device;
 	struct ib_device *ibdev = dev->dev;
 	int count, ret;
@@ -1179,7 +1179,7 @@ static int nvme_rdma_process_nvme_rsp(struct nvme_rdma_queue *queue,
 		nvme_rdma_error_recovery(queue->ctrl);
 		return ret;
 	}
-	req = blk_mq_rq_to_pdu(rq);
+	req = blk_rq_to_pdu(rq);
 
 	if (rq->tag == tag)
 		ret = 1;
@@ -1419,7 +1419,7 @@ static int nvme_rdma_cm_handler(struct rdma_cm_id *cm_id,
 static enum blk_eh_timer_return
 nvme_rdma_timeout(struct request *rq, bool reserved)
 {
-	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	struct nvme_rdma_request *req = blk_rq_to_pdu(rq);
 
 	/* queue error recovery */
 	nvme_rdma_error_recovery(req->queue->ctrl);
@@ -1454,7 +1454,7 @@ static int nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct nvme_ns *ns = hctx->queue->queuedata;
 	struct nvme_rdma_queue *queue = hctx->driver_data;
 	struct request *rq = bd->rq;
-	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	struct nvme_rdma_request *req = blk_rq_to_pdu(rq);
 	struct nvme_rdma_qe *sqe = &req->sqe;
 	struct nvme_command *c = sqe->data;
 	bool flush = false;
@@ -1526,7 +1526,7 @@ static int nvme_rdma_poll(struct blk_mq_hw_ctx *hctx, unsigned int tag)
 
 static void nvme_rdma_complete_rq(struct request *rq)
 {
-	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
+	struct nvme_rdma_request *req = blk_rq_to_pdu(rq);
 
 	nvme_rdma_unmap_data(req->queue, rq);
 	nvme_complete_rq(rq);
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index e503cfff0337..fc3794b718e4 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -90,7 +90,7 @@ static inline int nvme_loop_queue_idx(struct nvme_loop_queue *queue)
 
 static void nvme_loop_complete_rq(struct request *req)
 {
-	struct nvme_loop_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_loop_iod *iod = blk_rq_to_pdu(req);
 
 	nvme_cleanup_cmd(req);
 	sg_free_table_chained(&iod->sg_table, true);
@@ -148,7 +148,7 @@ static void nvme_loop_execute_work(struct work_struct *work)
 static enum blk_eh_timer_return
 nvme_loop_timeout(struct request *rq, bool reserved)
 {
-	struct nvme_loop_iod *iod = blk_mq_rq_to_pdu(rq);
+	struct nvme_loop_iod *iod = blk_rq_to_pdu(rq);
 
 	/* queue error recovery */
 	schedule_work(&iod->queue->ctrl->reset_work);
@@ -165,7 +165,7 @@ static int nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct nvme_ns *ns = hctx->queue->queuedata;
 	struct nvme_loop_queue *queue = hctx->driver_data;
 	struct request *req = bd->rq;
-	struct nvme_loop_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_loop_iod *iod = blk_rq_to_pdu(req);
 	int ret;
 
 	ret = nvme_setup_cmd(ns, req, &iod->cmd);
@@ -234,7 +234,7 @@ static int nvme_loop_init_request(struct blk_mq_tag_set *set,
 		struct request *req, unsigned int hctx_idx,
 		unsigned int numa_node)
 {
-	return nvme_loop_init_iod(set->driver_data, blk_mq_rq_to_pdu(req),
+	return nvme_loop_init_iod(set->driver_data, blk_rq_to_pdu(req),
 			hctx_idx + 1);
 }
 
@@ -242,7 +242,7 @@ static int nvme_loop_init_admin_request(struct blk_mq_tag_set *set,
 		struct request *req, unsigned int hctx_idx,
 		unsigned int numa_node)
 {
-	return nvme_loop_init_iod(set->driver_data, blk_mq_rq_to_pdu(req), 0);
+	return nvme_loop_init_iod(set->driver_data, blk_rq_to_pdu(req), 0);
 }
 
 static int nvme_loop_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index b629d8cbf0d1..5a5c18b02a5c 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1136,7 +1136,7 @@ EXPORT_SYMBOL(scsi_init_io);
 /* Called from inside blk_get_request() */
 static void scsi_initialize_rq(struct request *rq)
 {
-	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
+	struct scsi_cmnd *cmd = blk_rq_to_pdu(rq);
 
 	scsi_req_init(&cmd->req);
 }
@@ -1319,7 +1319,7 @@ scsi_prep_return(struct request_queue *q, struct request *req, int ret)
 static int scsi_prep_fn(struct request_queue *q, struct request *req)
 {
 	struct scsi_device *sdev = q->queuedata;
-	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req);
+	struct scsi_cmnd *cmd = blk_rq_to_pdu(req);
 	int ret;
 
 	ret = scsi_prep_state_check(sdev, req);
@@ -1851,7 +1851,7 @@ static inline int prep_to_mq(int ret)
 
 static int scsi_mq_prep_fn(struct request *req)
 {
-	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req);
+	struct scsi_cmnd *cmd = blk_rq_to_pdu(req);
 	struct scsi_device *sdev = req->q->queuedata;
 	struct Scsi_Host *shost = sdev->host;
 	unsigned char *sense_buf = cmd->sense_buffer;
@@ -1897,7 +1897,7 @@ static int scsi_mq_prep_fn(struct request *req)
 
 	if (blk_bidi_rq(req)) {
 		struct request *next_rq = req->next_rq;
-		struct scsi_data_buffer *bidi_sdb = blk_mq_rq_to_pdu(next_rq);
+		struct scsi_data_buffer *bidi_sdb = blk_rq_to_pdu(next_rq);
 
 		memset(bidi_sdb, 0, sizeof(struct scsi_data_buffer));
 		bidi_sdb->table.sgl =
@@ -1924,7 +1924,7 @@ static int scsi_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct request_queue *q = req->q;
 	struct scsi_device *sdev = q->queuedata;
 	struct Scsi_Host *shost = sdev->host;
-	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(req);
+	struct scsi_cmnd *cmd = blk_rq_to_pdu(req);
 	int ret;
 	int reason;
 
@@ -2012,7 +2012,7 @@ static int scsi_init_request(struct blk_mq_tag_set *set, struct request *rq,
 		unsigned int hctx_idx, unsigned int numa_node)
 {
 	struct Scsi_Host *shost = set->driver_data;
-	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
+	struct scsi_cmnd *cmd = blk_rq_to_pdu(rq);
 
 	cmd->sense_buffer =
 		scsi_alloc_sense_buffer(shost, GFP_KERNEL, numa_node);
@@ -2026,7 +2026,7 @@ static void scsi_exit_request(struct blk_mq_tag_set *set, struct request *rq,
 		unsigned int hctx_idx)
 {
 	struct Scsi_Host *shost = set->driver_data;
-	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
+	struct scsi_cmnd *cmd = blk_rq_to_pdu(rq);
 
 	scsi_free_sense_buffer(shost, cmd->sense_buffer);
 }
@@ -2105,7 +2105,7 @@ EXPORT_SYMBOL_GPL(__scsi_init_queue);
 static int scsi_init_rq(struct request_queue *q, struct request *rq, gfp_t gfp)
 {
 	struct Scsi_Host *shost = q->rq_alloc_data;
-	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
+	struct scsi_cmnd *cmd = blk_rq_to_pdu(rq);
 
 	memset(cmd, 0, sizeof(*cmd));
 
@@ -2131,7 +2131,7 @@ static int scsi_init_rq(struct request_queue *q, struct request *rq, gfp_t gfp)
 static void scsi_exit_rq(struct request_queue *q, struct request *rq)
 {
 	struct Scsi_Host *shost = q->rq_alloc_data;
-	struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
+	struct scsi_cmnd *cmd = blk_rq_to_pdu(rq);
 
 	if (cmd->prot_sdb)
 		kmem_cache_free(scsi_sdb_cache, cmd->prot_sdb);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index a4759fd34e7e..df0e5aa2a410 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -265,19 +265,6 @@ int blk_mq_reinit_tagset(struct blk_mq_tag_set *set);
 int blk_mq_map_queues(struct blk_mq_tag_set *set);
 void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
 
-/*
- * Driver command data is immediately after the request. So subtract request
- * size to get back to the original request, add request size to get the PDU.
- */
-static inline struct request *blk_mq_rq_from_pdu(void *pdu)
-{
-	return pdu - sizeof(struct request);
-}
-static inline void *blk_mq_rq_to_pdu(struct request *rq)
-{
-	return rq + 1;
-}
-
 #define queue_for_each_hw_ctx(q, hctx, i)				\
 	for ((i) = 0; (i) < (q)->nr_hw_queues &&			\
 	     ({ hctx = (q)->queue_hw_ctx[i]; 1; }); (i)++)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1e73b4df13a9..912eaff71c09 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -258,6 +258,19 @@ static inline unsigned short req_get_ioprio(struct request *req)
 	return req->ioprio;
 }
 
+/*
+ * Driver command data is immediately after the request. So subtract request
+ * size to get back to the original request, add request size to get the PDU.
+ */
+static inline struct request *blk_rq_from_pdu(void *pdu)
+{
+	return pdu - sizeof(struct request);
+}
+static inline void *blk_rq_to_pdu(struct request *rq)
+{
+	return rq + 1;
+}
+
 #include <linux/elevator.h>
 
 struct blk_queue_ctx;
diff --git a/include/linux/ide.h b/include/linux/ide.h
index 6980ca322074..64809a58ee85 100644
--- a/include/linux/ide.h
+++ b/include/linux/ide.h
@@ -58,7 +58,7 @@ struct ide_request {
 
 static inline struct ide_request *ide_req(struct request *rq)
 {
-	return blk_mq_rq_to_pdu(rq);
+	return blk_rq_to_pdu(rq);
 }
 
 static inline bool ata_misc_request(struct request *rq)
diff --git a/include/scsi/scsi_request.h b/include/scsi/scsi_request.h
index e0afa445ee4e..be5b62d5347c 100644
--- a/include/scsi/scsi_request.h
+++ b/include/scsi/scsi_request.h
@@ -18,7 +18,7 @@ struct scsi_request {
 
 static inline struct scsi_request *scsi_req(struct request *rq)
 {
-	return blk_mq_rq_to_pdu(rq);
+	return blk_rq_to_pdu(rq);
 }
 
 static inline void scsi_req_free_cmd(struct scsi_request *req)
-- 
2.12.2

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 02/12] block: Introduce request_queue.initialize_rq_fn()
  2017-05-31 22:52 ` [PATCH v2 02/12] block: Introduce request_queue.initialize_rq_fn() Bart Van Assche
@ 2017-06-01  6:06   ` Christoph Hellwig
  0 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2017-06-01  6:06 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Hannes Reinecke,
	Omar Sandoval

On Wed, May 31, 2017 at 03:52:36PM -0700, Bart Van Assche wrote:
> Several block drivers need to initialize the driver-private data
> after having called blk_get_request() and before .prep_rq_fn() is
> called, e.g. when submitting a REQ_OP_SCSI_* request. Avoid having
> to repeat that initialization code after every blk_get_request()
> call by adding a new callback function to struct request_queue.

Can we please still have an initialize_rq mq_op for the blk-mq path
to avoid using the operation vectors directly in the request_queue
for blk-mq?


* Re: [PATCH v2 12/12] block: Rename blk_mq_rq_{to,from}_pdu()
  2017-05-31 22:52 ` [PATCH v2 12/12] block: Rename blk_mq_rq_{to,from}_pdu() Bart Van Assche
@ 2017-06-01  6:08   ` Christoph Hellwig
  2017-06-01 13:11     ` Bart Van Assche
  2017-06-01 19:04     ` Jens Axboe
  0 siblings, 2 replies; 26+ messages in thread
From: Christoph Hellwig @ 2017-06-01  6:08 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Hannes Reinecke,
	Omar Sandoval

On Wed, May 31, 2017 at 03:52:46PM -0700, Bart Van Assche wrote:
> Commit 6d247d7f71d1 ("block: allow specifying size for extra command
> data") added support for .cmd_size to blk-sq. Due to that patch the
> blk_mq_rq_{to,from}_pdu() functions are also useful for single-queue
> block drivers. Hence remove "_mq" from the name of these functions.
> This patch does not change any functionality. Most of this patch has
> been generated by running the following shell command:

I don't really see the point of this as it's primarily a blk-mq API
and we still hope to get rid of the old code.  But I'm not necessarily
against it either.


* Re: [PATCH v2 03/12] block: Make most scsi_req_init() calls implicit
  2017-05-31 22:52 ` [PATCH v2 03/12] block: Make most scsi_req_init() calls implicit Bart Van Assche
@ 2017-06-01  6:08   ` Christoph Hellwig
  0 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2017-06-01  6:08 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Hannes Reinecke,
	Omar Sandoval, Nicholas Bellinger

On Wed, May 31, 2017 at 03:52:37PM -0700, Bart Van Assche wrote:
> Instead of explicitly calling scsi_req_init() after blk_get_request(),
> call that function from inside blk_get_request(). Add an
> .initialize_rq_fn() callback function to the block drivers that need
> it. Merge the IDE .init_rq_fn() function into .initialize_rq_fn()
> because it is too small to keep it as a separate function. Keep the
> scsi_req_init() call in ide_prep_sense() because it follows a
> blk_rq_init() call.

Looks fine except for the method placement in the mq case mentioned
in the previous patch:

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH v2 05/12] blk-mq: Initialize a request before assigning a tag
  2017-05-31 22:52 ` [PATCH v2 05/12] blk-mq: Initialize a request before assigning a tag Bart Van Assche
@ 2017-06-01  6:09   ` Christoph Hellwig
  0 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2017-06-01  6:09 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Hannes Reinecke,
	Omar Sandoval, Ming Lei

Looks fine,

Reviewed-by: Christoph Hellwig <hch@lst.de>

Btw, blk_mq_rq_ctx_init should be marked static, but I'll send a separate
patch for that.


* Re: [PATCH v2 07/12] block: Check locking assumptions at runtime
  2017-05-31 22:52 ` [PATCH v2 07/12] block: Check locking assumptions at runtime Bart Van Assche
@ 2017-06-01  6:09   ` Christoph Hellwig
  0 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2017-06-01  6:09 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Hannes Reinecke,
	Omar Sandoval, Ming Lei

On Wed, May 31, 2017 at 03:52:41PM -0700, Bart Van Assche wrote:
> Instead of documenting the locking assumptions of most block layer
> functions as a comment, use lockdep_assert_held() to verify locking
> assumptions at runtime.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH v2 08/12] block: Document what queue type each function is intended for
  2017-05-31 22:52 ` [PATCH v2 08/12] block: Document what queue type each function is intended for Bart Van Assche
@ 2017-06-01  6:10   ` Christoph Hellwig
  0 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2017-06-01  6:10 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Hannes Reinecke,
	Omar Sandoval, Ming Lei

On Wed, May 31, 2017 at 03:52:42PM -0700, Bart Van Assche wrote:
> Some functions in block/blk-core.c must only be used on blk-sq queues
> while others are safe to use against any queue type. Document which
> functions are intended for blk-sq queues and issue a warning if the
> blk-sq API is misused.
> 
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>

Looks fine,

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH v2 09/12] blk-mq: Document locking assumptions
  2017-05-31 22:52 ` [PATCH v2 09/12] blk-mq: Document locking assumptions Bart Van Assche
@ 2017-06-01  6:11   ` Christoph Hellwig
  0 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2017-06-01  6:11 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Hannes Reinecke,
	Omar Sandoval, Ming Lei

On Wed, May 31, 2017 at 03:52:43PM -0700, Bart Van Assche wrote:
> Document the locking assumptions in functions that modify
> blk_mq_ctx.rq_list to make it easier for humans to verify
> this code.
> 
> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH v2 11/12] blk-mq: Warn when attempting to run a hardware queue that is not mapped
  2017-05-31 22:52 ` [PATCH v2 11/12] blk-mq: Warn when attempting to run a hardware queue that is not mapped Bart Van Assche
@ 2017-06-01  6:11   ` Christoph Hellwig
  0 siblings, 0 replies; 26+ messages in thread
From: Christoph Hellwig @ 2017-06-01  6:11 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, linux-block, Christoph Hellwig, Hannes Reinecke,
	Omar Sandoval, Ming Lei

Looks fine,

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH v2 12/12] block: Rename blk_mq_rq_{to,from}_pdu()
  2017-06-01  6:08   ` Christoph Hellwig
@ 2017-06-01 13:11     ` Bart Van Assche
  2017-06-01 19:06       ` Jens Axboe
  2017-06-01 19:04     ` Jens Axboe
  1 sibling, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2017-06-01 13:11 UTC (permalink / raw)
  To: hch; +Cc: osandov, linux-block, hare, axboe

On Thu, 2017-06-01 at 08:08 +0200, Christoph Hellwig wrote:
> On Wed, May 31, 2017 at 03:52:46PM -0700, Bart Van Assche wrote:
> > Commit 6d247d7f71d1 ("block: allow specifying size for extra command
> > data") added support for .cmd_size to blk-sq. Due to that patch the
> > blk_mq_rq_{to,from}_pdu() functions are also useful for single-queue
> > block drivers. Hence remove "_mq" from the name of these functions.
> > This patch does not change any functionality. Most of this patch has
> > been generated by running the following shell command:
> 
> I don't really see the point of this as it's primarily a blk-mq API
> and we still hope to get rid of the old code.  But I'm not necessarily
> against it either.

Hello Christoph,

I would like to introduce calls to these functions in several scsi-sq
functions. If I would do that without renaming these functions then anyone
who reads the code of these functions and sees calls to functions with a
blk_mq_ prefix could get really confused when trying to figure out whether
these functions are used by scsi-sq, scsi-mq or perhaps both.

Bart.


* Re: [PATCH v2 12/12] block: Rename blk_mq_rq_{to,from}_pdu()
  2017-06-01  6:08   ` Christoph Hellwig
  2017-06-01 13:11     ` Bart Van Assche
@ 2017-06-01 19:04     ` Jens Axboe
  1 sibling, 0 replies; 26+ messages in thread
From: Jens Axboe @ 2017-06-01 19:04 UTC (permalink / raw)
  To: Christoph Hellwig, Bart Van Assche
  Cc: linux-block, Hannes Reinecke, Omar Sandoval

On 05/31/2017 11:08 PM, Christoph Hellwig wrote:
> On Wed, May 31, 2017 at 03:52:46PM -0700, Bart Van Assche wrote:
>> Commit 6d247d7f71d1 ("block: allow specifying size for extra command
>> data") added support for .cmd_size to blk-sq. Due to that patch the
>> blk_mq_rq_{to,from}_pdu() functions are also useful for single-queue
>> block drivers. Hence remove "_mq" from the name of these functions.
>> This patch does not change any functionality. Most of this patch has
>> been generated by running the following shell command:
> 
> I don't really see the point of this as it's primarily a blk-mq API
> and we still hope to get rid of the old code.  But I'm not necessarily
> against it either.

I agree, I think that's pointless churn.

-- 
Jens Axboe


* Re: [PATCH v2 12/12] block: Rename blk_mq_rq_{to,from}_pdu()
  2017-06-01 13:11     ` Bart Van Assche
@ 2017-06-01 19:06       ` Jens Axboe
  2017-06-01 19:17         ` Bart Van Assche
  0 siblings, 1 reply; 26+ messages in thread
From: Jens Axboe @ 2017-06-01 19:06 UTC (permalink / raw)
  To: Bart Van Assche, hch; +Cc: osandov, linux-block, hare

On 06/01/2017 06:11 AM, Bart Van Assche wrote:
> On Thu, 2017-06-01 at 08:08 +0200, Christoph Hellwig wrote:
>> On Wed, May 31, 2017 at 03:52:46PM -0700, Bart Van Assche wrote:
>>> Commit 6d247d7f71d1 ("block: allow specifying size for extra command
>>> data") added support for .cmd_size to blk-sq. Due to that patch the
>>> blk_mq_rq_{to,from}_pdu() functions are also useful for single-queue
>>> block drivers. Hence remove "_mq" from the name of these functions.
>>> This patch does not change any functionality. Most of this patch has
>>> been generated by running the following shell command:
>>
>> I don't really see the point of this as it's primarily a blk-mq API
>> and we still hope to get rid of the old code.  But I'm not necessarily
>> against it either.
> 
> Hello Christoph,
> 
> I would like to introduce calls to these functions in several scsi-sq
> functions. If I would do that without renaming these functions then anyone
> who reads the code of these functions and sees calls to functions with a
> blk_mq_ prefix could get really confused when trying to figure out whether
> these functions are used by scsi-sq, scsi-mq or perhaps both.

But that should go away, eventually.

-- 
Jens Axboe


* Re: [PATCH v2 12/12] block: Rename blk_mq_rq_{to,from}_pdu()
  2017-06-01 19:06       ` Jens Axboe
@ 2017-06-01 19:17         ` Bart Van Assche
  2017-06-01 19:28           ` Jens Axboe
  0 siblings, 1 reply; 26+ messages in thread
From: Bart Van Assche @ 2017-06-01 19:17 UTC (permalink / raw)
  To: hch, axboe; +Cc: osandov, linux-block, hare

On Thu, 2017-06-01 at 12:06 -0700, Jens Axboe wrote:
> On 06/01/2017 06:11 AM, Bart Van Assche wrote:
> > On Thu, 2017-06-01 at 08:08 +0200, Christoph Hellwig wrote:
> > > On Wed, May 31, 2017 at 03:52:46PM -0700, Bart Van Assche wrote:
> > > > Commit 6d247d7f71d1 ("block: allow specifying size for extra command
> > > > data") added support for .cmd_size to blk-sq. Due to that patch the
> > > > blk_mq_rq_{to,from}_pdu() functions are also useful for single-queue
> > > > block drivers. Hence remove "_mq" from the name of these functions.
> > > > This patch does not change any functionality. Most of this patch has
> > > > been generated by running the following shell command:
> > >
> > > I don't really see the point of this as it's primarily a blk-mq API
> > > and we still hope to get rid of the old code.  But I'm not necessarily
> > > against it either.
> >
> > I would like to introduce calls to these functions in several scsi-sq
> > functions. If I would do that without renaming these functions then anyone
> > who reads the code of these functions and sees calls to functions with a
> > blk_mq_ prefix could get really confused when trying to figure out whether
> > these functions are used by scsi-sq, scsi-mq or perhaps both.
>
> But that should go away, eventually.

Hello Jens,

I agree that we should work towards removal of the single queue block layer.
But how long will it take before that code is removed?

Due to recent patches from Christoph the 'request' member in struct scsi_cmnd
is now superfluous. I'd like to replace accesses to that member by a call to
blk_mq_rq_from_pdu(). I'm afraid that doing that in scsi-sq code paths will
make that code look weird.

Bart.


* Re: [PATCH v2 12/12] block: Rename blk_mq_rq_{to,from}_pdu()
  2017-06-01 19:17         ` Bart Van Assche
@ 2017-06-01 19:28           ` Jens Axboe
  0 siblings, 0 replies; 26+ messages in thread
From: Jens Axboe @ 2017-06-01 19:28 UTC (permalink / raw)
  To: Bart Van Assche, hch; +Cc: osandov, linux-block, hare

On 06/01/2017 12:17 PM, Bart Van Assche wrote:
> On Thu, 2017-06-01 at 12:06 -0700, Jens Axboe wrote:
>> On 06/01/2017 06:11 AM, Bart Van Assche wrote:
>>> On Thu, 2017-06-01 at 08:08 +0200, Christoph Hellwig wrote:
>>>> On Wed, May 31, 2017 at 03:52:46PM -0700, Bart Van Assche wrote:
>>>>> Commit 6d247d7f71d1 ("block: allow specifying size for extra command
>>>>> data") added support for .cmd_size to blk-sq. Due to that patch the
>>>>> blk_mq_rq_{to,from}_pdu() functions are also useful for single-queue
>>>>> block drivers. Hence remove "_mq" from the name of these functions.
>>>>> This patch does not change any functionality. Most of this patch has
>>>>> been generated by running the following shell command:
>>>>
>>>> I don't really see the point of this as it's primarily a blk-mq API
>>>> and we still hope to get rid of the old code.  But I'm not necessarily
>>>> against it either.
>>>
>>> I would like to introduce calls to these functions in several scsi-sq
>>> functions. If I would do that without renaming these functions then anyone
>>> who reads the code of these functions and sees calls to functions with a
>>> blk_mq_ prefix could get really confused when trying to figure out whether
>>> these functions are used by scsi-sq, scsi-mq or perhaps both.
>>
>> But that should go away, eventually.
> 
> Hello Jens,
> 
> I agree that we should work towards removal of the single queue block layer.
> But how long will it take before that code is removed?
> 
> Due to recent patches from Christoph the 'request' member in struct scsi_cmnd
> is now superfluous. I'd like to replace accesses to that member by a call to
> blk_mq_rq_from_pdu(). I'm afraid that doing that in scsi-sq code paths will
> make that code look weird.

For the old path, I'd suggest that you just wrap the blk_mq_*_pdu() calls
with a non-mq named version, and stuff that in blkdev.h. That way we can
keep the blk-mq API more logical, and we can just kill those wrappers
when the last user goes away.

That's also a lot less churn than renaming all of the existing callers.

-- 
Jens Axboe

