* [PATCH V6 00/12] block: support bio based io polling
@ 2021-04-22 12:20 Ming Lei
  2021-04-22 12:20 ` [PATCH V6 01/12] block: add helper of blk_queue_poll Ming Lei
                   ` (12 more replies)
  0 siblings, 13 replies; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Ming Lei

Hi Jens,

Add a per-task io poll context for holding the HIPRI blk-mq/underlying
bios queued from a bio based driver's io submission context, and reuse
one bio padding field for storing the 'cookie' returned from
submit_bio() for these bios. Also explicitly end these bios in the poll
context, by adding two new bio flags.

In this way we no longer need to poll all underlying hw queues, as
Jeffle's patches do; we can poll just the hw queues which have HIPRI IO
queued.

Usually io submission and io poll share the same context, so the added
io poll context data is just like a stack variable, and the cost of
saving bios is cheap.
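
For readers new to the series, the overall handshake is roughly the
following (an illustrative sketch distilled from the patches below, not
code copied from them):

	/*
	 * Submission side: the HIPRI bio is queued into the per-task
	 * submission queue (sq), and submit_bio() returns the
	 * submitter's pid as the polling cookie.
	 */
	cookie = submit_bio(bio);	/* cookie == current->pid */

	/*
	 * Poll side: the cookie identifies the submission task; its sq
	 * is drained into the poller's poll queue (pq), and only the hw
	 * queues with HIPRI IO queued are polled until these bios end.
	 */
	blk_poll(q, cookie, spin);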

V6:
	- move poll code into block/blk-poll.c, as suggested by Christoph
	- define bvec_iter as __packed, and add one new field to bio, as
	  suggested by Christoph
	- re-organize patch order, as suggested by Christoph
	- add one flag for checking if the disk is capable of bio polling
	  and remove .poll_capable(), as suggested by Christoph
	- fix type of .bi_poll

V5:
	- fix one use-after-free issue in case that polling is done from
	  another context: add one new cookie, BLK_QC_T_NOT_READY, for
	  preventing this issue in patch 8/12
	- add reviewed-by & tested-by tags

V4:
	- cover one more test_bit(QUEUE_FLAG_POLL, ...) suggested by
	  Jeffle (01/12)
	- drop patch of 'block: add helper of blk_create_io_context'
	- add new helper of blk_create_io_poll_context() (03/12)
	- drain submission queues in exit_io_context(), suggested by
	  Jeffle (08/13)
	- consider the shared io context case for blk_bio_poll_io_drain()
	  (08/13)
	- fix one issue in blk_bio_poll_pack_groups() as suggested by
	  Jeffle (08/13)
	- add reviewed-by tag

V3:
	- fix cookie returned for bio based driver, as suggested by Jeffle Xu
	- draining pending bios when submission context is exiting
	- patch style and comment fix, as suggested by Mike
	- allow poll context data to be NULL by always polling on submission queue
	- remove RFC, and reviewed-by

V2:
	- address the queue depth scalability issue reported by Jeffle via
	  a bio group list. Reuse .bi_end_io for linking bios which share
	  the same .bi_end_io, and support 32 such groups in the submit
	  queue. In this way, the scalability issue caused by kfifo is
	  solved. Before really ending a bio, .bi_end_io is recovered from
	  the group head.



Jeffle Xu (2):
  block: extract one helper function polling hw queue
  dm: support IO polling for bio-based dm device

Ming Lei (10):
  block: add helper of blk_queue_poll
  block: define 'struct bvec_iter' as packed
  block: add one helper to free io_context
  block: move block polling code into one dedicated source file
  block: prepare for supporting bio_list via other link
  block: create io poll context for submission and poll task
  block: add req flag of REQ_POLL_CTX
  block: use per-task poll context to implement bio based io polling
  block: limit hw queues to be polled in each blk_poll()
  block: allow to control FLAG_POLL via sysfs for bio poll capable queue

 block/Makefile                |   3 +-
 block/bio.c                   |   5 +
 block/blk-core.c              |  68 +++-
 block/blk-ioc.c               |  15 +-
 block/blk-mq.c                | 231 -------------
 block/blk-mq.h                |  40 +++
 block/blk-poll.c              | 632 ++++++++++++++++++++++++++++++++++
 block/blk-sysfs.c             |  16 +-
 block/blk.h                   | 112 ++++++
 drivers/md/dm-table.c         |  24 ++
 drivers/md/dm.c               |   2 +
 drivers/nvme/host/core.c      |   2 +-
 include/linux/bio.h           | 132 +++----
 include/linux/blk_types.h     |  31 +-
 include/linux/blkdev.h        |   1 +
 include/linux/bvec.h          |   2 +-
 include/linux/device-mapper.h |   1 +
 include/linux/genhd.h         |   2 +
 include/linux/iocontext.h     |   2 +
 19 files changed, 1003 insertions(+), 318 deletions(-)
 create mode 100644 block/blk-poll.c

-- 
2.29.2



* [PATCH V6 01/12] block: add helper of blk_queue_poll
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
@ 2021-04-22 12:20 ` Ming Lei
  2021-04-22 12:20 ` [PATCH V6 02/12] block: define 'struct bvec_iter' as packed Ming Lei
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Ming Lei, Chaitanya Kulkarni

There have been three users, and more are coming, so add such a helper.

Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c         | 2 +-
 block/blk-mq.c           | 3 +--
 block/blk-sysfs.c        | 2 +-
 drivers/nvme/host/core.c | 2 +-
 include/linux/blkdev.h   | 1 +
 5 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index ca7f833e25a8..d44a8b934608 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -868,7 +868,7 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 		}
 	}
 
-	if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+	if (!blk_queue_poll(q))
 		bio->bi_opf &= ~REQ_HIPRI;
 
 	switch (bio_op(bio)) {
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7d2ea6357c7d..47e650bb836b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3887,8 +3887,7 @@ int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
 	struct blk_mq_hw_ctx *hctx;
 	long state;
 
-	if (!blk_qc_t_valid(cookie) ||
-	    !test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+	if (!blk_qc_t_valid(cookie) || !blk_queue_poll(q))
 		return 0;
 
 	if (current->plug)
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index e03bedf180ab..fed4981b1f7a 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -422,7 +422,7 @@ static ssize_t queue_poll_delay_store(struct request_queue *q, const char *page,
 
 static ssize_t queue_poll_show(struct request_queue *q, char *page)
 {
-	return queue_var_show(test_bit(QUEUE_FLAG_POLL, &q->queue_flags), page);
+	return queue_var_show(blk_queue_poll(q), page);
 }
 
 static ssize_t queue_poll_store(struct request_queue *q, const char *page,
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0896e21642be..34b8c78f88e0 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -956,7 +956,7 @@ static void nvme_execute_rq_polled(struct request_queue *q,
 {
 	DECLARE_COMPLETION_ONSTACK(wait);
 
-	WARN_ON_ONCE(!test_bit(QUEUE_FLAG_POLL, &q->queue_flags));
+	WARN_ON_ONCE(!blk_queue_poll(q));
 
 	rq->cmd_flags |= REQ_HIPRI;
 	rq->end_io_data = &wait;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f2e77ba97550..668223168412 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -675,6 +675,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_fua(q)	test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags)
 #define blk_queue_registered(q)	test_bit(QUEUE_FLAG_REGISTERED, &(q)->queue_flags)
 #define blk_queue_nowait(q)	test_bit(QUEUE_FLAG_NOWAIT, &(q)->queue_flags)
+#define blk_queue_poll(q)	test_bit(QUEUE_FLAG_POLL, &(q)->queue_flags)
 
 extern void blk_set_pm_only(struct request_queue *q);
 extern void blk_clear_pm_only(struct request_queue *q);
-- 
2.29.2



* [PATCH V6 02/12] block: define 'struct bvec_iter' as packed
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
  2021-04-22 12:20 ` [PATCH V6 01/12] block: add helper of blk_queue_poll Ming Lei
@ 2021-04-22 12:20 ` Ming Lei
  2021-04-22 13:18   ` Hannes Reinecke
  2021-04-22 12:20 ` [PATCH V6 03/12] block: add one helper to free io_context Ming Lei
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Ming Lei, Christoph Hellwig

'struct bvec_iter' is embedded into 'struct bio'; define it as packed
so that we can gain an extra 4 bytes for other uses without expanding
bio.

'struct bvec_iter' is often allocated on the stack, so making it packed
doesn't affect performance. Also, I have run io_uring on both nvme and
null_blk, and did not observe any performance effect from this change.
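
As a quick size check of my own (assuming a 64-bit build where sector_t
is 8 bytes; not part of the patch):

	struct bvec_iter {
		sector_t	bi_sector;	/* 8 bytes */
		unsigned int	bi_size;	/* 4 bytes */
		unsigned int	bi_idx;		/* 4 bytes */
		unsigned int	bi_bvec_done;	/* 4 bytes */
	} __packed;
	/*
	 * sizeof() == 20 when packed; without __packed the 8-byte
	 * alignment of sector_t pads the struct to 24 bytes, so packing
	 * frees 4 bytes inside struct bio for a new field.
	 */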

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 include/linux/bvec.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index ff832e698efb..a0c4f41dfc83 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -43,7 +43,7 @@ struct bvec_iter {
 
 	unsigned int            bi_bvec_done;	/* number of bytes completed in
 						   current bvec */
-};
+} __packed;
 
 struct bvec_iter_all {
 	struct bio_vec	bv;
-- 
2.29.2



* [PATCH V6 03/12] block: add one helper to free io_context
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
  2021-04-22 12:20 ` [PATCH V6 01/12] block: add helper of blk_queue_poll Ming Lei
  2021-04-22 12:20 ` [PATCH V6 02/12] block: define 'struct bvec_iter' as packed Ming Lei
@ 2021-04-22 12:20 ` Ming Lei
  2021-04-22 12:20 ` [PATCH V6 04/12] block: move block polling code into one dedicated source file Ming Lei
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Ming Lei

Prepare for putting the bio poll queue into io_context, so add one
helper for freeing an io_context.

Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-ioc.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 57299f860d41..b0cde18c4b8c 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -17,6 +17,11 @@
  */
 static struct kmem_cache *iocontext_cachep;
 
+static inline void free_io_context(struct io_context *ioc)
+{
+	kmem_cache_free(iocontext_cachep, ioc);
+}
+
 /**
  * get_io_context - increment reference count to io_context
  * @ioc: io_context to get
@@ -129,7 +134,7 @@ static void ioc_release_fn(struct work_struct *work)
 
 	spin_unlock_irq(&ioc->lock);
 
-	kmem_cache_free(iocontext_cachep, ioc);
+	free_io_context(ioc);
 }
 
 /**
@@ -164,7 +169,7 @@ void put_io_context(struct io_context *ioc)
 	}
 
 	if (free_ioc)
-		kmem_cache_free(iocontext_cachep, ioc);
+		free_io_context(ioc);
 }
 
 /**
@@ -278,7 +283,7 @@ int create_task_io_context(struct task_struct *task, gfp_t gfp_flags, int node)
 	    (task == current || !(task->flags & PF_EXITING)))
 		task->io_context = ioc;
 	else
-		kmem_cache_free(iocontext_cachep, ioc);
+		free_io_context(ioc);
 
 	ret = task->io_context ? 0 : -EBUSY;
 
-- 
2.29.2



* [PATCH V6 04/12] block: move block polling code into one dedicated source file
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
                   ` (2 preceding siblings ...)
  2021-04-22 12:20 ` [PATCH V6 03/12] block: add one helper to free io_context Ming Lei
@ 2021-04-22 12:20 ` Ming Lei
  2021-04-22 13:19   ` Hannes Reinecke
  2021-04-26  7:12   ` Hannes Reinecke
  2021-04-22 12:20 ` [PATCH V6 05/12] block: extract one helper function polling hw queue Ming Lei
                   ` (8 subsequent siblings)
  12 siblings, 2 replies; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Ming Lei, Christoph Hellwig

Prepare for supporting bio based io polling by moving the block polling
code into one dedicated source file. Three shared functions are moved
into the private header blk-mq.h.

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/Makefile   |   3 +-
 block/blk-mq.c   | 230 -----------------------------------------------
 block/blk-mq.h   |  40 +++++++++
 block/blk-poll.c | 196 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 238 insertions(+), 231 deletions(-)
 create mode 100644 block/blk-poll.c

diff --git a/block/Makefile b/block/Makefile
index 8d841f5f986f..d7abe2333407 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -8,7 +8,8 @@ obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-sysfs.o \
 			blk-exec.o blk-merge.o blk-timeout.o \
 			blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \
 			blk-mq-sysfs.o blk-mq-cpumap.o blk-mq-sched.o ioctl.o \
-			genhd.o ioprio.o badblocks.o partitions/ blk-rq-qos.o
+			genhd.o ioprio.o badblocks.o partitions/ blk-rq-qos.o \
+			blk-poll.o
 
 obj-$(CONFIG_BOUNCE)		+= bounce.o
 obj-$(CONFIG_BLK_SCSI_REQUEST)	+= scsi_ioctl.o
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 47e650bb836b..f9162295f4f2 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -43,26 +43,6 @@
 
 static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
 
-static void blk_mq_poll_stats_start(struct request_queue *q);
-static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
-
-static int blk_mq_poll_stats_bkt(const struct request *rq)
-{
-	int ddir, sectors, bucket;
-
-	ddir = rq_data_dir(rq);
-	sectors = blk_rq_stats_sectors(rq);
-
-	bucket = ddir + 2 * ilog2(sectors);
-
-	if (bucket < 0)
-		return -1;
-	else if (bucket >= BLK_MQ_POLL_STATS_BKTS)
-		return ddir + BLK_MQ_POLL_STATS_BKTS - 2;
-
-	return bucket;
-}
-
 /*
  * Check if any of the ctx, dispatch list or elevator
  * have pending work in this hardware queue.
@@ -3726,216 +3706,6 @@ void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues)
 }
 EXPORT_SYMBOL_GPL(blk_mq_update_nr_hw_queues);
 
-/* Enable polling stats and return whether they were already enabled. */
-static bool blk_poll_stats_enable(struct request_queue *q)
-{
-	if (test_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags) ||
-	    blk_queue_flag_test_and_set(QUEUE_FLAG_POLL_STATS, q))
-		return true;
-	blk_stat_add_callback(q, q->poll_cb);
-	return false;
-}
-
-static void blk_mq_poll_stats_start(struct request_queue *q)
-{
-	/*
-	 * We don't arm the callback if polling stats are not enabled or the
-	 * callback is already active.
-	 */
-	if (!test_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags) ||
-	    blk_stat_is_active(q->poll_cb))
-		return;
-
-	blk_stat_activate_msecs(q->poll_cb, 100);
-}
-
-static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb)
-{
-	struct request_queue *q = cb->data;
-	int bucket;
-
-	for (bucket = 0; bucket < BLK_MQ_POLL_STATS_BKTS; bucket++) {
-		if (cb->stat[bucket].nr_samples)
-			q->poll_stat[bucket] = cb->stat[bucket];
-	}
-}
-
-static unsigned long blk_mq_poll_nsecs(struct request_queue *q,
-				       struct request *rq)
-{
-	unsigned long ret = 0;
-	int bucket;
-
-	/*
-	 * If stats collection isn't on, don't sleep but turn it on for
-	 * future users
-	 */
-	if (!blk_poll_stats_enable(q))
-		return 0;
-
-	/*
-	 * As an optimistic guess, use half of the mean service time
-	 * for this type of request. We can (and should) make this smarter.
-	 * For instance, if the completion latencies are tight, we can
-	 * get closer than just half the mean. This is especially
-	 * important on devices where the completion latencies are longer
-	 * than ~10 usec. We do use the stats for the relevant IO size
-	 * if available which does lead to better estimates.
-	 */
-	bucket = blk_mq_poll_stats_bkt(rq);
-	if (bucket < 0)
-		return ret;
-
-	if (q->poll_stat[bucket].nr_samples)
-		ret = (q->poll_stat[bucket].mean + 1) / 2;
-
-	return ret;
-}
-
-static bool blk_mq_poll_hybrid_sleep(struct request_queue *q,
-				     struct request *rq)
-{
-	struct hrtimer_sleeper hs;
-	enum hrtimer_mode mode;
-	unsigned int nsecs;
-	ktime_t kt;
-
-	if (rq->rq_flags & RQF_MQ_POLL_SLEPT)
-		return false;
-
-	/*
-	 * If we get here, hybrid polling is enabled. Hence poll_nsec can be:
-	 *
-	 *  0:	use half of prev avg
-	 * >0:	use this specific value
-	 */
-	if (q->poll_nsec > 0)
-		nsecs = q->poll_nsec;
-	else
-		nsecs = blk_mq_poll_nsecs(q, rq);
-
-	if (!nsecs)
-		return false;
-
-	rq->rq_flags |= RQF_MQ_POLL_SLEPT;
-
-	/*
-	 * This will be replaced with the stats tracking code, using
-	 * 'avg_completion_time / 2' as the pre-sleep target.
-	 */
-	kt = nsecs;
-
-	mode = HRTIMER_MODE_REL;
-	hrtimer_init_sleeper_on_stack(&hs, CLOCK_MONOTONIC, mode);
-	hrtimer_set_expires(&hs.timer, kt);
-
-	do {
-		if (blk_mq_rq_state(rq) == MQ_RQ_COMPLETE)
-			break;
-		set_current_state(TASK_UNINTERRUPTIBLE);
-		hrtimer_sleeper_start_expires(&hs, mode);
-		if (hs.task)
-			io_schedule();
-		hrtimer_cancel(&hs.timer);
-		mode = HRTIMER_MODE_ABS;
-	} while (hs.task && !signal_pending(current));
-
-	__set_current_state(TASK_RUNNING);
-	destroy_hrtimer_on_stack(&hs.timer);
-	return true;
-}
-
-static bool blk_mq_poll_hybrid(struct request_queue *q,
-			       struct blk_mq_hw_ctx *hctx, blk_qc_t cookie)
-{
-	struct request *rq;
-
-	if (q->poll_nsec == BLK_MQ_POLL_CLASSIC)
-		return false;
-
-	if (!blk_qc_t_is_internal(cookie))
-		rq = blk_mq_tag_to_rq(hctx->tags, blk_qc_t_to_tag(cookie));
-	else {
-		rq = blk_mq_tag_to_rq(hctx->sched_tags, blk_qc_t_to_tag(cookie));
-		/*
-		 * With scheduling, if the request has completed, we'll
-		 * get a NULL return here, as we clear the sched tag when
-		 * that happens. The request still remains valid, like always,
-		 * so we should be safe with just the NULL check.
-		 */
-		if (!rq)
-			return false;
-	}
-
-	return blk_mq_poll_hybrid_sleep(q, rq);
-}
-
-/**
- * blk_poll - poll for IO completions
- * @q:  the queue
- * @cookie: cookie passed back at IO submission time
- * @spin: whether to spin for completions
- *
- * Description:
- *    Poll for completions on the passed in queue. Returns number of
- *    completed entries found. If @spin is true, then blk_poll will continue
- *    looping until at least one completion is found, unless the task is
- *    otherwise marked running (or we need to reschedule).
- */
-int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
-{
-	struct blk_mq_hw_ctx *hctx;
-	long state;
-
-	if (!blk_qc_t_valid(cookie) || !blk_queue_poll(q))
-		return 0;
-
-	if (current->plug)
-		blk_flush_plug_list(current->plug, false);
-
-	hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
-
-	/*
-	 * If we sleep, have the caller restart the poll loop to reset
-	 * the state. Like for the other success return cases, the
-	 * caller is responsible for checking if the IO completed. If
-	 * the IO isn't complete, we'll get called again and will go
-	 * straight to the busy poll loop. If specified not to spin,
-	 * we also should not sleep.
-	 */
-	if (spin && blk_mq_poll_hybrid(q, hctx, cookie))
-		return 1;
-
-	hctx->poll_considered++;
-
-	state = current->state;
-	do {
-		int ret;
-
-		hctx->poll_invoked++;
-
-		ret = q->mq_ops->poll(hctx);
-		if (ret > 0) {
-			hctx->poll_success++;
-			__set_current_state(TASK_RUNNING);
-			return ret;
-		}
-
-		if (signal_pending_state(state, current))
-			__set_current_state(TASK_RUNNING);
-
-		if (current->state == TASK_RUNNING)
-			return 1;
-		if (ret < 0 || !spin)
-			break;
-		cpu_relax();
-	} while (!need_resched());
-
-	__set_current_state(TASK_RUNNING);
-	return 0;
-}
-EXPORT_SYMBOL_GPL(blk_poll);
-
 unsigned int blk_mq_rq_cpu(struct request *rq)
 {
 	return rq->mq_ctx->cpu;
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 9ccb1818303b..2eea38cd8048 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -324,5 +324,45 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
 	return __blk_mq_active_requests(hctx) < depth;
 }
 
+static inline int blk_mq_poll_stats_bkt(const struct request *rq)
+{
+	int ddir, sectors, bucket;
+
+	ddir = rq_data_dir(rq);
+	sectors = blk_rq_stats_sectors(rq);
+
+	bucket = ddir + 2 * ilog2(sectors);
+
+	if (bucket < 0)
+		return -1;
+	else if (bucket >= BLK_MQ_POLL_STATS_BKTS)
+		return ddir + BLK_MQ_POLL_STATS_BKTS - 2;
+
+	return bucket;
+}
+
+static inline void blk_mq_poll_stats_start(struct request_queue *q)
+{
+	/*
+	 * We don't arm the callback if polling stats are not enabled or the
+	 * callback is already active.
+	 */
+	if (!test_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags) ||
+	    blk_stat_is_active(q->poll_cb))
+		return;
+
+	blk_stat_activate_msecs(q->poll_cb, 100);
+}
+
+static inline void blk_mq_poll_stats_fn(struct blk_stat_callback *cb)
+{
+	struct request_queue *q = cb->data;
+	int bucket;
+
+	for (bucket = 0; bucket < BLK_MQ_POLL_STATS_BKTS; bucket++) {
+		if (cb->stat[bucket].nr_samples)
+			q->poll_stat[bucket] = cb->stat[bucket];
+	}
+}
 
 #endif
diff --git a/block/blk-poll.c b/block/blk-poll.c
new file mode 100644
index 000000000000..daa307f84792
--- /dev/null
+++ b/block/blk-poll.c
@@ -0,0 +1,196 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/blkdev.h>
+#include <linux/sched.h>
+#include <linux/hrtimer.h>
+
+#include <linux/blk-mq.h>
+#include "blk.h"
+#include "blk-mq.h"
+
+/* Enable polling stats and return whether they were already enabled. */
+static bool blk_poll_stats_enable(struct request_queue *q)
+{
+	if (test_bit(QUEUE_FLAG_POLL_STATS, &q->queue_flags) ||
+	    blk_queue_flag_test_and_set(QUEUE_FLAG_POLL_STATS, q))
+		return true;
+	blk_stat_add_callback(q, q->poll_cb);
+	return false;
+}
+
+static unsigned long blk_mq_poll_nsecs(struct request_queue *q,
+				       struct request *rq)
+{
+	unsigned long ret = 0;
+	int bucket;
+
+	/*
+	 * If stats collection isn't on, don't sleep but turn it on for
+	 * future users
+	 */
+	if (!blk_poll_stats_enable(q))
+		return 0;
+
+	/*
+	 * As an optimistic guess, use half of the mean service time
+	 * for this type of request. We can (and should) make this smarter.
+	 * For instance, if the completion latencies are tight, we can
+	 * get closer than just half the mean. This is especially
+	 * important on devices where the completion latencies are longer
+	 * than ~10 usec. We do use the stats for the relevant IO size
+	 * if available which does lead to better estimates.
+	 */
+	bucket = blk_mq_poll_stats_bkt(rq);
+	if (bucket < 0)
+		return ret;
+
+	if (q->poll_stat[bucket].nr_samples)
+		ret = (q->poll_stat[bucket].mean + 1) / 2;
+
+	return ret;
+}
+
+static bool blk_mq_poll_hybrid_sleep(struct request_queue *q,
+				     struct request *rq)
+{
+	struct hrtimer_sleeper hs;
+	enum hrtimer_mode mode;
+	unsigned int nsecs;
+	ktime_t kt;
+
+	if (rq->rq_flags & RQF_MQ_POLL_SLEPT)
+		return false;
+
+	/*
+	 * If we get here, hybrid polling is enabled. Hence poll_nsec can be:
+	 *
+	 *  0:	use half of prev avg
+	 * >0:	use this specific value
+	 */
+	if (q->poll_nsec > 0)
+		nsecs = q->poll_nsec;
+	else
+		nsecs = blk_mq_poll_nsecs(q, rq);
+
+	if (!nsecs)
+		return false;
+
+	rq->rq_flags |= RQF_MQ_POLL_SLEPT;
+
+	/*
+	 * This will be replaced with the stats tracking code, using
+	 * 'avg_completion_time / 2' as the pre-sleep target.
+	 */
+	kt = nsecs;
+
+	mode = HRTIMER_MODE_REL;
+	hrtimer_init_sleeper_on_stack(&hs, CLOCK_MONOTONIC, mode);
+	hrtimer_set_expires(&hs.timer, kt);
+
+	do {
+		if (blk_mq_rq_state(rq) == MQ_RQ_COMPLETE)
+			break;
+		set_current_state(TASK_UNINTERRUPTIBLE);
+		hrtimer_sleeper_start_expires(&hs, mode);
+		if (hs.task)
+			io_schedule();
+		hrtimer_cancel(&hs.timer);
+		mode = HRTIMER_MODE_ABS;
+	} while (hs.task && !signal_pending(current));
+
+	__set_current_state(TASK_RUNNING);
+	destroy_hrtimer_on_stack(&hs.timer);
+	return true;
+}
+
+static bool blk_mq_poll_hybrid(struct request_queue *q,
+			       struct blk_mq_hw_ctx *hctx, blk_qc_t cookie)
+{
+	struct request *rq;
+
+	if (q->poll_nsec == BLK_MQ_POLL_CLASSIC)
+		return false;
+
+	if (!blk_qc_t_is_internal(cookie))
+		rq = blk_mq_tag_to_rq(hctx->tags, blk_qc_t_to_tag(cookie));
+	else {
+		rq = blk_mq_tag_to_rq(hctx->sched_tags, blk_qc_t_to_tag(cookie));
+		/*
+		 * With scheduling, if the request has completed, we'll
+		 * get a NULL return here, as we clear the sched tag when
+		 * that happens. The request still remains valid, like always,
+		 * so we should be safe with just the NULL check.
+		 */
+		if (!rq)
+			return false;
+	}
+
+	return blk_mq_poll_hybrid_sleep(q, rq);
+}
+
+/**
+ * blk_poll - poll for IO completions
+ * @q:  the queue
+ * @cookie: cookie passed back at IO submission time
+ * @spin: whether to spin for completions
+ *
+ * Description:
+ *    Poll for completions on the passed in queue. Returns number of
+ *    completed entries found. If @spin is true, then blk_poll will continue
+ *    looping until at least one completion is found, unless the task is
+ *    otherwise marked running (or we need to reschedule).
+ */
+int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
+{
+	struct blk_mq_hw_ctx *hctx;
+	long state;
+
+	if (!blk_qc_t_valid(cookie) || !blk_queue_poll(q))
+		return 0;
+
+	if (current->plug)
+		blk_flush_plug_list(current->plug, false);
+
+	hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
+
+	/*
+	 * If we sleep, have the caller restart the poll loop to reset
+	 * the state. Like for the other success return cases, the
+	 * caller is responsible for checking if the IO completed. If
+	 * the IO isn't complete, we'll get called again and will go
+	 * straight to the busy poll loop. If specified not to spin,
+	 * we also should not sleep.
+	 */
+	if (spin && blk_mq_poll_hybrid(q, hctx, cookie))
+		return 1;
+
+	hctx->poll_considered++;
+
+	state = current->state;
+	do {
+		int ret;
+
+		hctx->poll_invoked++;
+
+		ret = q->mq_ops->poll(hctx);
+		if (ret > 0) {
+			hctx->poll_success++;
+			__set_current_state(TASK_RUNNING);
+			return ret;
+		}
+
+		if (signal_pending_state(state, current))
+			__set_current_state(TASK_RUNNING);
+
+		if (current->state == TASK_RUNNING)
+			return 1;
+		if (ret < 0 || !spin)
+			break;
+		cpu_relax();
+	} while (!need_resched());
+
+	__set_current_state(TASK_RUNNING);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(blk_poll);
-- 
2.29.2



* [PATCH V6 05/12] block: extract one helper function polling hw queue
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
                   ` (3 preceding siblings ...)
  2021-04-22 12:20 ` [PATCH V6 04/12] block: move block polling code into one dedicated source file Ming Lei
@ 2021-04-22 12:20 ` Ming Lei
  2021-04-22 12:20 ` [PATCH V6 06/12] block: prepare for supporting bio_list via other link Ming Lei
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Christoph Hellwig, Ming Lei

From: Jeffle Xu <jefflexu@linux.alibaba.com>

Extract the logic of polling a hw queue, together with the related
statistics handling, into a helper function.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-poll.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/block/blk-poll.c b/block/blk-poll.c
index daa307f84792..0a38c25bcee5 100644
--- a/block/blk-poll.c
+++ b/block/blk-poll.c
@@ -129,6 +129,19 @@ static bool blk_mq_poll_hybrid(struct request_queue *q,
 	return blk_mq_poll_hybrid_sleep(q, rq);
 }
 
+static inline int blk_mq_poll_hctx(struct request_queue *q,
+				   struct blk_mq_hw_ctx *hctx)
+{
+	int ret;
+
+	hctx->poll_invoked++;
+	ret = q->mq_ops->poll(hctx);
+	if (ret > 0)
+		hctx->poll_success++;
+
+	return ret;
+}
+
 /**
  * blk_poll - poll for IO completions
  * @q:  the queue
@@ -171,11 +184,8 @@ int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
 	do {
 		int ret;
 
-		hctx->poll_invoked++;
-
-		ret = q->mq_ops->poll(hctx);
+		ret = blk_mq_poll_hctx(q, hctx);
 		if (ret > 0) {
-			hctx->poll_success++;
 			__set_current_state(TASK_RUNNING);
 			return ret;
 		}
-- 
2.29.2



* [PATCH V6 06/12] block: prepare for supporting bio_list via other link
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
                   ` (4 preceding siblings ...)
  2021-04-22 12:20 ` [PATCH V6 05/12] block: extract one helper function polling hw queue Ming Lei
@ 2021-04-22 12:20 ` Ming Lei
  2021-04-22 12:20 ` [PATCH V6 07/12] block: create io poll context for submission and poll task Ming Lei
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Ming Lei

So far the bio list helpers always use .bi_next to traverse the list;
we will support linking bios via another bio field.

Prepare for such support by adding a macro so that users can define
another set of helpers for linking bios via a different bio field.
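
As a usage sketch, a later patch in this series instantiates a second
helper set that links bios via the new poll field (the instantiation
below mirrors the helper names used in patch 09; treat the exact
spelling as illustrative):

	/*
	 * Generates __bio_grp_list_add(), __bio_grp_list_merge(),
	 * __bio_grp_list_pop(), ... which traverse via bio->bi_poll
	 * instead of bio->bi_next.
	 */
	BIO_LIST_HELPERS(__bio_grp_list, poll);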

Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 include/linux/bio.h | 132 +++++++++++++++++++++++---------------------
 1 file changed, 68 insertions(+), 64 deletions(-)

diff --git a/include/linux/bio.h b/include/linux/bio.h
index a0b4cfdf62a4..c95f0e4fe530 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -602,75 +602,11 @@ static inline unsigned bio_list_size(const struct bio_list *bl)
 	return sz;
 }
 
-static inline void bio_list_add(struct bio_list *bl, struct bio *bio)
-{
-	bio->bi_next = NULL;
-
-	if (bl->tail)
-		bl->tail->bi_next = bio;
-	else
-		bl->head = bio;
-
-	bl->tail = bio;
-}
-
-static inline void bio_list_add_head(struct bio_list *bl, struct bio *bio)
-{
-	bio->bi_next = bl->head;
-
-	bl->head = bio;
-
-	if (!bl->tail)
-		bl->tail = bio;
-}
-
-static inline void bio_list_merge(struct bio_list *bl, struct bio_list *bl2)
-{
-	if (!bl2->head)
-		return;
-
-	if (bl->tail)
-		bl->tail->bi_next = bl2->head;
-	else
-		bl->head = bl2->head;
-
-	bl->tail = bl2->tail;
-}
-
-static inline void bio_list_merge_head(struct bio_list *bl,
-				       struct bio_list *bl2)
-{
-	if (!bl2->head)
-		return;
-
-	if (bl->head)
-		bl2->tail->bi_next = bl->head;
-	else
-		bl->tail = bl2->tail;
-
-	bl->head = bl2->head;
-}
-
 static inline struct bio *bio_list_peek(struct bio_list *bl)
 {
 	return bl->head;
 }
 
-static inline struct bio *bio_list_pop(struct bio_list *bl)
-{
-	struct bio *bio = bl->head;
-
-	if (bio) {
-		bl->head = bl->head->bi_next;
-		if (!bl->head)
-			bl->tail = NULL;
-
-		bio->bi_next = NULL;
-	}
-
-	return bio;
-}
-
 static inline struct bio *bio_list_get(struct bio_list *bl)
 {
 	struct bio *bio = bl->head;
@@ -680,6 +616,74 @@ static inline struct bio *bio_list_get(struct bio_list *bl)
 	return bio;
 }
 
+#define BIO_LIST_HELPERS(_pre, link)					\
+									\
+static inline void _pre##_add(struct bio_list *bl, struct bio *bio)	\
+{									\
+	bio->bi_##link = NULL;						\
+									\
+	if (bl->tail)							\
+		bl->tail->bi_##link = bio;				\
+	else								\
+		bl->head = bio;						\
+									\
+	bl->tail = bio;							\
+}									\
+									\
+static inline void _pre##_add_head(struct bio_list *bl, struct bio *bio) \
+{									\
+	bio->bi_##link = bl->head;					\
+									\
+	bl->head = bio;							\
+									\
+	if (!bl->tail)							\
+		bl->tail = bio;						\
+}									\
+									\
+static inline void _pre##_merge(struct bio_list *bl, struct bio_list *bl2) \
+{									\
+	if (!bl2->head)							\
+		return;							\
+									\
+	if (bl->tail)							\
+		bl->tail->bi_##link = bl2->head;			\
+	else								\
+		bl->head = bl2->head;					\
+									\
+	bl->tail = bl2->tail;						\
+}									\
+									\
+static inline void _pre##_merge_head(struct bio_list *bl,		\
+				       struct bio_list *bl2)		\
+{									\
+	if (!bl2->head)							\
+		return;							\
+									\
+	if (bl->head)							\
+		bl2->tail->bi_##link = bl->head;			\
+	else								\
+		bl->tail = bl2->tail;					\
+									\
+	bl->head = bl2->head;						\
+}									\
+									\
+static inline struct bio *_pre##_pop(struct bio_list *bl)		\
+{									\
+	struct bio *bio = bl->head;					\
+									\
+	if (bio) {							\
+		bl->head = bl->head->bi_##link;				\
+		if (!bl->head)						\
+			bl->tail = NULL;				\
+									\
+		bio->bi_##link = NULL;					\
+	}								\
+									\
+	return bio;							\
+}									\
+
+BIO_LIST_HELPERS(bio_list, next);
+
 /*
  * Increment chain count for the bio. Make sure the CHAIN flag update
  * is visible before the raised count.
-- 
2.29.2



* [PATCH V6 07/12] block: create io poll context for submission and poll task
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
                   ` (5 preceding siblings ...)
  2021-04-22 12:20 ` [PATCH V6 06/12] block: prepare for supporting bio_list via other link Ming Lei
@ 2021-04-22 12:20 ` Ming Lei
  2021-04-22 12:20 ` [PATCH V6 08/12] block: add req flag of REQ_POLL_CTX Ming Lei
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Ming Lei

Create a per-task io poll context for both the IO submission and poll
tasks if the queue is bio based and supports polling.

This io polling context includes two queues:

1) submission queue (sq) for storing HIPRI bios, written by the
   submission task and read by the poll task.
2) polling queue (pq) for holding data moved from the sq; it is only
   used in the poll context for running bio polling.

Following patches will support bio based io polling.
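
The whole context lives in a single allocation, carved up by
bio_poll_ctx_init() in the diff below; its layout is roughly:

	/*
	 * One kzalloc() of:
	 *   sizeof(struct blk_bio_poll_ctx)
	 *     + bio_grp_list_size(BLK_BIO_POLL_SQ_SZ)   - sq, 16 groups
	 *     + bio_grp_list_size(BLK_BIO_POLL_PQ_SZ)   - pq, 32 groups
	 *
	 *   +-------------------+------------------+------------------+
	 *   | blk_bio_poll_ctx  | sq: bio_grp_list | pq: bio_grp_list |
	 *   | locks, sq/pq ptrs | head[16]         | head[32]         |
	 *   +-------------------+------------------+------------------+
	 */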

Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c          | 24 +++++++++-------
 block/blk-ioc.c           |  1 +
 block/blk-poll.c          | 51 +++++++++++++++++++++++++++++++++
 block/blk.h               | 60 +++++++++++++++++++++++++++++++++++++++
 include/linux/iocontext.h |  2 ++
 5 files changed, 127 insertions(+), 11 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d44a8b934608..5830ef4d733e 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -868,8 +868,19 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 		}
 	}
 
-	if (!blk_queue_poll(q))
-		bio->bi_opf &= ~REQ_HIPRI;
+	/*
+	 * Various block parts want %current->io_context, so allocate it up
+	 * front rather than dealing with lots of pain to allocate it only
+	 * where needed. This may fail and the block layer knows how to live
+	 * with it.
+	 */
+	if (unlikely(!current->io_context))
+		create_task_io_context(current, GFP_ATOMIC, q->node);
+
+	if ((bio->bi_opf & REQ_HIPRI) && blk_queue_support_bio_poll(q))
+		blk_create_io_poll_context(q);
+
+	blk_poll_prepare(q, bio);
 
 	switch (bio_op(bio)) {
 	case REQ_OP_DISCARD:
@@ -908,15 +919,6 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 		break;
 	}
 
-	/*
-	 * Various block parts want %current->io_context, so allocate it up
-	 * front rather than dealing with lots of pain to allocate it only
-	 * where needed. This may fail and the block layer knows how to live
-	 * with it.
-	 */
-	if (unlikely(!current->io_context))
-		create_task_io_context(current, GFP_ATOMIC, q->node);
-
 	if (blk_throtl_bio(bio)) {
 		blkcg_bio_issue_init(bio);
 		return false;
diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index b0cde18c4b8c..5574c398eff6 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -19,6 +19,7 @@ static struct kmem_cache *iocontext_cachep;
 
 static inline void free_io_context(struct io_context *ioc)
 {
+	kfree(ioc->data);
 	kmem_cache_free(iocontext_cachep, ioc);
 }
 
diff --git a/block/blk-poll.c b/block/blk-poll.c
index 0a38c25bcee5..8e4bec55293e 100644
--- a/block/blk-poll.c
+++ b/block/blk-poll.c
@@ -4,11 +4,14 @@
 #include <linux/blkdev.h>
 #include <linux/sched.h>
 #include <linux/hrtimer.h>
+#include <linux/bio.h>
 
 #include <linux/blk-mq.h>
 #include "blk.h"
 #include "blk-mq.h"
 
+static int blk_bio_poll(struct request_queue *q, blk_qc_t cookie, bool spin);
+
 /* Enable polling stats and return whether they were already enabled. */
 static bool blk_poll_stats_enable(struct request_queue *q)
 {
@@ -165,6 +168,9 @@ int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
 	if (current->plug)
 		blk_flush_plug_list(current->plug, false);
 
+	if (!queue_is_mq(q))
+		return blk_bio_poll(q, cookie, spin);
+
 	hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
 
 	/*
@@ -204,3 +210,48 @@ int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
 	return 0;
 }
 EXPORT_SYMBOL_GPL(blk_poll);
+
+/* bio based io polling */
+static int blk_bio_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
+{
+	/*
+	 * Create poll queue for storing poll bio and its cookie from
+	 * submission queue
+	 */
+	blk_create_io_poll_context(q);
+
+	return 0;
+}
+
+static inline unsigned int bio_grp_list_size(unsigned int nr_grps)
+{
+	return sizeof(struct bio_grp_list) + nr_grps *
+		sizeof(struct bio_grp_list_data);
+}
+
+static void bio_poll_ctx_init(struct blk_bio_poll_ctx *pc)
+{
+	pc->sq = (void *)pc + sizeof(*pc);
+	pc->sq->max_nr_grps = BLK_BIO_POLL_SQ_SZ;
+
+	pc->pq = (void *)pc->sq + bio_grp_list_size(BLK_BIO_POLL_SQ_SZ);
+	pc->pq->max_nr_grps = BLK_BIO_POLL_PQ_SZ;
+
+	spin_lock_init(&pc->sq_lock);
+	spin_lock_init(&pc->pq_lock);
+}
+
+void bio_poll_ctx_alloc(struct io_context *ioc)
+{
+	struct blk_bio_poll_ctx *pc;
+	unsigned int size = sizeof(*pc) +
+		bio_grp_list_size(BLK_BIO_POLL_SQ_SZ) +
+		bio_grp_list_size(BLK_BIO_POLL_PQ_SZ);
+
+	pc = kzalloc(size, GFP_ATOMIC);
+	if (pc) {
+		bio_poll_ctx_init(pc);
+		if (cmpxchg(&ioc->data, NULL, (void *)pc))
+			kfree(pc);
+	}
+}
diff --git a/block/blk.h b/block/blk.h
index d88b0823738c..bc6d63ae36b7 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -352,4 +352,64 @@ int bio_add_hw_page(struct request_queue *q, struct bio *bio,
 		struct page *page, unsigned int len, unsigned int offset,
 		unsigned int max_sectors, bool *same_page);
 
+/* Grouping bios that share same data into one list */
+struct bio_grp_list_data {
+	void *grp_data;
+
+	/* all bios in this list share same 'grp_data' */
+	struct bio_list list;
+};
+
+struct bio_grp_list {
+	unsigned int max_nr_grps, nr_grps;
+	struct bio_grp_list_data head[0];
+};
+
+struct blk_bio_poll_ctx {
+	spinlock_t sq_lock;
+	struct bio_grp_list *sq;
+
+	spinlock_t pq_lock;
+	struct bio_grp_list *pq;
+};
+
+#define BLK_BIO_POLL_SQ_SZ		16U
+#define BLK_BIO_POLL_PQ_SZ		(BLK_BIO_POLL_SQ_SZ * 2)
+
+void bio_poll_ctx_alloc(struct io_context *ioc);
+
+static inline bool blk_queue_support_bio_poll(struct request_queue *q)
+{
+	return !queue_is_mq(q) && blk_queue_poll(q);
+}
+
+static inline struct blk_bio_poll_ctx *blk_get_bio_poll_ctx(void)
+{
+	struct io_context *ioc = current->io_context;
+
+	return ioc ? ioc->data : NULL;
+}
+
+static inline void blk_poll_prepare(struct request_queue *q,
+		struct bio *bio)
+{
+	if (!(bio->bi_opf & REQ_HIPRI))
+		return;
+
+	if (!blk_queue_poll(q) || (!queue_is_mq(q) && !blk_get_bio_poll_ctx()))
+		bio->bi_opf &= ~REQ_HIPRI;
+}
+
+static inline void blk_create_io_poll_context(struct request_queue *q)
+{
+	struct io_context *ioc;
+
+	if (unlikely(!current->io_context))
+		create_task_io_context(current, GFP_ATOMIC, q->node);
+
+	ioc = current->io_context;
+	if (unlikely(ioc && !ioc->data))
+		bio_poll_ctx_alloc(ioc);
+}
+
 #endif /* BLK_INTERNAL_H */
diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
index 0a9dc40b7be8..f9a467571356 100644
--- a/include/linux/iocontext.h
+++ b/include/linux/iocontext.h
@@ -110,6 +110,8 @@ struct io_context {
 	struct io_cq __rcu	*icq_hint;
 	struct hlist_head	icq_list;
 
+	void			*data;
+
 	struct work_struct release_work;
 };
 
-- 
2.29.2



* [PATCH V6 08/12] block: add req flag of REQ_POLL_CTX
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
                   ` (6 preceding siblings ...)
  2021-04-22 12:20 ` [PATCH V6 07/12] block: create io poll context for submission and poll task Ming Lei
@ 2021-04-22 12:20 ` Ming Lei
  2021-04-22 12:20 ` [PATCH V6 09/12] block: use per-task poll context to implement bio based io polling Ming Lei
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Ming Lei

Add one req flag, REQ_POLL_CTX, which will be used in the following
patch for supporting bio based IO polling.

Specifically, this flag helps us to:

1) The request flags are cloned in __bio_clone_fast(), so if we mark
one FS bio as REQ_POLL_CTX, all bios cloned from this FS bio will be
marked as REQ_POLL_CTX too.

2) Create a per-task io polling context if the bio based queue supports
polling and the submitted bio is HIPRI. The per-task io poll context is
created during submit_bio(), before marking this HIPRI bio as
REQ_POLL_CTX. This lets us avoid creating such an io polling context
when one cloned bio already marked REQ_POLL_CTX is submitted from
another kernel context.

3) For supporting bio based io polling, we need to poll IOs from all
underlying queues of the bio device; this flag helps us recognize which
IOs need to be polled in bio based style, which will be applied in the
following patch.
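
For case 3), the blk.h comment in the diff below also notes that a bio
based driver may opt an internally allocated bio into polling by setting
both flags itself; a hedged driver-side sketch (only the flags come from
this patch, the surrounding driver code is hypothetical):

	/*
	 * A bio based driver that builds 'clone' on behalf of an FS bio
	 * and submits it from the current (submission) context can let
	 * it be completed via blk_poll():
	 */
	clone->bi_opf |= REQ_HIPRI | REQ_POLL_CTX;
	submit_bio(clone);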

Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c          |  7 ++++++-
 block/blk.h               | 21 ++++++++++++++++++++-
 include/linux/blk_types.h |  4 ++++
 3 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 5830ef4d733e..ad57e04d5297 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -877,7 +877,12 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 	if (unlikely(!current->io_context))
 		create_task_io_context(current, GFP_ATOMIC, q->node);
 
-	if ((bio->bi_opf & REQ_HIPRI) && blk_queue_support_bio_poll(q))
+	/*
+	 * If REQ_POLL_CTX isn't set for this HIPRI bio, we think it
+	 * originated from FS and allocate io polling context.
+	 */
+	if ((bio->bi_opf & REQ_HIPRI) && !(bio->bi_opf & REQ_POLL_CTX) &&
+			blk_queue_support_bio_poll(q))
 		blk_create_io_poll_context(q);
 
 	blk_poll_prepare(q, bio);
diff --git a/block/blk.h b/block/blk.h
index bc6d63ae36b7..47f60612957a 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -393,11 +393,30 @@ static inline struct blk_bio_poll_ctx *blk_get_bio_poll_ctx(void)
 static inline void blk_poll_prepare(struct request_queue *q,
 		struct bio *bio)
 {
+	bool mq;
+
 	if (!(bio->bi_opf & REQ_HIPRI))
 		return;
 
-	if (!blk_queue_poll(q) || (!queue_is_mq(q) && !blk_get_bio_poll_ctx()))
+	/*
+	 * Can't support bio based IO polling without per-task poll ctx
+	 *
+	 * We have created per-task io poll context, and mark this
+	 * bio as REQ_POLL_CTX, so: 1) if any cloned bio from this bio is
+	 * submitted from another kernel context, we won't create bio
+	 * poll context for it, and that bio can be completed by IRQ;
+	 * 2) If such bio is submitted from current context, we will
+	 * complete it via blk_poll(); 3) If driver knows that one
+	 * underlying bio allocated from driver is for FS bio, meantime
+	 * it is submitted in current context, driver can mark such bio
+	 * as REQ_HIPRI & REQ_POLL_CTX manually, so the bio can be completed
+	 * via blk_poll too.
+	 */
+	mq = queue_is_mq(q);
+	if (!blk_queue_poll(q) || (!mq && !blk_get_bio_poll_ctx()))
 		bio->bi_opf &= ~REQ_HIPRI;
+	else if (!mq)
+		bio->bi_opf |= REQ_POLL_CTX;
 }
 
 static inline void blk_create_io_poll_context(struct request_queue *q)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index db026b6ec15a..99160d588c2d 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -394,6 +394,9 @@ enum req_flag_bits {
 
 	__REQ_HIPRI,
 
+	/* for marking IOs originated from same FS bio in same context */
+	__REQ_POLL_CTX,
+
 	/* for driver use */
 	__REQ_DRV,
 	__REQ_SWAP,		/* swapping request. */
@@ -418,6 +421,7 @@ enum req_flag_bits {
 
 #define REQ_NOUNMAP		(1ULL << __REQ_NOUNMAP)
 #define REQ_HIPRI		(1ULL << __REQ_HIPRI)
+#define REQ_POLL_CTX			(1ULL << __REQ_POLL_CTX)
 
 #define REQ_DRV			(1ULL << __REQ_DRV)
 #define REQ_SWAP		(1ULL << __REQ_SWAP)
-- 
2.29.2



* [PATCH V6 09/12] block: use per-task poll context to implement bio based io polling
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
                   ` (7 preceding siblings ...)
  2021-04-22 12:20 ` [PATCH V6 08/12] block: add req flag of REQ_POLL_CTX Ming Lei
@ 2021-04-22 12:20 ` Ming Lei
  2021-04-26  7:17   ` Hannes Reinecke
  2021-04-22 12:20 ` [PATCH V6 10/12] block: limit hw queues to be polled in each blk_poll() Ming Lei
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Ming Lei

Currently bio based IO polling needs to poll all hw queues blindly,
which is very inefficient; one big reason is that we can't pass any bio
submission result to blk_poll().

In the IO submission context, track the associated underlying bios via
the per-task submission queue, and store the returned 'cookie' in
bio->bi_poll_data, which is added by filling a hole in .bi_iter; return
current->pid to the caller of submit_bio() for any bio based driver's
IO that is submitted from FS.

In the IO poll context, the passed cookie tells us the PID of the
submission context, so we can find bios in the per-task io poll context
of that submission context. Move bios from the submission queue to the
poll queue of the poll context, and keep polling until these bios are
ended; remove a bio from the poll queue once it is ended. Add the bio
flags BIO_DONE and BIO_END_BY_POLL for this purpose.

It was found in Jeffle Xu's test that kfifo doesn't scale well for a
submission queue as queue depth is increased, so a new mechanism for
tracking bios is needed. A bio's size is already close to two cacheline
sizes, and adding a new field to bio just for tracking bios via a
linked list may not be accepted, so switch to a bio group list for
tracking bios: the idea is to reuse .bi_end_io for linking into one
list all bios that share the same .bi_end_io (call it a bio group),
and .bi_end_io is recovered before really ending the bio; the
BIO_END_BY_POLL flag is added for enforcing this point. Usually
.bi_end_io is the same for all bios in the same layer, so it is enough
to provide a very limited number of groups, such as 16 or fewer, for
fixing the scalability issue.

Usually submission shares its context with io poll. The per-task poll
context is just like a stack variable, and it is cheap to move data
between the two per-task queues.

Also, when the submission task is exiting, drain the pending IOs in its
context until all are done.
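
A compressed view of the .bi_end_io trick described above, taken from
bio_grp_list_add() and blk_bio_poll_and_end_io() in the diff below (a
sketch of the sequence, not new code):

	/*
	 * Queue: the shared ->bi_end_io value moves into the group head
	 * as grp->grp_data, and the same per-bio storage is then reused
	 * as the ->bi_poll link of the group's bio list.
	 */
	grp->grp_data = bio_grp_data(bio);
	__bio_grp_list_add(&grp->list, bio);

	/*
	 * Reap: once BIO_DONE is observed, restore the original
	 * ->bi_end_io before really ending the bio.
	 */
	bio->bi_poll = grp->grp_data;
	bio_clear_flag(bio, BIO_END_BY_POLL);
	bio_endio(bio);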

Tested-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/bio.c               |   5 +
 block/blk-core.c          |  39 ++++-
 block/blk-ioc.c           |   3 +
 block/blk-poll.c          | 345 +++++++++++++++++++++++++++++++++++++-
 block/blk.h               |  33 ++++
 include/linux/blk_types.h |  27 ++-
 6 files changed, 448 insertions(+), 4 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 303298996afe..3cf9cf4479db 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1365,6 +1365,11 @@ static inline bool bio_remaining_done(struct bio *bio)
  **/
 void bio_endio(struct bio *bio)
 {
+	/* BIO_END_BY_POLL has to be set before calling submit_bio */
+	if (bio_flagged(bio, BIO_END_BY_POLL)) {
+		bio_set_flag(bio, BIO_DONE);
+		return;
+	}
 again:
 	if (!bio_remaining_done(bio))
 		return;
diff --git a/block/blk-core.c b/block/blk-core.c
index ad57e04d5297..06acb233e606 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -982,7 +982,7 @@ static blk_qc_t __submit_bio(struct bio *bio)
  * bio_list_on_stack[1] contains bios that were submitted before the current
  *	->submit_bio_bio, but that haven't been processed yet.
  */
-static blk_qc_t __submit_bio_noacct(struct bio *bio)
+static blk_qc_t __submit_bio_noacct_ctx(struct bio *bio, struct io_context *ioc)
 {
 	struct bio_list bio_list_on_stack[2];
 	blk_qc_t ret = BLK_QC_T_NONE;
@@ -1005,7 +1005,15 @@ static blk_qc_t __submit_bio_noacct(struct bio *bio)
 		bio_list_on_stack[1] = bio_list_on_stack[0];
 		bio_list_init(&bio_list_on_stack[0]);
 
-		ret = __submit_bio(bio);
+		if (ioc && queue_is_mq(q) && (bio->bi_opf & REQ_HIPRI)) {
+			bool queued = blk_bio_poll_prep_submit(ioc, bio);
+
+			ret = __submit_bio(bio);
+			if (queued)
+				bio_set_poll_data(bio, ret);
+		} else {
+			ret = __submit_bio(bio);
+		}
 
 		/*
 		 * Sort new bios into those for a lower level and those for the
@@ -1031,6 +1039,33 @@ static blk_qc_t __submit_bio_noacct(struct bio *bio)
 	return ret;
 }
 
+static inline blk_qc_t __submit_bio_noacct_poll(struct bio *bio,
+		struct io_context *ioc)
+{
+	struct blk_bio_poll_ctx *pc = ioc->data;
+
+	__submit_bio_noacct_ctx(bio, ioc);
+
+	/* bio submissions queued to per-task poll context */
+	if (READ_ONCE(pc->sq->nr_grps))
+		return current->pid;
+
+	/* swapper's pid is 0, but it can't submit poll IO for us */
+	return BLK_QC_T_BIO_NONE;
+}
+
+static inline blk_qc_t __submit_bio_noacct(struct bio *bio)
+{
+	struct io_context *ioc = current->io_context;
+
+	if (ioc && ioc->data && (bio->bi_opf & REQ_HIPRI))
+		return __submit_bio_noacct_poll(bio, ioc);
+
+	__submit_bio_noacct_ctx(bio, NULL);
+
+	return BLK_QC_T_BIO_NONE;
+}
+
 static blk_qc_t __submit_bio_noacct_mq(struct bio *bio)
 {
 	struct bio_list bio_list[2] = { };
diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 5574c398eff6..c1fd7c593a54 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -211,6 +211,9 @@ void exit_io_context(struct task_struct *task)
 	task->io_context = NULL;
 	task_unlock(task);
 
+	/* drain io poll submissions */
+	blk_bio_poll_io_drain(ioc);
+
 	atomic_dec(&ioc->nr_tasks);
 	put_io_context_active(ioc);
 }
diff --git a/block/blk-poll.c b/block/blk-poll.c
index 8e4bec55293e..249d73ff6f81 100644
--- a/block/blk-poll.c
+++ b/block/blk-poll.c
@@ -162,7 +162,7 @@ int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
 	struct blk_mq_hw_ctx *hctx;
 	long state;
 
-	if (!blk_qc_t_valid(cookie) || !blk_queue_poll(q))
+	if (queue_is_mq(q) && (!blk_qc_t_valid(cookie) || !blk_queue_poll(q)))
 		return 0;
 
 	if (current->plug)
@@ -212,14 +212,330 @@ int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
 EXPORT_SYMBOL_GPL(blk_poll);
 
 /* bio based io polling */
+static inline void *bio_grp_data(struct bio *bio)
+{
+	return bio->bi_poll;
+}
+
+/* add bio into bio group list, return true if it is added */
+static bool bio_grp_list_add(struct bio_grp_list *list, struct bio *bio)
+{
+	int i;
+	struct bio_grp_list_data *grp;
+
+	for (i = 0; i < list->nr_grps; i++) {
+		grp = &list->head[i];
+		if (grp->grp_data == bio_grp_data(bio)) {
+			__bio_grp_list_add(&grp->list, bio);
+			return true;
+		}
+	}
+
+	if (i == list->max_nr_grps)
+		return false;
+
+	/* create a new group */
+	grp = &list->head[i];
+	bio_list_init(&grp->list);
+	grp->grp_data = bio_grp_data(bio);
+	__bio_grp_list_add(&grp->list, bio);
+	list->nr_grps++;
+
+	return true;
+}
+
+static int bio_grp_list_find_grp(struct bio_grp_list *list, void *grp_data)
+{
+	int i;
+	struct bio_grp_list_data *grp;
+
+	for (i = 0; i < list->nr_grps; i++) {
+		grp = &list->head[i];
+		if (grp->grp_data == grp_data)
+			return i;
+	}
+
+	if (i < list->max_nr_grps) {
+		grp = &list->head[i];
+		bio_list_init(&grp->list);
+		return i;
+	}
+
+	return -1;
+}
+
+/* Move as many as possible groups from 'src' to 'dst' */
+static void bio_grp_list_move(struct bio_grp_list *dst,
+		struct bio_grp_list *src)
+{
+	int i, j, cnt = 0;
+	struct bio_grp_list_data *grp;
+
+	for (i = src->nr_grps - 1; i >= 0; i--) {
+		grp = &src->head[i];
+		j = bio_grp_list_find_grp(dst, grp->grp_data);
+		if (j < 0)
+			break;
+		if (bio_grp_list_grp_empty(&dst->head[j])) {
+			dst->head[j].grp_data = grp->grp_data;
+			dst->nr_grps++;
+		}
+		__bio_grp_list_merge(&dst->head[j].list, &grp->list);
+		bio_list_init(&grp->list);
+		cnt++;
+	}
+
+	src->nr_grps -= cnt;
+}
+
+static int blk_mq_poll_io(struct bio *bio)
+{
+	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
+	blk_qc_t cookie = bio_get_poll_data(bio);
+	int ret = 0;
+
+	/* wait until the bio is submitted really */
+	if (!blk_qc_t_ready(cookie))
+		return 0;
+
+	if (!bio_flagged(bio, BIO_DONE) && blk_qc_t_valid(cookie)) {
+		struct blk_mq_hw_ctx *hctx =
+			q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
+
+		ret += blk_mq_poll_hctx(q, hctx);
+	}
+	return ret;
+}
+
+static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
+{
+	int ret = 0;
+	int i;
+
+	/*
+	 * Poll hw queue first.
+	 *
+	 * TODO: limit max poll times and make sure to not poll same
+	 * hw queue one more time.
+	 */
+	for (i = 0; i < grps->nr_grps; i++) {
+		struct bio_grp_list_data *grp = &grps->head[i];
+		struct bio *bio;
+
+		if (bio_grp_list_grp_empty(grp))
+			continue;
+
+		for (bio = grp->list.head; bio; bio = bio->bi_poll)
+			ret += blk_mq_poll_io(bio);
+	}
+
+	/* reap bios */
+	for (i = 0; i < grps->nr_grps; i++) {
+		struct bio_grp_list_data *grp = &grps->head[i];
+		struct bio *bio;
+		struct bio_list bl;
+
+		if (bio_grp_list_grp_empty(grp))
+			continue;
+
+		bio_list_init(&bl);
+
+		while ((bio = __bio_grp_list_pop(&grp->list))) {
+			if (bio_flagged(bio, BIO_DONE)) {
+				/* now recover original data */
+				bio->bi_poll = grp->grp_data;
+
+				/* clear BIO_END_BY_POLL and end me really */
+				bio_clear_flag(bio, BIO_END_BY_POLL);
+				bio_endio(bio);
+			} else {
+				__bio_grp_list_add(&bl, bio);
+			}
+		}
+		__bio_grp_list_merge(&grp->list, &bl);
+	}
+	return ret;
+}
+
+static void blk_bio_poll_pack_groups(struct bio_grp_list *grps)
+{
+	int i, j, k = 0;
+	int cnt = 0;
+
+	for (i = grps->nr_grps - 1; i >= 0; i--) {
+		struct bio_grp_list_data *grp = &grps->head[i];
+		struct bio_grp_list_data *hole = NULL;
+
+		if (bio_grp_list_grp_empty(grp)) {
+			cnt++;
+			continue;
+		}
+
+		for (j = k; j < i; j++) {
+			if (bio_grp_list_grp_empty(&grps->head[j])) {
+				hole = &grps->head[j];
+				break;
+			}
+		}
+		if (hole == NULL)
+			break;
+		*hole = *grp;
+		cnt++;
+		k = j;
+	}
+
+	grps->nr_grps -= cnt;
+}
+
+#define  MAX_BIO_GRPS_ON_STACK  8
+struct bio_grp_list_stack {
+	unsigned int max_nr_grps, nr_grps;
+	struct bio_grp_list_data head[MAX_BIO_GRPS_ON_STACK];
+};
+
+static int blk_bio_poll_io(struct io_context *submit_ioc)
+
+{
+	struct bio_grp_list_stack _bio_grps = {
+		.max_nr_grps	= ARRAY_SIZE(_bio_grps.head),
+		.nr_grps	= 0
+	};
+	struct bio_grp_list *bio_grps = (struct bio_grp_list *)&_bio_grps;
+	struct blk_bio_poll_ctx *submit_ctx = submit_ioc->data;
+	struct blk_bio_poll_ctx *poll_ctx = blk_get_bio_poll_ctx();
+	int ret = 0;
+
+	/*
+	 * Move IO submission result from submission queue in submission
+	 * context to poll queue of poll context.
+	 */
+	if (READ_ONCE(submit_ctx->sq->nr_grps) > 0) {
+		spin_lock(&submit_ctx->sq_lock);
+		bio_grp_list_move(bio_grps, submit_ctx->sq);
+		spin_unlock(&submit_ctx->sq_lock);
+	}
+
+	/* merge new bios first, then start to poll bios from pq */
+	if (poll_ctx) {
+		spin_lock(&poll_ctx->pq_lock);
+		bio_grp_list_move(poll_ctx->pq, bio_grps);
+		bio_grp_list_move(bio_grps, poll_ctx->pq);
+		spin_unlock(&poll_ctx->pq_lock);
+	}
+
+	do {
+		ret += blk_bio_poll_and_end_io(bio_grps);
+		blk_bio_poll_pack_groups(bio_grps);
+
+		if (bio_grps->nr_grps) {
+			/*
+			 * move back, and keep polling until all can be
+			 * held in either poll queue or submission queue.
+			 */
+			if (poll_ctx) {
+				spin_lock(&poll_ctx->pq_lock);
+				bio_grp_list_move(poll_ctx->pq, bio_grps);
+				spin_unlock(&poll_ctx->pq_lock);
+			} else {
+				spin_lock(&submit_ctx->sq_lock);
+				bio_grp_list_move(submit_ctx->sq, bio_grps);
+				spin_unlock(&submit_ctx->sq_lock);
+			}
+		}
+	} while (bio_grps->nr_grps > 0);
+
+	return ret;
+}
+
+void blk_bio_poll_io_drain(struct io_context *submit_ioc)
+{
+	struct blk_bio_poll_ctx *submit_ctx = submit_ioc->data;
+
+	if (!submit_ctx)
+		return;
+
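+	/*
+	 * Called when the submission task is exiting: keep polling and
+	 * reaping until the submission queue is fully drained.
+	 */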
+	spin_lock(&submit_ctx->sq_lock);
+	while (READ_ONCE(submit_ctx->sq->nr_grps) > 0) {
+		blk_bio_poll_and_end_io(submit_ctx->sq);
+		blk_bio_poll_pack_groups(submit_ctx->sq);
+		cpu_relax();
+	}
+	spin_unlock(&submit_ctx->sq_lock);
+}
+
+static bool blk_bio_ioc_valid(struct task_struct *t)
+{
+	if (!t)
+		return false;
+
+	if (!t->io_context)
+		return false;
+
+	if (!t->io_context->data)
+		return false;
+
+	return true;
+}
+
+static int __blk_bio_poll(blk_qc_t cookie)
+{
+	struct io_context *poll_ioc = current->io_context;
+	pid_t pid;
+	struct task_struct *submit_task;
+	int ret;
+
+	pid = (pid_t)cookie;
+
+	/* io poll often shares the io submission context */
+	if (likely(current->pid == pid && blk_bio_ioc_valid(current)))
+		return blk_bio_poll_io(poll_ioc);
+
+	submit_task = find_get_task_by_vpid(pid);
+	if (likely(blk_bio_ioc_valid(submit_task)))
+		ret = blk_bio_poll_io(submit_task->io_context);
+	else
+		ret = 0;
+	if (likely(submit_task))
+		put_task_struct(submit_task);
+
+	return ret;
+}
+
 static int blk_bio_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
 {
+	long state;
+
+	/* no need to poll */
+	if (cookie == BLK_QC_T_BIO_NONE)
+		return 0;
+
 	/*
 	 * Create poll queue for storing poll bio and its cookie from
 	 * submission queue
 	 */
 	blk_create_io_poll_context(q);
 
+	state = current->state;
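+	/* poll until progress is made, a signal is pending, or resched is needed */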
+	do {
+		int ret;
+
+		ret = __blk_bio_poll(cookie);
+		if (ret > 0) {
+			__set_current_state(TASK_RUNNING);
+			return ret;
+		}
+
+		if (signal_pending_state(state, current))
+			__set_current_state(TASK_RUNNING);
+
+		if (current->state == TASK_RUNNING)
+			return 1;
+		if (ret < 0 || !spin)
+			break;
+		cpu_relax();
+	} while (!need_resched());
+
+	__set_current_state(TASK_RUNNING);
 	return 0;
 }
 
@@ -255,3 +571,30 @@ void bio_poll_ctx_alloc(struct io_context *ioc)
 			kfree(pc);
 	}
 }
+
+bool blk_bio_poll_prep_submit(struct io_context *ioc, struct bio *bio)
+{
+	struct blk_bio_poll_ctx *pc = ioc->data;
+	unsigned int queued;
+
+	/*
+	 * We rely on .bi_end_io being immutable between blk-mq bio
+	 * submission and completion. However, bio crypt may update
+	 * .bi_end_io during submission, so simply don't support bio
+	 * based polling in that case.
+	 */
+	if (likely(!bio_has_crypt_ctx(bio))) {
+		/* track this bio via bio group list */
+		spin_lock(&pc->sq_lock);
+		queued = bio_grp_list_add(pc->sq, bio);
+		blk_bio_poll_mark_queued(bio, queued);
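+		/*
+		 * The real cookie isn't known until submit_bio() returns,
+		 * so park BLK_QC_T_NOT_READY here; pollers skip this bio
+		 * until the cookie has been filled in.
+		 */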
+		if (queued)
+			bio_set_poll_data(bio, BLK_QC_T_NOT_READY);
+		spin_unlock(&pc->sq_lock);
+	} else {
+		queued = false;
+		blk_bio_poll_mark_queued(bio, false);
+	}
+
+	return queued;
+}
diff --git a/block/blk.h b/block/blk.h
index 47f60612957a..4590da07f8f6 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -376,6 +376,8 @@ struct blk_bio_poll_ctx {
 #define BLK_BIO_POLL_SQ_SZ		16U
 #define BLK_BIO_POLL_PQ_SZ		(BLK_BIO_POLL_SQ_SZ * 2)
 
+bool blk_bio_poll_prep_submit(struct io_context *ioc, struct bio *bio);
+void blk_bio_poll_io_drain(struct io_context *submit_ioc);
 void bio_poll_ctx_alloc(struct io_context *ioc);
 
 static inline bool blk_queue_support_bio_poll(struct request_queue *q)
@@ -431,4 +433,35 @@ static inline void blk_create_io_poll_context(struct request_queue *q)
 		bio_poll_ctx_alloc(ioc);
 }
 
+BIO_LIST_HELPERS(__bio_grp_list, poll);
+
+static inline bool bio_grp_list_grp_empty(struct bio_grp_list_data *grp)
+{
+	return bio_list_empty(&grp->list);
+}
+
+static inline void blk_bio_poll_mark_queued(struct bio *bio, bool queued)
+{
+	/*
+	 * If the bio has been added to the per-task poll queue, mark it
+	 * as BIO_END_BY_POLL, so that it is always completed from
+	 * blk_poll(), which is provided with the cookie from this bio's
+	 * submission.
+	 */
+	if (!queued)
+		bio->bi_opf &= ~(REQ_HIPRI | REQ_POLL_CTX);
+	else
+		bio_set_flag(bio, BIO_END_BY_POLL);
+}
+
+static inline unsigned int bio_get_poll_data(struct bio *bio)
+{
+	return bio->bi_poll_data;
+}
+
+static inline void bio_set_poll_data(struct bio *bio, unsigned int data)
+{
+	bio->bi_poll_data = data;
+}
+
 #endif /* BLK_INTERNAL_H */
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 99160d588c2d..3c276d163480 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -234,8 +234,20 @@ struct bio {
 	atomic_t		__bi_remaining;
 
 	struct bvec_iter	bi_iter;
+	unsigned int		bi_poll_data;	/* fills hole left by packed bi_iter */
 
-	bio_end_io_t		*bi_end_io;
+	union {
+		bio_end_io_t		*bi_end_io;
+		/*
+		 * bio based io polling needs to track bios via a bio group
+		 * list which links bios by their .bi_end_io; the original
+		 * .bi_end_io is saved into the group head and recovered
+		 * before the bio is really ended. BIO_END_BY_POLL makes
+		 * sure that this bio won't be ended before .bi_end_io
+		 * is recovered.
+		 */
+		void			*bi_poll;
+	};
 
 	void			*bi_private;
 #ifdef CONFIG_BLK_CGROUP
@@ -304,6 +316,9 @@ enum {
 	BIO_CGROUP_ACCT,	/* has been accounted to a cgroup */
 	BIO_TRACKED,		/* set if bio goes through the rq_qos path */
 	BIO_REMAPPED,
+	BIO_END_BY_POLL,	/* end by blk_bio_poll() explicitly */
+	/* set when bio can be ended, used for bio with BIO_END_BY_POLL */
+	BIO_DONE,
 	BIO_FLAG_LAST
 };
 
@@ -513,6 +528,16 @@ typedef unsigned int blk_qc_t;
 #define BLK_QC_T_NONE		-1U
 #define BLK_QC_T_SHIFT		16
 #define BLK_QC_T_INTERNAL	(1U << 31)
+/* only used for bio based submission, has to be defined as 0 */
+#define BLK_QC_T_BIO_NONE	0
+/* only used for bio based polling, not ready for polling */
+#define BLK_QC_T_NOT_READY	-2U
+
+/* not ready for bio based polling since this bio isn't really submitted yet */
+static inline bool blk_qc_t_ready(blk_qc_t cookie)
+{
+	return cookie != BLK_QC_T_NOT_READY;
+}
 
 static inline bool blk_qc_t_valid(blk_qc_t cookie)
 {
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH V6 10/12] block: limit hw queues to be polled in each blk_poll()
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
                   ` (8 preceding siblings ...)
  2021-04-22 12:20 ` [PATCH V6 09/12] block: use per-task poll context to implement bio based io polling Ming Lei
@ 2021-04-22 12:20 ` Ming Lei
  2021-04-26  7:19   ` Hannes Reinecke
  2021-04-22 12:20 ` [PATCH V6 11/12] block: allow to control FLAG_POLL via sysfs for bio poll capable queue Ming Lei
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Ming Lei

Limit each blk_poll() to polling at most 8 hw queues, to avoid
adding extra latency when queue depth is high.

Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-poll.c | 78 ++++++++++++++++++++++++++++++++++--------------
 1 file changed, 55 insertions(+), 23 deletions(-)

diff --git a/block/blk-poll.c b/block/blk-poll.c
index 249d73ff6f81..20e7c47cc984 100644
--- a/block/blk-poll.c
+++ b/block/blk-poll.c
@@ -288,36 +288,32 @@ static void bio_grp_list_move(struct bio_grp_list *dst,
 	src->nr_grps -= cnt;
 }
 
-static int blk_mq_poll_io(struct bio *bio)
+#define POLL_HCTX_MAX_CNT 8
+
+static bool blk_add_unique_hctx(struct blk_mq_hw_ctx **data, int *cnt,
+		struct blk_mq_hw_ctx *hctx)
 {
-	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
-	blk_qc_t cookie = bio_get_poll_data(bio);
-	int ret = 0;
+	int i;
 
-	/* wait until the bio is really submitted */
-	if (!blk_qc_t_ready(cookie))
-		return 0;
 
-	if (!bio_flagged(bio, BIO_DONE) && blk_qc_t_valid(cookie)) {
-		struct blk_mq_hw_ctx *hctx =
-			q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
+	for (i = 0; i < *cnt; i++) {
+		if (data[i] == hctx)
+			goto exit;
+	}
 
-		ret += blk_mq_poll_hctx(q, hctx);
+	if (i < POLL_HCTX_MAX_CNT) {
+		data[i] = hctx;
+		(*cnt)++;
 	}
-	return ret;
+ exit:
+	return *cnt == POLL_HCTX_MAX_CNT;
 }
 
-static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
+static void blk_build_poll_queues(struct bio_grp_list *grps,
+		struct blk_mq_hw_ctx **data, int *cnt)
 {
-	int ret = 0;
 	int i;
 
-	/*
-	 * Poll hw queue first.
-	 *
-	 * TODO: limit the max number of polls and make sure not to
-	 * poll the same hw queue more than once.
-	 */
 	for (i = 0; i < grps->nr_grps; i++) {
 		struct bio_grp_list_data *grp = &grps->head[i];
 		struct bio *bio;
@@ -325,11 +321,31 @@ static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
 		if (bio_grp_list_grp_empty(grp))
 			continue;
 
-		for (bio = grp->list.head; bio; bio = bio->bi_poll)
-			ret += blk_mq_poll_io(bio);
+		for (bio = grp->list.head; bio; bio = bio->bi_poll) {
+			blk_qc_t  cookie;
+			struct blk_mq_hw_ctx *hctx;
+			struct request_queue *q;
+
+			if (bio_flagged(bio, BIO_DONE))
+				continue;
+
+			/* wait until the bio is really submitted */
+			cookie = bio_get_poll_data(bio);
+			if (!blk_qc_t_ready(cookie) || !blk_qc_t_valid(cookie))
+				continue;
+
+			q = bio->bi_bdev->bd_disk->queue;
+			hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
+			if (blk_add_unique_hctx(data, cnt, hctx))
+				return;
+		}
 	}
+}
+
+static void blk_bio_poll_reap_ios(struct bio_grp_list *grps)
+{
+	int i;
 
-	/* reap bios */
 	for (i = 0; i < grps->nr_grps; i++) {
 		struct bio_grp_list_data *grp = &grps->head[i];
 		struct bio *bio;
@@ -354,6 +370,22 @@ static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
 		}
 		__bio_grp_list_merge(&grp->list, &bl);
 	}
+}
+
+static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
+{
+	int ret = 0;
+	int i;
+	struct blk_mq_hw_ctx *hctx[POLL_HCTX_MAX_CNT];
+	int cnt = 0;
+
+	blk_build_poll_queues(grps, hctx, &cnt);
+
+	for (i = 0; i < cnt; i++)
+		ret += blk_mq_poll_hctx(hctx[i]->queue, hctx[i]);
+
+	blk_bio_poll_reap_ios(grps);
+
 	return ret;
 }
 
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH V6 11/12] block: allow to control FLAG_POLL via sysfs for bio poll capable queue
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
                   ` (9 preceding siblings ...)
  2021-04-22 12:20 ` [PATCH V6 10/12] block: limit hw queues to be polled in each blk_poll() Ming Lei
@ 2021-04-22 12:20 ` Ming Lei
  2021-04-26  7:20   ` Hannes Reinecke
  2021-04-22 12:20 ` [PATCH V6 12/12] dm: support IO polling for bio-based dm device Ming Lei
  2021-05-17  6:16 ` [PATCH V6 00/12] block: support bio based io polling JeffleXu
  12 siblings, 1 reply; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Ming Lei, Christoph Hellwig

Prepare for supporting bio based io polling. If one disk is capable of
bio polling, we allow the user to control FLAG_POLL via sysfs.

Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-sysfs.c     | 14 ++++++++++++--
 include/linux/genhd.h |  2 ++
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index fed4981b1f7a..3620db390658 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -430,9 +430,14 @@ static ssize_t queue_poll_store(struct request_queue *q, const char *page,
 {
 	unsigned long poll_on;
 	ssize_t ret;
+	struct gendisk *disk = queue_to_disk(q);
 
-	if (!q->tag_set || q->tag_set->nr_maps <= HCTX_TYPE_POLL ||
-	    !q->tag_set->map[HCTX_TYPE_POLL].nr_queues)
+	if (!queue_is_mq(q) && !(disk->flags & GENHD_FL_CAP_BIO_POLL))
+		return -EINVAL;
+
+	if (queue_is_mq(q) && (!q->tag_set ||
+	    q->tag_set->nr_maps <= HCTX_TYPE_POLL ||
+	    !q->tag_set->map[HCTX_TYPE_POLL].nr_queues))
 		return -EINVAL;
 
 	ret = queue_var_store(&poll_on, page, count);
@@ -442,6 +447,11 @@ static ssize_t queue_poll_store(struct request_queue *q, const char *page,
 	if (poll_on) {
 		blk_queue_flag_set(QUEUE_FLAG_POLL, q);
 	} else {
+		/*
+		 * For a bio based queue, it is safe to just freeze bio
+		 * submission activity because we don't read FLAG_POLL
+		 * after the bio is submitted.
+		 */
 		blk_mq_freeze_queue(q);
 		blk_queue_flag_clear(QUEUE_FLAG_POLL, q);
 		blk_mq_unfreeze_queue(q);
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 7e9660ea967d..e5ae77cba853 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -104,6 +104,8 @@ struct partition_meta_info {
 #define GENHD_FL_BLOCK_EVENTS_ON_EXCL_WRITE	0x0100
 #define GENHD_FL_NO_PART_SCAN			0x0200
 #define GENHD_FL_HIDDEN				0x0400
+/* only valid for bio based disk */
+#define GENHD_FL_CAP_BIO_POLL			0x0800
 
 enum {
 	DISK_EVENT_MEDIA_CHANGE			= 1 << 0, /* media changed */
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH V6 12/12] dm: support IO polling for bio-based dm device
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
                   ` (10 preceding siblings ...)
  2021-04-22 12:20 ` [PATCH V6 11/12] block: allow to control FLAG_POLL via sysfs for bio poll capable queue Ming Lei
@ 2021-04-22 12:20 ` Ming Lei
  2021-04-23  1:32   ` JeffleXu
  2021-04-23  2:38   ` [PATCH V7 " Ming Lei
  2021-05-17  6:16 ` [PATCH V6 00/12] block: support bio based io polling JeffleXu
  12 siblings, 2 replies; 26+ messages in thread
From: Ming Lei @ 2021-04-22 12:20 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke,
	Ming Lei

From: Jeffle Xu <jefflexu@linux.alibaba.com>

IO polling is enabled when all underlying target devices are capable
of IO polling. The sanity check supports the stacked device model, in
which one dm device may be built upon another dm device. In this case,
the mapped device will check if the underlying dm target device
supports IO polling.

Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/md/dm-table.c         | 24 ++++++++++++++++++++++++
 drivers/md/dm.c               |  2 ++
 include/linux/device-mapper.h |  1 +
 3 files changed, 27 insertions(+)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 95391f78b8d5..a8f3575fb118 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1509,6 +1509,12 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
 	return &t->targets[(KEYS_PER_NODE * n) + k];
 }
 
+static int device_not_poll_capable(struct dm_target *ti, struct dm_dev *dev,
+				   sector_t start, sector_t len, void *data)
+{
+	return !blk_queue_poll(bdev_get_queue(dev->bdev));
+}
+
 /*
  * type->iterate_devices() should be called when the sanity check needs to
  * iterate and check all underlying data devices. iterate_devices() will
@@ -1559,6 +1565,11 @@ static int count_device(struct dm_target *ti, struct dm_dev *dev,
 	return 0;
 }
 
+int dm_table_supports_poll(struct dm_table *t)
+{
+	return !dm_table_any_dev_attr(t, device_not_poll_capable, NULL);
+}
+
 /*
  * Check whether a table has no data devices attached using each
  * target's iterate_devices method.
@@ -2079,6 +2090,19 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 
 	dm_update_keyslot_manager(q, t);
 	blk_queue_update_readahead(q);
+
+	/*
+	 * The check for a request-based device remains in
+	 * dm_mq_init_request_queue()->blk_mq_init_allocated_queue().
+	 * For a bio-based device, only set QUEUE_FLAG_POLL when all
+	 * underlying devices support polling.
+	 */
+	if (__table_type_bio_based(t->type)) {
+		if (dm_table_supports_poll(t))
+			blk_queue_flag_set(QUEUE_FLAG_POLL, q);
+		else
+			blk_queue_flag_clear(QUEUE_FLAG_POLL, q);
+	}
 }
 
 unsigned int dm_table_get_num_targets(struct dm_table *t)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 50b693d776d6..1b160e4e6446 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2175,6 +2175,8 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
 		}
 		break;
 	case DM_TYPE_BIO_BASED:
+		/* tell block layer we are capable of bio polling */
+		md->disk->flags |= GENHD_FL_CAP_BIO_POLL;
 	case DM_TYPE_DAX_BIO_BASED:
 		break;
 	case DM_TYPE_NONE:
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 7f4ac87c0b32..31bfd6f70013 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -538,6 +538,7 @@ unsigned int dm_table_get_num_targets(struct dm_table *t);
 fmode_t dm_table_get_mode(struct dm_table *t);
 struct mapped_device *dm_table_get_md(struct dm_table *t);
 const char *dm_table_device_name(struct dm_table *t);
+int dm_table_supports_poll(struct dm_table *t);
 
 /*
  * Trigger an event.
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH V6 02/12] block: define 'struct bvec_iter' as packed
  2021-04-22 12:20 ` [PATCH V6 02/12] block: define 'struct bvec_iter' as packed Ming Lei
@ 2021-04-22 13:18   ` Hannes Reinecke
  0 siblings, 0 replies; 26+ messages in thread
From: Hannes Reinecke @ 2021-04-22 13:18 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Christoph Hellwig

On 4/22/21 2:20 PM, Ming Lei wrote:
> 'struct bvec_iter' is embedded into 'struct bio'; define it as packed
> so that we can get an extra 4 bytes for other uses without expanding
> bio.
> 
> 'struct bvec_iter' is often allocated on stack, so making it packed
> doesn't affect performance. Also I have run io_uring on both
> nvme/null_blk, and did not observe any performance effect in this way.
> 
> Suggested-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>   include/linux/bvec.h | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/include/linux/bvec.h b/include/linux/bvec.h
> index ff832e698efb..a0c4f41dfc83 100644
> --- a/include/linux/bvec.h
> +++ b/include/linux/bvec.h
> @@ -43,7 +43,7 @@ struct bvec_iter {
>   
>   	unsigned int            bi_bvec_done;	/* number of bytes completed in
>   						   current bvec */
> -};
> +} __packed;
>   
>   struct bvec_iter_all {
>   	struct bio_vec	bv;
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V6 04/12] block: move block polling code into one dedicated source file
  2021-04-22 12:20 ` [PATCH V6 04/12] block: move block polling code into one dedicated source file Ming Lei
@ 2021-04-22 13:19   ` Hannes Reinecke
  2021-04-26  7:12   ` Hannes Reinecke
  1 sibling, 0 replies; 26+ messages in thread
From: Hannes Reinecke @ 2021-04-22 13:19 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Christoph Hellwig

On 4/22/21 2:20 PM, Ming Lei wrote:
> Prepare for supporting bio based io polling, and move blk polling
> code into one dedicated source file. Three shared functions are
> put into the private header blk-mq.h.
> 
> Suggested-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>   block/Makefile   |   3 +-
>   block/blk-mq.c   | 230 -----------------------------------------------
>   block/blk-mq.h   |  40 +++++++++
>   block/blk-poll.c | 196 ++++++++++++++++++++++++++++++++++++++++
>   4 files changed, 238 insertions(+), 231 deletions(-)
>   create mode 100644 block/blk-poll.c
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V6 12/12] dm: support IO polling for bio-based dm device
  2021-04-22 12:20 ` [PATCH V6 12/12] dm: support IO polling for bio-based dm device Ming Lei
@ 2021-04-23  1:32   ` JeffleXu
  2021-04-23  2:39     ` Ming Lei
  2021-04-23  2:38   ` [PATCH V7 " Ming Lei
  1 sibling, 1 reply; 26+ messages in thread
From: JeffleXu @ 2021-04-23  1:32 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe; +Cc: linux-block, Mike Snitzer, dm-devel, Hannes Reinecke



On 4/22/21 8:20 PM, Ming Lei wrote:
> From: Jeffle Xu <jefflexu@linux.alibaba.com>
> 
> IO polling is enabled when all underlying target devices are capable
> of IO polling. The sanity check supports the stacked device model, in
> which one dm device may be built upon another dm device. In this case,
> the mapped device will check if the underlying dm target device
> supports IO polling.
> 
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Reviewed-by: Mike Snitzer <snitzer@redhat.com>
> Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  drivers/md/dm-table.c         | 24 ++++++++++++++++++++++++
>  drivers/md/dm.c               |  2 ++
>  include/linux/device-mapper.h |  1 +
>  3 files changed, 27 insertions(+)
> 
> diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
> index 95391f78b8d5..a8f3575fb118 100644
> --- a/drivers/md/dm-table.c
> +++ b/drivers/md/dm-table.c
> @@ -1509,6 +1509,12 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
>  	return &t->targets[(KEYS_PER_NODE * n) + k];
>  }
>  
> +static int device_not_poll_capable(struct dm_target *ti, struct dm_dev *dev,
> +				   sector_t start, sector_t len, void *data)
> +{
> +	return !blk_queue_poll(bdev_get_queue(dev->bdev));
> +}
> +
>  /*
>   * type->iterate_devices() should be called when the sanity check needs to
>   * iterate and check all underlying data devices. iterate_devices() will
> @@ -1559,6 +1565,11 @@ static int count_device(struct dm_target *ti, struct dm_dev *dev,
>  	return 0;
>  }
>  
> +int dm_table_supports_poll(struct dm_table *t)
> +{
> +	return !dm_table_any_dev_attr(t, device_not_poll_capable, NULL);
> +}
> +

Since .poll_capable() has been dropped, dm_table_supports_poll() can be
declared as 'static' here.

>  /*
>   * Check whether a table has no data devices attached using each
>   * target's iterate_devices method.
> @@ -2079,6 +2090,19 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
>  
>  	dm_update_keyslot_manager(q, t);
>  	blk_queue_update_readahead(q);
> +
> +	/*
> +	 * The check for a request-based device remains in
> +	 * dm_mq_init_request_queue()->blk_mq_init_allocated_queue().
> +	 * For a bio-based device, only set QUEUE_FLAG_POLL when all
> +	 * underlying devices support polling.
> +	 */
> +	if (__table_type_bio_based(t->type)) {
> +		if (dm_table_supports_poll(t))
> +			blk_queue_flag_set(QUEUE_FLAG_POLL, q);
> +		else
> +			blk_queue_flag_clear(QUEUE_FLAG_POLL, q);
> +	}
>  }
>  
>  unsigned int dm_table_get_num_targets(struct dm_table *t)
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index 50b693d776d6..1b160e4e6446 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -2175,6 +2175,8 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
>  		}
>  		break;
>  	case DM_TYPE_BIO_BASED:
> +		/* tell block layer we are capable of bio polling */
> +		md->disk->flags |= GENHD_FL_CAP_BIO_POLL;
>  	case DM_TYPE_DAX_BIO_BASED:
>  		break;
>  	case DM_TYPE_NONE:


> diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
> index 7f4ac87c0b32..31bfd6f70013 100644
> --- a/include/linux/device-mapper.h
> +++ b/include/linux/device-mapper.h
> @@ -538,6 +538,7 @@ unsigned int dm_table_get_num_targets(struct dm_table *t);
>  fmode_t dm_table_get_mode(struct dm_table *t);
>  struct mapped_device *dm_table_get_md(struct dm_table *t);
>  const char *dm_table_device_name(struct dm_table *t);
> +int dm_table_supports_poll(struct dm_table *t);

Similarly, dm_table_supports_poll() doesn't need to be exported.

-- 
Thanks,
Jeffle

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH V7 12/12] dm: support IO polling for bio-based dm device
  2021-04-22 12:20 ` [PATCH V6 12/12] dm: support IO polling for bio-based dm device Ming Lei
  2021-04-23  1:32   ` JeffleXu
@ 2021-04-23  2:38   ` Ming Lei
  1 sibling, 0 replies; 26+ messages in thread
From: Ming Lei @ 2021-04-23  2:38 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Hannes Reinecke

From 98a73d99c3c663a3cfaadfec2825d6d88289a102 Mon Sep 17 00:00:00 2001
From: Jeffle Xu <jefflexu@linux.alibaba.com>
Date: Mon, 8 Feb 2021 16:52:41 +0800
Subject: [PATCH V7 12/12] dm: support IO polling for bio-based dm device

IO polling is enabled when all underlying target devices are capable
of IO polling. The sanity check supports the stacked device model, in
which one dm device may be built upon another dm device. In this case,
the mapped device will check if the underlying dm target device
supports IO polling.

Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
V7:
	- don't export dm_table_supports_poll, as suggested by Jeffle

 drivers/md/dm-table.c | 24 ++++++++++++++++++++++++
 drivers/md/dm.c       |  2 ++
 2 files changed, 26 insertions(+)

diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 95391f78b8d5..0b3e34cbe241 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1509,6 +1509,12 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
 	return &t->targets[(KEYS_PER_NODE * n) + k];
 }
 
+static int device_not_poll_capable(struct dm_target *ti, struct dm_dev *dev,
+				   sector_t start, sector_t len, void *data)
+{
+	return !blk_queue_poll(bdev_get_queue(dev->bdev));
+}
+
 /*
  * type->iterate_devices() should be called when the sanity check needs to
  * iterate and check all underlying data devices. iterate_devices() will
@@ -1559,6 +1565,11 @@ static int count_device(struct dm_target *ti, struct dm_dev *dev,
 	return 0;
 }
 
+static int dm_table_supports_poll(struct dm_table *t)
+{
+	return !dm_table_any_dev_attr(t, device_not_poll_capable, NULL);
+}
+
 /*
  * Check whether a table has no data devices attached using each
  * target's iterate_devices method.
@@ -2079,6 +2090,19 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 
 	dm_update_keyslot_manager(q, t);
 	blk_queue_update_readahead(q);
+
+	/*
+	 * The check for a request-based device remains in
+	 * dm_mq_init_request_queue()->blk_mq_init_allocated_queue().
+	 * For a bio-based device, only set QUEUE_FLAG_POLL when all
+	 * underlying devices support polling.
+	 */
+	if (__table_type_bio_based(t->type)) {
+		if (dm_table_supports_poll(t))
+			blk_queue_flag_set(QUEUE_FLAG_POLL, q);
+		else
+			blk_queue_flag_clear(QUEUE_FLAG_POLL, q);
+	}
 }
 
 unsigned int dm_table_get_num_targets(struct dm_table *t)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 50b693d776d6..1b160e4e6446 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2175,6 +2175,8 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
 		}
 		break;
 	case DM_TYPE_BIO_BASED:
+		/* tell block layer we are capable of bio polling */
+		md->disk->flags |= GENHD_FL_CAP_BIO_POLL;
 	case DM_TYPE_DAX_BIO_BASED:
 		break;
 	case DM_TYPE_NONE:
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH V6 12/12] dm: support IO polling for bio-based dm device
  2021-04-23  1:32   ` JeffleXu
@ 2021-04-23  2:39     ` Ming Lei
  0 siblings, 0 replies; 26+ messages in thread
From: Ming Lei @ 2021-04-23  2:39 UTC (permalink / raw)
  To: JeffleXu; +Cc: Jens Axboe, linux-block, Mike Snitzer, dm-devel, Hannes Reinecke

On Fri, Apr 23, 2021 at 09:32:38AM +0800, JeffleXu wrote:
> 
> 
> On 4/22/21 8:20 PM, Ming Lei wrote:
> > From: Jeffle Xu <jefflexu@linux.alibaba.com>
> > 
> > IO polling is enabled when all underlying target devices are capable
> > of IO polling. The sanity check supports the stacked device model, in
> > which one dm device may be built upon another dm device. In this case,
> > the mapped device will check if the underlying dm target device
> > supports IO polling.
> > 
> > Reviewed-by: Hannes Reinecke <hare@suse.de>
> > Reviewed-by: Mike Snitzer <snitzer@redhat.com>
> > Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >  drivers/md/dm-table.c         | 24 ++++++++++++++++++++++++
> >  drivers/md/dm.c               |  2 ++
> >  include/linux/device-mapper.h |  1 +
> >  3 files changed, 27 insertions(+)
> > 
> > diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
> > index 95391f78b8d5..a8f3575fb118 100644
> > --- a/drivers/md/dm-table.c
> > +++ b/drivers/md/dm-table.c
> > @@ -1509,6 +1509,12 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
> >  	return &t->targets[(KEYS_PER_NODE * n) + k];
> >  }
> >  
> > +static int device_not_poll_capable(struct dm_target *ti, struct dm_dev *dev,
> > +				   sector_t start, sector_t len, void *data)
> > +{
> > +	return !blk_queue_poll(bdev_get_queue(dev->bdev));
> > +}
> > +
> >  /*
> >   * type->iterate_devices() should be called when the sanity check needs to
> >   * iterate and check all underlying data devices. iterate_devices() will
> > @@ -1559,6 +1565,11 @@ static int count_device(struct dm_target *ti, struct dm_dev *dev,
> >  	return 0;
> >  }
> >  
> > +int dm_table_supports_poll(struct dm_table *t)
> > +{
> > +	return !dm_table_any_dev_attr(t, device_not_poll_capable, NULL);
> > +}
> > +
> 
> Since .poll_capable() has been dropped, dm_table_supports_poll() can be
> declared as 'static' here.
> 
> >  /*
> >   * Check whether a table has no data devices attached using each
> >   * target's iterate_devices method.
> > @@ -2079,6 +2090,19 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
> >  
> >  	dm_update_keyslot_manager(q, t);
> >  	blk_queue_update_readahead(q);
> > +
> > +	/*
> > +	 * The check for a request-based device remains in
> > +	 * dm_mq_init_request_queue()->blk_mq_init_allocated_queue().
> > +	 * For a bio-based device, only set QUEUE_FLAG_POLL when all
> > +	 * underlying devices support polling.
> > +	 */
> > +	if (__table_type_bio_based(t->type)) {
> > +		if (dm_table_supports_poll(t))
> > +			blk_queue_flag_set(QUEUE_FLAG_POLL, q);
> > +		else
> > +			blk_queue_flag_clear(QUEUE_FLAG_POLL, q);
> > +	}
> >  }
> >  
> >  unsigned int dm_table_get_num_targets(struct dm_table *t)
> > diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> > index 50b693d776d6..1b160e4e6446 100644
> > --- a/drivers/md/dm.c
> > +++ b/drivers/md/dm.c
> > @@ -2175,6 +2175,8 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
> >  		}
> >  		break;
> >  	case DM_TYPE_BIO_BASED:
> > +		/* tell block layer we are capable of bio polling */
> > +		md->disk->flags |= GENHD_FL_CAP_BIO_POLL;
> >  	case DM_TYPE_DAX_BIO_BASED:
> >  		break;
> >  	case DM_TYPE_NONE:
> 
> 
> > diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
> > index 7f4ac87c0b32..31bfd6f70013 100644
> > --- a/include/linux/device-mapper.h
> > +++ b/include/linux/device-mapper.h
> > @@ -538,6 +538,7 @@ unsigned int dm_table_get_num_targets(struct dm_table *t);
> >  fmode_t dm_table_get_mode(struct dm_table *t);
> >  struct mapped_device *dm_table_get_md(struct dm_table *t);
> >  const char *dm_table_device_name(struct dm_table *t);
> > +int dm_table_supports_poll(struct dm_table *t);
> 
> Similarly, dm_table_supports_poll() doesn't need to be exported.

Yeah, I have fixed it in V7.

Thanks,
Ming


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V6 04/12] block: move block polling code into one dedicated source file
  2021-04-22 12:20 ` [PATCH V6 04/12] block: move block polling code into one dedicated source file Ming Lei
  2021-04-22 13:19   ` Hannes Reinecke
@ 2021-04-26  7:12   ` Hannes Reinecke
  1 sibling, 0 replies; 26+ messages in thread
From: Hannes Reinecke @ 2021-04-26  7:12 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Christoph Hellwig

On 4/22/21 2:20 PM, Ming Lei wrote:
> Prepare for supporting bio based io polling, and move blk polling
> code into one dedicated source file. Three shared functions are
> put into the private header blk-mq.h.
> 
> Suggested-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/Makefile   |   3 +-
>  block/blk-mq.c   | 230 -----------------------------------------------
>  block/blk-mq.h   |  40 +++++++++
>  block/blk-poll.c | 196 ++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 238 insertions(+), 231 deletions(-)
>  create mode 100644 block/blk-poll.c
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		        Kernel Storage Architect
hare@suse.de			               +49 911 74053 688
SUSE Software Solutions Germany GmbH, 90409 Nürnberg
GF: F. Imendörffer, HRB 36809 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V6 09/12] block: use per-task poll context to implement bio based io polling
  2021-04-22 12:20 ` [PATCH V6 09/12] block: use per-task poll context to implement bio based io polling Ming Lei
@ 2021-04-26  7:17   ` Hannes Reinecke
  0 siblings, 0 replies; 26+ messages in thread
From: Hannes Reinecke @ 2021-04-26  7:17 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe; +Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel

On 4/22/21 2:20 PM, Ming Lei wrote:
> Currently bio based IO polling needs to poll all hw queues blindly; this
> way is very inefficient, and one big reason is that we can't pass any
> bio submission result to blk_poll().
> 
> In IO submission context, track associated underlying bios by per-task
> submission queue and store returned 'cookie' in bio->bi_poll_data which
> is added by filling a hole of .bi_iter, and return current->pid to
> caller of submit_bio() for any bio based driver's IO, which is
> submitted from FS.
> 
> In IO poll context, the passed cookie tells us the PID of the submission
> context, so we can find bios from the per-task io poll context of the
> submission context. Move bios from the submission queue to the poll queue
> of the poll context, and keep polling until these bios are ended. Remove
> a bio from the poll queue once it is ended. Add bio flags of BIO_DONE and
> BIO_END_BY_POLL for this purpose.
> 
> It was found in Jeffle Xu's test that kfifo doesn't scale well for a
> submission queue as queue depth is increased, so a new mechanism for
> tracking bios is needed. A bio is already close to 2 cachelines in size,
> and adding a new field to it just for tracking bios via a linked list
> may not be accepted, so switch to a bio group list for tracking bios.
> The idea is to reuse .bi_end_io for linking all bios that share the
> same .bi_end_io (call it a bio group) into a linked list; .bi_end_io is
> recovered before the bio is really ended, and BIO_END_BY_POLL is added
> to guarantee that. Usually .bi_end_io is the same for all bios in the
> same layer, so it is enough to provide a very limited number of groups,
> such as 16 or fewer, to fix the scalability issue.
> 
> Usually submission shares context with io poll. The per-task poll context
> is just like a stack variable, and it is cheap to move data between the two
> per-task queues.
> 
> Also when the submission task is exiting, drain pending IOs in the context
> until all are done.
> 
> Tested-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/bio.c               |   5 +
>  block/blk-core.c          |  39 ++++-
>  block/blk-ioc.c           |   3 +
>  block/blk-poll.c          | 345 +++++++++++++++++++++++++++++++++++++-
>  block/blk.h               |  33 ++++
>  include/linux/blk_types.h |  27 ++-
>  6 files changed, 448 insertions(+), 4 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		        Kernel Storage Architect
hare@suse.de			               +49 911 74053 688
SUSE Software Solutions Germany GmbH, 90409 Nürnberg
GF: F. Imendörffer, HRB 36809 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V6 10/12] block: limit hw queues to be polled in each blk_poll()
  2021-04-22 12:20 ` [PATCH V6 10/12] block: limit hw queues to be polled in each blk_poll() Ming Lei
@ 2021-04-26  7:19   ` Hannes Reinecke
  2021-04-26  8:00     ` Ming Lei
  0 siblings, 1 reply; 26+ messages in thread
From: Hannes Reinecke @ 2021-04-26  7:19 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe; +Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel

On 4/22/21 2:20 PM, Ming Lei wrote:
> Limit each blk_poll() to polling at most 8 hw queues, to avoid
> adding extra latency when queue depth is high.
> 
> Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-poll.c | 78 ++++++++++++++++++++++++++++++++++--------------
>  1 file changed, 55 insertions(+), 23 deletions(-)
> 
> diff --git a/block/blk-poll.c b/block/blk-poll.c
> index 249d73ff6f81..20e7c47cc984 100644
> --- a/block/blk-poll.c
> +++ b/block/blk-poll.c
> @@ -288,36 +288,32 @@ static void bio_grp_list_move(struct bio_grp_list *dst,
>  	src->nr_grps -= cnt;
>  }
>  
> -static int blk_mq_poll_io(struct bio *bio)
> +#define POLL_HCTX_MAX_CNT 8
> +
> +static bool blk_add_unique_hctx(struct blk_mq_hw_ctx **data, int *cnt,
> +		struct blk_mq_hw_ctx *hctx)
>  {
> -	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
> -	blk_qc_t cookie = bio_get_poll_data(bio);
> -	int ret = 0;
> +	int i;
>  
> -	/* wait until the bio is really submitted */
> -	if (!blk_qc_t_ready(cookie))
> -		return 0;
>  
> -	if (!bio_flagged(bio, BIO_DONE) && blk_qc_t_valid(cookie)) {
> -		struct blk_mq_hw_ctx *hctx =
> -			q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
> +	for (i = 0; i < *cnt; i++) {
> +		if (data[i] == hctx)
> +			goto exit;
> +	}
>  
> -		ret += blk_mq_poll_hctx(q, hctx);
> +	if (i < POLL_HCTX_MAX_CNT) {
> +		data[i] = hctx;
> +		(*cnt)++;
>  	}
> -	return ret;
> + exit:
> +	return *cnt == POLL_HCTX_MAX_CNT;
>  }
>  
> -static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
> +static void blk_build_poll_queues(struct bio_grp_list *grps,
> +		struct blk_mq_hw_ctx **data, int *cnt)
>  {
> -	int ret = 0;
>  	int i;
>  
> -	/*
> -	 * Poll hw queue first.
> -	 *
> -	 * TODO: limit the max number of polls and make sure not to
> -	 * poll the same hw queue more than once.
> -	 */
>  	for (i = 0; i < grps->nr_grps; i++) {
>  		struct bio_grp_list_data *grp = &grps->head[i];
>  		struct bio *bio;
> @@ -325,11 +321,31 @@ static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
>  		if (bio_grp_list_grp_empty(grp))
>  			continue;
>  
> -		for (bio = grp->list.head; bio; bio = bio->bi_poll)
> -			ret += blk_mq_poll_io(bio);
> +		for (bio = grp->list.head; bio; bio = bio->bi_poll) {
> +			blk_qc_t  cookie;
> +			struct blk_mq_hw_ctx *hctx;
> +			struct request_queue *q;
> +
> +			if (bio_flagged(bio, BIO_DONE))
> +				continue;
> +
> +			/* wait until the bio is really submitted */
> +			cookie = bio_get_poll_data(bio);
> +			if (!blk_qc_t_ready(cookie) || !blk_qc_t_valid(cookie))
> +				continue;
> +
> +			q = bio->bi_bdev->bd_disk->queue;
> +			hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
> +			if (blk_add_unique_hctx(data, cnt, hctx))
> +				return;
> +		}
>  	}
> +}
> +
> +static void blk_bio_poll_reap_ios(struct bio_grp_list *grps)
> +{
> +	int i;
>  
> -	/* reap bios */
>  	for (i = 0; i < grps->nr_grps; i++) {
>  		struct bio_grp_list_data *grp = &grps->head[i];
>  		struct bio *bio;
> @@ -354,6 +370,22 @@ static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
>  		}
>  		__bio_grp_list_merge(&grp->list, &bl);
>  	}
> +}
> +
> +static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
> +{
> +	int ret = 0;
> +	int i;
> +	struct blk_mq_hw_ctx *hctx[POLL_HCTX_MAX_CNT];
> +	int cnt = 0;
> +
> +	blk_build_poll_queues(grps, hctx, &cnt);
> +
> +	for (i = 0; i < cnt; i++)
> +		ret += blk_mq_poll_hctx(hctx[i]->queue, hctx[i]);
> +
> +	blk_bio_poll_reap_ios(grps);
> +
>  	return ret;
>  }
>  
> 
Can't we make it a sysfs attribute instead of hard-coding it?
'8' seems a bit arbitrary to me, I'd rather have the ability to modify it...

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		        Kernel Storage Architect
hare@suse.de			               +49 911 74053 688
SUSE Software Solutions Germany GmbH, 90409 Nürnberg
GF: F. Imendörffer, HRB 36809 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V6 11/12] block: allow to control FLAG_POLL via sysfs for bio poll capable queue
  2021-04-22 12:20 ` [PATCH V6 11/12] block: allow to control FLAG_POLL via sysfs for bio poll capable queue Ming Lei
@ 2021-04-26  7:20   ` Hannes Reinecke
  0 siblings, 0 replies; 26+ messages in thread
From: Hannes Reinecke @ 2021-04-26  7:20 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe
  Cc: linux-block, Jeffle Xu, Mike Snitzer, dm-devel, Christoph Hellwig

On 4/22/21 2:20 PM, Ming Lei wrote:
> Prepare for supporting bio based io polling. If one disk is capable of
> bio polling, we allow the user to control FLAG_POLL via sysfs.
> 
> Suggested-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-sysfs.c     | 14 ++++++++++++--
>  include/linux/genhd.h |  2 ++
>  2 files changed, 14 insertions(+), 2 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		        Kernel Storage Architect
hare@suse.de			               +49 911 74053 688
SUSE Software Solutions Germany GmbH, 90409 Nürnberg
GF: F. Imendörffer, HRB 36809 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V6 10/12] block: limit hw queues to be polled in each blk_poll()
  2021-04-26  7:19   ` Hannes Reinecke
@ 2021-04-26  8:00     ` Ming Lei
  2021-04-26  9:05       ` Hannes Reinecke
  0 siblings, 1 reply; 26+ messages in thread
From: Ming Lei @ 2021-04-26  8:00 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Jens Axboe, linux-block, Jeffle Xu, Mike Snitzer, dm-devel

On Mon, Apr 26, 2021 at 09:19:20AM +0200, Hannes Reinecke wrote:
> On 4/22/21 2:20 PM, Ming Lei wrote:
> > Limit each blk_poll() to polling at most 8 hw queues, to avoid
> > adding extra latency when queue depth is high.
> > 
> > Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >  block/blk-poll.c | 78 ++++++++++++++++++++++++++++++++++--------------
> >  1 file changed, 55 insertions(+), 23 deletions(-)
> > 
> > diff --git a/block/blk-poll.c b/block/blk-poll.c
> > index 249d73ff6f81..20e7c47cc984 100644
> > --- a/block/blk-poll.c
> > +++ b/block/blk-poll.c
> > @@ -288,36 +288,32 @@ static void bio_grp_list_move(struct bio_grp_list *dst,
> >  	src->nr_grps -= cnt;
> >  }
> >  
> > -static int blk_mq_poll_io(struct bio *bio)
> > +#define POLL_HCTX_MAX_CNT 8
> > +
> > +static bool blk_add_unique_hctx(struct blk_mq_hw_ctx **data, int *cnt,
> > +		struct blk_mq_hw_ctx *hctx)
> >  {
> > -	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
> > -	blk_qc_t cookie = bio_get_poll_data(bio);
> > -	int ret = 0;
> > +	int i;
> >  
> > -	/* wait until the bio is really submitted */
> > -	if (!blk_qc_t_ready(cookie))
> > -		return 0;
> >  
> > -	if (!bio_flagged(bio, BIO_DONE) && blk_qc_t_valid(cookie)) {
> > -		struct blk_mq_hw_ctx *hctx =
> > -			q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
> > +	for (i = 0; i < *cnt; i++) {
> > +		if (data[i] == hctx)
> > +			goto exit;
> > +	}
> >  
> > -		ret += blk_mq_poll_hctx(q, hctx);
> > +	if (i < POLL_HCTX_MAX_CNT) {
> > +		data[i] = hctx;
> > +		(*cnt)++;
> >  	}
> > -	return ret;
> > + exit:
> > +	return *cnt == POLL_HCTX_MAX_CNT;
> >  }
> >  
> > -static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
> > +static void blk_build_poll_queues(struct bio_grp_list *grps,
> > +		struct blk_mq_hw_ctx **data, int *cnt)
> >  {
> > -	int ret = 0;
> >  	int i;
> >  
> > -	/*
> > -	 * Poll hw queue first.
> > -	 *
> > -	 * TODO: limit the max number of polls and make sure not to
> > -	 * poll the same hw queue more than once.
> > -	 */
> >  	for (i = 0; i < grps->nr_grps; i++) {
> >  		struct bio_grp_list_data *grp = &grps->head[i];
> >  		struct bio *bio;
> > @@ -325,11 +321,31 @@ static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
> >  		if (bio_grp_list_grp_empty(grp))
> >  			continue;
> >  
> > -		for (bio = grp->list.head; bio; bio = bio->bi_poll)
> > -			ret += blk_mq_poll_io(bio);
> > +		for (bio = grp->list.head; bio; bio = bio->bi_poll) {
> > +			blk_qc_t  cookie;
> > +			struct blk_mq_hw_ctx *hctx;
> > +			struct request_queue *q;
> > +
> > +			if (bio_flagged(bio, BIO_DONE))
> > +				continue;
> > +
> > +			/* wait until the bio is really submitted */
> > +			cookie = bio_get_poll_data(bio);
> > +			if (!blk_qc_t_ready(cookie) || !blk_qc_t_valid(cookie))
> > +				continue;
> > +
> > +			q = bio->bi_bdev->bd_disk->queue;
> > +			hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
> > +			if (blk_add_unique_hctx(data, cnt, hctx))
> > +				return;
> > +		}
> >  	}
> > +}
> > +
> > +static void blk_bio_poll_reap_ios(struct bio_grp_list *grps)
> > +{
> > +	int i;
> >  
> > -	/* reap bios */
> >  	for (i = 0; i < grps->nr_grps; i++) {
> >  		struct bio_grp_list_data *grp = &grps->head[i];
> >  		struct bio *bio;
> > @@ -354,6 +370,22 @@ static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
> >  		}
> >  		__bio_grp_list_merge(&grp->list, &bl);
> >  	}
> > +}
> > +
> > +static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
> > +{
> > +	int ret = 0;
> > +	int i;
> > +	struct blk_mq_hw_ctx *hctx[POLL_HCTX_MAX_CNT];
> > +	int cnt = 0;
> > +
> > +	blk_build_poll_queues(grps, hctx, &cnt);
> > +
> > +	for (i = 0; i < cnt; i++)
> > +		ret += blk_mq_poll_hctx(hctx[i]->queue, hctx[i]);
> > +
> > +	blk_bio_poll_reap_ios(grps);
> > +
> >  	return ret;
> >  }
> >  
> > 
> Can't we make it a sysfs attribute instead of hard-coding it?
> '8' seems a bit arbitrary to me, I'd rather have the ability to modify it...

I'd rather not add such code in the feature 'enablement' stage since I haven't
observed the number playing a big role yet. The limit is there so that the hw
queues to be polled can be held in on-stack variables, and to avoid adding too
much latency when there are too many bios from too many hw queues to reap.

Also, the actual polled hw queues can be observed easily via bpftrace, so a
sysfs knob isn't necessary for debugging either.


Thanks, 
Ming


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V6 10/12] block: limit hw queues to be polled in each blk_poll()
  2021-04-26  8:00     ` Ming Lei
@ 2021-04-26  9:05       ` Hannes Reinecke
  0 siblings, 0 replies; 26+ messages in thread
From: Hannes Reinecke @ 2021-04-26  9:05 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, linux-block, Jeffle Xu, Mike Snitzer, dm-devel

On 4/26/21 10:00 AM, Ming Lei wrote:
> On Mon, Apr 26, 2021 at 09:19:20AM +0200, Hannes Reinecke wrote:
>> On 4/22/21 2:20 PM, Ming Lei wrote:
>>> Limit each blk_poll() to polling at most 8 hw queues, to avoid
>>> adding extra latency when queue depth is high.
>>>
>>> Reviewed-by: Jeffle Xu <jefflexu@linux.alibaba.com>
>>> Signed-off-by: Ming Lei <ming.lei@redhat.com>
>>> ---
>>>  block/blk-poll.c | 78 ++++++++++++++++++++++++++++++++++--------------
>>>  1 file changed, 55 insertions(+), 23 deletions(-)
>>>
>>> diff --git a/block/blk-poll.c b/block/blk-poll.c
>>> index 249d73ff6f81..20e7c47cc984 100644
>>> --- a/block/blk-poll.c
>>> +++ b/block/blk-poll.c
>>> @@ -288,36 +288,32 @@ static void bio_grp_list_move(struct bio_grp_list *dst,
>>>  	src->nr_grps -= cnt;
>>>  }
>>>  
>>> -static int blk_mq_poll_io(struct bio *bio)
>>> +#define POLL_HCTX_MAX_CNT 8
>>> +
>>> +static bool blk_add_unique_hctx(struct blk_mq_hw_ctx **data, int *cnt,
>>> +		struct blk_mq_hw_ctx *hctx)
>>>  {
>>> -	struct request_queue *q = bio->bi_bdev->bd_disk->queue;
>>> -	blk_qc_t cookie = bio_get_poll_data(bio);
>>> -	int ret = 0;
>>> +	int i;
>>>  
>>> -	/* wait until the bio is really submitted */
>>> -	if (!blk_qc_t_ready(cookie))
>>> -		return 0;
>>>  
>>> -	if (!bio_flagged(bio, BIO_DONE) && blk_qc_t_valid(cookie)) {
>>> -		struct blk_mq_hw_ctx *hctx =
>>> -			q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
>>> +	for (i = 0; i < *cnt; i++) {
>>> +		if (data[i] == hctx)
>>> +			goto exit;
>>> +	}
>>>  
>>> -		ret += blk_mq_poll_hctx(q, hctx);
>>> +	if (i < POLL_HCTX_MAX_CNT) {
>>> +		data[i] = hctx;
>>> +		(*cnt)++;
>>>  	}
>>> -	return ret;
>>> + exit:
>>> +	return *cnt == POLL_HCTX_MAX_CNT;
>>>  }
>>>  
>>> -static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
>>> +static void blk_build_poll_queues(struct bio_grp_list *grps,
>>> +		struct blk_mq_hw_ctx **data, int *cnt)
>>>  {
>>> -	int ret = 0;
>>>  	int i;
>>>  
>>> -	/*
>>> -	 * Poll hw queue first.
>>> -	 *
>>> -	 * TODO: limit the max number of polls and make sure not to
>>> -	 * poll the same hw queue more than once.
>>> -	 */
>>>  	for (i = 0; i < grps->nr_grps; i++) {
>>>  		struct bio_grp_list_data *grp = &grps->head[i];
>>>  		struct bio *bio;
>>> @@ -325,11 +321,31 @@ static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
>>>  		if (bio_grp_list_grp_empty(grp))
>>>  			continue;
>>>  
>>> -		for (bio = grp->list.head; bio; bio = bio->bi_poll)
>>> -			ret += blk_mq_poll_io(bio);
>>> +		for (bio = grp->list.head; bio; bio = bio->bi_poll) {
>>> +			blk_qc_t  cookie;
>>> +			struct blk_mq_hw_ctx *hctx;
>>> +			struct request_queue *q;
>>> +
>>> +			if (bio_flagged(bio, BIO_DONE))
>>> +				continue;
>>> +
>>> +			/* wait until the bio is really submitted */
>>> +			cookie = bio_get_poll_data(bio);
>>> +			if (!blk_qc_t_ready(cookie) || !blk_qc_t_valid(cookie))
>>> +				continue;
>>> +
>>> +			q = bio->bi_bdev->bd_disk->queue;
>>> +			hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
>>> +			if (blk_add_unique_hctx(data, cnt, hctx))
>>> +				return;
>>> +		}
>>>  	}
>>> +}
>>> +
>>> +static void blk_bio_poll_reap_ios(struct bio_grp_list *grps)
>>> +{
>>> +	int i;
>>>  
>>> -	/* reap bios */
>>>  	for (i = 0; i < grps->nr_grps; i++) {
>>>  		struct bio_grp_list_data *grp = &grps->head[i];
>>>  		struct bio *bio;
>>> @@ -354,6 +370,22 @@ static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
>>>  		}
>>>  		__bio_grp_list_merge(&grp->list, &bl);
>>>  	}
>>> +}
>>> +
>>> +static int blk_bio_poll_and_end_io(struct bio_grp_list *grps)
>>> +{
>>> +	int ret = 0;
>>> +	int i;
>>> +	struct blk_mq_hw_ctx *hctx[POLL_HCTX_MAX_CNT];
>>> +	int cnt = 0;
>>> +
>>> +	blk_build_poll_queues(grps, hctx, &cnt);
>>> +
>>> +	for (i = 0; i < cnt; i++)
>>> +		ret += blk_mq_poll_hctx(hctx[i]->queue, hctx[i]);
>>> +
>>> +	blk_bio_poll_reap_ios(grps);
>>> +
>>>  	return ret;
>>>  }
>>>  
>>>
>> Can't we make it a sysfs attribute instead of hard-coding it?
>> '8' seems a bit arbitrary to me, I'd rather have the ability to modify it...
> 
> I'd rather not add such code in the feature 'enablement' stage since I doesn't
> observe the number plays a big role yet. It is added for holding hw queues to
> be polled on stack variables, also avoid to add too much latency if there is
> too many bios from too many hw queues to be reaped.
> 
> Also the actual polled hw queues can be observed easily via bpftrace, so debug
> purpose from sysfs isn't necessary too.
> 
Okay. You can add my

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		        Kernel Storage Architect
hare@suse.de			               +49 911 74053 688
SUSE Software Solutions Germany GmbH, 90409 Nürnberg
GF: F. Imendörffer, HRB 36809 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V6 00/12] block: support bio based io polling
  2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
                   ` (11 preceding siblings ...)
  2021-04-22 12:20 ` [PATCH V6 12/12] dm: support IO polling for bio-based dm device Ming Lei
@ 2021-05-17  6:16 ` JeffleXu
  2021-05-17  7:13   ` Ming Lei
  12 siblings, 1 reply; 26+ messages in thread
From: JeffleXu @ 2021-05-17  6:16 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe; +Cc: linux-block, Mike Snitzer, dm-devel, Hannes Reinecke

Hi all,

What's the latest progress of this bio-based polling feature?

I've noticed that hch has also sent a patch set on this [1]. But as far
as I know, hch's patch set only refactors the interface of polling in
the block layer. It indeed helps bio-based polling for some kinds of
bio-based drivers, but for DM/MD, where one bio could be mapped to several
split bios, more work is obviously needed, just like Ming Lei's
io_context related code in this patch set.

hch may have a better idea; after all, [1] is just a preparation patch set.


[1]
https://lore.kernel.org/linux-block/20210427161619.1294399-2-hch@lst.de/T/


-- 
Thanks,
Jeffle


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH V6 00/12] block: support bio based io polling
  2021-05-17  6:16 ` [PATCH V6 00/12] block: support bio based io polling JeffleXu
@ 2021-05-17  7:13   ` Ming Lei
  0 siblings, 0 replies; 26+ messages in thread
From: Ming Lei @ 2021-05-17  7:13 UTC (permalink / raw)
  To: JeffleXu; +Cc: Jens Axboe, linux-block, Mike Snitzer, dm-devel, Hannes Reinecke

Hi JeffleXu,

On Mon, May 17, 2021 at 02:16:39PM +0800, JeffleXu wrote:
> Hi all,
> 
> What's the latest progress of this bio-based polling feature?
> 
> I've noticed that hch has also sent a patch set on this [1]. But as far
> as I know, hch's patch set only refactors the polling interface in the
> block layer. It does help bio-based polling for some kinds of bio-based
> drivers, but for DM/MD, where one bio can be mapped to several split
> bios, more work is clearly needed, just like Ming Lei's io_context
> related code in this patch set.
> 
> hch may have a better idea; after all, [1] is just a preparatory patch set.

Yeah, we have to rebase V6 against Christoph's patchset anyway.

Looks like there are at least two approaches left for us:

1) keep the generic approach in V6 and simply rebase it once
Christoph's patchset is finalized

2) support io polling directly in the bio-based driver: since
bio->bi_cookie is assigned for each underlying bio, it shouldn't be
very difficult to support that in DM/MD. I have been thinking about it
for a while, but haven't coded it yet. BTW, all underlying bios can be
linked via the DM bio's ->bi_next, and we can add one new ->io_poll()
callback for polling the DM/MD bio (rough sketch below).
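
A minimal, purely hypothetical sketch of 2), assuming Christoph's
patchset [1] is applied so that each underlying bio carries its own
->bi_cookie, and assuming the underlying bios are linked off the parent
bio's ->bi_next as described above; dm_io_poll() is an illustrative
name, not an existing callback:

        #include <linux/bio.h>
        #include <linux/blkdev.h>

        static int dm_io_poll(struct bio *parent)
        {
                struct bio *bio;
                int ret = 0;

                /* walk the underlying bios linked off the parent bio */
                for (bio = parent->bi_next; bio; bio = bio->bi_next) {
                        struct request_queue *q = bdev_get_queue(bio->bi_bdev);

                        /* only blk-mq queues can actually be polled */
                        if (queue_is_mq(q))
                                ret += blk_poll(q, bio->bi_cookie, false);
                }

                return ret;
        }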


Thanks,
Ming


^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2021-05-17  7:15 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-22 12:20 [PATCH V6 00/12] block: support bio based io polling Ming Lei
2021-04-22 12:20 ` [PATCH V6 01/12] block: add helper of blk_queue_poll Ming Lei
2021-04-22 12:20 ` [PATCH V6 02/12] block: define 'struct bvec_iter' as packed Ming Lei
2021-04-22 13:18   ` Hannes Reinecke
2021-04-22 12:20 ` [PATCH V6 03/12] block: add one helper to free io_context Ming Lei
2021-04-22 12:20 ` [PATCH V6 04/12] block: move block polling code into one dedicated source file Ming Lei
2021-04-22 13:19   ` Hannes Reinecke
2021-04-26  7:12   ` Hannes Reinecke
2021-04-22 12:20 ` [PATCH V6 05/12] block: extract one helper function polling hw queue Ming Lei
2021-04-22 12:20 ` [PATCH V6 06/12] block: prepare for supporting bio_list via other link Ming Lei
2021-04-22 12:20 ` [PATCH V6 07/12] block: create io poll context for submission and poll task Ming Lei
2021-04-22 12:20 ` [PATCH V6 08/12] block: add req flag of REQ_POLL_CTX Ming Lei
2021-04-22 12:20 ` [PATCH V6 09/12] block: use per-task poll context to implement bio based io polling Ming Lei
2021-04-26  7:17   ` Hannes Reinecke
2021-04-22 12:20 ` [PATCH V6 10/12] block: limit hw queues to be polled in each blk_poll() Ming Lei
2021-04-26  7:19   ` Hannes Reinecke
2021-04-26  8:00     ` Ming Lei
2021-04-26  9:05       ` Hannes Reinecke
2021-04-22 12:20 ` [PATCH V6 11/12] block: allow to control FLAG_POLL via sysfs for bio poll capable queue Ming Lei
2021-04-26  7:20   ` Hannes Reinecke
2021-04-22 12:20 ` [PATCH V6 12/12] dm: support IO polling for bio-based dm device Ming Lei
2021-04-23  1:32   ` JeffleXu
2021-04-23  2:39     ` Ming Lei
2021-04-23  2:38   ` [PATCH V7 " Ming Lei
2021-05-17  6:16 ` [PATCH V6 00/12] block: support bio based io polling JeffleXu
2021-05-17  7:13   ` Ming Lei

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).