* [PATCH V4 0/8] block: improve iops limit throttle
@ 2022-02-16  4:45 Ming Lei
  2022-02-16  4:45 ` [PATCH V4 1/8] block: move submit_bio_checks() into submit_bio_noacct Ming Lei
                   ` (8 more replies)
  0 siblings, 9 replies; 15+ messages in thread
From: Ming Lei @ 2022-02-16  4:45 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Ning Li, Tejun Heo, Chunguang Xu, Ming Lei

Hello Guys,

Ning Li reported that the iops limit throttle doesn't work on dm-thin, and that
it also works poorly on a plain disk in case of excessive bio splitting.

Commit 4f1e9630afe6 ("blk-throtl: optimize IOPS throttle for large IO scenarios")
tried to address this, but the approach taken only does post-accounting, so the
current split bios are not actually throttled, and the resulting iops throttling
is still poor in case of excessive bio splitting.
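
As a rough illustration (a self-contained userspace sketch with a made-up
per-slice budget, not kernel code), compare charging each split up front with
post-accounting the splits after they have been issued:

#include <stdio.h>

#define SLICE_BUDGET 8  /* ios allowed per slice (assumed value) */

/*
 * Issue the splits of one large bio within a single slice, either charging
 * each split when it is submitted (up-front) or charging the whole batch
 * only afterwards (post-accounting, as commit 4f1e9630afe6 does).
 */
static int issue_splits(int nr_splits, int charge_up_front)
{
        int budget = SLICE_BUDGET;
        int issued = 0;
        int i;

        for (i = 0; i < nr_splits; i++) {
                if (charge_up_front) {
                        if (!budget)
                                break;  /* throttled until the next slice */
                        budget--;
                }
                issued++;               /* split bio reaches the device */
        }
        if (!charge_up_front)
                budget -= nr_splits;    /* charged too late to hold anything back */
        return issued;
}

int main(void)
{
        printf("post-accounting: %d of 64 splits issued in one slice\n",
               issue_splits(64, 0));
        printf("up-front charge: %d of 64 splits issued in one slice\n",
               issue_splits(64, 1));
        return 0;
}

With post-accounting all 64 splits go out in the same slice and only the
following slices are penalised; with up-front charging only 8 go out, which is
what an iops budget of 8 per slice is supposed to mean.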

Patches 1-3 are cleanups.

The 4th patch adds a new local helper, submit_bio_noacct_nocheck(), for
blk_throtl_dispatch_work_fn(), so that bios are not throttled again when the
blk-throttle code dispatches previously throttled bios.

The 5th patch merges submit_bio_checks() into submit_bio_noacct(), as
suggested by Christoph.

The 6th and 7th patches make the real difference for throttling split bios
against the iops limit.
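
To summarise what patches 6 and 7 converge on, here is a minimal userspace
sketch (an analogy only, with made-up structures and budget numbers, not the
kernel code): a bio that already carries the throttled mark is re-considered
only when the group has an iops limit, and its bytes are never charged twice.

#include <stdbool.h>
#include <stdio.h>

struct fake_group {
        bool has_iops_limit;
        long io_budget;         /* ios left in the current slice */
        long byte_budget;       /* bytes left in the current slice */
};

struct fake_bio {
        long bytes;
        bool throttled;         /* stands in for BIO_THROTTLED */
};

/* May the bio be issued now?  Charge the budgets if so. */
static bool may_dispatch(struct fake_group *g, struct fake_bio *bio)
{
        /* already-throttled (e.g. split) bio: only the iops limit still applies */
        if (bio->throttled && !g->has_iops_limit)
                return true;

        if (g->io_budget <= 0)
                return false;           /* wait for the next slice */
        if (!bio->throttled && g->byte_budget < bio->bytes)
                return false;

        g->io_budget--;                 /* every bio costs one io */
        if (!bio->throttled)
                g->byte_budget -= bio->bytes;   /* bytes are charged only once */
        bio->throttled = true;
        return true;
}

int main(void)
{
        struct fake_group g = {
                .has_iops_limit = true,
                .io_budget = 4,
                .byte_budget = 1 << 20,
        };
        int i, issued = 0;

        /* 16 splits of an already-charged bio now hit the iops budget of 4 */
        for (i = 0; i < 16; i++) {
                struct fake_bio split = { .bytes = 4096, .throttled = true };

                issued += may_dispatch(&g, &split);
        }
        printf("%d of 16 split bios issued in this slice\n", issued);
        return 0;
}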

The last patch reverts commit 4f1e9630afe6 ("blk-throtl: optimize IOPS
throttle for large IO scenarios").

Ning Li has verified that iops throttling is much improved with the posted
RFC V1 version.

V4:
	- remove wrapper in 4/8
	- early return in 5/8

V3:
	- add reviewed-by/acked-by tags
	- patch style change in 2/8
	- mark submit_bio_checks() as static in 3/8
	- move submit_bio_checks() into submit_bio_noacct in 5/8

V2:
	- remove RFC
	- don't add/export __submit_bio_noacct(); instead add a new local
	helper, submit_bio_noacct_nocheck(), per Christoph's suggestion



Ming Lei (8):
  block: move submit_bio_checks() into submit_bio_noacct
  block: move blk_crypto_bio_prep() out of blk-mq.c
  block: don't declare submit_bio_checks in local header
  block: don't check bio in blk_throtl_dispatch_work_fn
  block: merge submit_bio_checks() into submit_bio_noacct
  block: throttle split bio in case of iops limit
  block: don't try to throttle split bio if iops limit isn't set
  block: revert 4f1e9630afe6 ("blk-throtl: optimize IOPS throttle for
    large IO scenarios")

 block/blk-core.c     | 248 +++++++++++++++++++++----------------------
 block/blk-merge.c    |   2 -
 block/blk-mq.c       |   3 -
 block/blk-throttle.c |  61 ++++-------
 block/blk-throttle.h |  16 +--
 block/blk.h          |   2 +-
 6 files changed, 153 insertions(+), 179 deletions(-)

-- 
2.31.1


* [PATCH V4 1/8] block: move submit_bio_checks() into submit_bio_noacct
  2022-02-16  4:45 [PATCH V4 0/8] block: improve iops limit throttle Ming Lei
@ 2022-02-16  4:45 ` Ming Lei
  2022-02-16  9:14   ` Chaitanya Kulkarni
  2022-02-16  4:45 ` [PATCH V4 2/8] block: move blk_crypto_bio_prep() out of blk-mq.c Ming Lei
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Ming Lei @ 2022-02-16  4:45 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ning Li, Tejun Heo, Chunguang Xu, Ming Lei,
	Christoph Hellwig

It is cleaner and more readable to check the bio when starting to submit it,
instead of just before calling ->submit_bio() or blk_mq_submit_bio().

It also gives us a chance to optimize bio submission by skipping the checks
where they have already been done.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 5a4a59041629..d4a023667ac1 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -797,9 +797,6 @@ static void __submit_bio(struct bio *bio)
 {
 	struct gendisk *disk = bio->bi_bdev->bd_disk;
 
-	if (unlikely(!submit_bio_checks(bio)))
-		return;
-
 	if (!disk->fops->submit_bio)
 		blk_mq_submit_bio(bio);
 	else
@@ -893,6 +890,9 @@ static void __submit_bio_noacct_mq(struct bio *bio)
  */
 void submit_bio_noacct(struct bio *bio)
 {
+	if (unlikely(!submit_bio_checks(bio)))
+		return;
+
 	/*
 	 * We only want one ->submit_bio to be active at a time, else stack
 	 * usage with stacked devices could be a problem.  Use current->bio_list
-- 
2.31.1


* [PATCH V4 2/8] block: move blk_crypto_bio_prep() out of blk-mq.c
  2022-02-16  4:45 [PATCH V4 0/8] block: improve iops limit throttle Ming Lei
  2022-02-16  4:45 ` [PATCH V4 1/8] block: move submit_bio_checks() into submit_bio_noacct Ming Lei
@ 2022-02-16  4:45 ` Ming Lei
  2022-02-16  9:15   ` Chaitanya Kulkarni
  2022-02-16  4:45 ` [PATCH V4 3/8] block: don't declare submit_bio_checks in local header Ming Lei
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Ming Lei @ 2022-02-16  4:45 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ning Li, Tejun Heo, Chunguang Xu, Ming Lei,
	Christoph Hellwig

blk_crypto_bio_prep() is called for both bio-based and blk-mq drivers, so
move it out of blk-mq.c so that this kind of handling can be unified.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c | 21 ++++++++-------------
 block/blk-mq.c   |  3 ---
 2 files changed, 8 insertions(+), 16 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d4a023667ac1..f03fff1fa391 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -783,24 +783,19 @@ noinline_for_stack bool submit_bio_checks(struct bio *bio)
 	return false;
 }
 
-static void __submit_bio_fops(struct gendisk *disk, struct bio *bio)
-{
-	if (blk_crypto_bio_prep(&bio)) {
-		if (likely(bio_queue_enter(bio) == 0)) {
-			disk->fops->submit_bio(bio);
-			blk_queue_exit(disk->queue);
-		}
-	}
-}
-
 static void __submit_bio(struct bio *bio)
 {
 	struct gendisk *disk = bio->bi_bdev->bd_disk;
 
-	if (!disk->fops->submit_bio)
+	if (unlikely(!blk_crypto_bio_prep(&bio)))
+		return;
+
+	if (!disk->fops->submit_bio) {
 		blk_mq_submit_bio(bio);
-	else
-		__submit_bio_fops(disk, bio);
+	} else if (likely(bio_queue_enter(bio) == 0)) {
+		disk->fops->submit_bio(bio);
+		blk_queue_exit(disk->queue);
+	}
 }
 
 /*
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6c59ffe765fd..dc62a47ceb26 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2788,9 +2788,6 @@ void blk_mq_submit_bio(struct bio *bio)
 	unsigned int nr_segs = 1;
 	blk_status_t ret;
 
-	if (unlikely(!blk_crypto_bio_prep(&bio)))
-		return;
-
 	blk_queue_bounce(q, &bio);
 	if (blk_may_split(q, bio))
 		__blk_queue_split(q, &bio, &nr_segs);
-- 
2.31.1


* [PATCH V4 3/8] block: don't declare submit_bio_checks in local header
  2022-02-16  4:45 [PATCH V4 0/8] block: improve iops limit throttle Ming Lei
  2022-02-16  4:45 ` [PATCH V4 1/8] block: move submit_bio_checks() into submit_bio_noacct Ming Lei
  2022-02-16  4:45 ` [PATCH V4 2/8] block: move blk_crypto_bio_prep() out of blk-mq.c Ming Lei
@ 2022-02-16  4:45 ` Ming Lei
  2022-02-16  9:16   ` Chaitanya Kulkarni
  2022-02-16  4:45 ` [PATCH V4 4/8] block: don't check bio in blk_throtl_dispatch_work_fn Ming Lei
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Ming Lei @ 2022-02-16  4:45 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ning Li, Tejun Heo, Chunguang Xu, Ming Lei,
	Christoph Hellwig

submit_bio_checks() won't be called outside of block/blk-core.c any more
since commit 9d497e2941c3 ("block: don't protect submit_bio_checks by
q_usage_counter"), so mark it as a local static helper.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c | 2 +-
 block/blk.h      | 1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index f03fff1fa391..5248b94d276b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -676,7 +676,7 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q,
 	return BLK_STS_OK;
 }
 
-noinline_for_stack bool submit_bio_checks(struct bio *bio)
+static noinline_for_stack bool submit_bio_checks(struct bio *bio)
 {
 	struct block_device *bdev = bio->bi_bdev;
 	struct request_queue *q = bdev_get_queue(bdev);
diff --git a/block/blk.h b/block/blk.h
index abb663a2a147..b2516cb4f98e 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -46,7 +46,6 @@ void blk_freeze_queue(struct request_queue *q);
 void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
 void blk_queue_start_drain(struct request_queue *q);
 int __bio_queue_enter(struct request_queue *q, struct bio *bio);
-bool submit_bio_checks(struct bio *bio);
 
 static inline bool blk_try_enter_queue(struct request_queue *q, bool pm)
 {
-- 
2.31.1


* [PATCH V4 4/8] block: don't check bio in blk_throtl_dispatch_work_fn
  2022-02-16  4:45 [PATCH V4 0/8] block: improve iops limit throttle Ming Lei
                   ` (2 preceding siblings ...)
  2022-02-16  4:45 ` [PATCH V4 3/8] block: don't declare submit_bio_checks in local header Ming Lei
@ 2022-02-16  4:45 ` Ming Lei
  2022-02-16  7:36   ` Christoph Hellwig
  2022-02-16  4:45 ` [PATCH V4 5/8] block: merge submit_bio_checks() into submit_bio_noacct Ming Lei
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Ming Lei @ 2022-02-16  4:45 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Ning Li, Tejun Heo, Chunguang Xu, Ming Lei

The bio has already been checked before throttling, so there is no need to
check it again when dispatching it from the throttle queue.

Add a helper, submit_bio_noacct_nocheck(), for this purpose.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c     | 30 +++++++++++++++++-------------
 block/blk-throttle.c |  2 +-
 block/blk.h          |  1 +
 3 files changed, 19 insertions(+), 14 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 5248b94d276b..72b7b2214c70 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -874,20 +874,8 @@ static void __submit_bio_noacct_mq(struct bio *bio)
 	current->bio_list = NULL;
 }
 
-/**
- * submit_bio_noacct - re-submit a bio to the block device layer for I/O
- * @bio:  The bio describing the location in memory and on the device.
- *
- * This is a version of submit_bio() that shall only be used for I/O that is
- * resubmitted to lower level drivers by stacking block drivers.  All file
- * systems and other upper level users of the block layer should use
- * submit_bio() instead.
- */
-void submit_bio_noacct(struct bio *bio)
+void submit_bio_noacct_nocheck(struct bio *bio)
 {
-	if (unlikely(!submit_bio_checks(bio)))
-		return;
-
 	/*
 	 * We only want one ->submit_bio to be active at a time, else stack
 	 * usage with stacked devices could be a problem.  Use current->bio_list
@@ -901,6 +889,22 @@ void submit_bio_noacct(struct bio *bio)
 	else
 		__submit_bio_noacct(bio);
 }
+
+/**
+ * submit_bio_noacct - re-submit a bio to the block device layer for I/O
+ * @bio:  The bio describing the location in memory and on the device.
+ *
+ * This is a version of submit_bio() that shall only be used for I/O that is
+ * resubmitted to lower level drivers by stacking block drivers.  All file
+ * systems and other upper level users of the block layer should use
+ * submit_bio() instead.
+ */
+void submit_bio_noacct(struct bio *bio)
+{
+	if (unlikely(!submit_bio_checks(bio)))
+		return;
+	submit_bio_noacct_nocheck(bio);
+}
 EXPORT_SYMBOL(submit_bio_noacct);
 
 /**
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 73640d80e99e..8770768f1000 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -1218,7 +1218,7 @@ static void blk_throtl_dispatch_work_fn(struct work_struct *work)
 	if (!bio_list_empty(&bio_list_on_stack)) {
 		blk_start_plug(&plug);
 		while ((bio = bio_list_pop(&bio_list_on_stack)))
-			submit_bio_noacct(bio);
+			submit_bio_noacct_nocheck(bio);
 		blk_finish_plug(&plug);
 	}
 }
diff --git a/block/blk.h b/block/blk.h
index b2516cb4f98e..ebaa59ca46ca 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -46,6 +46,7 @@ void blk_freeze_queue(struct request_queue *q);
 void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
 void blk_queue_start_drain(struct request_queue *q);
 int __bio_queue_enter(struct request_queue *q, struct bio *bio);
+void submit_bio_noacct_nocheck(struct bio *bio);
 
 static inline bool blk_try_enter_queue(struct request_queue *q, bool pm)
 {
-- 
2.31.1


* [PATCH V4 5/8] block: merge submit_bio_checks() into submit_bio_noacct
  2022-02-16  4:45 [PATCH V4 0/8] block: improve iops limit throttle Ming Lei
                   ` (3 preceding siblings ...)
  2022-02-16  4:45 ` [PATCH V4 4/8] block: don't check bio in blk_throtl_dispatch_work_fn Ming Lei
@ 2022-02-16  4:45 ` Ming Lei
  2022-02-16  9:17   ` Chaitanya Kulkarni
  2022-02-16  4:45 ` [PATCH V4 6/8] block: throttle split bio in case of iops limit Ming Lei
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Ming Lei @ 2022-02-16  4:45 UTC (permalink / raw)
  To: Jens Axboe
  Cc: linux-block, Ning Li, Tejun Heo, Chunguang Xu, Ming Lei,
	Christoph Hellwig

Now submit_bio_checks() is only called by submit_bio_noacct(), so merge
it into submit_bio_noacct().

Suggested-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-core.c | 209 +++++++++++++++++++++++------------------------
 1 file changed, 101 insertions(+), 108 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 72b7b2214c70..94bf37f8e61d 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -676,113 +676,6 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q,
 	return BLK_STS_OK;
 }
 
-static noinline_for_stack bool submit_bio_checks(struct bio *bio)
-{
-	struct block_device *bdev = bio->bi_bdev;
-	struct request_queue *q = bdev_get_queue(bdev);
-	blk_status_t status = BLK_STS_IOERR;
-	struct blk_plug *plug;
-
-	might_sleep();
-
-	plug = blk_mq_plug(q, bio);
-	if (plug && plug->nowait)
-		bio->bi_opf |= REQ_NOWAIT;
-
-	/*
-	 * For a REQ_NOWAIT based request, return -EOPNOTSUPP
-	 * if queue does not support NOWAIT.
-	 */
-	if ((bio->bi_opf & REQ_NOWAIT) && !blk_queue_nowait(q))
-		goto not_supported;
-
-	if (should_fail_bio(bio))
-		goto end_io;
-	if (unlikely(bio_check_ro(bio)))
-		goto end_io;
-	if (!bio_flagged(bio, BIO_REMAPPED)) {
-		if (unlikely(bio_check_eod(bio)))
-			goto end_io;
-		if (bdev->bd_partno && unlikely(blk_partition_remap(bio)))
-			goto end_io;
-	}
-
-	/*
-	 * Filter flush bio's early so that bio based drivers without flush
-	 * support don't have to worry about them.
-	 */
-	if (op_is_flush(bio->bi_opf) &&
-	    !test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
-		bio->bi_opf &= ~(REQ_PREFLUSH | REQ_FUA);
-		if (!bio_sectors(bio)) {
-			status = BLK_STS_OK;
-			goto end_io;
-		}
-	}
-
-	if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
-		bio_clear_polled(bio);
-
-	switch (bio_op(bio)) {
-	case REQ_OP_DISCARD:
-		if (!blk_queue_discard(q))
-			goto not_supported;
-		break;
-	case REQ_OP_SECURE_ERASE:
-		if (!blk_queue_secure_erase(q))
-			goto not_supported;
-		break;
-	case REQ_OP_WRITE_SAME:
-		if (!q->limits.max_write_same_sectors)
-			goto not_supported;
-		break;
-	case REQ_OP_ZONE_APPEND:
-		status = blk_check_zone_append(q, bio);
-		if (status != BLK_STS_OK)
-			goto end_io;
-		break;
-	case REQ_OP_ZONE_RESET:
-	case REQ_OP_ZONE_OPEN:
-	case REQ_OP_ZONE_CLOSE:
-	case REQ_OP_ZONE_FINISH:
-		if (!blk_queue_is_zoned(q))
-			goto not_supported;
-		break;
-	case REQ_OP_ZONE_RESET_ALL:
-		if (!blk_queue_is_zoned(q) || !blk_queue_zone_resetall(q))
-			goto not_supported;
-		break;
-	case REQ_OP_WRITE_ZEROES:
-		if (!q->limits.max_write_zeroes_sectors)
-			goto not_supported;
-		break;
-	default:
-		break;
-	}
-
-	if (blk_throtl_bio(bio))
-		return false;
-
-	blk_cgroup_bio_start(bio);
-	blkcg_bio_issue_init(bio);
-
-	if (!bio_flagged(bio, BIO_TRACE_COMPLETION)) {
-		trace_block_bio_queue(bio);
-		/* Now that enqueuing has been traced, we need to trace
-		 * completion as well.
-		 */
-		bio_set_flag(bio, BIO_TRACE_COMPLETION);
-	}
-	return true;
-
-not_supported:
-	status = BLK_STS_NOTSUPP;
-end_io:
-	bio->bi_status = status;
-	bio_endio(bio);
-	return false;
-}
-
 static void __submit_bio(struct bio *bio)
 {
 	struct gendisk *disk = bio->bi_bdev->bd_disk;
@@ -901,9 +794,109 @@ void submit_bio_noacct_nocheck(struct bio *bio)
  */
 void submit_bio_noacct(struct bio *bio)
 {
-	if (unlikely(!submit_bio_checks(bio)))
+	struct block_device *bdev = bio->bi_bdev;
+	struct request_queue *q = bdev_get_queue(bdev);
+	blk_status_t status = BLK_STS_IOERR;
+	struct blk_plug *plug;
+
+	might_sleep();
+
+	plug = blk_mq_plug(q, bio);
+	if (plug && plug->nowait)
+		bio->bi_opf |= REQ_NOWAIT;
+
+	/*
+	 * For a REQ_NOWAIT based request, return -EOPNOTSUPP
+	 * if queue does not support NOWAIT.
+	 */
+	if ((bio->bi_opf & REQ_NOWAIT) && !blk_queue_nowait(q))
+		goto not_supported;
+
+	if (should_fail_bio(bio))
+		goto end_io;
+	if (unlikely(bio_check_ro(bio)))
+		goto end_io;
+	if (!bio_flagged(bio, BIO_REMAPPED)) {
+		if (unlikely(bio_check_eod(bio)))
+			goto end_io;
+		if (bdev->bd_partno && unlikely(blk_partition_remap(bio)))
+			goto end_io;
+	}
+
+	/*
+	 * Filter flush bio's early so that bio based drivers without flush
+	 * support don't have to worry about them.
+	 */
+	if (op_is_flush(bio->bi_opf) &&
+	    !test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
+		bio->bi_opf &= ~(REQ_PREFLUSH | REQ_FUA);
+		if (!bio_sectors(bio)) {
+			status = BLK_STS_OK;
+			goto end_io;
+		}
+	}
+
+	if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+		bio_clear_polled(bio);
+
+	switch (bio_op(bio)) {
+	case REQ_OP_DISCARD:
+		if (!blk_queue_discard(q))
+			goto not_supported;
+		break;
+	case REQ_OP_SECURE_ERASE:
+		if (!blk_queue_secure_erase(q))
+			goto not_supported;
+		break;
+	case REQ_OP_WRITE_SAME:
+		if (!q->limits.max_write_same_sectors)
+			goto not_supported;
+		break;
+	case REQ_OP_ZONE_APPEND:
+		status = blk_check_zone_append(q, bio);
+		if (status != BLK_STS_OK)
+			goto end_io;
+		break;
+	case REQ_OP_ZONE_RESET:
+	case REQ_OP_ZONE_OPEN:
+	case REQ_OP_ZONE_CLOSE:
+	case REQ_OP_ZONE_FINISH:
+		if (!blk_queue_is_zoned(q))
+			goto not_supported;
+		break;
+	case REQ_OP_ZONE_RESET_ALL:
+		if (!blk_queue_is_zoned(q) || !blk_queue_zone_resetall(q))
+			goto not_supported;
+		break;
+	case REQ_OP_WRITE_ZEROES:
+		if (!q->limits.max_write_zeroes_sectors)
+			goto not_supported;
+		break;
+	default:
+		break;
+	}
+
+	if (blk_throtl_bio(bio))
 		return;
+
+	blk_cgroup_bio_start(bio);
+	blkcg_bio_issue_init(bio);
+
+	if (!bio_flagged(bio, BIO_TRACE_COMPLETION)) {
+		trace_block_bio_queue(bio);
+		/* Now that enqueuing has been traced, we need to trace
+		 * completion as well.
+		 */
+		bio_set_flag(bio, BIO_TRACE_COMPLETION);
+	}
 	submit_bio_noacct_nocheck(bio);
+	return;
+
+not_supported:
+	status = BLK_STS_NOTSUPP;
+end_io:
+	bio->bi_status = status;
+	bio_endio(bio);
 }
 EXPORT_SYMBOL(submit_bio_noacct);
 
-- 
2.31.1


* [PATCH V4 6/8] block: throttle split bio in case of iops limit
  2022-02-16  4:45 [PATCH V4 0/8] block: improve iops limit throttle Ming Lei
                   ` (4 preceding siblings ...)
  2022-02-16  4:45 ` [PATCH V4 5/8] block: merge submit_bio_checks() into submit_bio_noacct Ming Lei
@ 2022-02-16  4:45 ` Ming Lei
  2022-02-16  4:45 ` [PATCH V4 7/8] block: don't try to throttle split bio if iops limit isn't set Ming Lei
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Ming Lei @ 2022-02-16  4:45 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Ning Li, Tejun Heo, Chunguang Xu, Ming Lei

Commit 111be8839817 ("block-throttle: avoid double charge") unconditionally
marks a bio as BIO_THROTTLED once __blk_throtl_bio() has been called on it,
so the bio is never passed to __blk_throtl_bio() again. This avoids double
charging in case of bio splitting. That is reasonable for the read/write
throughput limits, but not for the IOPS limits, because the block layer does
io accounting per split bio.

Chunguang Xu already observed this issue and addressed it in commit
4f1e9630afe6 ("blk-throtl: optimize IOPS throttle for large IO scenarios").
However, that patch only covers bio splitting in __blk_queue_split(), and
there are other kinds of bio splitting, such as bio_split() followed by
submit_bio_noacct().

This patch fixes the issue in a generic way by always charging the bio
against the iops limit in blk_throtl_bio(). This is reasonable: a
re-submitted or fast-cloned bio is charged if it is submitted to the same
disk/queue, and BIO_THROTTLED is cleared if bio->bi_bdev is changed.

This new approach gives a much smoother/more stable iops limit than commit
4f1e9630afe6 ("blk-throtl: optimize IOPS throttle for large IO scenarios"),
since that commit cannot actually throttle the current split bios.

Also, this way does not introduce a new double iops charge in
blk_throtl_dispatch_work_fn(), in which blk_throtl_bio() is no longer
called.

Reported-by: Ning Li <lining2020x@163.com>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Chunguang Xu <brookxu@tencent.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-merge.c    |  2 --
 block/blk-throttle.c | 10 +++++++---
 block/blk-throttle.h |  2 --
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 4de34a332c9f..f5255991b773 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -368,8 +368,6 @@ void __blk_queue_split(struct request_queue *q, struct bio **bio,
 		trace_block_split(split, (*bio)->bi_iter.bi_sector);
 		submit_bio_noacct(*bio);
 		*bio = split;
-
-		blk_throtl_charge_bio_split(*bio);
 	}
 }
 
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 8770768f1000..c7aa26d52e84 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -807,7 +807,8 @@ static bool tg_with_in_bps_limit(struct throtl_grp *tg, struct bio *bio,
 	unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd;
 	unsigned int bio_size = throtl_bio_data_size(bio);
 
-	if (bps_limit == U64_MAX) {
+	/* no need to throttle if this bio's bytes have been accounted */
+	if (bps_limit == U64_MAX || bio_flagged(bio, BIO_THROTTLED)) {
 		if (wait)
 			*wait = 0;
 		return true;
@@ -919,9 +920,12 @@ static void throtl_charge_bio(struct throtl_grp *tg, struct bio *bio)
 	unsigned int bio_size = throtl_bio_data_size(bio);
 
 	/* Charge the bio to the group */
-	tg->bytes_disp[rw] += bio_size;
+	if (!bio_flagged(bio, BIO_THROTTLED)) {
+		tg->bytes_disp[rw] += bio_size;
+		tg->last_bytes_disp[rw] += bio_size;
+	}
+
 	tg->io_disp[rw]++;
-	tg->last_bytes_disp[rw] += bio_size;
 	tg->last_io_disp[rw]++;
 
 	/*
diff --git a/block/blk-throttle.h b/block/blk-throttle.h
index 175f03abd9e4..cb43f4417d6e 100644
--- a/block/blk-throttle.h
+++ b/block/blk-throttle.h
@@ -170,8 +170,6 @@ static inline bool blk_throtl_bio(struct bio *bio)
 {
 	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
 
-	if (bio_flagged(bio, BIO_THROTTLED))
-		return false;
 	if (!tg->has_rules[bio_data_dir(bio)])
 		return false;
 
-- 
2.31.1


* [PATCH V4 7/8] block: don't try to throttle split bio if iops limit isn't set
  2022-02-16  4:45 [PATCH V4 0/8] block: improve iops limit throttle Ming Lei
                   ` (5 preceding siblings ...)
  2022-02-16  4:45 ` [PATCH V4 6/8] block: throttle split bio in case of iops limit Ming Lei
@ 2022-02-16  4:45 ` Ming Lei
  2022-02-16  4:45 ` [PATCH V4 8/8] block: revert 4f1e9630afe6 ("blk-throtl: optimize IOPS throttle for large IO scenarios") Ming Lei
  2022-02-17  2:42 ` [PATCH V4 0/8] block: improve iops limit throttle Jens Axboe
  8 siblings, 0 replies; 15+ messages in thread
From: Ming Lei @ 2022-02-16  4:45 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Ning Li, Tejun Heo, Chunguang Xu, Ming Lei

A split bio still needs to be throttled against the IOPS limit even though
it has been marked as BIO_THROTTLED, since the block layer actually accounts
io per split bio.

If only a throughput limit is set up, there is no need to throttle again
once BIO_THROTTLED is set, since the whole bio's bytes have already been
accounted for and considered.

Add a flag, THROTL_TG_HAS_IOPS_LIMIT, to serve this purpose.

Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-throttle.c | 21 ++++++++++++++-------
 block/blk-throttle.h | 11 +++++++++++
 2 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index c7aa26d52e84..ec72eced24d2 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -41,11 +41,6 @@
 /* A workqueue to queue throttle related work */
 static struct workqueue_struct *kthrotld_workqueue;
 
-enum tg_state_flags {
-	THROTL_TG_PENDING	= 1 << 0,	/* on parent's pending tree */
-	THROTL_TG_WAS_EMPTY	= 1 << 1,	/* bio_lists[] became non-empty */
-};
-
 #define rb_entry_tg(node)	rb_entry((node), struct throtl_grp, rb_node)
 
 /* We measure latency for request size from <= 4k to >= 1M */
@@ -425,12 +420,24 @@ static void tg_update_has_rules(struct throtl_grp *tg)
 	struct throtl_grp *parent_tg = sq_to_tg(tg->service_queue.parent_sq);
 	struct throtl_data *td = tg->td;
 	int rw;
+	int has_iops_limit = 0;
+
+	for (rw = READ; rw <= WRITE; rw++) {
+		unsigned int iops_limit = tg_iops_limit(tg, rw);
 
-	for (rw = READ; rw <= WRITE; rw++)
 		tg->has_rules[rw] = (parent_tg && parent_tg->has_rules[rw]) ||
 			(td->limit_valid[td->limit_index] &&
 			 (tg_bps_limit(tg, rw) != U64_MAX ||
-			  tg_iops_limit(tg, rw) != UINT_MAX));
+			  iops_limit != UINT_MAX));
+
+		if (iops_limit != UINT_MAX)
+			has_iops_limit = 1;
+	}
+
+	if (has_iops_limit)
+		tg->flags |= THROTL_TG_HAS_IOPS_LIMIT;
+	else
+		tg->flags &= ~THROTL_TG_HAS_IOPS_LIMIT;
 }
 
 static void throtl_pd_online(struct blkg_policy_data *pd)
diff --git a/block/blk-throttle.h b/block/blk-throttle.h
index cb43f4417d6e..c996a15f290e 100644
--- a/block/blk-throttle.h
+++ b/block/blk-throttle.h
@@ -52,6 +52,12 @@ struct throtl_service_queue {
 	struct timer_list	pending_timer;	/* fires on first_pending_disptime */
 };
 
+enum tg_state_flags {
+	THROTL_TG_PENDING	= 1 << 0,	/* on parent's pending tree */
+	THROTL_TG_WAS_EMPTY	= 1 << 1,	/* bio_lists[] became non-empty */
+	THROTL_TG_HAS_IOPS_LIMIT = 1 << 2,	/* tg has iops limit */
+};
+
 enum {
 	LIMIT_LOW,
 	LIMIT_MAX,
@@ -170,6 +176,11 @@ static inline bool blk_throtl_bio(struct bio *bio)
 {
 	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
 
+	/* no need to throttle bps any more if the bio has been throttled */
+	if (bio_flagged(bio, BIO_THROTTLED) &&
+	    !(tg->flags & THROTL_TG_HAS_IOPS_LIMIT))
+		return false;
+
 	if (!tg->has_rules[bio_data_dir(bio)])
 		return false;
 
-- 
2.31.1


* [PATCH V4 8/8] block: revert 4f1e9630afe6 ("blk-throtl: optimize IOPS throttle for large IO scenarios")
  2022-02-16  4:45 [PATCH V4 0/8] block: improve iops limit throttle Ming Lei
                   ` (6 preceding siblings ...)
  2022-02-16  4:45 ` [PATCH V4 7/8] block: don't try to throttle split bio if iops limit isn't set Ming Lei
@ 2022-02-16  4:45 ` Ming Lei
  2022-02-17  2:42 ` [PATCH V4 0/8] block: improve iops limit throttle Jens Axboe
  8 siblings, 0 replies; 15+ messages in thread
From: Ming Lei @ 2022-02-16  4:45 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-block, Ning Li, Tejun Heo, Chunguang Xu, Ming Lei

Revert commit 4f1e9630afe6 ("blk-throtl: optimize IOPS throttle for large
IO scenarios") since we now have a simpler way to address this issue, with a
better iops throttling result.

Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-throttle.c | 28 ----------------------------
 block/blk-throttle.h |  5 -----
 2 files changed, 33 deletions(-)

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index ec72eced24d2..a3b3ebc72dd4 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -640,8 +640,6 @@ static inline void throtl_start_new_slice_with_credit(struct throtl_grp *tg,
 	tg->bytes_disp[rw] = 0;
 	tg->io_disp[rw] = 0;
 
-	atomic_set(&tg->io_split_cnt[rw], 0);
-
 	/*
 	 * Previous slice has expired. We must have trimmed it after last
 	 * bio dispatch. That means since start of last slice, we never used
@@ -665,8 +663,6 @@ static inline void throtl_start_new_slice(struct throtl_grp *tg, bool rw)
 	tg->slice_start[rw] = jiffies;
 	tg->slice_end[rw] = jiffies + tg->td->throtl_slice;
 
-	atomic_set(&tg->io_split_cnt[rw], 0);
-
 	throtl_log(&tg->service_queue,
 		   "[%c] new slice start=%lu end=%lu jiffies=%lu",
 		   rw == READ ? 'R' : 'W', tg->slice_start[rw],
@@ -900,9 +896,6 @@ static bool tg_may_dispatch(struct throtl_grp *tg, struct bio *bio,
 				jiffies + tg->td->throtl_slice);
 	}
 
-	if (iops_limit != UINT_MAX)
-		tg->io_disp[rw] += atomic_xchg(&tg->io_split_cnt[rw], 0);
-
 	if (tg_with_in_bps_limit(tg, bio, bps_limit, &bps_wait) &&
 	    tg_with_in_iops_limit(tg, bio, iops_limit, &iops_wait)) {
 		if (wait)
@@ -1927,14 +1920,12 @@ static void throtl_downgrade_check(struct throtl_grp *tg)
 	}
 
 	if (tg->iops[READ][LIMIT_LOW]) {
-		tg->last_io_disp[READ] += atomic_xchg(&tg->last_io_split_cnt[READ], 0);
 		iops = tg->last_io_disp[READ] * HZ / elapsed_time;
 		if (iops >= tg->iops[READ][LIMIT_LOW])
 			tg->last_low_overflow_time[READ] = now;
 	}
 
 	if (tg->iops[WRITE][LIMIT_LOW]) {
-		tg->last_io_disp[WRITE] += atomic_xchg(&tg->last_io_split_cnt[WRITE], 0);
 		iops = tg->last_io_disp[WRITE] * HZ / elapsed_time;
 		if (iops >= tg->iops[WRITE][LIMIT_LOW])
 			tg->last_low_overflow_time[WRITE] = now;
@@ -2053,25 +2044,6 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
 }
 #endif
 
-void blk_throtl_charge_bio_split(struct bio *bio)
-{
-	struct blkcg_gq *blkg = bio->bi_blkg;
-	struct throtl_grp *parent = blkg_to_tg(blkg);
-	struct throtl_service_queue *parent_sq;
-	bool rw = bio_data_dir(bio);
-
-	do {
-		if (!parent->has_rules[rw])
-			break;
-
-		atomic_inc(&parent->io_split_cnt[rw]);
-		atomic_inc(&parent->last_io_split_cnt[rw]);
-
-		parent_sq = parent->service_queue.parent_sq;
-		parent = sq_to_tg(parent_sq);
-	} while (parent);
-}
-
 bool __blk_throtl_bio(struct bio *bio)
 {
 	struct request_queue *q = bdev_get_queue(bio->bi_bdev);
diff --git a/block/blk-throttle.h b/block/blk-throttle.h
index c996a15f290e..b23a9f3abb82 100644
--- a/block/blk-throttle.h
+++ b/block/blk-throttle.h
@@ -138,9 +138,6 @@ struct throtl_grp {
 	unsigned int bad_bio_cnt; /* bios exceeding latency threshold */
 	unsigned long bio_cnt_reset_time;
 
-	atomic_t io_split_cnt[2];
-	atomic_t last_io_split_cnt[2];
-
 	struct blkg_rwstat stat_bytes;
 	struct blkg_rwstat stat_ios;
 };
@@ -164,13 +161,11 @@ static inline struct throtl_grp *blkg_to_tg(struct blkcg_gq *blkg)
 static inline int blk_throtl_init(struct request_queue *q) { return 0; }
 static inline void blk_throtl_exit(struct request_queue *q) { }
 static inline void blk_throtl_register_queue(struct request_queue *q) { }
-static inline void blk_throtl_charge_bio_split(struct bio *bio) { }
 static inline bool blk_throtl_bio(struct bio *bio) { return false; }
 #else /* CONFIG_BLK_DEV_THROTTLING */
 int blk_throtl_init(struct request_queue *q);
 void blk_throtl_exit(struct request_queue *q);
 void blk_throtl_register_queue(struct request_queue *q);
-void blk_throtl_charge_bio_split(struct bio *bio);
 bool __blk_throtl_bio(struct bio *bio);
 static inline bool blk_throtl_bio(struct bio *bio)
 {
-- 
2.31.1


* Re: [PATCH V4 4/8] block: don't check bio in blk_throtl_dispatch_work_fn
  2022-02-16  4:45 ` [PATCH V4 4/8] block: don't check bio in blk_throtl_dispatch_work_fn Ming Lei
@ 2022-02-16  7:36   ` Christoph Hellwig
  0 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2022-02-16  7:36 UTC (permalink / raw)
  To: Ming Lei; +Cc: Jens Axboe, linux-block, Ning Li, Tejun Heo, Chunguang Xu

On Wed, Feb 16, 2022 at 12:45:10PM +0800, Ming Lei wrote:
> The bio has already been checked before throttling, so there is no need to
> check it again when dispatching it from the throttle queue.
> 
> Add a helper, submit_bio_noacct_nocheck(), for this purpose.

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH V4 1/8] block: move submit_bio_checks() into submit_bio_noacct
  2022-02-16  4:45 ` [PATCH V4 1/8] block: move submit_bio_checks() into submit_bio_noacct Ming Lei
@ 2022-02-16  9:14   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2022-02-16  9:14 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe
  Cc: linux-block, Ning Li, Tejun Heo, Chunguang Xu, Christoph Hellwig

On 2/15/22 20:45, Ming Lei wrote:
> It is cleaner and more readable to check the bio when starting to submit it,
> instead of just before calling ->submit_bio() or blk_mq_submit_bio().
> 
> It also gives us a chance to optimize bio submission by skipping the checks
> where they have already been done.
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>



* Re: [PATCH V4 2/8] block: move blk_crypto_bio_prep() out of blk-mq.c
  2022-02-16  4:45 ` [PATCH V4 2/8] block: move blk_crypto_bio_prep() out of blk-mq.c Ming Lei
@ 2022-02-16  9:15   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2022-02-16  9:15 UTC (permalink / raw)
  To: Ming Lei
  Cc: linux-block, Jens Axboe, Ning Li, Tejun Heo, Chunguang Xu,
	Christoph Hellwig

On 2/15/22 20:45, Ming Lei wrote:
> blk_crypto_bio_prep() is called for both bio-based and blk-mq drivers, so
> move it out of blk-mq.c so that this kind of handling can be unified.
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>   block/blk-core.c | 21 ++++++++-------------

indeed it is, looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>



* Re: [PATCH V4 3/8] block: don't declare submit_bio_checks in local header
  2022-02-16  4:45 ` [PATCH V4 3/8] block: don't declare submit_bio_checks in local header Ming Lei
@ 2022-02-16  9:16   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2022-02-16  9:16 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe
  Cc: linux-block, Ning Li, Tejun Heo, Chunguang Xu, Christoph Hellwig

On 2/15/22 20:45, Ming Lei wrote:
> submit_bio_checks() won't be called outside of block/blk-core.c any more
> since commit 9d497e2941c3 ("block: don't protect submit_bio_checks by
> q_usage_counter"), so mark it as a local static helper.
> 
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>



* Re: [PATCH V4 5/8] block: merge submit_bio_checks() into submit_bio_noacct
  2022-02-16  4:45 ` [PATCH V4 5/8] block: merge submit_bio_checks() into submit_bio_noacct Ming Lei
@ 2022-02-16  9:17   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2022-02-16  9:17 UTC (permalink / raw)
  To: Ming Lei
  Cc: linux-block, Jens Axboe, Ning Li, Tejun Heo, Chunguang Xu,
	Christoph Hellwig

On 2/15/22 20:45, Ming Lei wrote:
> Now submit_bio_checks() is only called by submit_bio_noacct(), so merge
> it into submit_bio_noacct().
> 
> Suggested-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>



* Re: [PATCH V4 0/8] block: improve iops limit throttle
  2022-02-16  4:45 [PATCH V4 0/8] block: improve iops limit throttle Ming Lei
                   ` (7 preceding siblings ...)
  2022-02-16  4:45 ` [PATCH V4 8/8] block: revert 4f1e9630afe6 ("blk-throtl: optimize IOPS throttle for large IO scenarios") Ming Lei
@ 2022-02-17  2:42 ` Jens Axboe
  8 siblings, 0 replies; 15+ messages in thread
From: Jens Axboe @ 2022-02-17  2:42 UTC (permalink / raw)
  To: Ming Lei; +Cc: Chunguang Xu, linux-block, Ning Li, Tejun Heo

On Wed, 16 Feb 2022 12:45:06 +0800, Ming Lei wrote:
> Ning Li reported that the iops limit throttle doesn't work on dm-thin, and that
> it also works poorly on a plain disk in case of excessive bio splitting.
> 
> Commit 4f1e9630afe6 ("blk-throtl: optimize IOPS throttle for large IO scenarios")
> tried to address this, but the approach taken only does post-accounting, so the
> current split bios are not actually throttled, and the resulting iops throttling
> is still poor in case of excessive bio splitting.
> 
> [...]

Applied, thanks!

[1/8] block: move submit_bio_checks() into submit_bio_noacct
      commit: a650628bde77f6ac5b1d532092346feff7b58c52
[2/8] block: move blk_crypto_bio_prep() out of blk-mq.c
      commit: 7f36b7d02a287ed18d02ae821868aa07b0235521
[3/8] block: don't declare submit_bio_checks in local header
      commit: 29ff23624e21c89d3321d6429dec8ad3847b534a
[4/8] block: don't check bio in blk_throtl_dispatch_work_fn
      commit: 3f98c753717c600eb5708e9b78b3eba6664bddf1
[5/8] block: merge submit_bio_checks() into submit_bio_noacct
      commit: d24c670ec1f9f1dc320e59004e61f3491ae24546
[6/8] block: throttle split bio in case of iops limit
      commit: 9f5ede3c01f9951b0ae7d68b28762ad51d9bacc8
[7/8] block: don't try to throttle split bio if iops limit isn't set
      commit: 5a93b6027eb4ef5db60a4bc5bdbeba5fb9f29384
[8/8] block: revert 4f1e9630afe6 ("blk-throtl: optimize IOPS throttle for large IO scenarios")
      commit: 34841e6fb125aa3f0e33e4eaac9f5eb86b2bb34b

Best regards,
-- 
Jens Axboe


