* [PATCH v2 0/3] A few cleanup patches for blk-iolatency.c
@ 2022-10-18 11:12 ` Kemeng Shi
  0 siblings, 0 replies; 13+ messages in thread
From: Kemeng Shi @ 2022-10-18 11:12 UTC (permalink / raw)
  To: tj, josef, axboe; +Cc: cgroups, linux-block, linux-kernel, shikemeng

This series contains three cleanup patches for blk-iolatency.c: remove a
redundant check, correct a comment, and simplify struct iolatency_grp.

---
v2:
 Thanks to Josef for the review and comments!
 Add Reviewed-by tags from Josef.
 Improve the corrected comment in patch 2/3.
---

Kemeng Shi (3):
  block: Remove redundant parent blkcg_gq check in check_scale_change
  block: Correct comment for scale_cookie_change
  block: Replace struct rq_depth with unsigned int in struct
    iolatency_grp

 block/blk-iolatency.c | 37 +++++++++++++++++--------------------
 1 file changed, 17 insertions(+), 20 deletions(-)

-- 
2.30.0


* [PATCH v2 1/3] block: Remove redundant parent blkcg_gq check in check_scale_change
@ 2022-10-18 11:12   ` Kemeng Shi
  0 siblings, 0 replies; 13+ messages in thread
From: Kemeng Shi @ 2022-10-18 11:12 UTC (permalink / raw)
  To: tj, josef, axboe; +Cc: cgroups, linux-block, linux-kernel, shikemeng

Function blkcg_iolatency_throttle makes sure blkg->parent is not NULL
before it calls check_scale_change, and check_scale_change is only
called from blkcg_iolatency_throttle, so the parent NULL check inside
check_scale_change is redundant. Remove it.
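
For context, a minimal sketch of the only call site, hedged from the
code around this series (unrelated details are elided, so treat this as
illustrative rather than the exact upstream source):

	static void blkcg_iolatency_throttle(struct rq_qos *rqos,
					     struct bio *bio)
	{
		struct blkcg_gq *blkg = bio->bi_blkg;
		...
		while (blkg && blkg->parent) {
			struct iolatency_grp *iolat = blkg_to_lat(blkg);

			if (!iolat) {
				blkg = blkg->parent;
				continue;
			}

			/*
			 * The loop condition already guarantees that
			 * blkg->parent is not NULL here, so the check
			 * removed below could never fire.
			 */
			check_scale_change(iolat);
			__blkcg_iolatency_throttle(rqos, iolat, ...);
			blkg = blkg->parent;
		}
	}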

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 block/blk-iolatency.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 571fa95aafe9..b24d7b788ba3 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -403,9 +403,6 @@ static void check_scale_change(struct iolatency_grp *iolat)
 	u64 scale_lat;
 	int direction = 0;
 
-	if (lat_to_blkg(iolat)->parent == NULL)
-		return;
-
 	parent = blkg_to_lat(lat_to_blkg(iolat)->parent);
 	if (!parent)
 		return;
-- 
2.30.0


* [PATCH v2 2/3] block: Correct comment for scale_cookie_change
@ 2022-10-18 11:12   ` Kemeng Shi
  0 siblings, 0 replies; 13+ messages in thread
From: Kemeng Shi @ 2022-10-18 11:12 UTC (permalink / raw)
  To: tj, josef, axboe; +Cc: cgroups, linux-block, linux-kernel, shikemeng

The default queue depth of an iolatency_grp is unlimited, so we scale
down quickly (halving it each time) in scale_cookie_change. Remove the
"subtract 1/16th" part, which does not match the code, and describe
the way we actually scale down.
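
To make the corrected comment concrete, here is an illustrative-only
sketch of the two scaling directions, mirroring the scale_change()
arithmetic visible in patch 3/3 of this series (the example queue depth
of 128 is an assumption, not taken from the patches; the kernel's
min()/max() helpers are assumed):

	/* One scaling step; qd is the queue's nr_requests, say 128. */
	static unsigned long scale_once(unsigned long depth,
					unsigned long qd, bool up)
	{
		depth = min(depth, qd);	/* clamp the unlimited default */
		if (up)
			/* add 1/16th per step: 32 -> 40 -> 48 -> ... */
			return min(depth + max(qd / 16, 1UL), qd);
		/* halve per step: 128 -> 64 -> 32 -> ... -> 1 */
		return max(depth >> 1, 1UL);
	}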

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/blk-iolatency.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index b24d7b788ba3..2c574f98c8d1 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -364,9 +364,11 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
 }
 
 /*
- * Change the queue depth of the iolatency_grp.  We add/subtract 1/16th of the
+ * Change the queue depth of the iolatency_grp.  We add 1/16th of the
  * queue depth at a time so we don't get wild swings and hopefully dial in to
- * fairer distribution of the overall queue depth.
+ * fairer distribution of the overall queue depth.  We halve the queue depth
+ * at a time so we can scale down queue depth quickly from default unlimited
+ * to target.
  */
 static void scale_change(struct iolatency_grp *iolat, bool up)
 {
-- 
2.30.0


* [PATCH v2 3/3] block: Replace struct rq_depth with unsigned int in struct iolatency_grp
@ 2022-10-18 11:12   ` Kemeng Shi
  0 siblings, 0 replies; 13+ messages in thread
From: Kemeng Shi @ 2022-10-18 11:12 UTC (permalink / raw)
  To: tj, josef, axboe; +Cc: cgroups, linux-block, linux-kernel, shikemeng

We only need a max queue depth for each iolatency_grp to limit the
number of in-flight IOs. Replace struct rq_depth with an unsigned int
to simplify struct iolatency_grp and save memory.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 block/blk-iolatency.c | 28 +++++++++++++---------------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 2c574f98c8d1..778a0057193e 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -141,7 +141,7 @@ struct iolatency_grp {
 	struct latency_stat __percpu *stats;
 	struct latency_stat cur_stat;
 	struct blk_iolatency *blkiolat;
-	struct rq_depth rq_depth;
+	unsigned int max_depth;
 	struct rq_wait rq_wait;
 	atomic64_t window_start;
 	atomic_t scale_cookie;
@@ -280,7 +280,7 @@ static void iolat_cleanup_cb(struct rq_wait *rqw, void *private_data)
 static bool iolat_acquire_inflight(struct rq_wait *rqw, void *private_data)
 {
 	struct iolatency_grp *iolat = private_data;
-	return rq_wait_inc_below(rqw, iolat->rq_depth.max_depth);
+	return rq_wait_inc_below(rqw, iolat->max_depth);
 }
 
 static void __blkcg_iolatency_throttle(struct rq_qos *rqos,
@@ -374,7 +374,7 @@ static void scale_change(struct iolatency_grp *iolat, bool up)
 {
 	unsigned long qd = iolat->blkiolat->rqos.q->nr_requests;
 	unsigned long scale = scale_amount(qd, up);
-	unsigned long old = iolat->rq_depth.max_depth;
+	unsigned long old = iolat->max_depth;
 
 	if (old > qd)
 		old = qd;
@@ -386,12 +386,12 @@ static void scale_change(struct iolatency_grp *iolat, bool up)
 		if (old < qd) {
 			old += scale;
 			old = min(old, qd);
-			iolat->rq_depth.max_depth = old;
+			iolat->max_depth = old;
 			wake_up_all(&iolat->rq_wait.wait);
 		}
 	} else {
 		old >>= 1;
-		iolat->rq_depth.max_depth = max(old, 1UL);
+		iolat->max_depth = max(old, 1UL);
 	}
 }
 
@@ -444,7 +444,7 @@ static void check_scale_change(struct iolatency_grp *iolat)
 	}
 
 	/* We're as low as we can go. */
-	if (iolat->rq_depth.max_depth == 1 && direction < 0) {
+	if (iolat->max_depth == 1 && direction < 0) {
 		blkcg_use_delay(lat_to_blkg(iolat));
 		return;
 	}
@@ -452,7 +452,7 @@ static void check_scale_change(struct iolatency_grp *iolat)
 	/* We're back to the default cookie, unthrottle all the things. */
 	if (cur_cookie == DEFAULT_SCALE_COOKIE) {
 		blkcg_clear_delay(lat_to_blkg(iolat));
-		iolat->rq_depth.max_depth = UINT_MAX;
+		iolat->max_depth = UINT_MAX;
 		wake_up_all(&iolat->rq_wait.wait);
 		return;
 	}
@@ -507,7 +507,7 @@ static void iolatency_record_time(struct iolatency_grp *iolat,
 	 * We don't want to count issue_as_root bio's in the cgroups latency
 	 * statistics as it could skew the numbers downwards.
 	 */
-	if (unlikely(issue_as_root && iolat->rq_depth.max_depth != UINT_MAX)) {
+	if (unlikely(issue_as_root && iolat->max_depth != UINT_MAX)) {
 		u64 sub = iolat->min_lat_nsec;
 		if (req_time < sub)
 			blkcg_add_delay(lat_to_blkg(iolat), now, sub - req_time);
@@ -919,7 +919,7 @@ static void iolatency_ssd_stat(struct iolatency_grp *iolat, struct seq_file *s)
 	}
 	preempt_enable();
 
-	if (iolat->rq_depth.max_depth == UINT_MAX)
+	if (iolat->max_depth == UINT_MAX)
 		seq_printf(s, " missed=%llu total=%llu depth=max",
 			(unsigned long long)stat.ps.missed,
 			(unsigned long long)stat.ps.total);
@@ -927,7 +927,7 @@ static void iolatency_ssd_stat(struct iolatency_grp *iolat, struct seq_file *s)
 		seq_printf(s, " missed=%llu total=%llu depth=%u",
 			(unsigned long long)stat.ps.missed,
 			(unsigned long long)stat.ps.total,
-			iolat->rq_depth.max_depth);
+			iolat->max_depth);
 }
 
 static void iolatency_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
@@ -944,12 +944,12 @@ static void iolatency_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
 
 	avg_lat = div64_u64(iolat->lat_avg, NSEC_PER_USEC);
 	cur_win = div64_u64(iolat->cur_win_nsec, NSEC_PER_MSEC);
-	if (iolat->rq_depth.max_depth == UINT_MAX)
+	if (iolat->max_depth == UINT_MAX)
 		seq_printf(s, " depth=max avg_lat=%llu win=%llu",
 			avg_lat, cur_win);
 	else
 		seq_printf(s, " depth=%u avg_lat=%llu win=%llu",
-			iolat->rq_depth.max_depth, avg_lat, cur_win);
+			iolat->max_depth, avg_lat, cur_win);
 }
 
 static struct blkg_policy_data *iolatency_pd_alloc(gfp_t gfp,
@@ -993,9 +993,7 @@ static void iolatency_pd_init(struct blkg_policy_data *pd)
 	latency_stat_init(iolat, &iolat->cur_stat);
 	rq_wait_init(&iolat->rq_wait);
 	spin_lock_init(&iolat->child_lat.lock);
-	iolat->rq_depth.queue_depth = blkg->q->nr_requests;
-	iolat->rq_depth.max_depth = UINT_MAX;
-	iolat->rq_depth.default_depth = iolat->rq_depth.queue_depth;
+	iolat->max_depth = UINT_MAX;
 	iolat->blkiolat = blkiolat;
 	iolat->cur_win_nsec = 100 * NSEC_PER_MSEC;
 	atomic64_set(&iolat->window_start, now);
-- 
2.30.0


* Re: [PATCH v2 2/3] block: Correct comment for scale_cookie_change
@ 2022-11-01  9:38     ` Kemeng Shi
  0 siblings, 0 replies; 13+ messages in thread
From: Kemeng Shi @ 2022-11-01  9:38 UTC (permalink / raw)
  To: tj, josef, axboe; +Cc: cgroups, linux-block, linux-kernel

Friendly ping.

On 10/18/2022 7:12 PM, Kemeng Shi wrote:
> The default queue depth of an iolatency_grp is unlimited, so we scale
> down quickly (halving it each time) in scale_cookie_change. Remove the
> "subtract 1/16th" part, which does not match the code, and describe
> the way we actually scale down.
> 
> Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
> ---
>  block/blk-iolatency.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
> index b24d7b788ba3..2c574f98c8d1 100644
> --- a/block/blk-iolatency.c
> +++ b/block/blk-iolatency.c
> @@ -364,9 +364,11 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
>  }
>  
>  /*
> - * Change the queue depth of the iolatency_grp.  We add/subtract 1/16th of the
> + * Change the queue depth of the iolatency_grp.  We add 1/16th of the
>   * queue depth at a time so we don't get wild swings and hopefully dial in to
> - * fairer distribution of the overall queue depth.
> + * fairer distribution of the overall queue depth.  We halve the queue depth
> + * at a time so we can scale down queue depth quickly from default unlimited
> + * to target.
>   */
>  static void scale_change(struct iolatency_grp *iolat, bool up)
>  {
> 

-- 
Best wishes
Kemeng Shi


* Re: [PATCH v2 0/3] A few cleanup patches for blk-iolatency.c
@ 2022-11-01 14:17 ` Jens Axboe
  -1 siblings, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2022-11-01 14:17 UTC (permalink / raw)
  To: Kemeng Shi, josef, tj; +Cc: linux-kernel, cgroups, linux-block

On Tue, 18 Oct 2022 19:12:37 +0800, Kemeng Shi wrote:
> This series contains three cleanup patches for blk-iolatency.c: remove a
> redundant check, correct a comment, and simplify struct iolatency_grp.
> 

Applied, thanks!

[1/3] block: Remove redundant parent blkcg_gq check in check_scale_change
      commit: b6e03072a91ecc815c9f62037871d621b09df2b3
[2/3] block: Correct comment for scale_cookie_change
      commit: 77445572e643538019d9919669ddf36efb6a353a
[3/3] block: Replace struct rq_depth with unsigned int in struct iolatency_grp
      commit: 718ac0e343f8fd81ab042aa4936d25fe10f6b318

Best regards,
-- 
Jens Axboe

* Re: [PATCH v2 2/3] block: Correct comment for scale_cookie_change
@ 2022-11-02 14:11     ` Josef Bacik
  0 siblings, 0 replies; 13+ messages in thread
From: Josef Bacik @ 2022-11-02 14:11 UTC (permalink / raw)
  To: Kemeng Shi; +Cc: tj, axboe, cgroups, linux-block, linux-kernel

On Tue, Oct 18, 2022 at 07:12:39PM +0800, Kemeng Shi wrote:
> The default queue depth of an iolatency_grp is unlimited, so we scale
> down quickly (halving it each time) in scale_cookie_change. Remove the
> "subtract 1/16th" part, which does not match the code, and describe
> the way we actually scale down.
> 
> Signed-off-by: Kemeng Shi <shikemeng@huawei.com>

This is perfect, thanks Kemeng

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

