linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] blk-throttle: Fix io statistics for cgroup v1
@ 2023-04-01  9:47 Jinke Han
  2023-04-03 15:30 ` Michal Koutný
  2023-04-28 19:05 ` Andrea Righi
  0 siblings, 2 replies; 9+ messages in thread
From: Jinke Han @ 2023-04-01  9:47 UTC (permalink / raw)
  To: tj, josef, axboe; +Cc: cgroups, linux-block, linux-kernel, Jinke Han

From: Jinke Han <hanjinke.666@bytedance.com>

After commit f382fb0bcef4 ("block: remove legacy IO schedulers"),
blkio.throttle.io_serviced and blkio.throttle.io_service_bytes become
the only stable io stats interface of cgroup v1, and these statistics
are done in the blk-throttle code. But the current code only counts the
bios that are actually throttled. When the user does not add a throttle
limit, the io stats for cgroup v1 have nothing. I fix it according to the
statistical method of v2 and make it count all ios accurately.

Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>
---
 block/blk-cgroup.c   | 6 ++++--
 block/blk-throttle.c | 6 ------
 block/blk-throttle.h | 9 +++++++++
 3 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index bd50b55bdb61..33263d0d0e0f 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -2033,6 +2033,9 @@ void blk_cgroup_bio_start(struct bio *bio)
 	struct blkg_iostat_set *bis;
 	unsigned long flags;
 
+	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
+		return;
+
 	/* Root-level stats are sourced from system-wide IO stats */
 	if (!cgroup_parent(blkcg->css.cgroup))
 		return;
@@ -2064,8 +2067,7 @@ void blk_cgroup_bio_start(struct bio *bio)
 	}
 
 	u64_stats_update_end_irqrestore(&bis->sync, flags);
-	if (cgroup_subsys_on_dfl(io_cgrp_subsys))
-		cgroup_rstat_updated(blkcg->css.cgroup, cpu);
+	cgroup_rstat_updated(blkcg->css.cgroup, cpu);
 	put_cpu();
 }
 
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 47e9d8be68f3..2be66e9430f7 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2174,12 +2174,6 @@ bool __blk_throtl_bio(struct bio *bio)
 
 	rcu_read_lock();
 
-	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
-		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
-				bio->bi_iter.bi_size);
-		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
-	}
-
 	spin_lock_irq(&q->queue_lock);
 
 	throtl_update_latency_buckets(td);
diff --git a/block/blk-throttle.h b/block/blk-throttle.h
index ef4b7a4de987..d1ccbfe9f797 100644
--- a/block/blk-throttle.h
+++ b/block/blk-throttle.h
@@ -185,6 +185,15 @@ static inline bool blk_should_throtl(struct bio *bio)
 	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
 	int rw = bio_data_dir(bio);
 
+	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
+		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
+			bio_set_flag(bio, BIO_CGROUP_ACCT);
+			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
+					bio->bi_iter.bi_size);
+		}
+		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
+	}
+
 	/* iops limit is always counted */
 	if (tg->has_rules_iops[rw])
 		return true;
-- 
2.20.1
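
For readers skimming the diff, here is a sketch of what the accounting
path in blk_should_throtl() looks like with the hunk above applied. This
is not verbatim kernel source: only the lines visible in the patch are
reproduced, and the remaining per-rule checks at the end of the function
are elided rather than reconstructed.

static inline bool blk_should_throtl(struct bio *bio)
{
	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
	int rw = bio_data_dir(bio);

	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
		/* cgroup v1: account every bio, not only throttled ones */
		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
			/* bytes of the whole bio are counted only once */
			bio_set_flag(bio, BIO_CGROUP_ACCT);
			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
					bio->bi_iter.bi_size);
		}
		/* a re-submitted split remainder still counts as one more io */
		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
	}

	/* iops limit is always counted */
	if (tg->has_rules_iops[rw])
		return true;

	/* ... remaining bps/iops rule checks unchanged ... */
	return false;
}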



* Re: [PATCH v2] blk-throttle: Fix io statistics for cgroup v1
  2023-04-01  9:47 [PATCH v2] blk-throttle: Fix io statistics for cgroup v1 Jinke Han
@ 2023-04-03 15:30 ` Michal Koutný
  2023-04-03 17:56   ` [External] " hanjinke
  2023-04-28 19:05 ` Andrea Righi
  1 sibling, 1 reply; 9+ messages in thread
From: Michal Koutný @ 2023-04-03 15:30 UTC (permalink / raw)
  To: Jinke Han; +Cc: tj, josef, axboe, cgroups, linux-block, linux-kernel


On Sat, Apr 01, 2023 at 05:47:08PM +0800, Jinke Han <hanjinke.666@bytedance.com> wrote:
> From: Jinke Han <hanjinke.666@bytedance.com>
> 
> After commit f382fb0bcef4 ("block: remove legacy IO schedulers"),
> blkio.throttle.io_serviced and blkio.throttle.io_service_bytes become
> the only stable io stats interface of cgroup v1,

There is also the blkio.bfq.{io_serviced,io_service_bytes} pair, so it's
not the only one. Or do you mean stable in terms of the IO scheduler in
use?

> and these statistics are done in the blk-throttle code. But the
> current code only counts the bios that are actually throttled. When
> the user does not add the throttle limit,

... "or the limit doesn't kick in"

> the io stats for cgroup v1 has nothing.


> I fix it according to the statistical method of v2, and made it count
> all ios accurately.

s/all ios/all bios and split ios/ 

(IIUC you fix two things)

> Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")

Good catch.

Does it also undo the performance gain from that commit? (Or rather,
have you observed effect of your patch on v2-only performance?)

> Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>
> ---
>  block/blk-cgroup.c   | 6 ++++--
>  block/blk-throttle.c | 6 ------
>  block/blk-throttle.h | 9 +++++++++
>  3 files changed, 13 insertions(+), 8 deletions(-)

The code looks correct.

Thanks,
Michal



* Re: [External] Re: [PATCH v2] blk-throttle: Fix io statistics for cgroup v1
  2023-04-03 15:30 ` Michal Koutný
@ 2023-04-03 17:56   ` hanjinke
  0 siblings, 0 replies; 9+ messages in thread
From: hanjinke @ 2023-04-03 17:56 UTC (permalink / raw)
  To: Michal Koutný; +Cc: tj, josef, axboe, cgroups, linux-block, linux-kernel



On 2023/4/3 11:30 PM, Michal Koutný wrote:
> On Sat, Apr 01, 2023 at 05:47:08PM +0800, Jinke Han <hanjinke.666@bytedance.com> wrote:
>> From: Jinke Han <hanjinke.666@bytedance.com>
>>
>> After commit f382fb0bcef4 ("block: remove legacy IO schedulers"),
>> blkio.throttle.io_serviced and blkio.throttle.io_service_bytes become
>> the only stable io stats interface of cgroup v1,
> 
> There is also blkio.bfq.{io_serviced,io_service_bytes} couple, so it's
> not the only. Or do you mean stable in terms of used IO scheduler?
> 

Oh, "stable" here means that it always exists; when the bfq scheduler is
not in use, the bfq interface may not exist.

>> and these statistics are done in the blk-throttle code. But the
>> current code only counts the bios that are actually throttled. When
>> the user does not add the throttle limit,
> 
> ... "or the limit doesn't kick in"
> 

Agree.

>> the io stats for cgroup v1 has nothing.
> 
> 
>> I fix it according to the statistical method of v2, and made it count
>> all ios accurately.
> 
> s/all ios/all bios and split ios/
> 
> (IIUC you fix two things)
> 
>> Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
> 
> Good catch.
> 
> Does it also undo the performance gain from that commit? (Or rather,
> have you observed effect of your patch on v2-only performance?)
> 

Under v1, this statistical overhead is unavoidable. Under v2,
cgroup_subsys_on_dfl() is backed by a static key, so the branch is
essentially free and I think the performance difference before and after
the patch is negligible.
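
(For reference: the pattern behind cgroup_subsys_on_dfl() is a jump
label, so the test compiles down to a patched nop/jump rather than a
load-and-compare. A minimal stand-alone sketch of that static-key
pattern, using made-up names rather than the actual cgroup key, could
look like this:)

#include <linux/jump_label.h>

/* Hypothetical key, standing in for the per-subsystem "on the default
 * hierarchy" key that cgroup_subsys_on_dfl() tests. */
static DEFINE_STATIC_KEY_FALSE(example_on_dfl_key);

static bool example_on_dfl(void)
{
	/* Patched at runtime: the disabled side is a plain nop. */
	return static_branch_unlikely(&example_on_dfl_key);
}

/* Flipped once, e.g. when the controller is bound to cgroup v2. */
static void example_switch_to_dfl(void)
{
	static_branch_enable(&example_on_dfl_key);
}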

>> Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>
>> ---
>>   block/blk-cgroup.c   | 6 ++++--
>>   block/blk-throttle.c | 6 ------
>>   block/blk-throttle.h | 9 +++++++++
>>   3 files changed, 13 insertions(+), 8 deletions(-)
> 
> The code looks correct.
> 
> Thanks,
> Michal

Thanks.



* Re: [PATCH v2] blk-throttle: Fix io statistics for cgroup v1
  2023-04-01  9:47 [PATCH v2] blk-throttle: Fix io statistics for cgroup v1 Jinke Han
  2023-04-03 15:30 ` Michal Koutný
@ 2023-04-28 19:05 ` Andrea Righi
  2023-05-04 15:08   ` [External] " hanjinke
  1 sibling, 1 reply; 9+ messages in thread
From: Andrea Righi @ 2023-04-28 19:05 UTC (permalink / raw)
  To: Jinke Han; +Cc: tj, josef, axboe, cgroups, linux-block, linux-kernel

On Sat, Apr 01, 2023 at 05:47:08PM +0800, Jinke Han wrote:
> From: Jinke Han <hanjinke.666@bytedance.com>
> 
> After commit f382fb0bcef4 ("block: remove legacy IO schedulers"),
> blkio.throttle.io_serviced and blkio.throttle.io_service_bytes become
> the only stable io stats interface of cgroup v1, and these statistics
> are done in the blk-throttle code. But the current code only counts the
> bios that are actually throttled. When the user does not add the throttle
> limit, the io stats for cgroup v1 has nothing. I fix it according to the
> statistical method of v2, and made it count all ios accurately.
> 
> Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
> Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>

Thanks for fixing this!

The code looks correct to me, but this seems to report io statistics
only if at least one throttling limit is defined. IIRC with cgroup v1 it
was possible to see the io statistics inside a cgroup also with no
throttling limits configured.

Basically to restore the old behavior we would need to drop the
cgroup_subsys_on_dfl() check, something like the following (on top of
your patch).

But I'm not sure if we're breaking other behaviors in this way...
opinions?

 block/blk-cgroup.c   |  3 ---
 block/blk-throttle.h | 12 +++++-------
 2 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 79138bfc6001..43af86db7cf3 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -2045,9 +2045,6 @@ void blk_cgroup_bio_start(struct bio *bio)
 	struct blkg_iostat_set *bis;
 	unsigned long flags;
 
-	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
-		return;
-
 	/* Root-level stats are sourced from system-wide IO stats */
 	if (!cgroup_parent(blkcg->css.cgroup))
 		return;
diff --git a/block/blk-throttle.h b/block/blk-throttle.h
index d1ccbfe9f797..bcb40ee2eeba 100644
--- a/block/blk-throttle.h
+++ b/block/blk-throttle.h
@@ -185,14 +185,12 @@ static inline bool blk_should_throtl(struct bio *bio)
 	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
 	int rw = bio_data_dir(bio);
 
-	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
-		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
-			bio_set_flag(bio, BIO_CGROUP_ACCT);
-			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
-					bio->bi_iter.bi_size);
-		}
-		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
+	if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
+		bio_set_flag(bio, BIO_CGROUP_ACCT);
+		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
+				bio->bi_iter.bi_size);
 	}
+	blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
 
 	/* iops limit is always counted */
 	if (tg->has_rules_iops[rw])

> ---
>  block/blk-cgroup.c   | 6 ++++--
>  block/blk-throttle.c | 6 ------
>  block/blk-throttle.h | 9 +++++++++
>  3 files changed, 13 insertions(+), 8 deletions(-)
> 
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index bd50b55bdb61..33263d0d0e0f 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -2033,6 +2033,9 @@ void blk_cgroup_bio_start(struct bio *bio)
>  	struct blkg_iostat_set *bis;
>  	unsigned long flags;
>  
> +	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
> +		return;
> +
>  	/* Root-level stats are sourced from system-wide IO stats */
>  	if (!cgroup_parent(blkcg->css.cgroup))
>  		return;
> @@ -2064,8 +2067,7 @@ void blk_cgroup_bio_start(struct bio *bio)
>  	}
>  
>  	u64_stats_update_end_irqrestore(&bis->sync, flags);
> -	if (cgroup_subsys_on_dfl(io_cgrp_subsys))
> -		cgroup_rstat_updated(blkcg->css.cgroup, cpu);
> +	cgroup_rstat_updated(blkcg->css.cgroup, cpu);
>  	put_cpu();
>  }
>  
> diff --git a/block/blk-throttle.c b/block/blk-throttle.c
> index 47e9d8be68f3..2be66e9430f7 100644
> --- a/block/blk-throttle.c
> +++ b/block/blk-throttle.c
> @@ -2174,12 +2174,6 @@ bool __blk_throtl_bio(struct bio *bio)
>  
>  	rcu_read_lock();
>  
> -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
> -		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
> -				bio->bi_iter.bi_size);
> -		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
> -	}
> -
>  	spin_lock_irq(&q->queue_lock);
>  
>  	throtl_update_latency_buckets(td);
> diff --git a/block/blk-throttle.h b/block/blk-throttle.h
> index ef4b7a4de987..d1ccbfe9f797 100644
> --- a/block/blk-throttle.h
> +++ b/block/blk-throttle.h
> @@ -185,6 +185,15 @@ static inline bool blk_should_throtl(struct bio *bio)
>  	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
>  	int rw = bio_data_dir(bio);
>  
> +	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
> +		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
> +			bio_set_flag(bio, BIO_CGROUP_ACCT);
> +			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
> +					bio->bi_iter.bi_size);
> +		}
> +		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
> +	}
> +
>  	/* iops limit is always counted */
>  	if (tg->has_rules_iops[rw])
>  		return true;
> -- 
> 2.20.1


* Re: [External] Re: [PATCH v2] blk-throttle: Fix io statistics for cgroup v1
  2023-04-28 19:05 ` Andrea Righi
@ 2023-05-04 15:08   ` hanjinke
  2023-05-04 21:13     ` Andrea Righi
  0 siblings, 1 reply; 9+ messages in thread
From: hanjinke @ 2023-05-04 15:08 UTC (permalink / raw)
  To: Andrea Righi; +Cc: tj, josef, axboe, cgroups, linux-block, linux-kernel

Hi

Sorry for the delay (Chinese Labor Day holiday).

On 2023/4/29 3:05 AM, Andrea Righi wrote:
> On Sat, Apr 01, 2023 at 05:47:08PM +0800, Jinke Han wrote:
>> From: Jinke Han <hanjinke.666@bytedance.com>
>>
>> After commit f382fb0bcef4 ("block: remove legacy IO schedulers"),
>> blkio.throttle.io_serviced and blkio.throttle.io_service_bytes become
>> the only stable io stats interface of cgroup v1, and these statistics
>> are done in the blk-throttle code. But the current code only counts the
>> bios that are actually throttled. When the user does not add the throttle
>> limit, the io stats for cgroup v1 has nothing. I fix it according to the
>> statistical method of v2, and made it count all ios accurately.
>>
>> Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
>> Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>
> 
> Thanks for fixing this!
> 
> The code looks correct to me, but this seems to report io statistics
> only if at least one throttling limit is defined. IIRC with cgroup v1 it
> was possible to see the io statistics inside a cgroup also with no
> throttling limits configured.
> 
> Basically to restore the old behavior we would need to drop the
> cgroup_subsys_on_dfl() check, something like the following (on top of
> your patch).
> 
> But I'm not sure if we're breaking other behaviors in this way...
> opinions?
> 
>   block/blk-cgroup.c   |  3 ---
>   block/blk-throttle.h | 12 +++++-------
>   2 files changed, 5 insertions(+), 10 deletions(-)
> 
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index 79138bfc6001..43af86db7cf3 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -2045,9 +2045,6 @@ void blk_cgroup_bio_start(struct bio *bio)
>   	struct blkg_iostat_set *bis;
>   	unsigned long flags;
>   
> -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
> -		return;
> -
>   	/* Root-level stats are sourced from system-wide IO stats */
>   	if (!cgroup_parent(blkcg->css.cgroup))
>   		return;
> diff --git a/block/blk-throttle.h b/block/blk-throttle.h
> index d1ccbfe9f797..bcb40ee2eeba 100644
> --- a/block/blk-throttle.h
> +++ b/block/blk-throttle.h
> @@ -185,14 +185,12 @@ static inline bool blk_should_throtl(struct bio *bio)
>   	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
>   	int rw = bio_data_dir(bio);
>   
> -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
> -		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
> -			bio_set_flag(bio, BIO_CGROUP_ACCT);
> -			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
> -					bio->bi_iter.bi_size);
> -		}
> -		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
> +	if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
> +		bio_set_flag(bio, BIO_CGROUP_ACCT);
> +		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
> +				bio->bi_iter.bi_size);
>   	}
> +	blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);

It seems that statistics would then be carried out in both v1 and v2. We
can already get the v2 statistics from io.stat, so is it necessary to
count v2 here?




* Re: [External] Re: [PATCH v2] blk-throttle: Fix io statistics for cgroup v1
  2023-05-04 15:08   ` [External] " hanjinke
@ 2023-05-04 21:13     ` Andrea Righi
  2023-05-05 13:35       ` hanjinke
  0 siblings, 1 reply; 9+ messages in thread
From: Andrea Righi @ 2023-05-04 21:13 UTC (permalink / raw)
  To: hanjinke; +Cc: tj, josef, axboe, cgroups, linux-block, linux-kernel

On Thu, May 04, 2023 at 11:08:53PM +0800, hanjinke wrote:
> Hi
> 
> Sorry for delay(Chinese Labor Day holiday).

No problem, it was also Labor Day in Italy. :)

> 
> On 2023/4/29 3:05 AM, Andrea Righi wrote:
> > On Sat, Apr 01, 2023 at 05:47:08PM +0800, Jinke Han wrote:
> > > From: Jinke Han <hanjinke.666@bytedance.com>
> > > 
> > > After commit f382fb0bcef4 ("block: remove legacy IO schedulers"),
> > > blkio.throttle.io_serviced and blkio.throttle.io_service_bytes become
> > > the only stable io stats interface of cgroup v1, and these statistics
> > > are done in the blk-throttle code. But the current code only counts the
> > > bios that are actually throttled. When the user does not add the throttle
> > > limit, the io stats for cgroup v1 has nothing. I fix it according to the
> > > statistical method of v2, and made it count all ios accurately.
> > > 
> > > Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
> > > Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>
> > 
> > Thanks for fixing this!
> > 
> > The code looks correct to me, but this seems to report io statistics
> > only if at least one throttling limit is defined. IIRC with cgroup v1 it
> > was possible to see the io statistics inside a cgroup also with no
> > throttling limits configured.
> > 
> > Basically to restore the old behavior we would need to drop the
> > cgroup_subsys_on_dfl() check, something like the following (on top of
> > your patch).
> > 
> > But I'm not sure if we're breaking other behaviors in this way...
> > opinions?
> > 
> >   block/blk-cgroup.c   |  3 ---
> >   block/blk-throttle.h | 12 +++++-------
> >   2 files changed, 5 insertions(+), 10 deletions(-)
> > 
> > diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> > index 79138bfc6001..43af86db7cf3 100644
> > --- a/block/blk-cgroup.c
> > +++ b/block/blk-cgroup.c
> > @@ -2045,9 +2045,6 @@ void blk_cgroup_bio_start(struct bio *bio)
> >   	struct blkg_iostat_set *bis;
> >   	unsigned long flags;
> > -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
> > -		return;
> > -
> >   	/* Root-level stats are sourced from system-wide IO stats */
> >   	if (!cgroup_parent(blkcg->css.cgroup))
> >   		return;
> > diff --git a/block/blk-throttle.h b/block/blk-throttle.h
> > index d1ccbfe9f797..bcb40ee2eeba 100644
> > --- a/block/blk-throttle.h
> > +++ b/block/blk-throttle.h
> > @@ -185,14 +185,12 @@ static inline bool blk_should_throtl(struct bio *bio)
> >   	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
> >   	int rw = bio_data_dir(bio);
> > -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
> > -		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
> > -			bio_set_flag(bio, BIO_CGROUP_ACCT);
> > -			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
> > -					bio->bi_iter.bi_size);
> > -		}
> > -		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
> > +	if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
> > +		bio_set_flag(bio, BIO_CGROUP_ACCT);
> > +		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
> > +				bio->bi_iter.bi_size);
> >   	}
> > +	blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
> 
> It seems that statistics have been carried out in both v1 and v2,we can get
> the statistics of v2 from io.stat, is it necessary to count v2 here?
> 

I think this code is affecting (and should affect) only v1; stats for v2
are accounted via blk_cgroup_bio_start() in a different way. And the
behavior in v2 is the same as with this patch applied, meaning io stats
are always reported even if we don't set any io limit.
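
For completeness, a non-verbatim sketch of that v2 entry point,
assembled only from the hunks quoted in this thread (the per-cpu rstat
update in the middle of the function is elided, and the blkcg lookup
line is an assumption, not copied from the patch):

void blk_cgroup_bio_start(struct bio *bio)
{
	/* assumption: blkcg reached through the bio's blkg */
	struct blkcg *blkcg = bio->bi_blkg->blkcg;

	/* cgroup v1: io stats are handled in blk-throttle instead */
	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
		return;

	/* Root-level stats are sourced from system-wide IO stats */
	if (!cgroup_parent(blkcg->css.cgroup))
		return;

	/*
	 * ... per-cpu blkg_iostat update elided; it ends with
	 * cgroup_rstat_updated(), which is what makes these numbers
	 * appear in io.stat on the next rstat flush ...
	 */
}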

Thanks,
-Andrea


* Re: [External] Re: [PATCH v2] blk-throttle: Fix io statistics for cgroup v1
  2023-05-04 21:13     ` Andrea Righi
@ 2023-05-05 13:35       ` hanjinke
  2023-05-06 11:44         ` Andrea Righi
  0 siblings, 1 reply; 9+ messages in thread
From: hanjinke @ 2023-05-05 13:35 UTC (permalink / raw)
  To: Andrea Righi; +Cc: tj, josef, axboe, cgroups, linux-block, linux-kernel



On 2023/5/5 5:13 AM, Andrea Righi wrote:
> On Thu, May 04, 2023 at 11:08:53PM +0800, hanjinke wrote:
>> Hi
>>
>> Sorry for delay(Chinese Labor Day holiday).
> 
> No problem, it was also Labor Day in Italy. :)
> 
>>
>> On 2023/4/29 3:05 AM, Andrea Righi wrote:
>>> On Sat, Apr 01, 2023 at 05:47:08PM +0800, Jinke Han wrote:
>>>> From: Jinke Han <hanjinke.666@bytedance.com>
>>>>
>>>> After commit f382fb0bcef4 ("block: remove legacy IO schedulers"),
>>>> blkio.throttle.io_serviced and blkio.throttle.io_service_bytes become
>>>> the only stable io stats interface of cgroup v1, and these statistics
>>>> are done in the blk-throttle code. But the current code only counts the
>>>> bios that are actually throttled. When the user does not add the throttle
>>>> limit, the io stats for cgroup v1 has nothing. I fix it according to the
>>>> statistical method of v2, and made it count all ios accurately.
>>>>
>>>> Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
>>>> Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>
>>>
>>> Thanks for fixing this!
>>>
>>> The code looks correct to me, but this seems to report io statistics
>>> only if at least one throttling limit is defined. IIRC with cgroup v1 it
>>> was possible to see the io statistics inside a cgroup also with no
>>> throttling limits configured.
>>>
>>> Basically to restore the old behavior we would need to drop the
>>> cgroup_subsys_on_dfl() check, something like the following (on top of
>>> your patch).
>>>
>>> But I'm not sure if we're breaking other behaviors in this way...
>>> opinions?
>>>
>>>    block/blk-cgroup.c   |  3 ---
>>>    block/blk-throttle.h | 12 +++++-------
>>>    2 files changed, 5 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
>>> index 79138bfc6001..43af86db7cf3 100644
>>> --- a/block/blk-cgroup.c
>>> +++ b/block/blk-cgroup.c
>>> @@ -2045,9 +2045,6 @@ void blk_cgroup_bio_start(struct bio *bio)
>>>    	struct blkg_iostat_set *bis;
>>>    	unsigned long flags;
>>> -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
>>> -		return;
>>> -
>>>    	/* Root-level stats are sourced from system-wide IO stats */
>>>    	if (!cgroup_parent(blkcg->css.cgroup))
>>>    		return;
>>> diff --git a/block/blk-throttle.h b/block/blk-throttle.h
>>> index d1ccbfe9f797..bcb40ee2eeba 100644
>>> --- a/block/blk-throttle.h
>>> +++ b/block/blk-throttle.h
>>> @@ -185,14 +185,12 @@ static inline bool blk_should_throtl(struct bio *bio)
>>>    	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
>>>    	int rw = bio_data_dir(bio);
>>> -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
>>> -		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
>>> -			bio_set_flag(bio, BIO_CGROUP_ACCT);
>>> -			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
>>> -					bio->bi_iter.bi_size);
>>> -		}
>>> -		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
>>> +	if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
>>> +		bio_set_flag(bio, BIO_CGROUP_ACCT);
>>> +		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
>>> +				bio->bi_iter.bi_size);
>>>    	}
>>> +	blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
>>

I checked the code again. If we remove cgroup_subsys_on_dfl check here, 
io statistics will still be performed in the case of v2, which I think 
is unnecessary, and this information will be counted to 
io_service_bytes/io_serviced, these two files are not visible in v2. Am 
I missing something?

Thanks.
Jinke


* Re: [External] Re: [PATCH v2] blk-throttle: Fix io statistics for cgroup v1
  2023-05-05 13:35       ` hanjinke
@ 2023-05-06 11:44         ` Andrea Righi
  2023-05-07 15:32           ` hanjinke
  0 siblings, 1 reply; 9+ messages in thread
From: Andrea Righi @ 2023-05-06 11:44 UTC (permalink / raw)
  To: hanjinke; +Cc: tj, josef, axboe, cgroups, linux-block, linux-kernel

On Fri, May 05, 2023 at 09:35:21PM +0800, hanjinke wrote:
> 
> 
> On 2023/5/5 5:13 AM, Andrea Righi wrote:
> > On Thu, May 04, 2023 at 11:08:53PM +0800, hanjinke wrote:
> > > Hi
> > > 
> > > Sorry for delay(Chinese Labor Day holiday).
> > 
> > No problem, it was also Labor Day in Italy. :)
> > 
> > > 
> > On 2023/4/29 3:05 AM, Andrea Righi wrote:
> > > > On Sat, Apr 01, 2023 at 05:47:08PM +0800, Jinke Han wrote:
> > > > > From: Jinke Han <hanjinke.666@bytedance.com>
> > > > > 
> > > > > After commit f382fb0bcef4 ("block: remove legacy IO schedulers"),
> > > > > blkio.throttle.io_serviced and blkio.throttle.io_service_bytes become
> > > > > the only stable io stats interface of cgroup v1, and these statistics
> > > > > are done in the blk-throttle code. But the current code only counts the
> > > > > bios that are actually throttled. When the user does not add the throttle
> > > > > limit, the io stats for cgroup v1 has nothing. I fix it according to the
> > > > > statistical method of v2, and made it count all ios accurately.
> > > > > 
> > > > > Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
> > > > > Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>
> > > > 
> > > > Thanks for fixing this!
> > > > 
> > > > The code looks correct to me, but this seems to report io statistics
> > > > only if at least one throttling limit is defined. IIRC with cgroup v1 it
> > > > was possible to see the io statistics inside a cgroup also with no
> > > > throttling limits configured.
> > > > 
> > > > Basically to restore the old behavior we would need to drop the
> > > > cgroup_subsys_on_dfl() check, something like the following (on top of
> > > > your patch).
> > > > 
> > > > But I'm not sure if we're breaking other behaviors in this way...
> > > > opinions?
> > > > 
> > > >    block/blk-cgroup.c   |  3 ---
> > > >    block/blk-throttle.h | 12 +++++-------
> > > >    2 files changed, 5 insertions(+), 10 deletions(-)
> > > > 
> > > > diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> > > > index 79138bfc6001..43af86db7cf3 100644
> > > > --- a/block/blk-cgroup.c
> > > > +++ b/block/blk-cgroup.c
> > > > @@ -2045,9 +2045,6 @@ void blk_cgroup_bio_start(struct bio *bio)
> > > >    	struct blkg_iostat_set *bis;
> > > >    	unsigned long flags;
> > > > -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
> > > > -		return;
> > > > -
> > > >    	/* Root-level stats are sourced from system-wide IO stats */
> > > >    	if (!cgroup_parent(blkcg->css.cgroup))
> > > >    		return;
> > > > diff --git a/block/blk-throttle.h b/block/blk-throttle.h
> > > > index d1ccbfe9f797..bcb40ee2eeba 100644
> > > > --- a/block/blk-throttle.h
> > > > +++ b/block/blk-throttle.h
> > > > @@ -185,14 +185,12 @@ static inline bool blk_should_throtl(struct bio *bio)
> > > >    	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
> > > >    	int rw = bio_data_dir(bio);
> > > > -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
> > > > -		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
> > > > -			bio_set_flag(bio, BIO_CGROUP_ACCT);
> > > > -			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
> > > > -					bio->bi_iter.bi_size);
> > > > -		}
> > > > -		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
> > > > +	if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
> > > > +		bio_set_flag(bio, BIO_CGROUP_ACCT);
> > > > +		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
> > > > +				bio->bi_iter.bi_size);
> > > >    	}
> > > > +	blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
> > > 
> 
> I checked the code again. If we remove cgroup_subsys_on_dfl check here, io
> statistics will still be performed in the case of v2, which I think is
> unnecessary, and this information will be counted to
> io_service_bytes/io_serviced, these two files are not visible in v2. Am I
> missing something?

You are absolutely right. Sorry, I have just re-tested your fix and it
seems to handle the cgroup v1 case correctly, so you can add my:

Tested-by: Andrea Righi <andrea.righi@canonical.com>

And we definitely need the cgroup_subsys_on_dfl() check, otherwise we'd
account extra IO in the v2 case that is not really needed.

Thanks,
-Andrea


* Re: [External] Re: [PATCH v2] blk-throttle: Fix io statistics for cgroup v1
  2023-05-06 11:44         ` Andrea Righi
@ 2023-05-07 15:32           ` hanjinke
  0 siblings, 0 replies; 9+ messages in thread
From: hanjinke @ 2023-05-07 15:32 UTC (permalink / raw)
  To: Andrea Righi; +Cc: tj, josef, axboe, cgroups, linux-block, linux-kernel



On 2023/5/6 7:44 PM, Andrea Righi wrote:
> On Fri, May 05, 2023 at 09:35:21PM +0800, hanjinke wrote:
>>
>>
>> On 2023/5/5 5:13 AM, Andrea Righi wrote:
>>> On Thu, May 04, 2023 at 11:08:53PM +0800, hanjinke wrote:
>>>> Hi
>>>>
>>>> Sorry for delay(Chinese Labor Day holiday).
>>>
>>> No problem, it was also Labor Day in Italy. :)
>>>
>>>>
>>>> On 2023/4/29 3:05 AM, Andrea Righi wrote:
>>>>> On Sat, Apr 01, 2023 at 05:47:08PM +0800, Jinke Han wrote:
>>>>>> From: Jinke Han <hanjinke.666@bytedance.com>
>>>>>>
>>>>>> After commit f382fb0bcef4 ("block: remove legacy IO schedulers"),
>>>>>> blkio.throttle.io_serviced and blkio.throttle.io_service_bytes become
>>>>>> the only stable io stats interface of cgroup v1, and these statistics
>>>>>> are done in the blk-throttle code. But the current code only counts the
>>>>>> bios that are actually throttled. When the user does not add the throttle
>>>>>> limit, the io stats for cgroup v1 has nothing. I fix it according to the
>>>>>> statistical method of v2, and made it count all ios accurately.
>>>>>>
>>>>>> Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
>>>>>> Signed-off-by: Jinke Han <hanjinke.666@bytedance.com>
>>>>>
>>>>> Thanks for fixing this!
>>>>>
>>>>> The code looks correct to me, but this seems to report io statistics
>>>>> only if at least one throttling limit is defined. IIRC with cgroup v1 it
>>>>> was possible to see the io statistics inside a cgroup also with no
>>>>> throttling limits configured.
>>>>>
>>>>> Basically to restore the old behavior we would need to drop the
>>>>> cgroup_subsys_on_dfl() check, something like the following (on top of
>>>>> your patch).
>>>>>
>>>>> But I'm not sure if we're breaking other behaviors in this way...
>>>>> opinions?
>>>>>
>>>>>     block/blk-cgroup.c   |  3 ---
>>>>>     block/blk-throttle.h | 12 +++++-------
>>>>>     2 files changed, 5 insertions(+), 10 deletions(-)
>>>>>
>>>>> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
>>>>> index 79138bfc6001..43af86db7cf3 100644
>>>>> --- a/block/blk-cgroup.c
>>>>> +++ b/block/blk-cgroup.c
>>>>> @@ -2045,9 +2045,6 @@ void blk_cgroup_bio_start(struct bio *bio)
>>>>>     	struct blkg_iostat_set *bis;
>>>>>     	unsigned long flags;
>>>>> -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
>>>>> -		return;
>>>>> -
>>>>>     	/* Root-level stats are sourced from system-wide IO stats */
>>>>>     	if (!cgroup_parent(blkcg->css.cgroup))
>>>>>     		return;
>>>>> diff --git a/block/blk-throttle.h b/block/blk-throttle.h
>>>>> index d1ccbfe9f797..bcb40ee2eeba 100644
>>>>> --- a/block/blk-throttle.h
>>>>> +++ b/block/blk-throttle.h
>>>>> @@ -185,14 +185,12 @@ static inline bool blk_should_throtl(struct bio *bio)
>>>>>     	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
>>>>>     	int rw = bio_data_dir(bio);
>>>>> -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
>>>>> -		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
>>>>> -			bio_set_flag(bio, BIO_CGROUP_ACCT);
>>>>> -			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
>>>>> -					bio->bi_iter.bi_size);
>>>>> -		}
>>>>> -		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
>>>>> +	if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
>>>>> +		bio_set_flag(bio, BIO_CGROUP_ACCT);
>>>>> +		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
>>>>> +				bio->bi_iter.bi_size);
>>>>>     	}
>>>>> +	blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
>>>>
>>
>> I checked the code again. If we remove cgroup_subsys_on_dfl check here, io
>> statistics will still be performed in the case of v2, which I think is
>> unnecessary, and this information will be counted to
>> io_service_bytes/io_serviced, these two files are not visible in v2. Am I
>> missing something?
> 
> You are absolutely right. Sorry, I have just re-tested your fix and it
> seems to handle the cgroup v1 case correctly, you can add my:
> 
> Tested-by: Andrea Righi <andrea.righi@canonical.com>
> 
> And we definitely need the cgroup_subsys_on_dfl() check, otherwise we'd
> account extra IO in the v2 case that is not really needed.
> 

Thanks a lot! I will add it and send a v3.

Thanks,
Jinke


