linux-kernel.vger.kernel.org archive mirror
* blk-cgroup cleanups
@ 2019-06-06 10:26 Christoph Hellwig
  2019-06-06 10:26 ` [PATCH 1/6] blk-cgroup: factor out a helper to read rwstat counter Christoph Hellwig
                   ` (8 more replies)
  0 siblings, 9 replies; 14+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:26 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Paolo Valente, linux-block, cgroups, linux-kernel

Hi all,

Below are a few cleanups I came up with while trying to understand
the blk-cgroup code.


* [PATCH 1/6] blk-cgroup: factor out a helper to read rwstat counter
  2019-06-06 10:26 blk-cgroup cleanups Christoph Hellwig
@ 2019-06-06 10:26 ` Christoph Hellwig
  2019-06-06 15:03   ` Chaitanya Kulkarni
  2019-06-06 10:26 ` [PATCH 2/6] blk-cgroup: pass blkg_rwstat structures by reference Christoph Hellwig
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 14+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:26 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Paolo Valente, linux-block, cgroups, linux-kernel

Break up the crazy statements into something readable.  Also switch to
an unsigned loop counter, as it can't ever turn negative.
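
For illustration, with the new helper any caller that needs a total can
use a plain loop like this (a minimal sketch, using only names from the
hunks below):

	u64 total = 0;
	unsigned int i;

	/* each counter is the aux count plus the positive per-cpu sum */
	for (i = 0; i < BLKG_RWSTAT_NR; i++)
		total += blkg_rwstat_read_counter(rwstat, i);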

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-cgroup.c         | 5 ++---
 include/linux/blk-cgroup.h | 7 +++++++
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index b97b479e4f64..6f79ace02be4 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -750,7 +750,7 @@ struct blkg_rwstat blkg_rwstat_recursive_sum(struct blkcg_gq *blkg,
 	struct blkcg_gq *pos_blkg;
 	struct cgroup_subsys_state *pos_css;
 	struct blkg_rwstat sum = { };
-	int i;
+	unsigned int i;
 
 	lockdep_assert_held(&blkg->q->queue_lock);
 
@@ -767,8 +767,7 @@ struct blkg_rwstat blkg_rwstat_recursive_sum(struct blkcg_gq *blkg,
 			rwstat = (void *)pos_blkg + off;
 
 		for (i = 0; i < BLKG_RWSTAT_NR; i++)
-			atomic64_add(atomic64_read(&rwstat->aux_cnt[i]) +
-				percpu_counter_sum_positive(&rwstat->cpu_cnt[i]),
+			atomic64_add(blkg_rwstat_read_counter(rwstat, i),
 				&sum.aux_cnt[i]);
 	}
 	rcu_read_unlock();
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 76c61318fda5..06236f56a840 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -198,6 +198,13 @@ int blkcg_activate_policy(struct request_queue *q,
 void blkcg_deactivate_policy(struct request_queue *q,
 			     const struct blkcg_policy *pol);
 
+static inline u64 blkg_rwstat_read_counter(struct blkg_rwstat *rwstat,
+		unsigned int idx)
+{
+	return atomic64_read(&rwstat->aux_cnt[idx]) +
+		percpu_counter_sum_positive(&rwstat->cpu_cnt[idx]);
+}
+
 const char *blkg_dev_name(struct blkcg_gq *blkg);
 void blkcg_print_blkgs(struct seq_file *sf, struct blkcg *blkcg,
 		       u64 (*prfill)(struct seq_file *,
-- 
2.20.1



* [PATCH 2/6] blk-cgroup: pass blkg_rwstat structures by reference
  2019-06-06 10:26 blk-cgroup cleanups Christoph Hellwig
  2019-06-06 10:26 ` [PATCH 1/6] blk-cgroup: factor out a helper to read rwstat counter Christoph Hellwig
@ 2019-06-06 10:26 ` Christoph Hellwig
  2019-06-06 10:26 ` [PATCH 3/6] blk-cgroup: introduce a new struct blkg_rwstat_sample Christoph Hellwig
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:26 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Paolo Valente, linux-block, cgroups, linux-kernel

Returning a structure generates rather bad code, so switch to passing
it by reference.  Also, instead of requiring the structure to be zeroed
and adding to the zero-initialized counters, set the counters directly
to the calculated values.
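
The calling convention changes as follows (a sketch; both forms are
taken from the hunks below).  Before:

	struct blkg_rwstat sum = blkg_rwstat_recursive_sum(blkg, pol, off);

After:

	struct blkg_rwstat sum;

	blkg_rwstat_recursive_sum(blkg, pol, off, &sum);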

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bfq-cgroup.c         | 15 +++++++++------
 block/blk-cgroup.c         | 31 ++++++++++++++++---------------
 include/linux/blk-cgroup.h | 14 +++++++-------
 3 files changed, 32 insertions(+), 28 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index b3796a40a61a..66abc82179f3 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -935,9 +935,9 @@ static u64 bfqg_prfill_stat_recursive(struct seq_file *sf,
 static u64 bfqg_prfill_rwstat_recursive(struct seq_file *sf,
 					struct blkg_policy_data *pd, int off)
 {
-	struct blkg_rwstat sum = blkg_rwstat_recursive_sum(pd_to_blkg(pd),
-							   &blkcg_policy_bfq,
-							   off);
+	struct blkg_rwstat sum;
+
+	blkg_rwstat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq, off, &sum);
 	return __blkg_prfill_rwstat(sf, pd, &sum);
 }
 
@@ -975,9 +975,12 @@ static int bfqg_print_stat_sectors(struct seq_file *sf, void *v)
 static u64 bfqg_prfill_sectors_recursive(struct seq_file *sf,
 					 struct blkg_policy_data *pd, int off)
 {
-	struct blkg_rwstat tmp = blkg_rwstat_recursive_sum(pd->blkg, NULL,
-					offsetof(struct blkcg_gq, stat_bytes));
-	u64 sum = atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_READ]) +
+	struct blkg_rwstat tmp;
+	u64 sum;
+
+	blkg_rwstat_recursive_sum(pd->blkg, NULL,
+			offsetof(struct blkcg_gq, stat_bytes), &tmp);
+	sum = atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_READ]) +
 		atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_WRITE]);
 
 	return __blkg_prfill_u64(sf, pd, sum >> 9);
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 6f79ace02be4..7d6bf9b03e24 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -602,8 +602,9 @@ EXPORT_SYMBOL_GPL(blkg_prfill_stat);
 u64 blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
 		       int off)
 {
-	struct blkg_rwstat rwstat = blkg_rwstat_read((void *)pd + off);
+	struct blkg_rwstat rwstat = { };
 
+	blkg_rwstat_read((void *)pd + off, &rwstat);
 	return __blkg_prfill_rwstat(sf, pd, &rwstat);
 }
 EXPORT_SYMBOL_GPL(blkg_prfill_rwstat);
@@ -611,8 +612,9 @@ EXPORT_SYMBOL_GPL(blkg_prfill_rwstat);
 static u64 blkg_prfill_rwstat_field(struct seq_file *sf,
 				    struct blkg_policy_data *pd, int off)
 {
-	struct blkg_rwstat rwstat = blkg_rwstat_read((void *)pd->blkg + off);
+	struct blkg_rwstat rwstat = { };
 
+	blkg_rwstat_read((void *)pd->blkg + off, &rwstat);
 	return __blkg_prfill_rwstat(sf, pd, &rwstat);
 }
 
@@ -654,8 +656,9 @@ static u64 blkg_prfill_rwstat_field_recursive(struct seq_file *sf,
 					      struct blkg_policy_data *pd,
 					      int off)
 {
-	struct blkg_rwstat rwstat = blkg_rwstat_recursive_sum(pd->blkg,
-							      NULL, off);
+	struct blkg_rwstat rwstat;
+
+	blkg_rwstat_recursive_sum(pd->blkg, NULL, off, &rwstat);
 	return __blkg_prfill_rwstat(sf, pd, &rwstat);
 }
 
@@ -736,6 +739,7 @@ EXPORT_SYMBOL_GPL(blkg_stat_recursive_sum);
  * @blkg: blkg of interest
  * @pol: blkcg_policy which contains the blkg_rwstat
  * @off: offset to the blkg_rwstat in blkg_policy_data or @blkg
+ * @sum: blkg_rwstat structure containing the results
  *
  * Collect the blkg_rwstat specified by @blkg, @pol and @off and all its
  * online descendants and their aux counts.  The caller must be holding the
@@ -744,12 +748,11 @@ EXPORT_SYMBOL_GPL(blkg_stat_recursive_sum);
  * If @pol is NULL, blkg_rwstat is at @off bytes into @blkg; otherwise, it
  * is at @off bytes into @blkg's blkg_policy_data of the policy.
  */
-struct blkg_rwstat blkg_rwstat_recursive_sum(struct blkcg_gq *blkg,
-					     struct blkcg_policy *pol, int off)
+void blkg_rwstat_recursive_sum(struct blkcg_gq *blkg, struct blkcg_policy *pol,
+		int off, struct blkg_rwstat *sum)
 {
 	struct blkcg_gq *pos_blkg;
 	struct cgroup_subsys_state *pos_css;
-	struct blkg_rwstat sum = { };
 	unsigned int i;
 
 	lockdep_assert_held(&blkg->q->queue_lock);
@@ -767,12 +770,10 @@ struct blkg_rwstat blkg_rwstat_recursive_sum(struct blkcg_gq *blkg,
 			rwstat = (void *)pos_blkg + off;
 
 		for (i = 0; i < BLKG_RWSTAT_NR; i++)
-			atomic64_add(blkg_rwstat_read_counter(rwstat, i),
-				&sum.aux_cnt[i]);
+			atomic64_set(&sum->aux_cnt[i],
+				blkg_rwstat_read_counter(rwstat, i));
 	}
 	rcu_read_unlock();
-
-	return sum;
 }
 EXPORT_SYMBOL_GPL(blkg_rwstat_recursive_sum);
 
@@ -958,14 +959,14 @@ static int blkcg_print_stat(struct seq_file *sf, void *v)
 
 		spin_lock_irq(&blkg->q->queue_lock);
 
-		rwstat = blkg_rwstat_recursive_sum(blkg, NULL,
-					offsetof(struct blkcg_gq, stat_bytes));
+		blkg_rwstat_recursive_sum(blkg, NULL,
+				offsetof(struct blkcg_gq, stat_bytes), &rwstat);
 		rbytes = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_READ]);
 		wbytes = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_WRITE]);
 		dbytes = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_DISCARD]);
 
-		rwstat = blkg_rwstat_recursive_sum(blkg, NULL,
-					offsetof(struct blkcg_gq, stat_ios));
+		blkg_rwstat_recursive_sum(blkg, NULL,
+					offsetof(struct blkcg_gq, stat_ios), &rwstat);
 		rios = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_READ]);
 		wios = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_WRITE]);
 		dios = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_DISCARD]);
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 06236f56a840..3ee858111274 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -224,8 +224,8 @@ int blkg_print_stat_ios_recursive(struct seq_file *sf, void *v);
 
 u64 blkg_stat_recursive_sum(struct blkcg_gq *blkg,
 			    struct blkcg_policy *pol, int off);
-struct blkg_rwstat blkg_rwstat_recursive_sum(struct blkcg_gq *blkg,
-					     struct blkcg_policy *pol, int off);
+void blkg_rwstat_recursive_sum(struct blkcg_gq *blkg, struct blkcg_policy *pol,
+		int off, struct blkg_rwstat *sum);
 
 struct blkg_conf_ctx {
 	struct gendisk			*disk;
@@ -700,15 +700,14 @@ static inline void blkg_rwstat_add(struct blkg_rwstat *rwstat,
  *
  * Read the current snapshot of @rwstat and return it in the aux counts.
  */
-static inline struct blkg_rwstat blkg_rwstat_read(struct blkg_rwstat *rwstat)
+static inline void blkg_rwstat_read(struct blkg_rwstat *rwstat,
+		struct blkg_rwstat *result)
 {
-	struct blkg_rwstat result;
 	int i;
 
 	for (i = 0; i < BLKG_RWSTAT_NR; i++)
-		atomic64_set(&result.aux_cnt[i],
+		atomic64_set(&result->aux_cnt[i],
 			     percpu_counter_sum_positive(&rwstat->cpu_cnt[i]));
-	return result;
 }
 
 /**
@@ -721,8 +720,9 @@ static inline struct blkg_rwstat blkg_rwstat_read(struct blkg_rwstat *rwstat)
  */
 static inline uint64_t blkg_rwstat_total(struct blkg_rwstat *rwstat)
 {
-	struct blkg_rwstat tmp = blkg_rwstat_read(rwstat);
+	struct blkg_rwstat tmp = { };
 
+	blkg_rwstat_read(rwstat, &tmp);
 	return atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_READ]) +
 		atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_WRITE]);
 }
-- 
2.20.1



* [PATCH 3/6] blk-cgroup: introduce a new struct blkg_rwstat_sample
  2019-06-06 10:26 blk-cgroup cleanups Christoph Hellwig
  2019-06-06 10:26 ` [PATCH 1/6] blk-cgroup: factor out a helper to read rwstat counter Christoph Hellwig
  2019-06-06 10:26 ` [PATCH 2/6] blk-cgroup: pass blkg_rwstat structures by reference Christoph Hellwig
@ 2019-06-06 10:26 ` Christoph Hellwig
  2019-06-06 10:26 ` [PATCH 4/6] blk-cgroup: move struct blkg_stat to bfq Christoph Hellwig
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:26 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Paolo Valente, linux-block, cgroups, linux-kernel

When sampling the blkcg counts, we don't need atomics or per-cpu
variables.  Introduce a new structure that contains only plain u64
counters.
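
Reading a snapshot then becomes plain arithmetic with no atomic64_read()
calls, as in this sketch of the reworked blkg_rwstat_total() from the
hunks below:

	struct blkg_rwstat_sample tmp = { };

	blkg_rwstat_read(rwstat, &tmp);
	/* cnt[] entries are plain u64, no atomic accessors needed */
	return tmp.cnt[BLKG_RWSTAT_READ] + tmp.cnt[BLKG_RWSTAT_WRITE];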

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bfq-cgroup.c         | 10 ++++------
 block/blk-cgroup.c         | 39 +++++++++++++++++++-------------------
 include/linux/blk-cgroup.h | 22 +++++++++++----------
 3 files changed, 35 insertions(+), 36 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 66abc82179f3..624374a99c6e 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -935,7 +935,7 @@ static u64 bfqg_prfill_stat_recursive(struct seq_file *sf,
 static u64 bfqg_prfill_rwstat_recursive(struct seq_file *sf,
 					struct blkg_policy_data *pd, int off)
 {
-	struct blkg_rwstat sum;
+	struct blkg_rwstat_sample sum;
 
 	blkg_rwstat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq, off, &sum);
 	return __blkg_prfill_rwstat(sf, pd, &sum);
@@ -975,15 +975,13 @@ static int bfqg_print_stat_sectors(struct seq_file *sf, void *v)
 static u64 bfqg_prfill_sectors_recursive(struct seq_file *sf,
 					 struct blkg_policy_data *pd, int off)
 {
-	struct blkg_rwstat tmp;
-	u64 sum;
+	struct blkg_rwstat_sample tmp;
 
 	blkg_rwstat_recursive_sum(pd->blkg, NULL,
 			offsetof(struct blkcg_gq, stat_bytes), &tmp);
-	sum = atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_READ]) +
-		atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_WRITE]);
 
-	return __blkg_prfill_u64(sf, pd, sum >> 9);
+	return __blkg_prfill_u64(sf, pd,
+		(tmp.cnt[BLKG_RWSTAT_READ] + tmp.cnt[BLKG_RWSTAT_WRITE]) >> 9);
 }
 
 static int bfqg_print_stat_sectors_recursive(struct seq_file *sf, void *v)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 7d6bf9b03e24..aad0f5d9a2ea 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -549,7 +549,7 @@ EXPORT_SYMBOL_GPL(__blkg_prfill_u64);
  * Print @rwstat to @sf for the device assocaited with @pd.
  */
 u64 __blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
-			 const struct blkg_rwstat *rwstat)
+			 const struct blkg_rwstat_sample *rwstat)
 {
 	static const char *rwstr[] = {
 		[BLKG_RWSTAT_READ]	= "Read",
@@ -567,12 +567,12 @@ u64 __blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
 
 	for (i = 0; i < BLKG_RWSTAT_NR; i++)
 		seq_printf(sf, "%s %s %llu\n", dname, rwstr[i],
-			   (unsigned long long)atomic64_read(&rwstat->aux_cnt[i]));
+			   rwstat->cnt[i]);
 
-	v = atomic64_read(&rwstat->aux_cnt[BLKG_RWSTAT_READ]) +
-		atomic64_read(&rwstat->aux_cnt[BLKG_RWSTAT_WRITE]) +
-		atomic64_read(&rwstat->aux_cnt[BLKG_RWSTAT_DISCARD]);
-	seq_printf(sf, "%s Total %llu\n", dname, (unsigned long long)v);
+	v = rwstat->cnt[BLKG_RWSTAT_READ] +
+		rwstat->cnt[BLKG_RWSTAT_WRITE] +
+		rwstat->cnt[BLKG_RWSTAT_DISCARD];
+	seq_printf(sf, "%s Total %llu\n", dname, v);
 	return v;
 }
 EXPORT_SYMBOL_GPL(__blkg_prfill_rwstat);
@@ -602,7 +602,7 @@ EXPORT_SYMBOL_GPL(blkg_prfill_stat);
 u64 blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
 		       int off)
 {
-	struct blkg_rwstat rwstat = { };
+	struct blkg_rwstat_sample rwstat = { };
 
 	blkg_rwstat_read((void *)pd + off, &rwstat);
 	return __blkg_prfill_rwstat(sf, pd, &rwstat);
@@ -612,7 +612,7 @@ EXPORT_SYMBOL_GPL(blkg_prfill_rwstat);
 static u64 blkg_prfill_rwstat_field(struct seq_file *sf,
 				    struct blkg_policy_data *pd, int off)
 {
-	struct blkg_rwstat rwstat = { };
+	struct blkg_rwstat_sample rwstat = { };
 
 	blkg_rwstat_read((void *)pd->blkg + off, &rwstat);
 	return __blkg_prfill_rwstat(sf, pd, &rwstat);
@@ -656,7 +656,7 @@ static u64 blkg_prfill_rwstat_field_recursive(struct seq_file *sf,
 					      struct blkg_policy_data *pd,
 					      int off)
 {
-	struct blkg_rwstat rwstat;
+	struct blkg_rwstat_sample rwstat;
 
 	blkg_rwstat_recursive_sum(pd->blkg, NULL, off, &rwstat);
 	return __blkg_prfill_rwstat(sf, pd, &rwstat);
@@ -739,7 +739,7 @@ EXPORT_SYMBOL_GPL(blkg_stat_recursive_sum);
  * @blkg: blkg of interest
  * @pol: blkcg_policy which contains the blkg_rwstat
  * @off: offset to the blkg_rwstat in blkg_policy_data or @blkg
- * @sum: blkg_rwstat structure containing the results
+ * @sum: blkg_rwstat_sample structure containing the results
  *
  * Collect the blkg_rwstat specified by @blkg, @pol and @off and all its
  * online descendants and their aux counts.  The caller must be holding the
@@ -749,7 +749,7 @@ EXPORT_SYMBOL_GPL(blkg_stat_recursive_sum);
  * is at @off bytes into @blkg's blkg_policy_data of the policy.
  */
 void blkg_rwstat_recursive_sum(struct blkcg_gq *blkg, struct blkcg_policy *pol,
-		int off, struct blkg_rwstat *sum)
+		int off, struct blkg_rwstat_sample *sum)
 {
 	struct blkcg_gq *pos_blkg;
 	struct cgroup_subsys_state *pos_css;
@@ -770,8 +770,7 @@ void blkg_rwstat_recursive_sum(struct blkcg_gq *blkg, struct blkcg_policy *pol,
 			rwstat = (void *)pos_blkg + off;
 
 		for (i = 0; i < BLKG_RWSTAT_NR; i++)
-			atomic64_set(&sum->aux_cnt[i],
-				blkg_rwstat_read_counter(rwstat, i));
+			sum->cnt[i] = blkg_rwstat_read_counter(rwstat, i);
 	}
 	rcu_read_unlock();
 }
@@ -939,7 +938,7 @@ static int blkcg_print_stat(struct seq_file *sf, void *v)
 	hlist_for_each_entry_rcu(blkg, &blkcg->blkg_list, blkcg_node) {
 		const char *dname;
 		char *buf;
-		struct blkg_rwstat rwstat;
+		struct blkg_rwstat_sample rwstat;
 		u64 rbytes, wbytes, rios, wios, dbytes, dios;
 		size_t size = seq_get_buf(sf, &buf), off = 0;
 		int i;
@@ -961,15 +960,15 @@ static int blkcg_print_stat(struct seq_file *sf, void *v)
 
 		blkg_rwstat_recursive_sum(blkg, NULL,
 				offsetof(struct blkcg_gq, stat_bytes), &rwstat);
-		rbytes = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_READ]);
-		wbytes = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_WRITE]);
-		dbytes = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_DISCARD]);
+		rbytes = rwstat.cnt[BLKG_RWSTAT_READ];
+		wbytes = rwstat.cnt[BLKG_RWSTAT_WRITE];
+		dbytes = rwstat.cnt[BLKG_RWSTAT_DISCARD];
 
 		blkg_rwstat_recursive_sum(blkg, NULL,
 					offsetof(struct blkcg_gq, stat_ios), &rwstat);
-		rios = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_READ]);
-		wios = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_WRITE]);
-		dios = atomic64_read(&rwstat.aux_cnt[BLKG_RWSTAT_DISCARD]);
+		rios = rwstat.cnt[BLKG_RWSTAT_READ];
+		wios = rwstat.cnt[BLKG_RWSTAT_WRITE];
+		dios = rwstat.cnt[BLKG_RWSTAT_DISCARD];
 
 		spin_unlock_irq(&blkg->q->queue_lock);
 
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 3ee858111274..e4a81767e111 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -63,8 +63,7 @@ struct blkcg {
 
 /*
  * blkg_[rw]stat->aux_cnt is excluded for local stats but included for
- * recursive.  Used to carry stats of dead children, and, for blkg_rwstat,
- * to carry result values from read and sum operations.
+ * recursive.  Used to carry stats of dead children.
  */
 struct blkg_stat {
 	struct percpu_counter		cpu_cnt;
@@ -76,6 +75,10 @@ struct blkg_rwstat {
 	atomic64_t			aux_cnt[BLKG_RWSTAT_NR];
 };
 
+struct blkg_rwstat_sample {
+	u64				cnt[BLKG_RWSTAT_NR];
+};
+
 /*
  * A blkcg_gq (blkg) is association between a block cgroup (blkcg) and a
  * request_queue (q).  This is used by blkcg policies which need to track
@@ -213,7 +216,7 @@ void blkcg_print_blkgs(struct seq_file *sf, struct blkcg *blkcg,
 		       bool show_total);
 u64 __blkg_prfill_u64(struct seq_file *sf, struct blkg_policy_data *pd, u64 v);
 u64 __blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
-			 const struct blkg_rwstat *rwstat);
+			 const struct blkg_rwstat_sample *rwstat);
 u64 blkg_prfill_stat(struct seq_file *sf, struct blkg_policy_data *pd, int off);
 u64 blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
 		       int off);
@@ -225,7 +228,7 @@ int blkg_print_stat_ios_recursive(struct seq_file *sf, void *v);
 u64 blkg_stat_recursive_sum(struct blkcg_gq *blkg,
 			    struct blkcg_policy *pol, int off);
 void blkg_rwstat_recursive_sum(struct blkcg_gq *blkg, struct blkcg_policy *pol,
-		int off, struct blkg_rwstat *sum);
+		int off, struct blkg_rwstat_sample *sum);
 
 struct blkg_conf_ctx {
 	struct gendisk			*disk;
@@ -701,13 +704,13 @@ static inline void blkg_rwstat_add(struct blkg_rwstat *rwstat,
  * Read the current snapshot of @rwstat and return it in the aux counts.
  */
 static inline void blkg_rwstat_read(struct blkg_rwstat *rwstat,
-		struct blkg_rwstat *result)
+		struct blkg_rwstat_sample *result)
 {
 	int i;
 
 	for (i = 0; i < BLKG_RWSTAT_NR; i++)
-		atomic64_set(&result->aux_cnt[i],
-			     percpu_counter_sum_positive(&rwstat->cpu_cnt[i]));
+		result->cnt[i] =
+			percpu_counter_sum_positive(&rwstat->cpu_cnt[i]);
 }
 
 /**
@@ -720,11 +723,10 @@ static inline void blkg_rwstat_read(struct blkg_rwstat *rwstat,
  */
 static inline uint64_t blkg_rwstat_total(struct blkg_rwstat *rwstat)
 {
-	struct blkg_rwstat tmp = { };
+	struct blkg_rwstat_sample tmp = { };
 
 	blkg_rwstat_read(rwstat, &tmp);
-	return atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_READ]) +
-		atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_WRITE]);
+	return tmp.cnt[BLKG_RWSTAT_READ] + tmp.cnt[BLKG_RWSTAT_WRITE];
 }
 
 /**
-- 
2.20.1



* [PATCH 4/6] blk-cgroup: move struct blkg_stat to bfq
  2019-06-06 10:26 blk-cgroup cleanups Christoph Hellwig
                   ` (2 preceding siblings ...)
  2019-06-06 10:26 ` [PATCH 3/6] blk-cgroup: introduce a new struct blkg_rwstat_sample Christoph Hellwig
@ 2019-06-06 10:26 ` Christoph Hellwig
  2019-06-07  6:07   ` Paolo Valente
  2019-06-06 10:26 ` [PATCH 5/6] bfq-iosched: move bfq_stat_recursive_sum into the only caller Christoph Hellwig
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 14+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:26 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Paolo Valente, linux-block, cgroups, linux-kernel

This structure and its assorted infrastructure are only used by the
bfq I/O scheduler.  Move them there instead of bloating the common
code.
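
The structure keeps the same two-part layout under its new bfq_stat
name (copied from the hunks below; the comments paraphrase the old
blk-cgroup.h documentation):

	struct bfq_stat {
		/* per-cpu counter updated on the hot path */
		struct percpu_counter		cpu_cnt;
		/* excluded for local stats; carries stats of dead children */
		atomic64_t			aux_cnt;
	};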

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bfq-cgroup.c         | 192 ++++++++++++++++++++++++++++++-------
 block/bfq-iosched.h        |  19 ++--
 block/blk-cgroup.c         |  56 -----------
 include/linux/blk-cgroup.h |  71 --------------
 4 files changed, 167 insertions(+), 171 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 624374a99c6e..a691dca7e966 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -17,6 +17,124 @@
 
 #if defined(CONFIG_BFQ_GROUP_IOSCHED) &&  defined(CONFIG_DEBUG_BLK_CGROUP)
 
+static int bfq_stat_init(struct bfq_stat *stat, gfp_t gfp)
+{
+	int ret;
+
+	ret = percpu_counter_init(&stat->cpu_cnt, 0, gfp);
+	if (ret)
+		return ret;
+
+	atomic64_set(&stat->aux_cnt, 0);
+	return 0;
+}
+
+static void bfq_stat_exit(struct bfq_stat *stat)
+{
+	percpu_counter_destroy(&stat->cpu_cnt);
+}
+
+/**
+ * bfq_stat_add - add a value to a bfq_stat
+ * @stat: target bfq_stat
+ * @val: value to add
+ *
+ * Add @val to @stat.  The caller must ensure that IRQ on the same CPU
+ * don't re-enter this function for the same counter.
+ */
+static inline void bfq_stat_add(struct bfq_stat *stat, uint64_t val)
+{
+	percpu_counter_add_batch(&stat->cpu_cnt, val, BLKG_STAT_CPU_BATCH);
+}
+
+/**
+ * bfq_stat_read - read the current value of a bfq_stat
+ * @stat: bfq_stat to read
+ */
+static inline uint64_t bfq_stat_read(struct bfq_stat *stat)
+{
+	return percpu_counter_sum_positive(&stat->cpu_cnt);
+}
+
+/**
+ * bfq_stat_reset - reset a bfq_stat
+ * @stat: bfq_stat to reset
+ */
+static inline void bfq_stat_reset(struct bfq_stat *stat)
+{
+	percpu_counter_set(&stat->cpu_cnt, 0);
+	atomic64_set(&stat->aux_cnt, 0);
+}
+
+/**
+ * bfq_stat_add_aux - add a bfq_stat into another's aux count
+ * @to: the destination bfq_stat
+ * @from: the source
+ *
+ * Add @from's count including the aux one to @to's aux count.
+ */
+static inline void bfq_stat_add_aux(struct bfq_stat *to,
+				     struct bfq_stat *from)
+{
+	atomic64_add(bfq_stat_read(from) + atomic64_read(&from->aux_cnt),
+		     &to->aux_cnt);
+}
+
+/**
+ * bfq_stat_recursive_sum - collect hierarchical bfq_stat
+ * @blkg: blkg of interest
+ * @pol: blkcg_policy which contains the bfq_stat
+ * @off: offset to the bfq_stat in blkg_policy_data or @blkg
+ *
+ * Collect the bfq_stat specified by @blkg, @pol and @off and all its
+ * online descendants and their aux counts.  The caller must be holding the
+ * queue lock for online tests.
+ *
+ * If @pol is NULL, bfq_stat is at @off bytes into @blkg; otherwise, it is
+ * at @off bytes into @blkg's blkg_policy_data of the policy.
+ */
+static u64 bfq_stat_recursive_sum(struct blkcg_gq *blkg,
+			    struct blkcg_policy *pol, int off)
+{
+	struct blkcg_gq *pos_blkg;
+	struct cgroup_subsys_state *pos_css;
+	u64 sum = 0;
+
+	lockdep_assert_held(&blkg->q->queue_lock);
+
+	rcu_read_lock();
+	blkg_for_each_descendant_pre(pos_blkg, pos_css, blkg) {
+		struct bfq_stat *stat;
+
+		if (!pos_blkg->online)
+			continue;
+
+		if (pol)
+			stat = (void *)blkg_to_pd(pos_blkg, pol) + off;
+		else
+			stat = (void *)blkg + off;
+
+		sum += bfq_stat_read(stat) + atomic64_read(&stat->aux_cnt);
+	}
+	rcu_read_unlock();
+
+	return sum;
+}
+
+/**
+ * blkg_prfill_stat - prfill callback for bfq_stat
+ * @sf: seq_file to print to
+ * @pd: policy private data of interest
+ * @off: offset to the bfq_stat in @pd
+ *
+ * prfill callback for printing a bfq_stat.
+ */
+static u64 blkg_prfill_stat(struct seq_file *sf, struct blkg_policy_data *pd,
+		int off)
+{
+	return __blkg_prfill_u64(sf, pd, bfq_stat_read((void *)pd + off));
+}
+
 /* bfqg stats flags */
 enum bfqg_stats_flags {
 	BFQG_stats_waiting = 0,
@@ -53,7 +171,7 @@ static void bfqg_stats_update_group_wait_time(struct bfqg_stats *stats)
 
 	now = ktime_get_ns();
 	if (now > stats->start_group_wait_time)
-		blkg_stat_add(&stats->group_wait_time,
+		bfq_stat_add(&stats->group_wait_time,
 			      now - stats->start_group_wait_time);
 	bfqg_stats_clear_waiting(stats);
 }
@@ -82,14 +200,14 @@ static void bfqg_stats_end_empty_time(struct bfqg_stats *stats)
 
 	now = ktime_get_ns();
 	if (now > stats->start_empty_time)
-		blkg_stat_add(&stats->empty_time,
+		bfq_stat_add(&stats->empty_time,
 			      now - stats->start_empty_time);
 	bfqg_stats_clear_empty(stats);
 }
 
 void bfqg_stats_update_dequeue(struct bfq_group *bfqg)
 {
-	blkg_stat_add(&bfqg->stats.dequeue, 1);
+	bfq_stat_add(&bfqg->stats.dequeue, 1);
 }
 
 void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg)
@@ -119,7 +237,7 @@ void bfqg_stats_update_idle_time(struct bfq_group *bfqg)
 		u64 now = ktime_get_ns();
 
 		if (now > stats->start_idle_time)
-			blkg_stat_add(&stats->idle_time,
+			bfq_stat_add(&stats->idle_time,
 				      now - stats->start_idle_time);
 		bfqg_stats_clear_idling(stats);
 	}
@@ -137,9 +255,9 @@ void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg)
 {
 	struct bfqg_stats *stats = &bfqg->stats;
 
-	blkg_stat_add(&stats->avg_queue_size_sum,
+	bfq_stat_add(&stats->avg_queue_size_sum,
 		      blkg_rwstat_total(&stats->queued));
-	blkg_stat_add(&stats->avg_queue_size_samples, 1);
+	bfq_stat_add(&stats->avg_queue_size_samples, 1);
 	bfqg_stats_update_group_wait_time(stats);
 }
 
@@ -279,13 +397,13 @@ static void bfqg_stats_reset(struct bfqg_stats *stats)
 	blkg_rwstat_reset(&stats->merged);
 	blkg_rwstat_reset(&stats->service_time);
 	blkg_rwstat_reset(&stats->wait_time);
-	blkg_stat_reset(&stats->time);
-	blkg_stat_reset(&stats->avg_queue_size_sum);
-	blkg_stat_reset(&stats->avg_queue_size_samples);
-	blkg_stat_reset(&stats->dequeue);
-	blkg_stat_reset(&stats->group_wait_time);
-	blkg_stat_reset(&stats->idle_time);
-	blkg_stat_reset(&stats->empty_time);
+	bfq_stat_reset(&stats->time);
+	bfq_stat_reset(&stats->avg_queue_size_sum);
+	bfq_stat_reset(&stats->avg_queue_size_samples);
+	bfq_stat_reset(&stats->dequeue);
+	bfq_stat_reset(&stats->group_wait_time);
+	bfq_stat_reset(&stats->idle_time);
+	bfq_stat_reset(&stats->empty_time);
 #endif
 }
 
@@ -300,14 +418,14 @@ static void bfqg_stats_add_aux(struct bfqg_stats *to, struct bfqg_stats *from)
 	blkg_rwstat_add_aux(&to->merged, &from->merged);
 	blkg_rwstat_add_aux(&to->service_time, &from->service_time);
 	blkg_rwstat_add_aux(&to->wait_time, &from->wait_time);
-	blkg_stat_add_aux(&from->time, &from->time);
-	blkg_stat_add_aux(&to->avg_queue_size_sum, &from->avg_queue_size_sum);
-	blkg_stat_add_aux(&to->avg_queue_size_samples,
+	bfq_stat_add_aux(&from->time, &from->time);
+	bfq_stat_add_aux(&to->avg_queue_size_sum, &from->avg_queue_size_sum);
+	bfq_stat_add_aux(&to->avg_queue_size_samples,
 			  &from->avg_queue_size_samples);
-	blkg_stat_add_aux(&to->dequeue, &from->dequeue);
-	blkg_stat_add_aux(&to->group_wait_time, &from->group_wait_time);
-	blkg_stat_add_aux(&to->idle_time, &from->idle_time);
-	blkg_stat_add_aux(&to->empty_time, &from->empty_time);
+	bfq_stat_add_aux(&to->dequeue, &from->dequeue);
+	bfq_stat_add_aux(&to->group_wait_time, &from->group_wait_time);
+	bfq_stat_add_aux(&to->idle_time, &from->idle_time);
+	bfq_stat_add_aux(&to->empty_time, &from->empty_time);
 #endif
 }
 
@@ -360,13 +478,13 @@ static void bfqg_stats_exit(struct bfqg_stats *stats)
 	blkg_rwstat_exit(&stats->service_time);
 	blkg_rwstat_exit(&stats->wait_time);
 	blkg_rwstat_exit(&stats->queued);
-	blkg_stat_exit(&stats->time);
-	blkg_stat_exit(&stats->avg_queue_size_sum);
-	blkg_stat_exit(&stats->avg_queue_size_samples);
-	blkg_stat_exit(&stats->dequeue);
-	blkg_stat_exit(&stats->group_wait_time);
-	blkg_stat_exit(&stats->idle_time);
-	blkg_stat_exit(&stats->empty_time);
+	bfq_stat_exit(&stats->time);
+	bfq_stat_exit(&stats->avg_queue_size_sum);
+	bfq_stat_exit(&stats->avg_queue_size_samples);
+	bfq_stat_exit(&stats->dequeue);
+	bfq_stat_exit(&stats->group_wait_time);
+	bfq_stat_exit(&stats->idle_time);
+	bfq_stat_exit(&stats->empty_time);
 #endif
 }
 
@@ -377,13 +495,13 @@ static int bfqg_stats_init(struct bfqg_stats *stats, gfp_t gfp)
 	    blkg_rwstat_init(&stats->service_time, gfp) ||
 	    blkg_rwstat_init(&stats->wait_time, gfp) ||
 	    blkg_rwstat_init(&stats->queued, gfp) ||
-	    blkg_stat_init(&stats->time, gfp) ||
-	    blkg_stat_init(&stats->avg_queue_size_sum, gfp) ||
-	    blkg_stat_init(&stats->avg_queue_size_samples, gfp) ||
-	    blkg_stat_init(&stats->dequeue, gfp) ||
-	    blkg_stat_init(&stats->group_wait_time, gfp) ||
-	    blkg_stat_init(&stats->idle_time, gfp) ||
-	    blkg_stat_init(&stats->empty_time, gfp)) {
+	    bfq_stat_init(&stats->time, gfp) ||
+	    bfq_stat_init(&stats->avg_queue_size_sum, gfp) ||
+	    bfq_stat_init(&stats->avg_queue_size_samples, gfp) ||
+	    bfq_stat_init(&stats->dequeue, gfp) ||
+	    bfq_stat_init(&stats->group_wait_time, gfp) ||
+	    bfq_stat_init(&stats->idle_time, gfp) ||
+	    bfq_stat_init(&stats->empty_time, gfp)) {
 		bfqg_stats_exit(stats);
 		return -ENOMEM;
 	}
@@ -927,7 +1045,7 @@ static int bfqg_print_rwstat(struct seq_file *sf, void *v)
 static u64 bfqg_prfill_stat_recursive(struct seq_file *sf,
 				      struct blkg_policy_data *pd, int off)
 {
-	u64 sum = blkg_stat_recursive_sum(pd_to_blkg(pd),
+	u64 sum = bfq_stat_recursive_sum(pd_to_blkg(pd),
 					  &blkcg_policy_bfq, off);
 	return __blkg_prfill_u64(sf, pd, sum);
 }
@@ -996,11 +1114,11 @@ static u64 bfqg_prfill_avg_queue_size(struct seq_file *sf,
 				      struct blkg_policy_data *pd, int off)
 {
 	struct bfq_group *bfqg = pd_to_bfqg(pd);
-	u64 samples = blkg_stat_read(&bfqg->stats.avg_queue_size_samples);
+	u64 samples = bfq_stat_read(&bfqg->stats.avg_queue_size_samples);
 	u64 v = 0;
 
 	if (samples) {
-		v = blkg_stat_read(&bfqg->stats.avg_queue_size_sum);
+		v = bfq_stat_read(&bfqg->stats.avg_queue_size_sum);
 		v = div64_u64(v, samples);
 	}
 	__blkg_prfill_u64(sf, pd, v);
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index c2faa77824f8..aef4fa0046b8 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -777,6 +777,11 @@ enum bfqq_expiration {
 	BFQQE_PREEMPTED		/* preemption in progress */
 };
 
+struct bfq_stat {
+	struct percpu_counter		cpu_cnt;
+	atomic64_t			aux_cnt;
+};
+
 struct bfqg_stats {
 #if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
 	/* number of ios merged */
@@ -788,19 +793,19 @@ struct bfqg_stats {
 	/* number of IOs queued up */
 	struct blkg_rwstat		queued;
 	/* total disk time and nr sectors dispatched by this group */
-	struct blkg_stat		time;
+	struct bfq_stat		time;
 	/* sum of number of ios queued across all samples */
-	struct blkg_stat		avg_queue_size_sum;
+	struct bfq_stat		avg_queue_size_sum;
 	/* count of samples taken for average */
-	struct blkg_stat		avg_queue_size_samples;
+	struct bfq_stat		avg_queue_size_samples;
 	/* how many times this group has been removed from service tree */
-	struct blkg_stat		dequeue;
+	struct bfq_stat		dequeue;
 	/* total time spent waiting for it to be assigned a timeslice. */
-	struct blkg_stat		group_wait_time;
+	struct bfq_stat		group_wait_time;
 	/* time spent idling for this blkcg_gq */
-	struct blkg_stat		idle_time;
+	struct bfq_stat		idle_time;
 	/* total time with empty current active q with other requests queued */
-	struct blkg_stat		empty_time;
+	struct bfq_stat		empty_time;
 	/* fields after this shouldn't be cleared on stat reset */
 	u64				start_group_wait_time;
 	u64				start_idle_time;
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index aad0f5d9a2ea..1aa5c64675d6 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -577,20 +577,6 @@ u64 __blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
 }
 EXPORT_SYMBOL_GPL(__blkg_prfill_rwstat);
 
-/**
- * blkg_prfill_stat - prfill callback for blkg_stat
- * @sf: seq_file to print to
- * @pd: policy private data of interest
- * @off: offset to the blkg_stat in @pd
- *
- * prfill callback for printing a blkg_stat.
- */
-u64 blkg_prfill_stat(struct seq_file *sf, struct blkg_policy_data *pd, int off)
-{
-	return __blkg_prfill_u64(sf, pd, blkg_stat_read((void *)pd + off));
-}
-EXPORT_SYMBOL_GPL(blkg_prfill_stat);
-
 /**
  * blkg_prfill_rwstat - prfill callback for blkg_rwstat
  * @sf: seq_file to print to
@@ -692,48 +678,6 @@ int blkg_print_stat_ios_recursive(struct seq_file *sf, void *v)
 }
 EXPORT_SYMBOL_GPL(blkg_print_stat_ios_recursive);
 
-/**
- * blkg_stat_recursive_sum - collect hierarchical blkg_stat
- * @blkg: blkg of interest
- * @pol: blkcg_policy which contains the blkg_stat
- * @off: offset to the blkg_stat in blkg_policy_data or @blkg
- *
- * Collect the blkg_stat specified by @blkg, @pol and @off and all its
- * online descendants and their aux counts.  The caller must be holding the
- * queue lock for online tests.
- *
- * If @pol is NULL, blkg_stat is at @off bytes into @blkg; otherwise, it is
- * at @off bytes into @blkg's blkg_policy_data of the policy.
- */
-u64 blkg_stat_recursive_sum(struct blkcg_gq *blkg,
-			    struct blkcg_policy *pol, int off)
-{
-	struct blkcg_gq *pos_blkg;
-	struct cgroup_subsys_state *pos_css;
-	u64 sum = 0;
-
-	lockdep_assert_held(&blkg->q->queue_lock);
-
-	rcu_read_lock();
-	blkg_for_each_descendant_pre(pos_blkg, pos_css, blkg) {
-		struct blkg_stat *stat;
-
-		if (!pos_blkg->online)
-			continue;
-
-		if (pol)
-			stat = (void *)blkg_to_pd(pos_blkg, pol) + off;
-		else
-			stat = (void *)blkg + off;
-
-		sum += blkg_stat_read(stat) + atomic64_read(&stat->aux_cnt);
-	}
-	rcu_read_unlock();
-
-	return sum;
-}
-EXPORT_SYMBOL_GPL(blkg_stat_recursive_sum);
-
 /**
  * blkg_rwstat_recursive_sum - collect hierarchical blkg_rwstat
  * @blkg: blkg of interest
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index e4a81767e111..33f23a858438 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -65,11 +65,6 @@ struct blkcg {
  * blkg_[rw]stat->aux_cnt is excluded for local stats but included for
  * recursive.  Used to carry stats of dead children.
  */
-struct blkg_stat {
-	struct percpu_counter		cpu_cnt;
-	atomic64_t			aux_cnt;
-};
-
 struct blkg_rwstat {
 	struct percpu_counter		cpu_cnt[BLKG_RWSTAT_NR];
 	atomic64_t			aux_cnt[BLKG_RWSTAT_NR];
@@ -217,7 +212,6 @@ void blkcg_print_blkgs(struct seq_file *sf, struct blkcg *blkcg,
 u64 __blkg_prfill_u64(struct seq_file *sf, struct blkg_policy_data *pd, u64 v);
 u64 __blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
 			 const struct blkg_rwstat_sample *rwstat);
-u64 blkg_prfill_stat(struct seq_file *sf, struct blkg_policy_data *pd, int off);
 u64 blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
 		       int off);
 int blkg_print_stat_bytes(struct seq_file *sf, void *v);
@@ -225,8 +219,6 @@ int blkg_print_stat_ios(struct seq_file *sf, void *v);
 int blkg_print_stat_bytes_recursive(struct seq_file *sf, void *v);
 int blkg_print_stat_ios_recursive(struct seq_file *sf, void *v);
 
-u64 blkg_stat_recursive_sum(struct blkcg_gq *blkg,
-			    struct blkcg_policy *pol, int off);
 void blkg_rwstat_recursive_sum(struct blkcg_gq *blkg, struct blkcg_policy *pol,
 		int off, struct blkg_rwstat_sample *sum);
 
@@ -579,69 +571,6 @@ static inline void blkg_put(struct blkcg_gq *blkg)
 		if (((d_blkg) = __blkg_lookup(css_to_blkcg(pos_css),	\
 					      (p_blkg)->q, false)))
 
-static inline int blkg_stat_init(struct blkg_stat *stat, gfp_t gfp)
-{
-	int ret;
-
-	ret = percpu_counter_init(&stat->cpu_cnt, 0, gfp);
-	if (ret)
-		return ret;
-
-	atomic64_set(&stat->aux_cnt, 0);
-	return 0;
-}
-
-static inline void blkg_stat_exit(struct blkg_stat *stat)
-{
-	percpu_counter_destroy(&stat->cpu_cnt);
-}
-
-/**
- * blkg_stat_add - add a value to a blkg_stat
- * @stat: target blkg_stat
- * @val: value to add
- *
- * Add @val to @stat.  The caller must ensure that IRQ on the same CPU
- * don't re-enter this function for the same counter.
- */
-static inline void blkg_stat_add(struct blkg_stat *stat, uint64_t val)
-{
-	percpu_counter_add_batch(&stat->cpu_cnt, val, BLKG_STAT_CPU_BATCH);
-}
-
-/**
- * blkg_stat_read - read the current value of a blkg_stat
- * @stat: blkg_stat to read
- */
-static inline uint64_t blkg_stat_read(struct blkg_stat *stat)
-{
-	return percpu_counter_sum_positive(&stat->cpu_cnt);
-}
-
-/**
- * blkg_stat_reset - reset a blkg_stat
- * @stat: blkg_stat to reset
- */
-static inline void blkg_stat_reset(struct blkg_stat *stat)
-{
-	percpu_counter_set(&stat->cpu_cnt, 0);
-	atomic64_set(&stat->aux_cnt, 0);
-}
-
-/**
- * blkg_stat_add_aux - add a blkg_stat into another's aux count
- * @to: the destination blkg_stat
- * @from: the source
- *
- * Add @from's count including the aux one to @to's aux count.
- */
-static inline void blkg_stat_add_aux(struct blkg_stat *to,
-				     struct blkg_stat *from)
-{
-	atomic64_add(blkg_stat_read(from) + atomic64_read(&from->aux_cnt),
-		     &to->aux_cnt);
-}
-
 static inline int blkg_rwstat_init(struct blkg_rwstat *rwstat, gfp_t gfp)
 {
 	int i, ret;
-- 
2.20.1



* [PATCH 5/6] bfq-iosched: move bfq_stat_recursive_sum into the only caller
  2019-06-06 10:26 blk-cgroup cleanups Christoph Hellwig
                   ` (3 preceding siblings ...)
  2019-06-06 10:26 ` [PATCH 4/6] blk-cgroup: move struct blkg_stat to bfq Christoph Hellwig
@ 2019-06-06 10:26 ` Christoph Hellwig
  2019-06-07  6:07   ` Paolo Valente
  2019-06-06 10:26 ` [PATCH 6/6] block: rename CONFIG_DEBUG_BLK_CGROUP to CONFIG_BFQ_CGROUP_DEBUG Christoph Hellwig
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 14+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:26 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Paolo Valente, linux-block, cgroups, linux-kernel

This function was moved from core block code and is way too generic.
Fold it into the only caller and simplify it based on the arguments
actually passed.
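
Concretely, the only caller always passes &blkcg_policy_bfq, so the
NULL-policy branch of the generic helper goes away (both forms are
taken from the hunks below).  Before:

	if (pol)
		stat = (void *)blkg_to_pd(pos_blkg, pol) + off;
	else
		stat = (void *)blkg + off;

After:

	stat = (void *)blkg_to_pd(pos_blkg, &blkcg_policy_bfq) + off;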

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/bfq-cgroup.c | 62 ++++++++++++++--------------------------------
 1 file changed, 19 insertions(+), 43 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index a691dca7e966..d84302445e30 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -80,47 +80,6 @@ static inline void bfq_stat_add_aux(struct bfq_stat *to,
 		     &to->aux_cnt);
 }
 
-/**
- * bfq_stat_recursive_sum - collect hierarchical bfq_stat
- * @blkg: blkg of interest
- * @pol: blkcg_policy which contains the bfq_stat
- * @off: offset to the bfq_stat in blkg_policy_data or @blkg
- *
- * Collect the bfq_stat specified by @blkg, @pol and @off and all its
- * online descendants and their aux counts.  The caller must be holding the
- * queue lock for online tests.
- *
- * If @pol is NULL, bfq_stat is at @off bytes into @blkg; otherwise, it is
- * at @off bytes into @blkg's blkg_policy_data of the policy.
- */
-static u64 bfq_stat_recursive_sum(struct blkcg_gq *blkg,
-			    struct blkcg_policy *pol, int off)
-{
-	struct blkcg_gq *pos_blkg;
-	struct cgroup_subsys_state *pos_css;
-	u64 sum = 0;
-
-	lockdep_assert_held(&blkg->q->queue_lock);
-
-	rcu_read_lock();
-	blkg_for_each_descendant_pre(pos_blkg, pos_css, blkg) {
-		struct bfq_stat *stat;
-
-		if (!pos_blkg->online)
-			continue;
-
-		if (pol)
-			stat = (void *)blkg_to_pd(pos_blkg, pol) + off;
-		else
-			stat = (void *)blkg + off;
-
-		sum += bfq_stat_read(stat) + atomic64_read(&stat->aux_cnt);
-	}
-	rcu_read_unlock();
-
-	return sum;
-}
-
 /**
  * blkg_prfill_stat - prfill callback for bfq_stat
  * @sf: seq_file to print to
@@ -1045,8 +1004,25 @@ static int bfqg_print_rwstat(struct seq_file *sf, void *v)
 static u64 bfqg_prfill_stat_recursive(struct seq_file *sf,
 				      struct blkg_policy_data *pd, int off)
 {
-	u64 sum = bfq_stat_recursive_sum(pd_to_blkg(pd),
-					  &blkcg_policy_bfq, off);
+	struct blkcg_gq *blkg = pd_to_blkg(pd);
+	struct blkcg_gq *pos_blkg;
+	struct cgroup_subsys_state *pos_css;
+	u64 sum = 0;
+
+	lockdep_assert_held(&blkg->q->queue_lock);
+
+	rcu_read_lock();
+	blkg_for_each_descendant_pre(pos_blkg, pos_css, blkg) {
+		struct bfq_stat *stat;
+
+		if (!pos_blkg->online)
+			continue;
+
+		stat = (void *)blkg_to_pd(pos_blkg, &blkcg_policy_bfq) + off;
+		sum += bfq_stat_read(stat) + atomic64_read(&stat->aux_cnt);
+	}
+	rcu_read_unlock();
+
 	return __blkg_prfill_u64(sf, pd, sum);
 }
 
-- 
2.20.1



* [PATCH 6/6] block: rename CONFIG_DEBUG_BLK_CGROUP to CONFIG_BFQ_CGROUP_DEBUG
  2019-06-06 10:26 blk-cgroup cleanups Christoph Hellwig
                   ` (4 preceding siblings ...)
  2019-06-06 10:26 ` [PATCH 5/6] bfq-iosched: move bfq_stat_recursive_sum into the only caller Christoph Hellwig
@ 2019-06-06 10:26 ` Christoph Hellwig
  2019-06-07  6:07   ` Paolo Valente
  2019-06-10 18:23 ` blk-cgroup cleanups Tejun Heo
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 14+ messages in thread
From: Christoph Hellwig @ 2019-06-06 10:26 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Paolo Valente, linux-block, cgroups, linux-kernel

This option is entirely bfq specific, so give it an appropriate name.

Also make it depend on CONFIG_BFQ_GROUP_IOSCHED in Kconfig, as all of
the functionality it guards already depends on it anyway.
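
With the dependency expressed in Kconfig, the compound preprocessor
guards collapse to a single test (both forms are taken from the hunks
below).  Before:

	#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)

After:

	#ifdef CONFIG_BFQ_CGROUP_DEBUG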

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 Documentation/block/bfq-iosched.txt          | 12 ++++-----
 Documentation/cgroup-v1/blkio-controller.txt | 12 ++++-----
 block/Kconfig.iosched                        |  7 +++++
 block/bfq-cgroup.c                           | 27 ++++++++++----------
 block/bfq-iosched.c                          |  8 +++---
 block/bfq-iosched.h                          |  4 +--
 init/Kconfig                                 |  8 ------
 7 files changed, 38 insertions(+), 40 deletions(-)

diff --git a/Documentation/block/bfq-iosched.txt b/Documentation/block/bfq-iosched.txt
index 1a0f2ac02eb6..f02163fabf80 100644
--- a/Documentation/block/bfq-iosched.txt
+++ b/Documentation/block/bfq-iosched.txt
@@ -38,13 +38,13 @@ stack). To give an idea of the limits with BFQ, on slow or average
 CPUs, here are, first, the limits of BFQ for three different CPUs, on,
 respectively, an average laptop, an old desktop, and a cheap embedded
 system, in case full hierarchical support is enabled (i.e.,
-CONFIG_BFQ_GROUP_IOSCHED is set), but CONFIG_DEBUG_BLK_CGROUP is not
+CONFIG_BFQ_GROUP_IOSCHED is set), but CONFIG_BFQ_CGROUP_DEBUG is not
 set (Section 4-2):
 - Intel i7-4850HQ: 400 KIOPS
 - AMD A8-3850: 250 KIOPS
 - ARM CortexTM-A53 Octa-core: 80 KIOPS
 
-If CONFIG_DEBUG_BLK_CGROUP is set (and of course full hierarchical
+If CONFIG_BFQ_CGROUP_DEBUG is set (and of course full hierarchical
 support is enabled), then the sustainable throughput with BFQ
 decreases, because all blkio.bfq* statistics are created and updated
 (Section 4-2). For BFQ, this leads to the following maximum
@@ -537,19 +537,19 @@ or io.bfq.weight.
 
 As for cgroups-v1 (blkio controller), the exact set of stat files
 created, and kept up-to-date by bfq, depends on whether
-CONFIG_DEBUG_BLK_CGROUP is set. If it is set, then bfq creates all
+CONFIG_BFQ_CGROUP_DEBUG is set. If it is set, then bfq creates all
 the stat files documented in
 Documentation/cgroup-v1/blkio-controller.txt. If, instead,
-CONFIG_DEBUG_BLK_CGROUP is not set, then bfq creates only the files
+CONFIG_BFQ_CGROUP_DEBUG is not set, then bfq creates only the files
 blkio.bfq.io_service_bytes
 blkio.bfq.io_service_bytes_recursive
 blkio.bfq.io_serviced
 blkio.bfq.io_serviced_recursive
 
-The value of CONFIG_DEBUG_BLK_CGROUP greatly influences the maximum
+The value of CONFIG_BFQ_CGROUP_DEBUG greatly influences the maximum
 throughput sustainable with bfq, because updating the blkio.bfq.*
 stats is rather costly, especially for some of the stats enabled by
-CONFIG_DEBUG_BLK_CGROUP.
+CONFIG_BFQ_CGROUP_DEBUG.
 
 Parameters to set
 -----------------
diff --git a/Documentation/cgroup-v1/blkio-controller.txt b/Documentation/cgroup-v1/blkio-controller.txt
index 673dc34d3f78..47cf84102f88 100644
--- a/Documentation/cgroup-v1/blkio-controller.txt
+++ b/Documentation/cgroup-v1/blkio-controller.txt
@@ -126,7 +126,7 @@ Various user visible config options
 CONFIG_BLK_CGROUP
 	- Block IO controller.
 
-CONFIG_DEBUG_BLK_CGROUP
+CONFIG_BFQ_CGROUP_DEBUG
 	- Debug help. Right now some additional stats file show up in cgroup
 	  if this option is enabled.
 
@@ -246,13 +246,13 @@ Proportional weight policy files
 	  write, sync or async.
 
 - blkio.avg_queue_size
-	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
+	- Debugging aid only enabled if CONFIG_BFQ_CGROUP_DEBUG=y.
 	  The average queue size for this cgroup over the entire time of this
 	  cgroup's existence. Queue size samples are taken each time one of the
 	  queues of this cgroup gets a timeslice.
 
 - blkio.group_wait_time
-	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
+	- Debugging aid only enabled if CONFIG_BFQ_CGROUP_DEBUG=y.
 	  This is the amount of time the cgroup had to wait since it became busy
 	  (i.e., went from 0 to 1 request queued) to get a timeslice for one of
 	  its queues. This is different from the io_wait_time which is the
@@ -263,7 +263,7 @@ Proportional weight policy files
 	  got a timeslice and will not include the current delta.
 
 - blkio.empty_time
-	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
+	- Debugging aid only enabled if CONFIG_BFQ_CGROUP_DEBUG=y.
 	  This is the amount of time a cgroup spends without any pending
 	  requests when not being served, i.e., it does not include any time
 	  spent idling for one of the queues of the cgroup. This is in
@@ -272,7 +272,7 @@ Proportional weight policy files
 	  time it had a pending request and will not include the current delta.
 
 - blkio.idle_time
-	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
+	- Debugging aid only enabled if CONFIG_BFQ_CGROUP_DEBUG=y.
 	  This is the amount of time spent by the IO scheduler idling for a
 	  given cgroup in anticipation of a better request than the existing ones
 	  from other queues/cgroups. This is in nanoseconds. If this is read
@@ -281,7 +281,7 @@ Proportional weight policy files
 	  the current delta.
 
 - blkio.dequeue
-	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
+	- Debugging aid only enabled if CONFIG_BFQ_CGROUP_DEBUG=y. This
 	  gives the statistics about how many a times a group was dequeued
 	  from service tree of the device. First two fields specify the major
 	  and minor number of the device and third field specifies the number
diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
index 4626b88b2d5a..7a6b2f29a582 100644
--- a/block/Kconfig.iosched
+++ b/block/Kconfig.iosched
@@ -36,6 +36,13 @@ config BFQ_GROUP_IOSCHED
        Enable hierarchical scheduling in BFQ, using the blkio
        (cgroups-v1) or io (cgroups-v2) controller.
 
+config BFQ_CGROUP_DEBUG
+	bool "BFQ IO controller debugging"
+	depends on BFQ_GROUP_IOSCHED
+	---help---
+	Enable some debugging help. Currently it exports additional stat
+	files in a cgroup which can be useful for debugging.
+
 endmenu
 
 endif
diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index d84302445e30..0f6cd688924f 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -15,8 +15,7 @@
 
 #include "bfq-iosched.h"
 
-#if defined(CONFIG_BFQ_GROUP_IOSCHED) &&  defined(CONFIG_DEBUG_BLK_CGROUP)
-
+#ifdef CONFIG_BFQ_CGROUP_DEBUG
 static int bfq_stat_init(struct bfq_stat *stat, gfp_t gfp)
 {
 	int ret;
@@ -253,7 +252,7 @@ void bfqg_stats_update_completion(struct bfq_group *bfqg, u64 start_time_ns,
 				io_start_time_ns - start_time_ns);
 }
 
-#else /* CONFIG_BFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */
+#else /* CONFIG_BFQ_CGROUP_DEBUG */
 
 void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
 			      unsigned int op) { }
@@ -267,7 +266,7 @@ void bfqg_stats_update_idle_time(struct bfq_group *bfqg) { }
 void bfqg_stats_set_start_idle_time(struct bfq_group *bfqg) { }
 void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg) { }
 
-#endif /* CONFIG_BFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */
+#endif /* CONFIG_BFQ_CGROUP_DEBUG */
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
 
@@ -351,7 +350,7 @@ void bfqg_and_blkg_put(struct bfq_group *bfqg)
 /* @stats = 0 */
 static void bfqg_stats_reset(struct bfqg_stats *stats)
 {
-#ifdef CONFIG_DEBUG_BLK_CGROUP
+#ifdef CONFIG_BFQ_CGROUP_DEBUG
 	/* queued stats shouldn't be cleared */
 	blkg_rwstat_reset(&stats->merged);
 	blkg_rwstat_reset(&stats->service_time);
@@ -372,7 +371,7 @@ static void bfqg_stats_add_aux(struct bfqg_stats *to, struct bfqg_stats *from)
 	if (!to || !from)
 		return;
 
-#ifdef CONFIG_DEBUG_BLK_CGROUP
+#ifdef CONFIG_BFQ_CGROUP_DEBUG
 	/* queued stats shouldn't be cleared */
 	blkg_rwstat_add_aux(&to->merged, &from->merged);
 	blkg_rwstat_add_aux(&to->service_time, &from->service_time);
@@ -432,7 +431,7 @@ void bfq_init_entity(struct bfq_entity *entity, struct bfq_group *bfqg)
 
 static void bfqg_stats_exit(struct bfqg_stats *stats)
 {
-#ifdef CONFIG_DEBUG_BLK_CGROUP
+#ifdef CONFIG_BFQ_CGROUP_DEBUG
 	blkg_rwstat_exit(&stats->merged);
 	blkg_rwstat_exit(&stats->service_time);
 	blkg_rwstat_exit(&stats->wait_time);
@@ -449,7 +448,7 @@ static void bfqg_stats_exit(struct bfqg_stats *stats)
 
 static int bfqg_stats_init(struct bfqg_stats *stats, gfp_t gfp)
 {
-#ifdef CONFIG_DEBUG_BLK_CGROUP
+#ifdef CONFIG_BFQ_CGROUP_DEBUG
 	if (blkg_rwstat_init(&stats->merged, gfp) ||
 	    blkg_rwstat_init(&stats->service_time, gfp) ||
 	    blkg_rwstat_init(&stats->wait_time, gfp) ||
@@ -986,7 +985,7 @@ static ssize_t bfq_io_set_weight(struct kernfs_open_file *of,
 	return ret ?: nbytes;
 }
 
-#ifdef CONFIG_DEBUG_BLK_CGROUP
+#ifdef CONFIG_BFQ_CGROUP_DEBUG
 static int bfqg_print_stat(struct seq_file *sf, void *v)
 {
 	blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), blkg_prfill_stat,
@@ -1109,7 +1108,7 @@ static int bfqg_print_avg_queue_size(struct seq_file *sf, void *v)
 			  0, false);
 	return 0;
 }
-#endif /* CONFIG_DEBUG_BLK_CGROUP */
+#endif /* CONFIG_BFQ_CGROUP_DEBUG */
 
 struct bfq_group *bfq_create_group_hierarchy(struct bfq_data *bfqd, int node)
 {
@@ -1157,7 +1156,7 @@ struct cftype bfq_blkcg_legacy_files[] = {
 		.private = (unsigned long)&blkcg_policy_bfq,
 		.seq_show = blkg_print_stat_ios,
 	},
-#ifdef CONFIG_DEBUG_BLK_CGROUP
+#ifdef CONFIG_BFQ_CGROUP_DEBUG
 	{
 		.name = "bfq.time",
 		.private = offsetof(struct bfq_group, stats.time),
@@ -1187,7 +1186,7 @@ struct cftype bfq_blkcg_legacy_files[] = {
 		.private = offsetof(struct bfq_group, stats.queued),
 		.seq_show = bfqg_print_rwstat,
 	},
-#endif /* CONFIG_DEBUG_BLK_CGROUP */
+#endif /* CONFIG_BFQ_CGROUP_DEBUG */
 
 	/* the same statistics which cover the bfqg and its descendants */
 	{
@@ -1200,7 +1199,7 @@ struct cftype bfq_blkcg_legacy_files[] = {
 		.private = (unsigned long)&blkcg_policy_bfq,
 		.seq_show = blkg_print_stat_ios_recursive,
 	},
-#ifdef CONFIG_DEBUG_BLK_CGROUP
+#ifdef CONFIG_BFQ_CGROUP_DEBUG
 	{
 		.name = "bfq.time_recursive",
 		.private = offsetof(struct bfq_group, stats.time),
@@ -1254,7 +1253,7 @@ struct cftype bfq_blkcg_legacy_files[] = {
 		.private = offsetof(struct bfq_group, stats.dequeue),
 		.seq_show = bfqg_print_stat,
 	},
-#endif	/* CONFIG_DEBUG_BLK_CGROUP */
+#endif	/* CONFIG_BFQ_CGROUP_DEBUG */
 	{ }	/* terminate */
 };
 
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index f8d430f88d25..e9a587707d67 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4403,7 +4403,7 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
 	return rq;
 }
 
-#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+#ifdef CONFIG_BFQ_CGROUP_DEBUG
 static void bfq_update_dispatch_stats(struct request_queue *q,
 				      struct request *rq,
 				      struct bfq_queue *in_serv_queue,
@@ -4453,7 +4453,7 @@ static inline void bfq_update_dispatch_stats(struct request_queue *q,
 					     struct request *rq,
 					     struct bfq_queue *in_serv_queue,
 					     bool idle_timer_disabled) {}
-#endif
+#endif /* CONFIG_BFQ_CGROUP_DEBUG */
 
 static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
 {
@@ -5007,7 +5007,7 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
 	return idle_timer_disabled;
 }
 
-#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+#ifdef CONFIG_BFQ_CGROUP_DEBUG
 static void bfq_update_insert_stats(struct request_queue *q,
 				    struct bfq_queue *bfqq,
 				    bool idle_timer_disabled,
@@ -5037,7 +5037,7 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
 					   struct bfq_queue *bfqq,
 					   bool idle_timer_disabled,
 					   unsigned int cmd_flags) {}
-#endif
+#endif /* CONFIG_BFQ_CGROUP_DEBUG */
 
 static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
 			       bool at_head)
diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
index aef4fa0046b8..584d3c9ed8ba 100644
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -783,7 +783,7 @@ struct bfq_stat {
 };
 
 struct bfqg_stats {
-#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+#ifdef CONFIG_BFQ_CGROUP_DEBUG
 	/* number of ios merged */
 	struct blkg_rwstat		merged;
 	/* total time spent on device in ns, may not be accurate w/ queueing */
@@ -811,7 +811,7 @@ struct bfqg_stats {
 	u64				start_idle_time;
 	u64				start_empty_time;
 	uint16_t			flags;
-#endif	/* CONFIG_BFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */
+#endif /* CONFIG_BFQ_CGROUP_DEBUG */
 };
 
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
diff --git a/init/Kconfig b/init/Kconfig
index 36894c9fb420..df9d36ba80e3 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -800,14 +800,6 @@ config BLK_CGROUP
 
 	See Documentation/cgroup-v1/blkio-controller.txt for more information.
 
-config DEBUG_BLK_CGROUP
-	bool "IO controller debugging"
-	depends on BLK_CGROUP
-	default n
-	---help---
-	Enable some debugging help. Currently it exports additional stat
-	files in a cgroup which can be useful for debugging.
-
 config CGROUP_WRITEBACK
 	bool
 	depends on MEMCG && BLK_CGROUP
-- 
2.20.1



* Re: [PATCH 1/6] blk-cgroup: factor out a helper to read rwstat counter
  2019-06-06 10:26 ` [PATCH 1/6] blk-cgroup: factor out a helper to read rwstat counter Christoph Hellwig
@ 2019-06-06 15:03   ` Chaitanya Kulkarni
  0 siblings, 0 replies; 14+ messages in thread
From: Chaitanya Kulkarni @ 2019-06-06 15:03 UTC (permalink / raw)
  To: Christoph Hellwig, Jens Axboe
  Cc: Paolo Valente, linux-block, cgroups, linux-kernel

Looks good to me.

Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>

On 06/06/2019 03:27 AM, Christoph Hellwig wrote:
> Break up the crazy statements into something readable.  Also switch to
> an unsigned loop counter, as it can't ever turn negative.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/blk-cgroup.c         | 5 ++---
>   include/linux/blk-cgroup.h | 7 +++++++
>   2 files changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index b97b479e4f64..6f79ace02be4 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -750,7 +750,7 @@ struct blkg_rwstat blkg_rwstat_recursive_sum(struct blkcg_gq *blkg,
>   	struct blkcg_gq *pos_blkg;
>   	struct cgroup_subsys_state *pos_css;
>   	struct blkg_rwstat sum = { };
> -	int i;
> +	unsigned int i;
>
>   	lockdep_assert_held(&blkg->q->queue_lock);
>
> @@ -767,8 +767,7 @@ struct blkg_rwstat blkg_rwstat_recursive_sum(struct blkcg_gq *blkg,
>   			rwstat = (void *)pos_blkg + off;
>
>   		for (i = 0; i < BLKG_RWSTAT_NR; i++)
> -			atomic64_add(atomic64_read(&rwstat->aux_cnt[i]) +
> -				percpu_counter_sum_positive(&rwstat->cpu_cnt[i]),
> +			atomic64_add(blkg_rwstat_read_counter(rwstat, i),
>   				&sum.aux_cnt[i]);
>   	}
>   	rcu_read_unlock();
> diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
> index 76c61318fda5..06236f56a840 100644
> --- a/include/linux/blk-cgroup.h
> +++ b/include/linux/blk-cgroup.h
> @@ -198,6 +198,13 @@ int blkcg_activate_policy(struct request_queue *q,
>   void blkcg_deactivate_policy(struct request_queue *q,
>   			     const struct blkcg_policy *pol);
>
> +static inline u64 blkg_rwstat_read_counter(struct blkg_rwstat *rwstat,
> +		unsigned int idx)
> +{
> +	return atomic64_read(&rwstat->aux_cnt[idx]) +
> +		percpu_counter_sum_positive(&rwstat->cpu_cnt[idx]);
> +}
> +
>   const char *blkg_dev_name(struct blkcg_gq *blkg);
>   void blkcg_print_blkgs(struct seq_file *sf, struct blkcg *blkcg,
>   		       u64 (*prfill)(struct seq_file *,
>


^ permalink raw reply	[flat|nested] 14+ messages in thread
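
The new helper reads one rwstat slot as its aux count (totals carried
over from dead children) plus the positive per-cpu sum. A minimal usage
sketch; read_all() is a hypothetical caller added here only for
illustration:

/* Illustration only: snapshot every rwstat slot into a plain array. */
static void read_all(struct blkg_rwstat *rwstat, u64 out[BLKG_RWSTAT_NR])
{
	unsigned int i;

	for (i = 0; i < BLKG_RWSTAT_NR; i++)
		out[i] = blkg_rwstat_read_counter(rwstat, i);
}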

* Re: [PATCH 5/6] bfq-iosched: move bfq_stat_recursive_sum into the only caller
  2019-06-06 10:26 ` [PATCH 5/6] bfq-iosched: move bfq_stat_recursive_sum into the only caller Christoph Hellwig
@ 2019-06-07  6:07   ` Paolo Valente
  0 siblings, 0 replies; 14+ messages in thread
From: Paolo Valente @ 2019-06-07  6:07 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block, cgroups, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 3020 bytes --]



> On 6 Jun 2019, at 12:26, Christoph Hellwig <hch@lst.de> wrote:
> 
> This function was moved from core block code and is way too generic.
> Fold it into the only caller and simplify it based on the actually
> passed arguments.
> 

Acked-by: Paolo Valente <paolo.valente@linaro.org>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> block/bfq-cgroup.c | 62 ++++++++++++++--------------------------------
> 1 file changed, 19 insertions(+), 43 deletions(-)
> 
> diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
> index a691dca7e966..d84302445e30 100644
> --- a/block/bfq-cgroup.c
> +++ b/block/bfq-cgroup.c
> @@ -80,47 +80,6 @@ static inline void bfq_stat_add_aux(struct bfq_stat *to,
> 		     &to->aux_cnt);
> }
> 
> -/**
> - * bfq_stat_recursive_sum - collect hierarchical bfq_stat
> - * @blkg: blkg of interest
> - * @pol: blkcg_policy which contains the bfq_stat
> - * @off: offset to the bfq_stat in blkg_policy_data or @blkg
> - *
> - * Collect the bfq_stat specified by @blkg, @pol and @off and all its
> - * online descendants and their aux counts.  The caller must be holding the
> - * queue lock for online tests.
> - *
> - * If @pol is NULL, bfq_stat is at @off bytes into @blkg; otherwise, it is
> - * at @off bytes into @blkg's blkg_policy_data of the policy.
> - */
> -static u64 bfq_stat_recursive_sum(struct blkcg_gq *blkg,
> -			    struct blkcg_policy *pol, int off)
> -{
> -	struct blkcg_gq *pos_blkg;
> -	struct cgroup_subsys_state *pos_css;
> -	u64 sum = 0;
> -
> -	lockdep_assert_held(&blkg->q->queue_lock);
> -
> -	rcu_read_lock();
> -	blkg_for_each_descendant_pre(pos_blkg, pos_css, blkg) {
> -		struct bfq_stat *stat;
> -
> -		if (!pos_blkg->online)
> -			continue;
> -
> -		if (pol)
> -			stat = (void *)blkg_to_pd(pos_blkg, pol) + off;
> -		else
> -			stat = (void *)blkg + off;
> -
> -		sum += bfq_stat_read(stat) + atomic64_read(&stat->aux_cnt);
> -	}
> -	rcu_read_unlock();
> -
> -	return sum;
> -}
> -
> /**
>  * blkg_prfill_stat - prfill callback for bfq_stat
>  * @sf: seq_file to print to
> @@ -1045,8 +1004,25 @@ static int bfqg_print_rwstat(struct seq_file *sf, void *v)
> static u64 bfqg_prfill_stat_recursive(struct seq_file *sf,
> 				      struct blkg_policy_data *pd, int off)
> {
> -	u64 sum = bfq_stat_recursive_sum(pd_to_blkg(pd),
> -					  &blkcg_policy_bfq, off);
> +	struct blkcg_gq *blkg = pd_to_blkg(pd);
> +	struct blkcg_gq *pos_blkg;
> +	struct cgroup_subsys_state *pos_css;
> +	u64 sum = 0;
> +
> +	lockdep_assert_held(&blkg->q->queue_lock);
> +
> +	rcu_read_lock();
> +	blkg_for_each_descendant_pre(pos_blkg, pos_css, blkg) {
> +		struct bfq_stat *stat;
> +
> +		if (!pos_blkg->online)
> +			continue;
> +
> +		stat = (void *)blkg_to_pd(pos_blkg, &blkcg_policy_bfq) + off;
> +		sum += bfq_stat_read(stat) + atomic64_read(&stat->aux_cnt);
> +	}
> +	rcu_read_unlock();
> +
> 	return __blkg_prfill_u64(sf, pd, sum);
> }
> 
> --
> 2.20.1
> 


[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 4/6] blk-cgroup: move struct blkg_stat to bfq
  2019-06-06 10:26 ` [PATCH 4/6] blk-cgroup: move struct blkg_stat to bfq Christoph Hellwig
@ 2019-06-07  6:07   ` Paolo Valente
  0 siblings, 0 replies; 14+ messages in thread
From: Paolo Valente @ 2019-06-07  6:07 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block, cgroups, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 18573 bytes --]



> On 6 Jun 2019, at 12:26, Christoph Hellwig <hch@lst.de> wrote:
> 
> This structure and its assorted infrastructure are only used by the bfq
> I/O scheduler.  Move them there instead of bloating the common code.
> 

Acked-by: Paolo Valente <paolo.valente@linaro.org>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> block/bfq-cgroup.c         | 192 ++++++++++++++++++++++++++++++-------
> block/bfq-iosched.h        |  19 ++--
> block/blk-cgroup.c         |  56 -----------
> include/linux/blk-cgroup.h |  71 --------------
> 4 files changed, 167 insertions(+), 171 deletions(-)
> 
> diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
> index 624374a99c6e..a691dca7e966 100644
> --- a/block/bfq-cgroup.c
> +++ b/block/bfq-cgroup.c
> @@ -17,6 +17,124 @@
> 
> #if defined(CONFIG_BFQ_GROUP_IOSCHED) &&  defined(CONFIG_DEBUG_BLK_CGROUP)
> 
> +static int bfq_stat_init(struct bfq_stat *stat, gfp_t gfp)
> +{
> +	int ret;
> +
> +	ret = percpu_counter_init(&stat->cpu_cnt, 0, gfp);
> +	if (ret)
> +		return ret;
> +
> +	atomic64_set(&stat->aux_cnt, 0);
> +	return 0;
> +}
> +
> +static void bfq_stat_exit(struct bfq_stat *stat)
> +{
> +	percpu_counter_destroy(&stat->cpu_cnt);
> +}
> +
> +/**
> + * bfq_stat_add - add a value to a bfq_stat
> + * @stat: target bfq_stat
> + * @val: value to add
> + *
> + * Add @val to @stat.  The caller must ensure that IRQ on the same CPU
> + * don't re-enter this function for the same counter.
> + */
> +static inline void bfq_stat_add(struct bfq_stat *stat, uint64_t val)
> +{
> +	percpu_counter_add_batch(&stat->cpu_cnt, val, BLKG_STAT_CPU_BATCH);
> +}
> +
> +/**
> + * bfq_stat_read - read the current value of a bfq_stat
> + * @stat: bfq_stat to read
> + */
> +static inline uint64_t bfq_stat_read(struct bfq_stat *stat)
> +{
> +	return percpu_counter_sum_positive(&stat->cpu_cnt);
> +}
> +
> +/**
> + * bfq_stat_reset - reset a bfq_stat
> + * @stat: bfq_stat to reset
> + */
> +static inline void bfq_stat_reset(struct bfq_stat *stat)
> +{
> +	percpu_counter_set(&stat->cpu_cnt, 0);
> +	atomic64_set(&stat->aux_cnt, 0);
> +}
> +
> +/**
> + * bfq_stat_add_aux - add a bfq_stat into another's aux count
> + * @to: the destination bfq_stat
> + * @from: the source
> + *
> + * Add @from's count including the aux one to @to's aux count.
> + */
> +static inline void bfq_stat_add_aux(struct bfq_stat *to,
> +				     struct bfq_stat *from)
> +{
> +	atomic64_add(bfq_stat_read(from) + atomic64_read(&from->aux_cnt),
> +		     &to->aux_cnt);
> +}
> +
> +/**
> + * bfq_stat_recursive_sum - collect hierarchical bfq_stat
> + * @blkg: blkg of interest
> + * @pol: blkcg_policy which contains the bfq_stat
> + * @off: offset to the bfq_stat in blkg_policy_data or @blkg
> + *
> + * Collect the bfq_stat specified by @blkg, @pol and @off and all its
> + * online descendants and their aux counts.  The caller must be holding the
> + * queue lock for online tests.
> + *
> + * If @pol is NULL, bfq_stat is at @off bytes into @blkg; otherwise, it is
> + * at @off bytes into @blkg's blkg_policy_data of the policy.
> + */
> +static u64 bfq_stat_recursive_sum(struct blkcg_gq *blkg,
> +			    struct blkcg_policy *pol, int off)
> +{
> +	struct blkcg_gq *pos_blkg;
> +	struct cgroup_subsys_state *pos_css;
> +	u64 sum = 0;
> +
> +	lockdep_assert_held(&blkg->q->queue_lock);
> +
> +	rcu_read_lock();
> +	blkg_for_each_descendant_pre(pos_blkg, pos_css, blkg) {
> +		struct bfq_stat *stat;
> +
> +		if (!pos_blkg->online)
> +			continue;
> +
> +		if (pol)
> +			stat = (void *)blkg_to_pd(pos_blkg, pol) + off;
> +		else
> +			stat = (void *)blkg + off;
> +
> +		sum += bfq_stat_read(stat) + atomic64_read(&stat->aux_cnt);
> +	}
> +	rcu_read_unlock();
> +
> +	return sum;
> +}
> +
> +/**
> + * blkg_prfill_stat - prfill callback for bfq_stat
> + * @sf: seq_file to print to
> + * @pd: policy private data of interest
> + * @off: offset to the bfq_stat in @pd
> + *
> + * prfill callback for printing a bfq_stat.
> + */
> +static u64 blkg_prfill_stat(struct seq_file *sf, struct blkg_policy_data *pd,
> +		int off)
> +{
> +	return __blkg_prfill_u64(sf, pd, bfq_stat_read((void *)pd + off));
> +}
> +
> /* bfqg stats flags */
> enum bfqg_stats_flags {
> 	BFQG_stats_waiting = 0,
> @@ -53,7 +171,7 @@ static void bfqg_stats_update_group_wait_time(struct bfqg_stats *stats)
> 
> 	now = ktime_get_ns();
> 	if (now > stats->start_group_wait_time)
> -		blkg_stat_add(&stats->group_wait_time,
> +		bfq_stat_add(&stats->group_wait_time,
> 			      now - stats->start_group_wait_time);
> 	bfqg_stats_clear_waiting(stats);
> }
> @@ -82,14 +200,14 @@ static void bfqg_stats_end_empty_time(struct bfqg_stats *stats)
> 
> 	now = ktime_get_ns();
> 	if (now > stats->start_empty_time)
> -		blkg_stat_add(&stats->empty_time,
> +		bfq_stat_add(&stats->empty_time,
> 			      now - stats->start_empty_time);
> 	bfqg_stats_clear_empty(stats);
> }
> 
> void bfqg_stats_update_dequeue(struct bfq_group *bfqg)
> {
> -	blkg_stat_add(&bfqg->stats.dequeue, 1);
> +	bfq_stat_add(&bfqg->stats.dequeue, 1);
> }
> 
> void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg)
> @@ -119,7 +237,7 @@ void bfqg_stats_update_idle_time(struct bfq_group *bfqg)
> 		u64 now = ktime_get_ns();
> 
> 		if (now > stats->start_idle_time)
> -			blkg_stat_add(&stats->idle_time,
> +			bfq_stat_add(&stats->idle_time,
> 				      now - stats->start_idle_time);
> 		bfqg_stats_clear_idling(stats);
> 	}
> @@ -137,9 +255,9 @@ void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg)
> {
> 	struct bfqg_stats *stats = &bfqg->stats;
> 
> -	blkg_stat_add(&stats->avg_queue_size_sum,
> +	bfq_stat_add(&stats->avg_queue_size_sum,
> 		      blkg_rwstat_total(&stats->queued));
> -	blkg_stat_add(&stats->avg_queue_size_samples, 1);
> +	bfq_stat_add(&stats->avg_queue_size_samples, 1);
> 	bfqg_stats_update_group_wait_time(stats);
> }
> 
> @@ -279,13 +397,13 @@ static void bfqg_stats_reset(struct bfqg_stats *stats)
> 	blkg_rwstat_reset(&stats->merged);
> 	blkg_rwstat_reset(&stats->service_time);
> 	blkg_rwstat_reset(&stats->wait_time);
> -	blkg_stat_reset(&stats->time);
> -	blkg_stat_reset(&stats->avg_queue_size_sum);
> -	blkg_stat_reset(&stats->avg_queue_size_samples);
> -	blkg_stat_reset(&stats->dequeue);
> -	blkg_stat_reset(&stats->group_wait_time);
> -	blkg_stat_reset(&stats->idle_time);
> -	blkg_stat_reset(&stats->empty_time);
> +	bfq_stat_reset(&stats->time);
> +	bfq_stat_reset(&stats->avg_queue_size_sum);
> +	bfq_stat_reset(&stats->avg_queue_size_samples);
> +	bfq_stat_reset(&stats->dequeue);
> +	bfq_stat_reset(&stats->group_wait_time);
> +	bfq_stat_reset(&stats->idle_time);
> +	bfq_stat_reset(&stats->empty_time);
> #endif
> }
> 
> @@ -300,14 +418,14 @@ static void bfqg_stats_add_aux(struct bfqg_stats *to, struct bfqg_stats *from)
> 	blkg_rwstat_add_aux(&to->merged, &from->merged);
> 	blkg_rwstat_add_aux(&to->service_time, &from->service_time);
> 	blkg_rwstat_add_aux(&to->wait_time, &from->wait_time);
> -	blkg_stat_add_aux(&from->time, &from->time);
> -	blkg_stat_add_aux(&to->avg_queue_size_sum, &from->avg_queue_size_sum);
> -	blkg_stat_add_aux(&to->avg_queue_size_samples,
> +	bfq_stat_add_aux(&from->time, &from->time);
> +	bfq_stat_add_aux(&to->avg_queue_size_sum, &from->avg_queue_size_sum);
> +	bfq_stat_add_aux(&to->avg_queue_size_samples,
> 			  &from->avg_queue_size_samples);
> -	blkg_stat_add_aux(&to->dequeue, &from->dequeue);
> -	blkg_stat_add_aux(&to->group_wait_time, &from->group_wait_time);
> -	blkg_stat_add_aux(&to->idle_time, &from->idle_time);
> -	blkg_stat_add_aux(&to->empty_time, &from->empty_time);
> +	bfq_stat_add_aux(&to->dequeue, &from->dequeue);
> +	bfq_stat_add_aux(&to->group_wait_time, &from->group_wait_time);
> +	bfq_stat_add_aux(&to->idle_time, &from->idle_time);
> +	bfq_stat_add_aux(&to->empty_time, &from->empty_time);
> #endif
> }
> 
> @@ -360,13 +478,13 @@ static void bfqg_stats_exit(struct bfqg_stats *stats)
> 	blkg_rwstat_exit(&stats->service_time);
> 	blkg_rwstat_exit(&stats->wait_time);
> 	blkg_rwstat_exit(&stats->queued);
> -	blkg_stat_exit(&stats->time);
> -	blkg_stat_exit(&stats->avg_queue_size_sum);
> -	blkg_stat_exit(&stats->avg_queue_size_samples);
> -	blkg_stat_exit(&stats->dequeue);
> -	blkg_stat_exit(&stats->group_wait_time);
> -	blkg_stat_exit(&stats->idle_time);
> -	blkg_stat_exit(&stats->empty_time);
> +	bfq_stat_exit(&stats->time);
> +	bfq_stat_exit(&stats->avg_queue_size_sum);
> +	bfq_stat_exit(&stats->avg_queue_size_samples);
> +	bfq_stat_exit(&stats->dequeue);
> +	bfq_stat_exit(&stats->group_wait_time);
> +	bfq_stat_exit(&stats->idle_time);
> +	bfq_stat_exit(&stats->empty_time);
> #endif
> }
> 
> @@ -377,13 +495,13 @@ static int bfqg_stats_init(struct bfqg_stats *stats, gfp_t gfp)
> 	    blkg_rwstat_init(&stats->service_time, gfp) ||
> 	    blkg_rwstat_init(&stats->wait_time, gfp) ||
> 	    blkg_rwstat_init(&stats->queued, gfp) ||
> -	    blkg_stat_init(&stats->time, gfp) ||
> -	    blkg_stat_init(&stats->avg_queue_size_sum, gfp) ||
> -	    blkg_stat_init(&stats->avg_queue_size_samples, gfp) ||
> -	    blkg_stat_init(&stats->dequeue, gfp) ||
> -	    blkg_stat_init(&stats->group_wait_time, gfp) ||
> -	    blkg_stat_init(&stats->idle_time, gfp) ||
> -	    blkg_stat_init(&stats->empty_time, gfp)) {
> +	    bfq_stat_init(&stats->time, gfp) ||
> +	    bfq_stat_init(&stats->avg_queue_size_sum, gfp) ||
> +	    bfq_stat_init(&stats->avg_queue_size_samples, gfp) ||
> +	    bfq_stat_init(&stats->dequeue, gfp) ||
> +	    bfq_stat_init(&stats->group_wait_time, gfp) ||
> +	    bfq_stat_init(&stats->idle_time, gfp) ||
> +	    bfq_stat_init(&stats->empty_time, gfp)) {
> 		bfqg_stats_exit(stats);
> 		return -ENOMEM;
> 	}
> @@ -927,7 +1045,7 @@ static int bfqg_print_rwstat(struct seq_file *sf, void *v)
> static u64 bfqg_prfill_stat_recursive(struct seq_file *sf,
> 				      struct blkg_policy_data *pd, int off)
> {
> -	u64 sum = blkg_stat_recursive_sum(pd_to_blkg(pd),
> +	u64 sum = bfq_stat_recursive_sum(pd_to_blkg(pd),
> 					  &blkcg_policy_bfq, off);
> 	return __blkg_prfill_u64(sf, pd, sum);
> }
> @@ -996,11 +1114,11 @@ static u64 bfqg_prfill_avg_queue_size(struct seq_file *sf,
> 				      struct blkg_policy_data *pd, int off)
> {
> 	struct bfq_group *bfqg = pd_to_bfqg(pd);
> -	u64 samples = blkg_stat_read(&bfqg->stats.avg_queue_size_samples);
> +	u64 samples = bfq_stat_read(&bfqg->stats.avg_queue_size_samples);
> 	u64 v = 0;
> 
> 	if (samples) {
> -		v = blkg_stat_read(&bfqg->stats.avg_queue_size_sum);
> +		v = bfq_stat_read(&bfqg->stats.avg_queue_size_sum);
> 		v = div64_u64(v, samples);
> 	}
> 	__blkg_prfill_u64(sf, pd, v);
> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
> index c2faa77824f8..aef4fa0046b8 100644
> --- a/block/bfq-iosched.h
> +++ b/block/bfq-iosched.h
> @@ -777,6 +777,11 @@ enum bfqq_expiration {
> 	BFQQE_PREEMPTED		/* preemption in progress */
> };
> 
> +struct bfq_stat {
> +	struct percpu_counter		cpu_cnt;
> +	atomic64_t			aux_cnt;
> +};
> +
> struct bfqg_stats {
> #if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
> 	/* number of ios merged */
> @@ -788,19 +793,19 @@ struct bfqg_stats {
> 	/* number of IOs queued up */
> 	struct blkg_rwstat		queued;
> 	/* total disk time and nr sectors dispatched by this group */
> -	struct blkg_stat		time;
> +	struct bfq_stat		time;
> 	/* sum of number of ios queued across all samples */
> -	struct blkg_stat		avg_queue_size_sum;
> +	struct bfq_stat		avg_queue_size_sum;
> 	/* count of samples taken for average */
> -	struct blkg_stat		avg_queue_size_samples;
> +	struct bfq_stat		avg_queue_size_samples;
> 	/* how many times this group has been removed from service tree */
> -	struct blkg_stat		dequeue;
> +	struct bfq_stat		dequeue;
> 	/* total time spent waiting for it to be assigned a timeslice. */
> -	struct blkg_stat		group_wait_time;
> +	struct bfq_stat		group_wait_time;
> 	/* time spent idling for this blkcg_gq */
> -	struct blkg_stat		idle_time;
> +	struct bfq_stat		idle_time;
> 	/* total time with empty current active q with other requests queued */
> -	struct blkg_stat		empty_time;
> +	struct bfq_stat		empty_time;
> 	/* fields after this shouldn't be cleared on stat reset */
> 	u64				start_group_wait_time;
> 	u64				start_idle_time;
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index aad0f5d9a2ea..1aa5c64675d6 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -577,20 +577,6 @@ u64 __blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
> }
> EXPORT_SYMBOL_GPL(__blkg_prfill_rwstat);
> 
> -/**
> - * blkg_prfill_stat - prfill callback for blkg_stat
> - * @sf: seq_file to print to
> - * @pd: policy private data of interest
> - * @off: offset to the blkg_stat in @pd
> - *
> - * prfill callback for printing a blkg_stat.
> - */
> -u64 blkg_prfill_stat(struct seq_file *sf, struct blkg_policy_data *pd, int off)
> -{
> -	return __blkg_prfill_u64(sf, pd, blkg_stat_read((void *)pd + off));
> -}
> -EXPORT_SYMBOL_GPL(blkg_prfill_stat);
> -
> /**
>  * blkg_prfill_rwstat - prfill callback for blkg_rwstat
>  * @sf: seq_file to print to
> @@ -692,48 +678,6 @@ int blkg_print_stat_ios_recursive(struct seq_file *sf, void *v)
> }
> EXPORT_SYMBOL_GPL(blkg_print_stat_ios_recursive);
> 
> -/**
> - * blkg_stat_recursive_sum - collect hierarchical blkg_stat
> - * @blkg: blkg of interest
> - * @pol: blkcg_policy which contains the blkg_stat
> - * @off: offset to the blkg_stat in blkg_policy_data or @blkg
> - *
> - * Collect the blkg_stat specified by @blkg, @pol and @off and all its
> - * online descendants and their aux counts.  The caller must be holding the
> - * queue lock for online tests.
> - *
> - * If @pol is NULL, blkg_stat is at @off bytes into @blkg; otherwise, it is
> - * at @off bytes into @blkg's blkg_policy_data of the policy.
> - */
> -u64 blkg_stat_recursive_sum(struct blkcg_gq *blkg,
> -			    struct blkcg_policy *pol, int off)
> -{
> -	struct blkcg_gq *pos_blkg;
> -	struct cgroup_subsys_state *pos_css;
> -	u64 sum = 0;
> -
> -	lockdep_assert_held(&blkg->q->queue_lock);
> -
> -	rcu_read_lock();
> -	blkg_for_each_descendant_pre(pos_blkg, pos_css, blkg) {
> -		struct blkg_stat *stat;
> -
> -		if (!pos_blkg->online)
> -			continue;
> -
> -		if (pol)
> -			stat = (void *)blkg_to_pd(pos_blkg, pol) + off;
> -		else
> -			stat = (void *)blkg + off;
> -
> -		sum += blkg_stat_read(stat) + atomic64_read(&stat->aux_cnt);
> -	}
> -	rcu_read_unlock();
> -
> -	return sum;
> -}
> -EXPORT_SYMBOL_GPL(blkg_stat_recursive_sum);
> -
> /**
>  * blkg_rwstat_recursive_sum - collect hierarchical blkg_rwstat
>  * @blkg: blkg of interest
> diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
> index e4a81767e111..33f23a858438 100644
> --- a/include/linux/blk-cgroup.h
> +++ b/include/linux/blk-cgroup.h
> @@ -65,11 +65,6 @@ struct blkcg {
>  * blkg_[rw]stat->aux_cnt is excluded for local stats but included for
>  * recursive.  Used to carry stats of dead children.
>  */
> -struct blkg_stat {
> -	struct percpu_counter		cpu_cnt;
> -	atomic64_t			aux_cnt;
> -};
> -
> struct blkg_rwstat {
> 	struct percpu_counter		cpu_cnt[BLKG_RWSTAT_NR];
> 	atomic64_t			aux_cnt[BLKG_RWSTAT_NR];
> @@ -217,7 +212,6 @@ void blkcg_print_blkgs(struct seq_file *sf, struct blkcg *blkcg,
> u64 __blkg_prfill_u64(struct seq_file *sf, struct blkg_policy_data *pd, u64 v);
> u64 __blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
> 			 const struct blkg_rwstat_sample *rwstat);
> -u64 blkg_prfill_stat(struct seq_file *sf, struct blkg_policy_data *pd, int off);
> u64 blkg_prfill_rwstat(struct seq_file *sf, struct blkg_policy_data *pd,
> 		       int off);
> int blkg_print_stat_bytes(struct seq_file *sf, void *v);
> @@ -225,8 +219,6 @@ int blkg_print_stat_ios(struct seq_file *sf, void *v);
> int blkg_print_stat_bytes_recursive(struct seq_file *sf, void *v);
> int blkg_print_stat_ios_recursive(struct seq_file *sf, void *v);
> 
> -u64 blkg_stat_recursive_sum(struct blkcg_gq *blkg,
> -			    struct blkcg_policy *pol, int off);
> void blkg_rwstat_recursive_sum(struct blkcg_gq *blkg, struct blkcg_policy *pol,
> 		int off, struct blkg_rwstat_sample *sum);
> 
> @@ -579,69 +571,6 @@ static inline void blkg_put(struct blkcg_gq *blkg)
> 		if (((d_blkg) = __blkg_lookup(css_to_blkcg(pos_css),	\
> 					      (p_blkg)->q, false)))
> 
> -static inline int blkg_stat_init(struct blkg_stat *stat, gfp_t gfp)
> -{
> -	int ret;
> -
> -	ret = percpu_counter_init(&stat->cpu_cnt, 0, gfp);
> -	if (ret)
> -		return ret;
> -
> -	atomic64_set(&stat->aux_cnt, 0);
> -	return 0;
> -}
> -
> -static inline void blkg_stat_exit(struct blkg_stat *stat)
> -{
> -	percpu_counter_destroy(&stat->cpu_cnt);
> -}
> -
> -/**
> - * blkg_stat_add - add a value to a blkg_stat
> - * @stat: target blkg_stat
> - * @val: value to add
> - *
> - * Add @val to @stat.  The caller must ensure that IRQ on the same CPU
> - * don't re-enter this function for the same counter.
> - */
> -static inline void blkg_stat_add(struct blkg_stat *stat, uint64_t val)
> -{
> -	percpu_counter_add_batch(&stat->cpu_cnt, val, BLKG_STAT_CPU_BATCH);
> -}
> -
> -/**
> - * blkg_stat_read - read the current value of a blkg_stat
> - * @stat: blkg_stat to read
> - */
> -static inline uint64_t blkg_stat_read(struct blkg_stat *stat)
> -{
> -	return percpu_counter_sum_positive(&stat->cpu_cnt);
> -}
> -
> -/**
> - * blkg_stat_reset - reset a blkg_stat
> - * @stat: blkg_stat to reset
> - */
> -static inline void blkg_stat_reset(struct blkg_stat *stat)
> -{
> -	percpu_counter_set(&stat->cpu_cnt, 0);
> -	atomic64_set(&stat->aux_cnt, 0);
> -}
> -
> -/**
> - * blkg_stat_add_aux - add a blkg_stat into another's aux count
> - * @to: the destination blkg_stat
> - * @from: the source
> - *
> - * Add @from's count including the aux one to @to's aux count.
> - */
> -static inline void blkg_stat_add_aux(struct blkg_stat *to,
> -				     struct blkg_stat *from)
> -{
> -	atomic64_add(blkg_stat_read(from) + atomic64_read(&from->aux_cnt),
> -		     &to->aux_cnt);
> -}
> -
> static inline int blkg_rwstat_init(struct blkg_rwstat *rwstat, gfp_t gfp)
> {
> 	int i, ret;
> --
> 2.20.1
> 


[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread
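
The moved bfq_stat keeps the familiar two-part split: hot-path
increments go through a batched percpu counter, while aux_cnt
accumulates the totals of groups that have been torn down. A one-line
sketch of a full read; bfq_stat_total() is a hypothetical name for the
expression that already appears in bfq_stat_add_aux() and the recursive
sum above:

/* Illustration only: live per-cpu sum plus carried-over totals. */
static u64 bfq_stat_total(struct bfq_stat *stat)
{
	return bfq_stat_read(stat) + atomic64_read(&stat->aux_cnt);
}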

* Re: [PATCH 6/6] block: rename CONFIG_DEBUG_BLK_CGROUP to CONFIG_BFQ_CGROUP_DEBUG
  2019-06-06 10:26 ` [PATCH 6/6] block: rename CONFIG_DEBUG_BLK_CGROUP to CONFIG_BFQ_CGROUP_DEBUG Christoph Hellwig
@ 2019-06-07  6:07   ` Paolo Valente
  0 siblings, 0 replies; 14+ messages in thread
From: Paolo Valente @ 2019-06-07  6:07 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Jens Axboe, linux-block, cgroups, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 14073 bytes --]



> On 6 Jun 2019, at 12:26, Christoph Hellwig <hch@lst.de> wrote:
> 
> This option is entirely bfq specific, so give it an appropriate name.
> 
> Also make it depend on CONFIG_BFQ_GROUP_IOSCHED in Kconfig, as all
> the functionality it guards already depends on it anyway.
> 

Acked-by: Paolo Valente <paolo.valente@linaro.org>

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> Documentation/block/bfq-iosched.txt          | 12 ++++-----
> Documentation/cgroup-v1/blkio-controller.txt | 12 ++++-----
> block/Kconfig.iosched                        |  7 +++++
> block/bfq-cgroup.c                           | 27 ++++++++++----------
> block/bfq-iosched.c                          |  8 +++---
> block/bfq-iosched.h                          |  4 +--
> init/Kconfig                                 |  8 ------
> 7 files changed, 38 insertions(+), 40 deletions(-)
> 
> diff --git a/Documentation/block/bfq-iosched.txt b/Documentation/block/bfq-iosched.txt
> index 1a0f2ac02eb6..f02163fabf80 100644
> --- a/Documentation/block/bfq-iosched.txt
> +++ b/Documentation/block/bfq-iosched.txt
> @@ -38,13 +38,13 @@ stack). To give an idea of the limits with BFQ, on slow or average
> CPUs, here are, first, the limits of BFQ for three different CPUs, on,
> respectively, an average laptop, an old desktop, and a cheap embedded
> system, in case full hierarchical support is enabled (i.e.,
> -CONFIG_BFQ_GROUP_IOSCHED is set), but CONFIG_DEBUG_BLK_CGROUP is not
> +CONFIG_BFQ_GROUP_IOSCHED is set), but CONFIG_BFQ_CGROUP_DEBUG is not
> set (Section 4-2):
> - Intel i7-4850HQ: 400 KIOPS
> - AMD A8-3850: 250 KIOPS
> - ARM CortexTM-A53 Octa-core: 80 KIOPS
> 
> -If CONFIG_DEBUG_BLK_CGROUP is set (and of course full hierarchical
> +If CONFIG_BFQ_CGROUP_DEBUG is set (and of course full hierarchical
> support is enabled), then the sustainable throughput with BFQ
> decreases, because all blkio.bfq* statistics are created and updated
> (Section 4-2). For BFQ, this leads to the following maximum
> @@ -537,19 +537,19 @@ or io.bfq.weight.
> 
> As for cgroups-v1 (blkio controller), the exact set of stat files
> created, and kept up-to-date by bfq, depends on whether
> -CONFIG_DEBUG_BLK_CGROUP is set. If it is set, then bfq creates all
> +CONFIG_BFQ_CGROUP_DEBUG is set. If it is set, then bfq creates all
> the stat files documented in
> Documentation/cgroup-v1/blkio-controller.txt. If, instead,
> -CONFIG_DEBUG_BLK_CGROUP is not set, then bfq creates only the files
> +CONFIG_BFQ_CGROUP_DEBUG is not set, then bfq creates only the files
> blkio.bfq.io_service_bytes
> blkio.bfq.io_service_bytes_recursive
> blkio.bfq.io_serviced
> blkio.bfq.io_serviced_recursive
> 
> -The value of CONFIG_DEBUG_BLK_CGROUP greatly influences the maximum
> +The value of CONFIG_BFQ_CGROUP_DEBUG greatly influences the maximum
> throughput sustainable with bfq, because updating the blkio.bfq.*
> stats is rather costly, especially for some of the stats enabled by
> -CONFIG_DEBUG_BLK_CGROUP.
> +CONFIG_BFQ_CGROUP_DEBUG.
> 
> Parameters to set
> -----------------
> diff --git a/Documentation/cgroup-v1/blkio-controller.txt b/Documentation/cgroup-v1/blkio-controller.txt
> index 673dc34d3f78..47cf84102f88 100644
> --- a/Documentation/cgroup-v1/blkio-controller.txt
> +++ b/Documentation/cgroup-v1/blkio-controller.txt
> @@ -126,7 +126,7 @@ Various user visible config options
> CONFIG_BLK_CGROUP
> 	- Block IO controller.
> 
> -CONFIG_DEBUG_BLK_CGROUP
> +CONFIG_BFQ_CGROUP_DEBUG
> 	- Debug help. Right now some additional stats file show up in cgroup
> 	  if this option is enabled.
> 
> @@ -246,13 +246,13 @@ Proportional weight policy files
> 	  write, sync or async.
> 
> - blkio.avg_queue_size
> -	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
> +	- Debugging aid only enabled if CONFIG_BFQ_CGROUP_DEBUG=y.
> 	  The average queue size for this cgroup over the entire time of this
> 	  cgroup's existence. Queue size samples are taken each time one of the
> 	  queues of this cgroup gets a timeslice.
> 
> - blkio.group_wait_time
> -	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
> +	- Debugging aid only enabled if CONFIG_BFQ_CGROUP_DEBUG=y.
> 	  This is the amount of time the cgroup had to wait since it became busy
> 	  (i.e., went from 0 to 1 request queued) to get a timeslice for one of
> 	  its queues. This is different from the io_wait_time which is the
> @@ -263,7 +263,7 @@ Proportional weight policy files
> 	  got a timeslice and will not include the current delta.
> 
> - blkio.empty_time
> -	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
> +	- Debugging aid only enabled if CONFIG_BFQ_CGROUP_DEBUG=y.
> 	  This is the amount of time a cgroup spends without any pending
> 	  requests when not being served, i.e., it does not include any time
> 	  spent idling for one of the queues of the cgroup. This is in
> @@ -272,7 +272,7 @@ Proportional weight policy files
> 	  time it had a pending request and will not include the current delta.
> 
> - blkio.idle_time
> -	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
> +	- Debugging aid only enabled if CONFIG_BFQ_CGROUP_DEBUG=y.
> 	  This is the amount of time spent by the IO scheduler idling for a
> 	  given cgroup in anticipation of a better request than the existing ones
> 	  from other queues/cgroups. This is in nanoseconds. If this is read
> @@ -281,7 +281,7 @@ Proportional weight policy files
> 	  the current delta.
> 
> - blkio.dequeue
> -	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
> +	- Debugging aid only enabled if CONFIG_BFQ_CGROUP_DEBUG=y. This
> 	  gives the statistics about how many a times a group was dequeued
> 	  from service tree of the device. First two fields specify the major
> 	  and minor number of the device and third field specifies the number
> diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
> index 4626b88b2d5a..7a6b2f29a582 100644
> --- a/block/Kconfig.iosched
> +++ b/block/Kconfig.iosched
> @@ -36,6 +36,13 @@ config BFQ_GROUP_IOSCHED
>        Enable hierarchical scheduling in BFQ, using the blkio
>        (cgroups-v1) or io (cgroups-v2) controller.
> 
> +config BFQ_CGROUP_DEBUG
> +	bool "BFQ IO controller debugging"
> +	depends on BFQ_GROUP_IOSCHED
> +	---help---
> +	Enable some debugging help. Currently it exports additional stat
> +	files in a cgroup which can be useful for debugging.
> +
> endmenu
> 
> endif
> diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
> index d84302445e30..0f6cd688924f 100644
> --- a/block/bfq-cgroup.c
> +++ b/block/bfq-cgroup.c
> @@ -15,8 +15,7 @@
> 
> #include "bfq-iosched.h"
> 
> -#if defined(CONFIG_BFQ_GROUP_IOSCHED) &&  defined(CONFIG_DEBUG_BLK_CGROUP)
> -
> +#ifdef CONFIG_BFQ_CGROUP_DEBUG
> static int bfq_stat_init(struct bfq_stat *stat, gfp_t gfp)
> {
> 	int ret;
> @@ -253,7 +252,7 @@ void bfqg_stats_update_completion(struct bfq_group *bfqg, u64 start_time_ns,
> 				io_start_time_ns - start_time_ns);
> }
> 
> -#else /* CONFIG_BFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */
> +#else /* CONFIG_BFQ_CGROUP_DEBUG */
> 
> void bfqg_stats_update_io_add(struct bfq_group *bfqg, struct bfq_queue *bfqq,
> 			      unsigned int op) { }
> @@ -267,7 +266,7 @@ void bfqg_stats_update_idle_time(struct bfq_group *bfqg) { }
> void bfqg_stats_set_start_idle_time(struct bfq_group *bfqg) { }
> void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg) { }
> 
> -#endif /* CONFIG_BFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */
> +#endif /* CONFIG_BFQ_CGROUP_DEBUG */
> 
> #ifdef CONFIG_BFQ_GROUP_IOSCHED
> 
> @@ -351,7 +350,7 @@ void bfqg_and_blkg_put(struct bfq_group *bfqg)
> /* @stats = 0 */
> static void bfqg_stats_reset(struct bfqg_stats *stats)
> {
> -#ifdef CONFIG_DEBUG_BLK_CGROUP
> +#ifdef CONFIG_BFQ_CGROUP_DEBUG
> 	/* queued stats shouldn't be cleared */
> 	blkg_rwstat_reset(&stats->merged);
> 	blkg_rwstat_reset(&stats->service_time);
> @@ -372,7 +371,7 @@ static void bfqg_stats_add_aux(struct bfqg_stats *to, struct bfqg_stats *from)
> 	if (!to || !from)
> 		return;
> 
> -#ifdef CONFIG_DEBUG_BLK_CGROUP
> +#ifdef CONFIG_BFQ_CGROUP_DEBUG
> 	/* queued stats shouldn't be cleared */
> 	blkg_rwstat_add_aux(&to->merged, &from->merged);
> 	blkg_rwstat_add_aux(&to->service_time, &from->service_time);
> @@ -432,7 +431,7 @@ void bfq_init_entity(struct bfq_entity *entity, struct bfq_group *bfqg)
> 
> static void bfqg_stats_exit(struct bfqg_stats *stats)
> {
> -#ifdef CONFIG_DEBUG_BLK_CGROUP
> +#ifdef CONFIG_BFQ_CGROUP_DEBUG
> 	blkg_rwstat_exit(&stats->merged);
> 	blkg_rwstat_exit(&stats->service_time);
> 	blkg_rwstat_exit(&stats->wait_time);
> @@ -449,7 +448,7 @@ static void bfqg_stats_exit(struct bfqg_stats *stats)
> 
> static int bfqg_stats_init(struct bfqg_stats *stats, gfp_t gfp)
> {
> -#ifdef CONFIG_DEBUG_BLK_CGROUP
> +#ifdef CONFIG_BFQ_CGROUP_DEBUG
> 	if (blkg_rwstat_init(&stats->merged, gfp) ||
> 	    blkg_rwstat_init(&stats->service_time, gfp) ||
> 	    blkg_rwstat_init(&stats->wait_time, gfp) ||
> @@ -986,7 +985,7 @@ static ssize_t bfq_io_set_weight(struct kernfs_open_file *of,
> 	return ret ?: nbytes;
> }
> 
> -#ifdef CONFIG_DEBUG_BLK_CGROUP
> +#ifdef CONFIG_BFQ_CGROUP_DEBUG
> static int bfqg_print_stat(struct seq_file *sf, void *v)
> {
> 	blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), blkg_prfill_stat,
> @@ -1109,7 +1108,7 @@ static int bfqg_print_avg_queue_size(struct seq_file *sf, void *v)
> 			  0, false);
> 	return 0;
> }
> -#endif /* CONFIG_DEBUG_BLK_CGROUP */
> +#endif /* CONFIG_BFQ_CGROUP_DEBUG */
> 
> struct bfq_group *bfq_create_group_hierarchy(struct bfq_data *bfqd, int node)
> {
> @@ -1157,7 +1156,7 @@ struct cftype bfq_blkcg_legacy_files[] = {
> 		.private = (unsigned long)&blkcg_policy_bfq,
> 		.seq_show = blkg_print_stat_ios,
> 	},
> -#ifdef CONFIG_DEBUG_BLK_CGROUP
> +#ifdef CONFIG_BFQ_CGROUP_DEBUG
> 	{
> 		.name = "bfq.time",
> 		.private = offsetof(struct bfq_group, stats.time),
> @@ -1187,7 +1186,7 @@ struct cftype bfq_blkcg_legacy_files[] = {
> 		.private = offsetof(struct bfq_group, stats.queued),
> 		.seq_show = bfqg_print_rwstat,
> 	},
> -#endif /* CONFIG_DEBUG_BLK_CGROUP */
> +#endif /* CONFIG_BFQ_CGROUP_DEBUG */
> 
> 	/* the same statistics which cover the bfqg and its descendants */
> 	{
> @@ -1200,7 +1199,7 @@ struct cftype bfq_blkcg_legacy_files[] = {
> 		.private = (unsigned long)&blkcg_policy_bfq,
> 		.seq_show = blkg_print_stat_ios_recursive,
> 	},
> -#ifdef CONFIG_DEBUG_BLK_CGROUP
> +#ifdef CONFIG_BFQ_CGROUP_DEBUG
> 	{
> 		.name = "bfq.time_recursive",
> 		.private = offsetof(struct bfq_group, stats.time),
> @@ -1254,7 +1253,7 @@ struct cftype bfq_blkcg_legacy_files[] = {
> 		.private = offsetof(struct bfq_group, stats.dequeue),
> 		.seq_show = bfqg_print_stat,
> 	},
> -#endif	/* CONFIG_DEBUG_BLK_CGROUP */
> +#endif	/* CONFIG_BFQ_CGROUP_DEBUG */
> 	{ }	/* terminate */
> };
> 
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index f8d430f88d25..e9a587707d67 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -4403,7 +4403,7 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
> 	return rq;
> }
> 
> -#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
> +#ifdef CONFIG_BFQ_CGROUP_DEBUG
> static void bfq_update_dispatch_stats(struct request_queue *q,
> 				      struct request *rq,
> 				      struct bfq_queue *in_serv_queue,
> @@ -4453,7 +4453,7 @@ static inline void bfq_update_dispatch_stats(struct request_queue *q,
> 					     struct request *rq,
> 					     struct bfq_queue *in_serv_queue,
> 					     bool idle_timer_disabled) {}
> -#endif
> +#endif /* CONFIG_BFQ_CGROUP_DEBUG */
> 
> static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
> {
> @@ -5007,7 +5007,7 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
> 	return idle_timer_disabled;
> }
> 
> -#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
> +#ifdef CONFIG_BFQ_CGROUP_DEBUG
> static void bfq_update_insert_stats(struct request_queue *q,
> 				    struct bfq_queue *bfqq,
> 				    bool idle_timer_disabled,
> @@ -5037,7 +5037,7 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
> 					   struct bfq_queue *bfqq,
> 					   bool idle_timer_disabled,
> 					   unsigned int cmd_flags) {}
> -#endif
> +#endif /* CONFIG_BFQ_CGROUP_DEBUG */
> 
> static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
> 			       bool at_head)
> diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
> index aef4fa0046b8..584d3c9ed8ba 100644
> --- a/block/bfq-iosched.h
> +++ b/block/bfq-iosched.h
> @@ -783,7 +783,7 @@ struct bfq_stat {
> };
> 
> struct bfqg_stats {
> -#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
> +#ifdef CONFIG_BFQ_CGROUP_DEBUG
> 	/* number of ios merged */
> 	struct blkg_rwstat		merged;
> 	/* total time spent on device in ns, may not be accurate w/ queueing */
> @@ -811,7 +811,7 @@ struct bfqg_stats {
> 	u64				start_idle_time;
> 	u64				start_empty_time;
> 	uint16_t			flags;
> -#endif	/* CONFIG_BFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */
> +#endif /* CONFIG_BFQ_CGROUP_DEBUG */
> };
> 
> #ifdef CONFIG_BFQ_GROUP_IOSCHED
> diff --git a/init/Kconfig b/init/Kconfig
> index 36894c9fb420..df9d36ba80e3 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -800,14 +800,6 @@ config BLK_CGROUP
> 
> 	See Documentation/cgroup-v1/blkio-controller.txt for more information.
> 
> -config DEBUG_BLK_CGROUP
> -	bool "IO controller debugging"
> -	depends on BLK_CGROUP
> -	default n
> -	---help---
> -	Enable some debugging help. Currently it exports additional stat
> -	files in a cgroup which can be useful for debugging.
> -
> config CGROUP_WRITEBACK
> 	bool
> 	depends on MEMCG && BLK_CGROUP
> --
> 2.20.1
> 


[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread
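
With a plain bool symbol like this, new code can also test the option
without preprocessor blocks; a minimal sketch using the kernel's
IS_ENABLED() macro (just the idiom, not something this series does):

static bool bfq_debug_stats_enabled(void)
{
	/* compile-time constant; the branch folds away when disabled */
	return IS_ENABLED(CONFIG_BFQ_CGROUP_DEBUG);
}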

* Re: blk-cgroup cleanups
  2019-06-06 10:26 blk-cgroup cleanups Christoph Hellwig
                   ` (5 preceding siblings ...)
  2019-06-06 10:26 ` [PATCH 6/6] block: rename CONFIG_DEBUG_BLK_CGROUP to CONFIG_BFQ_CGROUP_DEBUG Christoph Hellwig
@ 2019-06-10 18:23 ` Tejun Heo
  2019-06-20  9:42 ` Christoph Hellwig
  2019-06-20 16:32 ` Jens Axboe
  8 siblings, 0 replies; 14+ messages in thread
From: Tejun Heo @ 2019-06-10 18:23 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jens Axboe, Paolo Valente, linux-block, cgroups, linux-kernel

On Thu, Jun 06, 2019 at 12:26:18PM +0200, Christoph Hellwig wrote:
> Hi all,
> 
> below are a couple of cleanups I came up with when trying to understand
> the blk-cgroup code.

For the whole patchset,

 Acked-by: Tejun Heo <tj@kernel.org>

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: blk-cgroup cleanups
  2019-06-06 10:26 blk-cgroup cleanups Christoph Hellwig
                   ` (6 preceding siblings ...)
  2019-06-10 18:23 ` blk-cgroup cleanups Tejun Heo
@ 2019-06-20  9:42 ` Christoph Hellwig
  2019-06-20 16:32 ` Jens Axboe
  8 siblings, 0 replies; 14+ messages in thread
From: Christoph Hellwig @ 2019-06-20  9:42 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Paolo Valente, linux-block, cgroups, linux-kernel

ping?

On Thu, Jun 06, 2019 at 12:26:18PM +0200, Christoph Hellwig wrote:
> Hi all,
> 
> below are a couple of cleanups I came up with when trying to understand
> the blk-cgroup code.
---end quoted text---

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: blk-cgroup cleanups
  2019-06-06 10:26 blk-cgroup cleanups Christoph Hellwig
                   ` (7 preceding siblings ...)
  2019-06-20  9:42 ` Christoph Hellwig
@ 2019-06-20 16:32 ` Jens Axboe
  8 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2019-06-20 16:32 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Paolo Valente, linux-block, cgroups, linux-kernel

On 6/6/19 4:26 AM, Christoph Hellwig wrote:
> Hi all,
> 
> below are a couple of cleanups I came up with when trying to understand
> the blk-cgroup code.

Applied, thanks.

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2019-06-20 16:32 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-06-06 10:26 blk-cgroup cleanups Christoph Hellwig
2019-06-06 10:26 ` [PATCH 1/6] blk-cgroup: factor out a helper to read rwstat counter Christoph Hellwig
2019-06-06 15:03   ` Chaitanya Kulkarni
2019-06-06 10:26 ` [PATCH 2/6] blk-cgroup: pass blkg_rwstat structures by reference Christoph Hellwig
2019-06-06 10:26 ` [PATCH 3/6] blk-cgroup: introduce a new struct blkg_rwstat_sample Christoph Hellwig
2019-06-06 10:26 ` [PATCH 4/6] blk-cgroup: move struct blkg_stat to bfq Christoph Hellwig
2019-06-07  6:07   ` Paolo Valente
2019-06-06 10:26 ` [PATCH 5/6] bfq-iosched: move bfq_stat_recursive_sum into the only caller Christoph Hellwig
2019-06-07  6:07   ` Paolo Valente
2019-06-06 10:26 ` [PATCH 6/6] block: rename CONFIG_DEBUG_BLK_CGROUP to CONFIG_BFQ_CGROUP_DEBUG Christoph Hellwig
2019-06-07  6:07   ` Paolo Valente
2019-06-10 18:23 ` blk-cgroup cleanups Tejun Heo
2019-06-20  9:42 ` Christoph Hellwig
2019-06-20 16:32 ` Jens Axboe

This is a public inbox; see the mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).