linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 00/10] sched/psi: some optimization and extension
@ 2022-08-24  8:18 Chengming Zhou
  2022-08-24  8:18 ` [PATCH v3 01/10] sched/psi: fix periodic aggregation shut off Chengming Zhou
                   ` (9 more replies)
  0 siblings, 10 replies; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24  8:18 UTC (permalink / raw)
  To: tj, hannes, mkoutny, surenb
  Cc: gregkh, corbet, mingo, peterz, songmuchun, cgroups, linux-doc,
	linux-kernel, Chengming Zhou

Hi all,

This patch series contains some optimizations and extensions for PSI.

patch 1/10 fixes the periodic aggregation shut-off problem introduced by the
earlier commit 4117cebf1a9f ("psi: Optimize task switch inside shared cgroups").

patches 2-4 are miscellaneous optimizations, so they are placed at the front
of this series.

patch 5/10 optimizes task switch inside shared cgroups when the in_memstall
status of the prev task and the next task differs.

patch 6/10 removes NR_ONCPU task accounting to free up 4 bytes in the first
cacheline, which are then used by the following patch 7/10 to introduce the
new PSI resource PSI_IRQ for tracking IRQ/SOFTIRQ pressure stall information.

patches 8-9 cache the parent psi_group in struct psi_group to speed up
the hot iteration path.

patch 10/10 introduces a per-cgroup interface "cgroup.pressure" to disable
or re-enable PSI at the cgroup level; the pressure files are hidden and
unhidden accordingly, per Tejun's suggestion[1], and it depends on his work[2].

[1] https://lore.kernel.org/all/YvqjhqJQi2J8RG3X@slm.duckdns.org/
[2] https://lore.kernel.org/all/20220820000550.367085-1-tj@kernel.org/

Performance test using mmtests/config-scheduler-perfpipe in /user.slice/user-0.slice/session-4.scope

                                 next                patched       patched/only-leaf
Min       Time        8.82 (   0.00%)        8.49 (   3.74%)        8.00 (   9.32%)
1st-qrtle Time        8.90 (   0.00%)        8.58 (   3.63%)        8.05 (   9.58%)
2nd-qrtle Time        8.94 (   0.00%)        8.61 (   3.65%)        8.09 (   9.50%)
3rd-qrtle Time        8.99 (   0.00%)        8.65 (   3.75%)        8.15 (   9.35%)
Max-1     Time        8.82 (   0.00%)        8.49 (   3.74%)        8.00 (   9.32%)
Max-5     Time        8.82 (   0.00%)        8.49 (   3.74%)        8.00 (   9.32%)
Max-10    Time        8.84 (   0.00%)        8.55 (   3.20%)        8.04 (   9.05%)
Max-90    Time        9.04 (   0.00%)        8.67 (   4.10%)        8.18 (   9.51%)
Max-95    Time        9.04 (   0.00%)        8.68 (   4.03%)        8.20 (   9.26%)
Max-99    Time        9.07 (   0.00%)        8.73 (   3.82%)        8.25 (   9.11%)
Max       Time        9.12 (   0.00%)        8.89 (   2.54%)        8.27 (   9.29%)
Amean     Time        8.95 (   0.00%)        8.62 *   3.67%*        8.11 *   9.43%*

Thanks!

Changes in v3:
 - Rebase on linux-next and reorder patches to put the misc optimization
   patches at the front of this series.
 - Drop patch "sched/psi: don't change task psi_flags when migrate CPU/group"
   since it caused a small performance regression and was only code
   refactoring.
 - Don't define PSI_IRQ and PSI_IRQ_FULL when !CONFIG_IRQ_TIME_ACCOUNTING,
   in which case they are not used.
 - Add patch 8/10 "sched/psi: consolidate cgroup_psi()" so that cgroup_psi()
   can handle all cgroups including the root cgroup, making patch 9/10 simpler.
 - Rename interface to "cgroup.pressure" and add some explanation
   per Michal's suggestion.
 - Hide and unhide pressure files when disabling/re-enabling cgroup PSI,
   which depends on Tejun's work.

Changes in v2:
 - Add Acked-by tags from Johannes Weiner. Thanks for review!
 - Fix periodic aggregation wakeup for common ancestors in
   psi_task_switch().
 - Add patch 7/10 from Johannes Weiner, which removes NR_ONCPU
   task accounting to free up 4 bytes in the first cacheline.
 - Remove the "psi_irq=" kernel cmdline parameter used in the last version.
 - Add per-cgroup interface "cgroup.psi" to disable/re-enable
   PSI stats accounting at the cgroup level.

Chengming Zhou (9):
  sched/psi: fix periodic aggregation shut off
  sched/psi: don't create cgroup PSI files when psi_disabled
  sched/psi: save percpu memory when !psi_cgroups_enabled
  sched/psi: move private helpers to sched/stats.h
  sched/psi: optimize task switch inside shared cgroups again
  sched/psi: add PSI_IRQ to track IRQ/SOFTIRQ pressure
  sched/psi: consolidate cgroup_psi()
  sched/psi: cache parent psi_group to speed up groups iterate
  sched/psi: per-cgroup PSI accounting disable/re-enable interface

Johannes Weiner (1):
  sched/psi: remove NR_ONCPU task accounting

 Documentation/admin-guide/cgroup-v2.rst |  23 +++
 include/linux/cgroup-defs.h             |   3 +
 include/linux/cgroup.h                  |   5 -
 include/linux/psi.h                     |  12 +-
 include/linux/psi_types.h               |  29 ++-
 kernel/cgroup/cgroup.c                  |  94 ++++++++-
 kernel/sched/core.c                     |   1 +
 kernel/sched/psi.c                      | 256 +++++++++++++++++-------
 kernel/sched/stats.h                    |   6 +
 9 files changed, 338 insertions(+), 91 deletions(-)

-- 
2.37.2



* [PATCH v3 01/10] sched/psi: fix periodic aggregation shut off
  2022-08-24  8:18 [PATCH v3 00/10] sched/psi: some optimization and extension Chengming Zhou
@ 2022-08-24  8:18 ` Chengming Zhou
  2022-08-24  8:18 ` [PATCH v3 02/10] sched/psi: don't create cgroup PSI files when psi_disabled Chengming Zhou
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24  8:18 UTC (permalink / raw)
  To: tj, hannes, mkoutny, surenb
  Cc: gregkh, corbet, mingo, peterz, songmuchun, cgroups, linux-doc,
	linux-kernel, Chengming Zhou

We don't want to wake periodic aggregation work back up if the
task change is the aggregation worker itself going to sleep, or
we'll ping-pong forever.

Previously, we would use psi_task_change() in psi_dequeue() when a task
was going to sleep, so this check was put in psi_task_change().

But commit 4117cebf1a9f ("psi: Optimize task switch inside shared cgroups")
deferred task sleep handling to psi_task_switch(), which doesn't go
through psi_task_change() anymore.

So this patch moves the check to psi_task_switch().

Fixes: 4117cebf1a9f ("psi: Optimize task switch inside shared cgroups")
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
 kernel/sched/psi.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index ecb4b4ff4ce0..39463dcc16bb 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -796,7 +796,6 @@ void psi_task_change(struct task_struct *task, int clear, int set)
 {
 	int cpu = task_cpu(task);
 	struct psi_group *group;
-	bool wake_clock = true;
 	void *iter = NULL;
 	u64 now;
 
@@ -806,19 +805,9 @@ void psi_task_change(struct task_struct *task, int clear, int set)
 	psi_flags_change(task, clear, set);
 
 	now = cpu_clock(cpu);
-	/*
-	 * Periodic aggregation shuts off if there is a period of no
-	 * task changes, so we wake it back up if necessary. However,
-	 * don't do this if the task change is the aggregation worker
-	 * itself going to sleep, or we'll ping-pong forever.
-	 */
-	if (unlikely((clear & TSK_RUNNING) &&
-		     (task->flags & PF_WQ_WORKER) &&
-		     wq_worker_last_func(task) == psi_avgs_work))
-		wake_clock = false;
 
 	while ((group = iterate_groups(task, &iter)))
-		psi_group_change(group, cpu, clear, set, now, wake_clock);
+		psi_group_change(group, cpu, clear, set, now, true);
 }
 
 void psi_task_switch(struct task_struct *prev, struct task_struct *next,
@@ -854,6 +843,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 
 	if (prev->pid) {
 		int clear = TSK_ONCPU, set = 0;
+		bool wake_clock = true;
 
 		/*
 		 * When we're going to sleep, psi_dequeue() lets us
@@ -867,13 +857,23 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 				clear |= TSK_MEMSTALL_RUNNING;
 			if (prev->in_iowait)
 				set |= TSK_IOWAIT;
+
+			/*
+			 * Periodic aggregation shuts off if there is a period of no
+			 * task changes, so we wake it back up if necessary. However,
+			 * don't do this if the task change is the aggregation worker
+			 * itself going to sleep, or we'll ping-pong forever.
+			 */
+			if (unlikely((prev->flags & PF_WQ_WORKER) &&
+				     wq_worker_last_func(prev) == psi_avgs_work))
+				wake_clock = false;
 		}
 
 		psi_flags_change(prev, clear, set);
 
 		iter = NULL;
 		while ((group = iterate_groups(prev, &iter)) && group != common)
-			psi_group_change(group, cpu, clear, set, now, true);
+			psi_group_change(group, cpu, clear, set, now, wake_clock);
 
 		/*
 		 * TSK_ONCPU is handled up to the common ancestor. If we're tasked
@@ -882,7 +882,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 		if (sleep) {
 			clear &= ~TSK_ONCPU;
 			for (; group; group = iterate_groups(prev, &iter))
-				psi_group_change(group, cpu, clear, set, now, true);
+				psi_group_change(group, cpu, clear, set, now, wake_clock);
 		}
 	}
 }
-- 
2.37.2



* [PATCH v3 02/10] sched/psi: don't create cgroup PSI files when psi_disabled
  2022-08-24  8:18 [PATCH v3 00/10] sched/psi: some optimization and extension Chengming Zhou
  2022-08-24  8:18 ` [PATCH v3 01/10] sched/psi: fix periodic aggregation shut off Chengming Zhou
@ 2022-08-24  8:18 ` Chengming Zhou
  2022-08-24  8:18 ` [PATCH v3 03/10] sched/psi: save percpu memory when !psi_cgroups_enabled Chengming Zhou
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24  8:18 UTC (permalink / raw)
  To: tj, hannes, mkoutny, surenb
  Cc: gregkh, corbet, mingo, peterz, songmuchun, cgroups, linux-doc,
	linux-kernel, Chengming Zhou

commit 3958e2d0c34e ("cgroup: make per-cgroup pressure stall tracking configurable")
made it possible to configure PSI to skip per-cgroup stall accounting, in
which case the PSI files are not exposed in the cgroup hierarchy.

This patch does the same thing when psi_disabled.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
 kernel/cgroup/cgroup.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 03dbbf8a8c28..2f79ddf9a85d 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -3748,6 +3748,9 @@ static void cgroup_pressure_release(struct kernfs_open_file *of)
 
 bool cgroup_psi_enabled(void)
 {
+	if (static_branch_likely(&psi_disabled))
+		return false;
+
 	return (cgroup_feature_disable_mask & (1 << OPT_FEATURE_PRESSURE)) == 0;
 }
 
-- 
2.37.2



* [PATCH v3 03/10] sched/psi: save percpu memory when !psi_cgroups_enabled
  2022-08-24  8:18 [PATCH v3 00/10] sched/psi: some optimization and extension Chengming Zhou
  2022-08-24  8:18 ` [PATCH v3 01/10] sched/psi: fix periodic aggregation shut off Chengming Zhou
  2022-08-24  8:18 ` [PATCH v3 02/10] sched/psi: don't create cgroup PSI files when psi_disabled Chengming Zhou
@ 2022-08-24  8:18 ` Chengming Zhou
  2022-08-24  8:18 ` [PATCH v3 04/10] sched/psi: move private helpers to sched/stats.h Chengming Zhou
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24  8:18 UTC (permalink / raw)
  To: tj, hannes, mkoutny, surenb
  Cc: gregkh, corbet, mingo, peterz, songmuchun, cgroups, linux-doc,
	linux-kernel, Chengming Zhou

We won't use the cgroup psi_group when !psi_cgroups_enabled, so don't
bother allocating and initializing its percpu memory.

We also don't need to migrate task PSI stats between cgroups in
cgroup_move_task() in that case.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
 kernel/sched/psi.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index 39463dcc16bb..77d53c03a76f 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -201,6 +201,7 @@ void __init psi_init(void)
 {
 	if (!psi_enable) {
 		static_branch_enable(&psi_disabled);
+		static_branch_disable(&psi_cgroups_enabled);
 		return;
 	}
 
@@ -950,7 +951,7 @@ void psi_memstall_leave(unsigned long *flags)
 #ifdef CONFIG_CGROUPS
 int psi_cgroup_alloc(struct cgroup *cgroup)
 {
-	if (static_branch_likely(&psi_disabled))
+	if (!static_branch_likely(&psi_cgroups_enabled))
 		return 0;
 
 	cgroup->psi = kzalloc(sizeof(struct psi_group), GFP_KERNEL);
@@ -968,7 +969,7 @@ int psi_cgroup_alloc(struct cgroup *cgroup)
 
 void psi_cgroup_free(struct cgroup *cgroup)
 {
-	if (static_branch_likely(&psi_disabled))
+	if (!static_branch_likely(&psi_cgroups_enabled))
 		return;
 
 	cancel_delayed_work_sync(&cgroup->psi->avgs_work);
@@ -996,7 +997,7 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to)
 	struct rq_flags rf;
 	struct rq *rq;
 
-	if (static_branch_likely(&psi_disabled)) {
+	if (!static_branch_likely(&psi_cgroups_enabled)) {
 		/*
 		 * Lame to do this here, but the scheduler cannot be locked
 		 * from the outside, so we move cgroups from inside sched/.
-- 
2.37.2



* [PATCH v3 04/10] sched/psi: move private helpers to sched/stats.h
  2022-08-24  8:18 [PATCH v3 00/10] sched/psi: some optimization and extension Chengming Zhou
                   ` (2 preceding siblings ...)
  2022-08-24  8:18 ` [PATCH v3 03/10] sched/psi: save percpu memory when !psi_cgroups_enabled Chengming Zhou
@ 2022-08-24  8:18 ` Chengming Zhou
  2022-08-24  8:18 ` [PATCH v3 05/10] sched/psi: optimize task switch inside shared cgroups again Chengming Zhou
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24  8:18 UTC (permalink / raw)
  To: tj, hannes, mkoutny, surenb
  Cc: gregkh, corbet, mingo, peterz, songmuchun, cgroups, linux-doc,
	linux-kernel, Chengming Zhou

This patch moves the psi_task_change()/psi_task_switch() declarations out
of the public PSI header, since they are only needed for implementing the
PSI stats tracking in sched/stats.h.

psi_task_switch() is obviously private; psi_task_change() can't be a
public helper since it doesn't check the psi_disabled static key. It has
no other users now, so put it in sched/stats.h too.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/psi.h  | 4 ----
 kernel/sched/stats.h | 4 ++++
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/psi.h b/include/linux/psi.h
index dd74411ac21d..fffd229fbf19 100644
--- a/include/linux/psi.h
+++ b/include/linux/psi.h
@@ -18,10 +18,6 @@ extern struct psi_group psi_system;
 
 void psi_init(void);
 
-void psi_task_change(struct task_struct *task, int clear, int set);
-void psi_task_switch(struct task_struct *prev, struct task_struct *next,
-		     bool sleep);
-
 void psi_memstall_enter(unsigned long *flags);
 void psi_memstall_leave(unsigned long *flags);
 
diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
index baa839c1ba96..c39b467ece43 100644
--- a/kernel/sched/stats.h
+++ b/kernel/sched/stats.h
@@ -107,6 +107,10 @@ __schedstats_from_se(struct sched_entity *se)
 }
 
 #ifdef CONFIG_PSI
+void psi_task_change(struct task_struct *task, int clear, int set);
+void psi_task_switch(struct task_struct *prev, struct task_struct *next,
+		     bool sleep);
+
 /*
  * PSI tracks state that persists across sleeps, such as iowaits and
  * memory stalls. As a result, it has to distinguish between sleeps,
-- 
2.37.2



* [PATCH v3 05/10] sched/psi: optimize task switch inside shared cgroups again
  2022-08-24  8:18 [PATCH v3 00/10] sched/psi: some optimization and extension Chengming Zhou
                   ` (3 preceding siblings ...)
  2022-08-24  8:18 ` [PATCH v3 04/10] sched/psi: move private helpers to sched/stats.h Chengming Zhou
@ 2022-08-24  8:18 ` Chengming Zhou
  2022-08-24 14:06   ` Johannes Weiner
  2022-08-24  8:18 ` [PATCH v3 06/10] sched/psi: remove NR_ONCPU task accounting Chengming Zhou
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24  8:18 UTC (permalink / raw)
  To: tj, hannes, mkoutny, surenb
  Cc: gregkh, corbet, mingo, peterz, songmuchun, cgroups, linux-doc,
	linux-kernel, Chengming Zhou

commit 4117cebf1a9f ("psi: Optimize task switch inside shared cgroups")
deferred prev task sleep handling to psi_task_switch(), so we don't need
to clear and set the TSK_ONCPU state for common cgroups.

    A
    |
    B
   / \
  C   D
 /     \
prev   next

After that commit, psi_task_switch() does:
1. psi_group_change(next, .set=TSK_ONCPU) for D
2. psi_group_change(prev, .clear=TSK_ONCPU | TSK_RUNNING) for C
3. psi_group_change(prev, .clear=TSK_RUNNING) for B, A

But there is a limitation: if "prev->psi_flags == next->psi_flags" is not
satisfied, this common-cgroups optimization is unusable for both the sleep
switch and the running switch cases. For example:

prev->in_memstall != next->in_memstall on a sleep switch:
1. psi_group_change(next, .set=TSK_ONCPU) for D, B, A
2. psi_group_change(prev, .clear=TSK_ONCPU | TSK_RUNNING) for C, B, A

prev->in_memstall != next->in_memstall on a running switch:
1. psi_group_change(next, .set=TSK_ONCPU) for D, B, A
2. psi_group_change(prev, .clear=TSK_ONCPU) for C, B, A

The reason this limitation exists is that we consider a group to be in
PSI_MEM_FULL if the CPU is actively reclaiming and nothing productive
could run even if it were runnable. So when the CPU's curr changes from
prev to next and their in_memstall status differs, we have to update the
PSI_MEM_FULL status of their common cgroups.

This patch removes this limitation by making the PSI_MEM_FULL status in
psi_group_change() depend on the CPU curr->in_memstall status.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 kernel/sched/psi.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index 77d53c03a76f..26c03bd56b9c 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -820,8 +820,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 	u64 now = cpu_clock(cpu);
 
 	if (next->pid) {
-		bool identical_state;
-
 		psi_flags_change(next, 0, TSK_ONCPU);
 		/*
 		 * When switching between tasks that have an identical
@@ -829,11 +827,9 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 		 * we reach the first common ancestor. Iterate @next's
 		 * ancestors only until we encounter @prev's ONCPU.
 		 */
-		identical_state = prev->psi_flags == next->psi_flags;
 		iter = NULL;
 		while ((group = iterate_groups(next, &iter))) {
-			if (identical_state &&
-			    per_cpu_ptr(group->pcpu, cpu)->tasks[NR_ONCPU]) {
+			if (per_cpu_ptr(group->pcpu, cpu)->tasks[NR_ONCPU]) {
 				common = group;
 				break;
 			}
@@ -880,7 +876,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 		 * TSK_ONCPU is handled up to the common ancestor. If we're tasked
 		 * with dequeuing too, finish that for the rest of the hierarchy.
 		 */
-		if (sleep) {
+		if (sleep || unlikely(prev->in_memstall != next->in_memstall)) {
 			clear &= ~TSK_ONCPU;
 			for (; group; group = iterate_groups(prev, &iter))
 				psi_group_change(group, cpu, clear, set, now, wake_clock);
-- 
2.37.2



* [PATCH v3 06/10] sched/psi: remove NR_ONCPU task accounting
  2022-08-24  8:18 [PATCH v3 00/10] sched/psi: some optimization and extension Chengming Zhou
                   ` (4 preceding siblings ...)
  2022-08-24  8:18 ` [PATCH v3 05/10] sched/psi: optimize task switch inside shared cgroups again Chengming Zhou
@ 2022-08-24  8:18 ` Chengming Zhou
  2022-08-24  8:18 ` [PATCH v3 07/10] sched/psi: add PSI_IRQ to track IRQ/SOFTIRQ pressure Chengming Zhou
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24  8:18 UTC (permalink / raw)
  To: tj, hannes, mkoutny, surenb
  Cc: gregkh, corbet, mingo, peterz, songmuchun, cgroups, linux-doc,
	linux-kernel, Chengming Zhou

From: Johannes Weiner <hannes@cmpxchg.org>

We put all fields updated by the scheduler in the first cacheline of
struct psi_group_cpu for performance.

Since we want to add another PSI_IRQ_FULL state to track IRQ/SOFTIRQ
pressure, we need to reclaim space first. This patch removes the NR_ONCPU
task accounting in struct psi_group_cpu and uses one bit in state_mask to
track it instead.
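
As a quick sanity check of the new encoding (the values follow the diff
below): with NR_PSI_TASK_COUNTS shrunk to 4, the task state bits become

	TSK_IOWAIT           = 1 << 0
	TSK_MEMSTALL         = 1 << 1
	TSK_RUNNING          = 1 << 2
	TSK_MEMSTALL_RUNNING = 1 << 3
	TSK_ONCPU            = 1 << 4	/* no tasks[] counter */

and in groupc->state_mask, PSI_ONCPU = 1 << NR_PSI_STATES = 1 << 7 sits
just above the six PSI_{IO,MEM,CPU}_{SOME,FULL} bits and PSI_NONIDLE.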

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>
Tested-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 include/linux/psi_types.h | 16 +++++++--------
 kernel/sched/psi.c        | 41 ++++++++++++++++++++++++++++-----------
 2 files changed, 37 insertions(+), 20 deletions(-)

diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
index c7fe7c089718..54cb74946db4 100644
--- a/include/linux/psi_types.h
+++ b/include/linux/psi_types.h
@@ -15,13 +15,6 @@ enum psi_task_count {
 	NR_IOWAIT,
 	NR_MEMSTALL,
 	NR_RUNNING,
-	/*
-	 * This can't have values other than 0 or 1 and could be
-	 * implemented as a bit flag. But for now we still have room
-	 * in the first cacheline of psi_group_cpu, and this way we
-	 * don't have to special case any state tracking for it.
-	 */
-	NR_ONCPU,
 	/*
 	 * For IO and CPU stalls the presence of running/oncpu tasks
 	 * in the domain means a partial rather than a full stall.
@@ -32,16 +25,18 @@ enum psi_task_count {
 	 * threads and memstall ones.
 	 */
 	NR_MEMSTALL_RUNNING,
-	NR_PSI_TASK_COUNTS = 5,
+	NR_PSI_TASK_COUNTS = 4,
 };
 
 /* Task state bitmasks */
 #define TSK_IOWAIT	(1 << NR_IOWAIT)
 #define TSK_MEMSTALL	(1 << NR_MEMSTALL)
 #define TSK_RUNNING	(1 << NR_RUNNING)
-#define TSK_ONCPU	(1 << NR_ONCPU)
 #define TSK_MEMSTALL_RUNNING	(1 << NR_MEMSTALL_RUNNING)
 
+/* Only one task can be scheduled, no corresponding task count */
+#define TSK_ONCPU	(1 << NR_PSI_TASK_COUNTS)
+
 /* Resources that workloads could be stalled on */
 enum psi_res {
 	PSI_IO,
@@ -68,6 +63,9 @@ enum psi_states {
 	NR_PSI_STATES = 7,
 };
 
+/* Use one bit in the state mask to track TSK_ONCPU */
+#define PSI_ONCPU	(1 << NR_PSI_STATES)
+
 enum psi_aggregators {
 	PSI_AVGS = 0,
 	PSI_POLL,
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index 26c03bd56b9c..af83531162fc 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -212,7 +212,7 @@ void __init psi_init(void)
 	group_init(&psi_system);
 }
 
-static bool test_state(unsigned int *tasks, enum psi_states state)
+static bool test_state(unsigned int *tasks, enum psi_states state, bool oncpu)
 {
 	switch (state) {
 	case PSI_IO_SOME:
@@ -225,9 +225,9 @@ static bool test_state(unsigned int *tasks, enum psi_states state)
 		return unlikely(tasks[NR_MEMSTALL] &&
 			tasks[NR_RUNNING] == tasks[NR_MEMSTALL_RUNNING]);
 	case PSI_CPU_SOME:
-		return unlikely(tasks[NR_RUNNING] > tasks[NR_ONCPU]);
+		return unlikely(tasks[NR_RUNNING] > oncpu);
 	case PSI_CPU_FULL:
-		return unlikely(tasks[NR_RUNNING] && !tasks[NR_ONCPU]);
+		return unlikely(tasks[NR_RUNNING] && !oncpu);
 	case PSI_NONIDLE:
 		return tasks[NR_IOWAIT] || tasks[NR_MEMSTALL] ||
 			tasks[NR_RUNNING];
@@ -689,9 +689,9 @@ static void psi_group_change(struct psi_group *group, int cpu,
 			     bool wake_clock)
 {
 	struct psi_group_cpu *groupc;
-	u32 state_mask = 0;
 	unsigned int t, m;
 	enum psi_states s;
+	u32 state_mask;
 
 	groupc = per_cpu_ptr(group->pcpu, cpu);
 
@@ -707,17 +707,36 @@ static void psi_group_change(struct psi_group *group, int cpu,
 
 	record_times(groupc, now);
 
+	/*
+	 * Start with TSK_ONCPU, which doesn't have a corresponding
+	 * task count - it's just a boolean flag directly encoded in
+	 * the state mask. Clear, set, or carry the current state if
+	 * no changes are requested.
+	 */
+	if (unlikely(clear & TSK_ONCPU)) {
+		state_mask = 0;
+		clear &= ~TSK_ONCPU;
+	} else if (unlikely(set & TSK_ONCPU)) {
+		state_mask = PSI_ONCPU;
+		set &= ~TSK_ONCPU;
+	} else {
+		state_mask = groupc->state_mask & PSI_ONCPU;
+	}
+
+	/*
+	 * The rest of the state mask is calculated based on the task
+	 * counts. Update those first, then construct the mask.
+	 */
 	for (t = 0, m = clear; m; m &= ~(1 << t), t++) {
 		if (!(m & (1 << t)))
 			continue;
 		if (groupc->tasks[t]) {
 			groupc->tasks[t]--;
 		} else if (!psi_bug) {
-			printk_deferred(KERN_ERR "psi: task underflow! cpu=%d t=%d tasks=[%u %u %u %u %u] clear=%x set=%x\n",
+			printk_deferred(KERN_ERR "psi: task underflow! cpu=%d t=%d tasks=[%u %u %u %u] clear=%x set=%x\n",
 					cpu, t, groupc->tasks[0],
 					groupc->tasks[1], groupc->tasks[2],
-					groupc->tasks[3], groupc->tasks[4],
-					clear, set);
+					groupc->tasks[3], clear, set);
 			psi_bug = 1;
 		}
 	}
@@ -726,9 +745,8 @@ static void psi_group_change(struct psi_group *group, int cpu,
 		if (set & (1 << t))
 			groupc->tasks[t]++;
 
-	/* Calculate state mask representing active states */
 	for (s = 0; s < NR_PSI_STATES; s++) {
-		if (test_state(groupc->tasks, s))
+		if (test_state(groupc->tasks, s, state_mask & PSI_ONCPU))
 			state_mask |= (1 << s);
 	}
 
@@ -740,7 +758,7 @@ static void psi_group_change(struct psi_group *group, int cpu,
 	 * task in a cgroup is in_memstall, the corresponding groupc
 	 * on that cpu is in PSI_MEM_FULL state.
 	 */
-	if (unlikely(groupc->tasks[NR_ONCPU] && cpu_curr(cpu)->in_memstall))
+	if (unlikely((state_mask & PSI_ONCPU) && cpu_curr(cpu)->in_memstall))
 		state_mask |= (1 << PSI_MEM_FULL);
 
 	groupc->state_mask = state_mask;
@@ -829,7 +847,8 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 		 */
 		iter = NULL;
 		while ((group = iterate_groups(next, &iter))) {
-			if (per_cpu_ptr(group->pcpu, cpu)->tasks[NR_ONCPU]) {
+			if (per_cpu_ptr(group->pcpu, cpu)->state_mask &
+			    PSI_ONCPU) {
 				common = group;
 				break;
 			}
-- 
2.37.2



* [PATCH v3 07/10] sched/psi: add PSI_IRQ to track IRQ/SOFTIRQ pressure
  2022-08-24  8:18 [PATCH v3 00/10] sched/psi: some optimization and extension Chengming Zhou
                   ` (5 preceding siblings ...)
  2022-08-24  8:18 ` [PATCH v3 06/10] sched/psi: remove NR_ONCPU task accounting Chengming Zhou
@ 2022-08-24  8:18 ` Chengming Zhou
  2022-08-24 10:46   ` Johannes Weiner
  2022-08-24  8:18 ` [PATCH v3 08/10] sched/psi: consolidate cgroup_psi() Chengming Zhou
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24  8:18 UTC (permalink / raw)
  To: tj, hannes, mkoutny, surenb
  Cc: gregkh, corbet, mingo, peterz, songmuchun, cgroups, linux-doc,
	linux-kernel, Chengming Zhou

PSI already tracks workload pressure stall information for CPU, memory
and IO. Apart from these, IRQ/SOFTIRQ can have an obvious impact on the
productivity of some workloads, such as web services.

With CONFIG_IRQ_TIME_ACCOUNTING, we can get the IRQ/SOFTIRQ delta time
from update_rq_clock_task(), where we can record that delta against the
CPU curr task's cgroups as PSI_IRQ_FULL time.

Note we don't use PSI_IRQ_SOME: since IRQ/SOFTIRQ always preempts the
current task on the CPU, nothing productive could run even if it were
runnable, so we only use PSI_IRQ_FULL.
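
For reference, since only FULL is tracked, reading the new
/proc/pressure/irq file (or a cgroup's irq.pressure) yields a single
line in the usual PSI format, e.g. on an idle system:

	full avg10=0.00 avg60=0.00 avg300=0.00 total=0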

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 Documentation/admin-guide/cgroup-v2.rst |  6 ++
 include/linux/psi_types.h               | 10 +++-
 kernel/cgroup/cgroup.c                  | 27 +++++++++
 kernel/sched/core.c                     |  1 +
 kernel/sched/psi.c                      | 74 ++++++++++++++++++++++++-
 kernel/sched/stats.h                    |  2 +
 6 files changed, 116 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index be4a77baf784..971c418bc778 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -976,6 +976,12 @@ All cgroup core files are prefixed with "cgroup."
 	killing cgroups is a process directed operation, i.e. it affects
 	the whole thread-group.
 
+  irq.pressure
+	A read-write nested-keyed file.
+
+	Shows pressure stall information for IRQ/SOFTIRQ. See
+	:ref:`Documentation/accounting/psi.rst <psi>` for details.
+
 Controllers
 ===========
 
diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
index 54cb74946db4..40c28171cd91 100644
--- a/include/linux/psi_types.h
+++ b/include/linux/psi_types.h
@@ -42,7 +42,10 @@ enum psi_res {
 	PSI_IO,
 	PSI_MEM,
 	PSI_CPU,
-	NR_PSI_RESOURCES = 3,
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+	PSI_IRQ,
+#endif
+	NR_PSI_RESOURCES,
 };
 
 /*
@@ -58,9 +61,12 @@ enum psi_states {
 	PSI_MEM_FULL,
 	PSI_CPU_SOME,
 	PSI_CPU_FULL,
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+	PSI_IRQ_FULL,
+#endif
 	/* Only per-CPU, to weigh the CPU in the global average: */
 	PSI_NONIDLE,
-	NR_PSI_STATES = 7,
+	NR_PSI_STATES,
 };
 
 /* Use one bit in the state mask to track TSK_ONCPU */
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 2f79ddf9a85d..8540878469e6 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -3731,6 +3731,23 @@ static ssize_t cgroup_cpu_pressure_write(struct kernfs_open_file *of,
 	return cgroup_pressure_write(of, buf, nbytes, PSI_CPU);
 }
 
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+static int cgroup_irq_pressure_show(struct seq_file *seq, void *v)
+{
+	struct cgroup *cgrp = seq_css(seq)->cgroup;
+	struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
+
+	return psi_show(seq, psi, PSI_IRQ);
+}
+
+static ssize_t cgroup_irq_pressure_write(struct kernfs_open_file *of,
+					  char *buf, size_t nbytes,
+					  loff_t off)
+{
+	return cgroup_pressure_write(of, buf, nbytes, PSI_IRQ);
+}
+#endif
+
 static __poll_t cgroup_pressure_poll(struct kernfs_open_file *of,
 					  poll_table *pt)
 {
@@ -5150,6 +5167,16 @@ static struct cftype cgroup_base_files[] = {
 		.poll = cgroup_pressure_poll,
 		.release = cgroup_pressure_release,
 	},
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+	{
+		.name = "irq.pressure",
+		.flags = CFTYPE_PRESSURE,
+		.seq_show = cgroup_irq_pressure_show,
+		.write = cgroup_irq_pressure_write,
+		.poll = cgroup_pressure_poll,
+		.release = cgroup_pressure_release,
+	},
+#endif
 #endif /* CONFIG_PSI */
 	{ }	/* terminate */
 };
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 61436b8e0337..178f9836ae96 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -708,6 +708,7 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
 
 	rq->prev_irq_time += irq_delta;
 	delta -= irq_delta;
+	psi_account_irqtime(rq->curr, irq_delta);
 #endif
 #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
 	if (static_key_false((&paravirt_steal_rq_enabled))) {
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index af83531162fc..7aab6f13ed12 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -903,6 +903,36 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 	}
 }
 
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+void psi_account_irqtime(struct task_struct *task, u32 delta)
+{
+	int cpu = task_cpu(task);
+	void *iter = NULL;
+	struct psi_group *group;
+	struct psi_group_cpu *groupc;
+	u64 now;
+
+	if (!task->pid)
+		return;
+
+	now = cpu_clock(cpu);
+
+	while ((group = iterate_groups(task, &iter))) {
+		groupc = per_cpu_ptr(group->pcpu, cpu);
+
+		write_seqcount_begin(&groupc->seq);
+
+		record_times(groupc, now);
+		groupc->times[PSI_IRQ_FULL] += delta;
+
+		write_seqcount_end(&groupc->seq);
+
+		if (group->poll_states & (1 << PSI_IRQ_FULL))
+			psi_schedule_poll_work(group, 1);
+	}
+}
+#endif
+
 /**
  * psi_memstall_enter - mark the beginning of a memory stall section
  * @flags: flags to handle nested sections
@@ -1064,6 +1094,7 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to)
 
 int psi_show(struct seq_file *m, struct psi_group *group, enum psi_res res)
 {
+	bool only_full = false;
 	int full;
 	u64 now;
 
@@ -1078,7 +1109,11 @@ int psi_show(struct seq_file *m, struct psi_group *group, enum psi_res res)
 		group->avg_next_update = update_averages(group, now);
 	mutex_unlock(&group->avgs_lock);
 
-	for (full = 0; full < 2; full++) {
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+	only_full = res == PSI_IRQ;
+#endif
+
+	for (full = 0; full < 2 - only_full; full++) {
 		unsigned long avg[3] = { 0, };
 		u64 total = 0;
 		int w;
@@ -1092,7 +1127,7 @@ int psi_show(struct seq_file *m, struct psi_group *group, enum psi_res res)
 		}
 
 		seq_printf(m, "%s avg10=%lu.%02lu avg60=%lu.%02lu avg300=%lu.%02lu total=%llu\n",
-			   full ? "full" : "some",
+			   full || only_full ? "full" : "some",
 			   LOAD_INT(avg[0]), LOAD_FRAC(avg[0]),
 			   LOAD_INT(avg[1]), LOAD_FRAC(avg[1]),
 			   LOAD_INT(avg[2]), LOAD_FRAC(avg[2]),
@@ -1120,6 +1155,11 @@ struct psi_trigger *psi_trigger_create(struct psi_group *group,
 	else
 		return ERR_PTR(-EINVAL);
 
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+	if (res == PSI_IRQ && --state != PSI_IRQ_FULL)
+		return ERR_PTR(-EINVAL);
+#endif
+
 	if (state >= PSI_NONIDLE)
 		return ERR_PTR(-EINVAL);
 
@@ -1404,6 +1444,33 @@ static const struct proc_ops psi_cpu_proc_ops = {
 	.proc_release	= psi_fop_release,
 };
 
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+static int psi_irq_show(struct seq_file *m, void *v)
+{
+	return psi_show(m, &psi_system, PSI_IRQ);
+}
+
+static int psi_irq_open(struct inode *inode, struct file *file)
+{
+	return psi_open(file, psi_irq_show);
+}
+
+static ssize_t psi_irq_write(struct file *file, const char __user *user_buf,
+			     size_t nbytes, loff_t *ppos)
+{
+	return psi_write(file, user_buf, nbytes, PSI_IRQ);
+}
+
+static const struct proc_ops psi_irq_proc_ops = {
+	.proc_open	= psi_irq_open,
+	.proc_read	= seq_read,
+	.proc_lseek	= seq_lseek,
+	.proc_write	= psi_irq_write,
+	.proc_poll	= psi_fop_poll,
+	.proc_release	= psi_fop_release,
+};
+#endif
+
 static int __init psi_proc_init(void)
 {
 	if (psi_enable) {
@@ -1411,6 +1478,9 @@ static int __init psi_proc_init(void)
 		proc_create("pressure/io", 0666, NULL, &psi_io_proc_ops);
 		proc_create("pressure/memory", 0666, NULL, &psi_memory_proc_ops);
 		proc_create("pressure/cpu", 0666, NULL, &psi_cpu_proc_ops);
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+		proc_create("pressure/irq", 0666, NULL, &psi_irq_proc_ops);
+#endif
 	}
 	return 0;
 }
diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
index c39b467ece43..84a188913cc9 100644
--- a/kernel/sched/stats.h
+++ b/kernel/sched/stats.h
@@ -110,6 +110,7 @@ __schedstats_from_se(struct sched_entity *se)
 void psi_task_change(struct task_struct *task, int clear, int set);
 void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 		     bool sleep);
+void psi_account_irqtime(struct task_struct *task, u32 delta);
 
 /*
  * PSI tracks state that persists across sleeps, such as iowaits and
@@ -205,6 +206,7 @@ static inline void psi_ttwu_dequeue(struct task_struct *p) {}
 static inline void psi_sched_switch(struct task_struct *prev,
 				    struct task_struct *next,
 				    bool sleep) {}
+static inline void psi_account_irqtime(struct task_struct *task, u32 delta) {}
 #endif /* CONFIG_PSI */
 
 #ifdef CONFIG_SCHED_INFO
-- 
2.37.2



* [PATCH v3 08/10] sched/psi: consolidate cgroup_psi()
  2022-08-24  8:18 [PATCH v3 00/10] sched/psi: some optimization and extension Chengming Zhou
                   ` (6 preceding siblings ...)
  2022-08-24  8:18 ` [PATCH v3 07/10] sched/psi: add PSI_IRQ to track IRQ/SOFTIRQ pressure Chengming Zhou
@ 2022-08-24  8:18 ` Chengming Zhou
  2022-08-24 10:19   ` Johannes Weiner
  2022-08-24  8:18 ` [PATCH v3 09/10] sched/psi: cache parent psi_group to speed up groups iterate Chengming Zhou
  2022-08-24  8:18 ` [PATCH v3 10/10] sched/psi: per-cgroup PSI accounting disable/re-enable interface Chengming Zhou
  9 siblings, 1 reply; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24  8:18 UTC (permalink / raw)
  To: tj, hannes, mkoutny, surenb
  Cc: gregkh, corbet, mingo, peterz, songmuchun, cgroups, linux-doc,
	linux-kernel, Chengming Zhou

cgroup_psi() can't return the psi_group for the root cgroup, so we have
many open-coded instances of "psi = cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi".

This patch moves the cgroup_psi() definition to <linux/psi.h>, where it
can return psi_system for the root cgroup and thus handle all cgroups.

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 include/linux/cgroup.h |  5 -----
 include/linux/psi.h    |  6 ++++++
 kernel/cgroup/cgroup.c | 10 +++++-----
 3 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 7ed1fa7a6fc8..3c48753f2949 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -682,11 +682,6 @@ static inline void pr_cont_cgroup_path(struct cgroup *cgrp)
 	pr_cont_kernfs_path(cgrp->kn);
 }
 
-static inline struct psi_group *cgroup_psi(struct cgroup *cgrp)
-{
-	return cgrp->psi;
-}
-
 bool cgroup_psi_enabled(void);
 
 static inline void cgroup_init_kthreadd(void)
diff --git a/include/linux/psi.h b/include/linux/psi.h
index fffd229fbf19..362a74ca1d3b 100644
--- a/include/linux/psi.h
+++ b/include/linux/psi.h
@@ -7,6 +7,7 @@
 #include <linux/sched.h>
 #include <linux/poll.h>
 #include <linux/cgroup-defs.h>
+#include <linux/cgroup.h>
 
 struct seq_file;
 struct css_set;
@@ -30,6 +31,11 @@ __poll_t psi_trigger_poll(void **trigger_ptr, struct file *file,
 			poll_table *wait);
 
 #ifdef CONFIG_CGROUPS
+static inline struct psi_group *cgroup_psi(struct cgroup *cgrp)
+{
+	return cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
+}
+
 int psi_cgroup_alloc(struct cgroup *cgrp);
 void psi_cgroup_free(struct cgroup *cgrp);
 void cgroup_move_task(struct task_struct *p, struct css_set *to);
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 8540878469e6..cc228235ce38 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -3657,21 +3657,21 @@ static int cpu_stat_show(struct seq_file *seq, void *v)
 static int cgroup_io_pressure_show(struct seq_file *seq, void *v)
 {
 	struct cgroup *cgrp = seq_css(seq)->cgroup;
-	struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
+	struct psi_group *psi = cgroup_psi(cgrp);
 
 	return psi_show(seq, psi, PSI_IO);
 }
 static int cgroup_memory_pressure_show(struct seq_file *seq, void *v)
 {
 	struct cgroup *cgrp = seq_css(seq)->cgroup;
-	struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
+	struct psi_group *psi = cgroup_psi(cgrp);
 
 	return psi_show(seq, psi, PSI_MEM);
 }
 static int cgroup_cpu_pressure_show(struct seq_file *seq, void *v)
 {
 	struct cgroup *cgrp = seq_css(seq)->cgroup;
-	struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
+	struct psi_group *psi = cgroup_psi(cgrp);
 
 	return psi_show(seq, psi, PSI_CPU);
 }
@@ -3697,7 +3697,7 @@ static ssize_t cgroup_pressure_write(struct kernfs_open_file *of, char *buf,
 		return -EBUSY;
 	}
 
-	psi = cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
+	psi = cgroup_psi(cgrp);
 	new = psi_trigger_create(psi, buf, res);
 	if (IS_ERR(new)) {
 		cgroup_put(cgrp);
@@ -3735,7 +3735,7 @@ static ssize_t cgroup_cpu_pressure_write(struct kernfs_open_file *of,
 static int cgroup_irq_pressure_show(struct seq_file *seq, void *v)
 {
 	struct cgroup *cgrp = seq_css(seq)->cgroup;
-	struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi;
+	struct psi_group *psi = cgroup_psi(cgrp);
 
 	return psi_show(seq, psi, PSI_IRQ);
 }
-- 
2.37.2



* [PATCH v3 09/10] sched/psi: cache parent psi_group to speed up groups iterate
  2022-08-24  8:18 [PATCH v3 00/10] sched/psi: some optimization and extension Chengming Zhou
                   ` (7 preceding siblings ...)
  2022-08-24  8:18 ` [PATCH v3 08/10] sched/psi: consolidate cgroup_psi() Chengming Zhou
@ 2022-08-24  8:18 ` Chengming Zhou
  2022-08-24 10:18   ` Johannes Weiner
  2022-08-24  8:18 ` [PATCH v3 10/10] sched/psi: per-cgroup PSI accounting disable/re-enable interface Chengming Zhou
  9 siblings, 1 reply; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24  8:18 UTC (permalink / raw)
  To: tj, hannes, mkoutny, surenb
  Cc: gregkh, corbet, mingo, peterz, songmuchun, cgroups, linux-doc,
	linux-kernel, Chengming Zhou

We use iterate_groups() to iterate over each level's psi_group when
updating PSI stats, which is a very hot path.

In the current code, iterate_groups() has to use multiple branches and
cgroup_parent() to get the parent psi_group at each level, which is not
very efficient.

This patch caches the parent psi_group in struct psi_group: we only need
to get the task's own psi_group first, then follow group->parent to
iterate.
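
A sketch of the resulting walk for a task in hierarchy A/B (note that
with patch 8/10 applied, the root cgroup maps to psi_system):

	group = task_psi_group(task);	/* B's psi_group */
	for_each_psi_group(group)	/* B -> A -> psi_system -> NULL */
		psi_group_change(group, cpu, clear, set, now, true);

psi_cgroup_alloc() sets group->parent = cgroup_psi(cgroup_parent(cgroup)),
so a top-level cgroup's parent is psi_system, whose own ->parent stays
NULL and terminates the loop.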

Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 include/linux/psi_types.h |  2 ++
 kernel/sched/psi.c        | 47 ++++++++++++++++-----------------------
 2 files changed, 21 insertions(+), 28 deletions(-)

diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
index 40c28171cd91..a0b746258c68 100644
--- a/include/linux/psi_types.h
+++ b/include/linux/psi_types.h
@@ -151,6 +151,8 @@ struct psi_trigger {
 };
 
 struct psi_group {
+	struct psi_group *parent;
+
 	/* Protects data used by the aggregator */
 	struct mutex avgs_lock;
 
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index 7aab6f13ed12..814e99b1fed3 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -772,30 +772,18 @@ static void psi_group_change(struct psi_group *group, int cpu,
 		schedule_delayed_work(&group->avgs_work, PSI_FREQ);
 }
 
-static struct psi_group *iterate_groups(struct task_struct *task, void **iter)
+static inline struct psi_group *task_psi_group(struct task_struct *task)
 {
-	if (*iter == &psi_system)
-		return NULL;
-
 #ifdef CONFIG_CGROUPS
-	if (static_branch_likely(&psi_cgroups_enabled)) {
-		struct cgroup *cgroup = NULL;
-
-		if (!*iter)
-			cgroup = task->cgroups->dfl_cgrp;
-		else
-			cgroup = cgroup_parent(*iter);
-
-		if (cgroup && cgroup_parent(cgroup)) {
-			*iter = cgroup;
-			return cgroup_psi(cgroup);
-		}
-	}
+	if (static_branch_likely(&psi_cgroups_enabled))
+		return cgroup_psi(task_dfl_cgroup(task));
 #endif
-	*iter = &psi_system;
 	return &psi_system;
 }
 
+#define for_each_psi_group(group) \
+	for (; group; group = group->parent)
+
 static void psi_flags_change(struct task_struct *task, int clear, int set)
 {
 	if (((task->psi_flags & set) ||
@@ -815,7 +803,6 @@ void psi_task_change(struct task_struct *task, int clear, int set)
 {
 	int cpu = task_cpu(task);
 	struct psi_group *group;
-	void *iter = NULL;
 	u64 now;
 
 	if (!task->pid)
@@ -825,7 +812,8 @@ void psi_task_change(struct task_struct *task, int clear, int set)
 
 	now = cpu_clock(cpu);
 
-	while ((group = iterate_groups(task, &iter)))
+	group = task_psi_group(task);
+	for_each_psi_group(group)
 		psi_group_change(group, cpu, clear, set, now, true);
 }
 
@@ -834,7 +822,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 {
 	struct psi_group *group, *common = NULL;
 	int cpu = task_cpu(prev);
-	void *iter;
 	u64 now = cpu_clock(cpu);
 
 	if (next->pid) {
@@ -845,8 +832,8 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 		 * we reach the first common ancestor. Iterate @next's
 		 * ancestors only until we encounter @prev's ONCPU.
 		 */
-		iter = NULL;
-		while ((group = iterate_groups(next, &iter))) {
+		group = task_psi_group(next);
+		for_each_psi_group(group) {
 			if (per_cpu_ptr(group->pcpu, cpu)->state_mask &
 			    PSI_ONCPU) {
 				common = group;
@@ -887,9 +874,12 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 
 		psi_flags_change(prev, clear, set);
 
-		iter = NULL;
-		while ((group = iterate_groups(prev, &iter)) && group != common)
+		group = task_psi_group(prev);
+		for_each_psi_group(group) {
+			if (group == common)
+				break;
 			psi_group_change(group, cpu, clear, set, now, wake_clock);
+		}
 
 		/*
 		 * TSK_ONCPU is handled up to the common ancestor. If we're tasked
@@ -897,7 +887,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 		 */
 		if (sleep || unlikely(prev->in_memstall != next->in_memstall)) {
 			clear &= ~TSK_ONCPU;
-			for (; group; group = iterate_groups(prev, &iter))
+			for_each_psi_group(group)
 				psi_group_change(group, cpu, clear, set, now, wake_clock);
 		}
 	}
@@ -907,7 +897,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
 void psi_account_irqtime(struct task_struct *task, u32 delta)
 {
 	int cpu = task_cpu(task);
-	void *iter = NULL;
 	struct psi_group *group;
 	struct psi_group_cpu *groupc;
 	u64 now;
@@ -917,7 +906,8 @@ void psi_account_irqtime(struct task_struct *task, u32 delta)
 
 	now = cpu_clock(cpu);
 
-	while ((group = iterate_groups(task, &iter))) {
+	group = task_psi_group(task);
+	for_each_psi_group(group) {
 		groupc = per_cpu_ptr(group->pcpu, cpu);
 
 		write_seqcount_begin(&groupc->seq);
@@ -1009,6 +999,7 @@ int psi_cgroup_alloc(struct cgroup *cgroup)
 		return -ENOMEM;
 	}
 	group_init(cgroup->psi);
+	cgroup->psi->parent = cgroup_psi(cgroup_parent(cgroup));
 	return 0;
 }
 
-- 
2.37.2



* [PATCH v3 10/10] sched/psi: per-cgroup PSI accounting disable/re-enable interface
  2022-08-24  8:18 [PATCH v3 00/10] sched/psi: some optimization and extension Chengming Zhou
                   ` (8 preceding siblings ...)
  2022-08-24  8:18 ` [PATCH v3 09/10] sched/psi: cache parent psi_group to speed up groups iterate Chengming Zhou
@ 2022-08-24  8:18 ` Chengming Zhou
  2022-08-24  9:59   ` Johannes Weiner
  9 siblings, 1 reply; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24  8:18 UTC (permalink / raw)
  To: tj, hannes, mkoutny, surenb
  Cc: gregkh, corbet, mingo, peterz, songmuchun, cgroups, linux-doc,
	linux-kernel, Chengming Zhou

PSI accounts stalls for each cgroup separately and aggregates them at
each level of the hierarchy. This may cause non-negligible overhead for
some workloads running deep in the hierarchy.

commit 3958e2d0c34e ("cgroup: make per-cgroup pressure stall tracking configurable")
made it possible for PSI to skip per-cgroup stall accounting and account
only system-wide, to avoid this per-level overhead.

But for our use case, we also want leaf cgroup PSI stats accounted, so
userspace can make adjustments for that cgroup, not only system-wide
adjustments.

So this patch introduces a per-cgroup PSI accounting disable/re-enable
interface, "cgroup.pressure", a read-write single-value file whose
allowed values are "0" and "1". The default is "1", so per-cgroup PSI
stats are enabled by default.

Implementation details:

It should be relatively straightforward to disable and re-enable state
aggregation, time tracking and averaging at a per-cgroup level, if we
can live with losing history from the time while it was disabled, i.e.
the avgs will restart from 0 and total= will have gaps.

But it's hard and complex to stop/restart groupc->tasks[] updates, which
this patch does not attempt. So we always update groupc->tasks[] and the
PSI_ONCPU bit in psi_group_change(), even when the cgroup's PSI stats
are disabled.
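
A minimal userspace sketch of the interface (the cgroup path here is
hypothetical):

	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("/sys/fs/cgroup/mygroup/cgroup.pressure", O_WRONLY);

		if (fd < 0)
			return 1;
		/*
		 * "0" disables per-cgroup PSI accounting and hides the
		 * {cpu,memory,io,irq}.pressure files; "1" re-enables them.
		 */
		if (write(fd, "0", 1) != 1)
			return 1;
		close(fd);
		return 0;
	}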

Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 Documentation/admin-guide/cgroup-v2.rst | 17 +++++++
 include/linux/cgroup-defs.h             |  3 ++
 include/linux/psi.h                     |  2 +
 include/linux/psi_types.h               |  1 +
 kernel/cgroup/cgroup.c                  | 56 +++++++++++++++++++++++
 kernel/sched/psi.c                      | 59 ++++++++++++++++++++++---
 6 files changed, 131 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 971c418bc778..4cad4e2b31ec 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -976,6 +976,23 @@ All cgroup core files are prefixed with "cgroup."
 	killing cgroups is a process directed operation, i.e. it affects
 	the whole thread-group.
 
+  cgroup.pressure
+	A read-write single value file. The allowed values are "0" and "1".
+	The default is "1".
+
+	Writing "0" to the file will disable the cgroup PSI accounting.
+	Writing "1" to the file will re-enable the cgroup PSI accounting.
+
+	This control attribute is not hierarchical, so disabling or enabling
+	PSI accounting in a cgroup does not affect PSI accounting in its
+	descendants, and enablement does not need to be passed down from the root.
+
+	The reason this control attribute exists is that PSI accounts stalls
+	for each cgroup separately and aggregates them at each level of the
+	hierarchy. This may cause non-negligible overhead for some workloads
+	running deep in the hierarchy, in which case this control attribute
+	can be used to disable PSI accounting in the non-leaf cgroups.
+
   irq.pressure
 	A read-write nested-keyed file.
 
diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index 1283993d7ea8..cfdb74a89c5c 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -428,6 +428,9 @@ struct cgroup {
 	struct cgroup_file procs_file;	/* handle for "cgroup.procs" */
 	struct cgroup_file events_file;	/* handle for "cgroup.events" */
 
+	/* handles for "{cpu,memory,io,irq}.pressure" */
+	struct cgroup_file psi_files[NR_PSI_RESOURCES];
+
 	/*
 	 * The bitmask of subsystems enabled on the child cgroups.
 	 * ->subtree_control is the one configured through
diff --git a/include/linux/psi.h b/include/linux/psi.h
index 362a74ca1d3b..b09c0c611fa7 100644
--- a/include/linux/psi.h
+++ b/include/linux/psi.h
@@ -39,6 +39,7 @@ static inline struct psi_group *cgroup_psi(struct cgroup *cgrp)
 int psi_cgroup_alloc(struct cgroup *cgrp);
 void psi_cgroup_free(struct cgroup *cgrp);
 void cgroup_move_task(struct task_struct *p, struct css_set *to);
+void psi_cgroup_enabled_sync(struct psi_group *group);
 #endif
 
 #else /* CONFIG_PSI */
@@ -60,6 +61,7 @@ static inline void cgroup_move_task(struct task_struct *p, struct css_set *to)
 {
 	rcu_assign_pointer(p->cgroups, to);
 }
+static inline void psi_cgroup_enabled_sync(struct psi_group *group) {}
 #endif
 
 #endif /* CONFIG_PSI */
diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
index a0b746258c68..ab1f9b463df9 100644
--- a/include/linux/psi_types.h
+++ b/include/linux/psi_types.h
@@ -152,6 +152,7 @@ struct psi_trigger {
 
 struct psi_group {
 	struct psi_group *parent;
+	bool enabled;
 
 	/* Protects data used by the aggregator */
 	struct mutex avgs_lock;
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index cc228235ce38..fa8428125d62 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -3748,6 +3748,52 @@ static ssize_t cgroup_irq_pressure_write(struct kernfs_open_file *of,
 }
 #endif
 
+static int cgroup_psi_show(struct seq_file *seq, void *v)
+{
+	struct cgroup *cgrp = seq_css(seq)->cgroup;
+	struct psi_group *psi = cgroup_psi(cgrp);
+
+	seq_printf(seq, "%d\n", psi->enabled);
+
+	return 0;
+}
+
+static ssize_t cgroup_psi_write(struct kernfs_open_file *of,
+				char *buf, size_t nbytes, loff_t off)
+{
+	ssize_t ret;
+	int enable;
+	struct cgroup *cgrp;
+	struct psi_group *psi;
+
+	ret = kstrtoint(strstrip(buf), 0, &enable);
+	if (ret)
+		return ret;
+
+	if (enable < 0 || enable > 1)
+		return -ERANGE;
+
+	cgrp = cgroup_kn_lock_live(of->kn, false);
+	if (!cgrp)
+		return -ENOENT;
+
+	psi = cgroup_psi(cgrp);
+	if (psi->enabled != enable) {
+		int i;
+
+		/* show or hide {cpu,memory,io,irq}.pressure files */
+		for (i = 0; i < NR_PSI_RESOURCES; i++)
+			cgroup_file_show(&cgrp->psi_files[i], enable);
+
+		psi->enabled = enable;
+		psi_cgroup_enabled_sync(psi);
+	}
+
+	cgroup_kn_unlock(of->kn);
+
+	return nbytes;
+}
+
 static __poll_t cgroup_pressure_poll(struct kernfs_open_file *of,
 					  poll_table *pt)
 {
@@ -5146,6 +5192,7 @@ static struct cftype cgroup_base_files[] = {
 	{
 		.name = "io.pressure",
 		.flags = CFTYPE_PRESSURE,
+		.file_offset = offsetof(struct cgroup, psi_files[PSI_IO]),
 		.seq_show = cgroup_io_pressure_show,
 		.write = cgroup_io_pressure_write,
 		.poll = cgroup_pressure_poll,
@@ -5154,6 +5201,7 @@ static struct cftype cgroup_base_files[] = {
 	{
 		.name = "memory.pressure",
 		.flags = CFTYPE_PRESSURE,
+		.file_offset = offsetof(struct cgroup, psi_files[PSI_MEM]),
 		.seq_show = cgroup_memory_pressure_show,
 		.write = cgroup_memory_pressure_write,
 		.poll = cgroup_pressure_poll,
@@ -5162,6 +5210,7 @@ static struct cftype cgroup_base_files[] = {
 	{
 		.name = "cpu.pressure",
 		.flags = CFTYPE_PRESSURE,
+		.file_offset = offsetof(struct cgroup, psi_files[PSI_CPU]),
 		.seq_show = cgroup_cpu_pressure_show,
 		.write = cgroup_cpu_pressure_write,
 		.poll = cgroup_pressure_poll,
@@ -5171,12 +5220,19 @@ static struct cftype cgroup_base_files[] = {
 	{
 		.name = "irq.pressure",
 		.flags = CFTYPE_PRESSURE,
+		.file_offset = offsetof(struct cgroup, psi_files[PSI_IRQ]),
 		.seq_show = cgroup_irq_pressure_show,
 		.write = cgroup_irq_pressure_write,
 		.poll = cgroup_pressure_poll,
 		.release = cgroup_pressure_release,
 	},
 #endif
+	{
+		.name = "cgroup.pressure",
+		.flags = CFTYPE_PRESSURE,
+		.seq_show = cgroup_psi_show,
+		.write = cgroup_psi_write,
+	},
 #endif /* CONFIG_PSI */
 	{ }	/* terminate */
 };
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index 814e99b1fed3..27bd4946d563 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -181,6 +181,7 @@ static void group_init(struct psi_group *group)
 {
 	int cpu;
 
+	group->enabled = true;
 	for_each_possible_cpu(cpu)
 		seqcount_init(&per_cpu_ptr(group->pcpu, cpu)->seq);
 	group->avg_last_update = sched_clock();
@@ -696,17 +697,16 @@ static void psi_group_change(struct psi_group *group, int cpu,
 	groupc = per_cpu_ptr(group->pcpu, cpu);
 
 	/*
-	 * First we assess the aggregate resource states this CPU's
-	 * tasks have been in since the last change, and account any
-	 * SOME and FULL time these may have resulted in.
-	 *
-	 * Then we update the task counts according to the state
+	 * First we update the task counts according to the state
 	 * change requested through the @clear and @set bits.
+	 *
+	 * Then, if cgroup PSI stats accounting is enabled, we
+	 * assess the aggregate resource states this CPU's tasks
+	 * have been in since the last change, and account any
+	 * SOME and FULL time these may have resulted in.
 	 */
 	write_seqcount_begin(&groupc->seq);
 
-	record_times(groupc, now);
-
 	/*
 	 * Start with TSK_ONCPU, which doesn't have a corresponding
 	 * task count - it's just a boolean flag directly encoded in
@@ -745,6 +745,14 @@ static void psi_group_change(struct psi_group *group, int cpu,
 		if (set & (1 << t))
 			groupc->tasks[t]++;
 
+	if (!group->enabled) {
+		if (groupc->state_mask & (1 << PSI_NONIDLE))
+			record_times(groupc, now);
+		groupc->state_mask = state_mask;
+		write_seqcount_end(&groupc->seq);
+		return;
+	}
+
 	for (s = 0; s < NR_PSI_STATES; s++) {
 		if (test_state(groupc->tasks, s, state_mask & PSI_ONCPU))
 			state_mask |= (1 << s);
@@ -761,6 +769,7 @@ static void psi_group_change(struct psi_group *group, int cpu,
 	if (unlikely((state_mask & PSI_ONCPU) && cpu_curr(cpu)->in_memstall))
 		state_mask |= (1 << PSI_MEM_FULL);
 
+	record_times(groupc, now);
 	groupc->state_mask = state_mask;
 
 	write_seqcount_end(&groupc->seq);
@@ -908,6 +917,8 @@ void psi_account_irqtime(struct task_struct *task, u32 delta)
 
 	group = task_psi_group(task);
 	for_each_psi_group(group) {
+		if (!group->enabled)
+			continue;
 		groupc = per_cpu_ptr(group->pcpu, cpu);
 
 		write_seqcount_begin(&groupc->seq);
@@ -1081,6 +1092,40 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to)
 
 	task_rq_unlock(rq, task, &rf);
 }
+
+void psi_cgroup_enabled_sync(struct psi_group *group)
+{
+	int cpu;
+
+	/*
+	 * After psi_group->enabled is cleared, we don't actually
+	 * stop the percpu task accounting in each psi_group_cpu;
+	 * we only stop the test_state() loop, record_times() and
+	 * the averaging worker, see psi_group_change() for details.
+	 *
+	 * When disabling cgroup PSI, this function has nothing to sync
+	 * since the cgroup pressure files are hidden and each percpu
+	 * psi_group_cpu sees !psi_group->enabled and only does task accounting.
+	 *
+	 * When re-enabling cgroup PSI, this function uses psi_group_change()
+	 * to get the correct state mask from the test_state() loop on tasks[],
+	 * and restarts groupc->state_start from now, using .clear = .set = 0
+	 * here since no task state really changed.
+	 */
+	if (!group->enabled)
+		return;
+
+	for_each_possible_cpu(cpu) {
+		struct rq *rq = cpu_rq(cpu);
+		struct rq_flags rf;
+		u64 now;
+
+		rq_lock_irq(rq, &rf);
+		now = cpu_clock(cpu);
+		psi_group_change(group, cpu, 0, 0, now, true);
+		rq_unlock_irq(rq, &rf);
+	}
+}
 #endif /* CONFIG_CGROUPS */
 
 int psi_show(struct seq_file *m, struct psi_group *group, enum psi_res res)
-- 
2.37.2


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 10/10] sched/psi: per-cgroup PSI accounting disable/re-enable interface
  2022-08-24  8:18 ` [PATCH v3 10/10] sched/psi: per-cgroup PSI accounting disable/re-enable interface Chengming Zhou
@ 2022-08-24  9:59   ` Johannes Weiner
  2022-08-24 10:55     ` Chengming Zhou
  2022-08-25 12:28     ` Chengming Zhou
  0 siblings, 2 replies; 23+ messages in thread
From: Johannes Weiner @ 2022-08-24  9:59 UTC (permalink / raw)
  To: Chengming Zhou
  Cc: tj, mkoutny, surenb, gregkh, corbet, mingo, peterz, songmuchun,
	cgroups, linux-doc, linux-kernel

Hi Chengming,

Thanks for incorporating all the feedback. I have a few nitpicks
below, but with those considered, please add:

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

On Wed, Aug 24, 2022 at 04:18:29PM +0800, Chengming Zhou wrote:
> @@ -5171,12 +5220,19 @@ static struct cftype cgroup_base_files[] = {
>  	{
>  		.name = "irq.pressure",
>  		.flags = CFTYPE_PRESSURE,
> +		.file_offset = offsetof(struct cgroup, psi_files[PSI_IRQ]),
>  		.seq_show = cgroup_irq_pressure_show,
>  		.write = cgroup_irq_pressure_write,
>  		.poll = cgroup_pressure_poll,
>  		.release = cgroup_pressure_release,
>  	},
>  #endif
> +	{
> +		.name = "cgroup.pressure",
> +		.flags = CFTYPE_PRESSURE,
> +		.seq_show = cgroup_psi_show,
> +		.write = cgroup_psi_write,

To match the naming convention, these should be called
cgroup_pressure_show() and cgroup_pressure_write().

> @@ -745,6 +745,14 @@ static void psi_group_change(struct psi_group *group, int cpu,
>  		if (set & (1 << t))
>  			groupc->tasks[t]++;
>  
> +	if (!group->enabled) {
> +		if (groupc->state_mask & (1 << PSI_NONIDLE))
> +			record_times(groupc, now);

Thanks for the explanation in the other thread, it made sense. But can
you please add a comment to document it? Something like:

	/*
	 * On the first group change after disabling PSI, conclude
	 * the current state and flush its time. This is unlikely
	 * to matter to the user, but aggregation (get_recent_times)
	 * may have already incorporated the live state into times_prev;
	 * avoid a delta sample underflow when PSI is later re-enabled.
	 */

An unlikely() would also make sense on that branch.
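
I.e., the branch could look like this (sketch, untested):

	if (unlikely(!group->enabled)) {
		/*
		 * On the first group change after disabling PSI,
		 * conclude the current state and flush its time.
		 * This is unlikely to matter to the user, but
		 * aggregation (get_recent_times) may have already
		 * incorporated the live state into times_prev;
		 * avoid a delta sample underflow when PSI is
		 * later re-enabled.
		 */
		if (groupc->state_mask & (1 << PSI_NONIDLE))
			record_times(groupc, now);
		groupc->state_mask = state_mask;
		write_seqcount_end(&groupc->seq);
		return;
	}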

> @@ -1081,6 +1092,40 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to)
>  
>  	task_rq_unlock(rq, task, &rf);
>  }
> +
> +void psi_cgroup_enabled_sync(struct psi_group *group)
> +{
> +	int cpu;
> +
> +	/*
> +	 * After we disable psi_group->enabled, we don't actually
> +	 * stop the percpu task accounting in each psi_group_cpu;
> +	 * we only stop the test_state() loop, record_times()
> +	 * and the averaging worker, see psi_group_change() for details.
> +	 *
> +	 * When disabling cgroup PSI, this function has nothing to sync
> +	 * since cgroup pressure files are hidden and percpu psi_group_cpu
> +	 * would see !psi_group->enabled and only do task accounting.
> +	 *
> +	 * When re-enabling cgroup PSI, this function uses psi_group_change()
> +	 * to get the correct state mask from the test_state() loop on
> +	 * tasks[], and restarts groupc->state_start from now; use
> +	 * .clear = .set = 0 here since no task status really changed.
> +	 */
> +	if (!group->enabled)
> +		return;

Thanks for adding the comment, that's helpful.

I think the function would be a tad clearer and self-documenting if
you called it psi_cgroup_restart(), and only call it on enabling.

> +	for_each_possible_cpu(cpu) {
> +		struct rq *rq = cpu_rq(cpu);
> +		struct rq_flags rf;
> +		u64 now;
> +
> +		rq_lock_irq(rq, &rf);
> +		now = cpu_clock(cpu);
> +		psi_group_change(group, cpu, 0, 0, now, true);
> +		rq_unlock_irq(rq, &rf);
> +	}
> +}
>  #endif /* CONFIG_CGROUPS */
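
I.e. roughly (untested sketch; same loop as above, minus the early
return, with the caller only invoking it when flipping enabled back on):

	void psi_cgroup_restart(struct psi_group *group)
	{
		int cpu;

		for_each_possible_cpu(cpu) {
			struct rq *rq = cpu_rq(cpu);
			struct rq_flags rf;
			u64 now;

			rq_lock_irq(rq, &rf);
			now = cpu_clock(cpu);
			psi_group_change(group, cpu, 0, 0, now, true);
			rq_unlock_irq(rq, &rf);
		}
	}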

Thanks,
Johannes

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 09/10] sched/psi: cache parent psi_group to speed up groups iterate
  2022-08-24  8:18 ` [PATCH v3 09/10] sched/psi: cache parent psi_group to speed up groups iterate Chengming Zhou
@ 2022-08-24 10:18   ` Johannes Weiner
  2022-08-24 10:48     ` Chengming Zhou
  0 siblings, 1 reply; 23+ messages in thread
From: Johannes Weiner @ 2022-08-24 10:18 UTC (permalink / raw)
  To: Chengming Zhou
  Cc: tj, mkoutny, surenb, gregkh, corbet, mingo, peterz, songmuchun,
	cgroups, linux-doc, linux-kernel

Hi Chengming,

This looks generally good to me, but I have one comment:

On Wed, Aug 24, 2022 at 04:18:28PM +0800, Chengming Zhou wrote:
> @@ -772,30 +772,18 @@ static void psi_group_change(struct psi_group *group, int cpu,
>  		schedule_delayed_work(&group->avgs_work, PSI_FREQ);
>  }
>  
> -static struct psi_group *iterate_groups(struct task_struct *task, void **iter)
> +static inline struct psi_group *task_psi_group(struct task_struct *task)
>  {
> -	if (*iter == &psi_system)
> -		return NULL;
> -
>  #ifdef CONFIG_CGROUPS
> -	if (static_branch_likely(&psi_cgroups_enabled)) {
> -		struct cgroup *cgroup = NULL;
> -
> -		if (!*iter)
> -			cgroup = task->cgroups->dfl_cgrp;
> -		else
> -			cgroup = cgroup_parent(*iter);
> -
> -		if (cgroup && cgroup_parent(cgroup)) {
> -			*iter = cgroup;
> -			return cgroup_psi(cgroup);
> -		}
> -	}
> +	if (static_branch_likely(&psi_cgroups_enabled))
> +		return cgroup_psi(task_dfl_cgroup(task));
>  #endif
> -	*iter = &psi_system;
>  	return &psi_system;
>  }
>  
> +#define for_each_psi_group(group) \
> +	for (; group; group = group->parent)

It would be better to open-code this. It's hiding that it's walking
ancestors, and the name and single parameter suggest it's walking some
global list - not that the parameter is iterator AND starting point.

This makes for particularly obscure code in the discontiguous loops in
psi_task_switch():

	group = task_psi_group(task);
	for_each_psi_group(group)
		if (group == common)
			break;
	/* This looks like a second full loop: */
	for_each_psi_group(group)
		...

>  static void psi_flags_change(struct task_struct *task, int clear, int set)
>  {
>  	if (((task->psi_flags & set) ||
> @@ -815,7 +803,6 @@ void psi_task_change(struct task_struct *task, int clear, int set)
>  {
>  	int cpu = task_cpu(task);
>  	struct psi_group *group;
> -	void *iter = NULL;
>  	u64 now;
>  
>  	if (!task->pid)
> @@ -825,7 +812,8 @@ void psi_task_change(struct task_struct *task, int clear, int set)
>  
>  	now = cpu_clock(cpu);
>  
> -	while ((group = iterate_groups(task, &iter)))
> +	group = task_psi_group(task);
> +	for_each_psi_group(group)
>  		psi_group_change(group, cpu, clear, set, now, true);

task_psi_group() is never NULL, so this should be a do-while loop:

	group = task_psi_group(task);
	do {
		psi_group_change(group, cpu, clear, set, now, true);
	} while ((group = group->parent));

> @@ -834,7 +822,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>  {
>  	struct psi_group *group, *common = NULL;
>  	int cpu = task_cpu(prev);
> -	void *iter;
>  	u64 now = cpu_clock(cpu);
>  
>  	if (next->pid) {
> @@ -845,8 +832,8 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>  		 * we reach the first common ancestor. Iterate @next's
>  		 * ancestors only until we encounter @prev's ONCPU.
>  		 */
> -		iter = NULL;
> -		while ((group = iterate_groups(next, &iter))) {
> +		group = task_psi_group(next);
> +		for_each_psi_group(group) {

Ditto.

>  			if (per_cpu_ptr(group->pcpu, cpu)->state_mask &
>  			    PSI_ONCPU) {
>  				common = group;
> @@ -887,9 +874,12 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>  
>  		psi_flags_change(prev, clear, set);
>  
> -		iter = NULL;
> -		while ((group = iterate_groups(prev, &iter)) && group != common)
> +		group = task_psi_group(prev);
> +		for_each_psi_group(group) {
> +			if (group == common)
> +				break;

Ditto.

>  			psi_group_change(group, cpu, clear, set, now, wake_clock);
> +		}
>  
>  		/*
>  		 * TSK_ONCPU is handled up to the common ancestor. If we're tasked
> @@ -897,7 +887,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>  		 */
>  		if (sleep || unlikely(prev->in_memstall != next->in_memstall)) {
>  			clear &= ~TSK_ONCPU;
> -			for (; group; group = iterate_groups(prev, &iter))
> +			for_each_psi_group(group)
>  				psi_group_change(group, cpu, clear, set, now, wake_clock);

This can stay as is, group may already be NULL here.

> @@ -907,7 +897,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>  void psi_account_irqtime(struct task_struct *task, u32 delta)
>  {
>  	int cpu = task_cpu(task);
> -	void *iter = NULL;
>  	struct psi_group *group;
>  	struct psi_group_cpu *groupc;
>  	u64 now;
> @@ -917,7 +906,8 @@ void psi_account_irqtime(struct task_struct *task, u32 delta)
>  
>  	now = cpu_clock(cpu);
>  
> -	while ((group = iterate_groups(task, &iter))) {
> +	group = task_psi_group(task);
> +	for_each_psi_group(group) {
>  		groupc = per_cpu_ptr(group->pcpu, cpu);

do-while again.

With that,
Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Thanks!

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 08/10] sched/psi: consolidate cgroup_psi()
  2022-08-24  8:18 ` [PATCH v3 08/10] sched/psi: consolidate cgroup_psi() Chengming Zhou
@ 2022-08-24 10:19   ` Johannes Weiner
  0 siblings, 0 replies; 23+ messages in thread
From: Johannes Weiner @ 2022-08-24 10:19 UTC (permalink / raw)
  To: Chengming Zhou
  Cc: tj, mkoutny, surenb, gregkh, corbet, mingo, peterz, songmuchun,
	cgroups, linux-doc, linux-kernel

On Wed, Aug 24, 2022 at 04:18:27PM +0800, Chengming Zhou wrote:
> cgroup_psi() can't return the psi_group for the root cgroup, so we have
> a lot of open-coded "psi = cgroup_ino(cgrp) == 1 ? &psi_system : cgrp->psi".
> 
> This patch moves the cgroup_psi() definition to <linux/psi.h>, where
> we can return psi_system for the root cgroup, so it can handle all cgroups.
> 
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 07/10] sched/psi: add PSI_IRQ to track IRQ/SOFTIRQ pressure
  2022-08-24  8:18 ` [PATCH v3 07/10] sched/psi: add PSI_IRQ to track IRQ/SOFTIRQ pressure Chengming Zhou
@ 2022-08-24 10:46   ` Johannes Weiner
  2022-08-24 11:53     ` Chengming Zhou
  0 siblings, 1 reply; 23+ messages in thread
From: Johannes Weiner @ 2022-08-24 10:46 UTC (permalink / raw)
  To: Chengming Zhou
  Cc: tj, mkoutny, surenb, gregkh, corbet, mingo, peterz, songmuchun,
	cgroups, linux-doc, linux-kernel

On Wed, Aug 24, 2022 at 04:18:26PM +0800, Chengming Zhou wrote:
> @@ -903,6 +903,36 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>  	}
>  }
>  
> +#ifdef CONFIG_IRQ_TIME_ACCOUNTING
> +void psi_account_irqtime(struct task_struct *task, u32 delta)
> +{
> +	int cpu = task_cpu(task);
> +	void *iter = NULL;
> +	struct psi_group *group;
> +	struct psi_group_cpu *groupc;
> +	u64 now;
> +
> +	if (!task->pid)
> +		return;
> +
> +	now = cpu_clock(cpu);
> +
> +	while ((group = iterate_groups(task, &iter))) {
> +		groupc = per_cpu_ptr(group->pcpu, cpu);
> +
> +		write_seqcount_begin(&groupc->seq);
> +
> +		record_times(groupc, now);
> +		groupc->times[PSI_IRQ_FULL] += delta;
> +
> +		write_seqcount_end(&groupc->seq);
> +
> +		if (group->poll_states & (1 << PSI_IRQ_FULL))
> +			psi_schedule_poll_work(group, 1);
> +	}

Shouldn't this kick avgs_work too? If the CPU is otherwise idle,
times[PSI_IRQ_FULL] would overflow after two missed averaging runs.

avgs_work should probably also self-perpetuate when PSI_IRQ_FULL is in
changed_states. (Looking at that code, I think it can be simplified:
delete nonidle and do `if (changed_states) schedule_delayed_work()`.)
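
I.e. something like this at the end of psi_avgs_work() (untested sketch):

	collect_percpu_times(group, PSI_AVGS, &changed_states);
	...
	/* any changed state, including PSI_IRQ_FULL, keeps the worker alive */
	if (changed_states)
		schedule_delayed_work(dwork, nsecs_to_jiffies(
				group->avg_next_update - now) + 1);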

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 09/10] sched/psi: cache parent psi_group to speed up groups iterate
  2022-08-24 10:18   ` Johannes Weiner
@ 2022-08-24 10:48     ` Chengming Zhou
  0 siblings, 0 replies; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24 10:48 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: tj, mkoutny, surenb, gregkh, corbet, mingo, peterz, songmuchun,
	cgroups, linux-doc, linux-kernel

On 2022/8/24 18:18, Johannes Weiner wrote:
> Hi Chengming,
> 
> This looks generally good to me, but I have one comment:
> 
> On Wed, Aug 24, 2022 at 04:18:28PM +0800, Chengming Zhou wrote:
>> @@ -772,30 +772,18 @@ static void psi_group_change(struct psi_group *group, int cpu,
>>  		schedule_delayed_work(&group->avgs_work, PSI_FREQ);
>>  }
>>  
>> -static struct psi_group *iterate_groups(struct task_struct *task, void **iter)
>> +static inline struct psi_group *task_psi_group(struct task_struct *task)
>>  {
>> -	if (*iter == &psi_system)
>> -		return NULL;
>> -
>>  #ifdef CONFIG_CGROUPS
>> -	if (static_branch_likely(&psi_cgroups_enabled)) {
>> -		struct cgroup *cgroup = NULL;
>> -
>> -		if (!*iter)
>> -			cgroup = task->cgroups->dfl_cgrp;
>> -		else
>> -			cgroup = cgroup_parent(*iter);
>> -
>> -		if (cgroup && cgroup_parent(cgroup)) {
>> -			*iter = cgroup;
>> -			return cgroup_psi(cgroup);
>> -		}
>> -	}
>> +	if (static_branch_likely(&psi_cgroups_enabled))
>> +		return cgroup_psi(task_dfl_cgroup(task));
>>  #endif
>> -	*iter = &psi_system;
>>  	return &psi_system;
>>  }
>>  
>> +#define for_each_psi_group(group) \
>> +	for (; group; group = group->parent)
> 
> It would be better to open-code this. It's hiding that it's walking
> ancestors, and the name and single parameter suggest it's walking some
> global list - not that the parameter is iterator AND starting point.
> 
> This makes for particularly obscure code in the discontiguous loops in
> psi_task_switch():
> 
> 	group = task_psi_group(task);
> 	for_each_psi_group(group)
> 		if (group == common)
> 			break;
> 	/* This looks like a second full loop: */
> 	for_each_psi_group(group)
> 		...
> 

Good point, it's not as clear as the open-coded form; I will change these
in the next version.

Thanks!


>>  static void psi_flags_change(struct task_struct *task, int clear, int set)
>>  {
>>  	if (((task->psi_flags & set) ||
>> @@ -815,7 +803,6 @@ void psi_task_change(struct task_struct *task, int clear, int set)
>>  {
>>  	int cpu = task_cpu(task);
>>  	struct psi_group *group;
>> -	void *iter = NULL;
>>  	u64 now;
>>  
>>  	if (!task->pid)
>> @@ -825,7 +812,8 @@ void psi_task_change(struct task_struct *task, int clear, int set)
>>  
>>  	now = cpu_clock(cpu);
>>  
>> -	while ((group = iterate_groups(task, &iter)))
>> +	group = task_psi_group(task);
>> +	for_each_psi_group(group)
>>  		psi_group_change(group, cpu, clear, set, now, true);
> 
> task_psi_group() is never NULL, so this should be a do-while loop:
> 
> 	group = task_psi_group(task);
> 	do {
> 		psi_group_change(group, cpu, clear, set, now, true);
> 	} while ((group = group->parent));
> 
>> @@ -834,7 +822,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>>  {
>>  	struct psi_group *group, *common = NULL;
>>  	int cpu = task_cpu(prev);
>> -	void *iter;
>>  	u64 now = cpu_clock(cpu);
>>  
>>  	if (next->pid) {
>> @@ -845,8 +832,8 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>>  		 * we reach the first common ancestor. Iterate @next's
>>  		 * ancestors only until we encounter @prev's ONCPU.
>>  		 */
>> -		iter = NULL;
>> -		while ((group = iterate_groups(next, &iter))) {
>> +		group = task_psi_group(next);
>> +		for_each_psi_group(group) {
> 
> Ditto.
> 
>>  			if (per_cpu_ptr(group->pcpu, cpu)->state_mask &
>>  			    PSI_ONCPU) {
>>  				common = group;
>> @@ -887,9 +874,12 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>>  
>>  		psi_flags_change(prev, clear, set);
>>  
>> -		iter = NULL;
>> -		while ((group = iterate_groups(prev, &iter)) && group != common)
>> +		group = task_psi_group(prev);
>> +		for_each_psi_group(group) {
>> +			if (group == common)
>> +				break;
> 
> Ditto.
> 
>>  			psi_group_change(group, cpu, clear, set, now, wake_clock);
>> +		}
>>  
>>  		/*
>>  		 * TSK_ONCPU is handled up to the common ancestor. If we're tasked
>> @@ -897,7 +887,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>>  		 */
>>  		if (sleep || unlikely(prev->in_memstall != next->in_memstall)) {
>>  			clear &= ~TSK_ONCPU;
>> -			for (; group; group = iterate_groups(prev, &iter))
>> +			for_each_psi_group(group)
>>  				psi_group_change(group, cpu, clear, set, now, wake_clock);
> 
> This can stay as is, group may already be NULL here.
> 
>> @@ -907,7 +897,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>>  void psi_account_irqtime(struct task_struct *task, u32 delta)
>>  {
>>  	int cpu = task_cpu(task);
>> -	void *iter = NULL;
>>  	struct psi_group *group;
>>  	struct psi_group_cpu *groupc;
>>  	u64 now;
>> @@ -917,7 +906,8 @@ void psi_account_irqtime(struct task_struct *task, u32 delta)
>>  
>>  	now = cpu_clock(cpu);
>>  
>> -	while ((group = iterate_groups(task, &iter))) {
>> +	group = task_psi_group(task);
>> +	for_each_psi_group(group) {
>>  		groupc = per_cpu_ptr(group->pcpu, cpu);
> 
> do-while again.
> 
> With that,
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> 
> Thanks!

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 10/10] sched/psi: per-cgroup PSI accounting disable/re-enable interface
  2022-08-24  9:59   ` Johannes Weiner
@ 2022-08-24 10:55     ` Chengming Zhou
  2022-08-25 12:28     ` Chengming Zhou
  1 sibling, 0 replies; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24 10:55 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: tj, mkoutny, surenb, gregkh, corbet, mingo, peterz, songmuchun,
	cgroups, linux-doc, linux-kernel

On 2022/8/24 17:59, Johannes Weiner wrote:
> Hi Chengming,
> 
> Thanks for incorporating all the feedback. I have a few nitpicks
> below, but with those considered, please add:
> 
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> 
> On Wed, Aug 24, 2022 at 04:18:29PM +0800, Chengming Zhou wrote:
>> @@ -5171,12 +5220,19 @@ static struct cftype cgroup_base_files[] = {
>>  	{
>>  		.name = "irq.pressure",
>>  		.flags = CFTYPE_PRESSURE,
>> +		.file_offset = offsetof(struct cgroup, psi_files[PSI_IRQ]),
>>  		.seq_show = cgroup_irq_pressure_show,
>>  		.write = cgroup_irq_pressure_write,
>>  		.poll = cgroup_pressure_poll,
>>  		.release = cgroup_pressure_release,
>>  	},
>>  #endif
>> +	{
>> +		.name = "cgroup.pressure",
>> +		.flags = CFTYPE_PRESSURE,
>> +		.seq_show = cgroup_psi_show,
>> +		.write = cgroup_psi_write,
> 
> To match the naming convention, these should be called
> cgroup_pressure_show() and cgroup_pressure_write().

Hello,

I forgot to change the names, will do.

> 
>> @@ -745,6 +745,14 @@ static void psi_group_change(struct psi_group *group, int cpu,
>>  		if (set & (1 << t))
>>  			groupc->tasks[t]++;
>>  
>> +	if (!group->enabled) {
>> +		if (groupc->state_mask & (1 << PSI_NONIDLE))
>> +			record_times(groupc, now);
> 
> Thanks for the explanation in the other thread, it made sense. But can
> you please add a comment to document it? Something like:
> 
> 	/*
> 	 * On the first group change after disabling PSI, conclude
> 	 * the current state and flush its time. This is unlikely
> 	 * to matter to the user, but aggregation (get_recent_times)
> 	 * may have already incorporated the live state into times_prev;
> 	 * avoid a delta sample underflow when PSI is later re-enabled.
> 	 */
> 
> An unlikely() would also make sense on that branch.

The comment is very helpful, and unlikely() is also a very good point;
will add both in the next version.

> 
>> @@ -1081,6 +1092,40 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to)
>>  
>>  	task_rq_unlock(rq, task, &rf);
>>  }
>> +
>> +void psi_cgroup_enabled_sync(struct psi_group *group)
>> +{
>> +	int cpu;
>> +
>> +	/*
>> +	 * After we disable psi_group->enabled, we don't actually
>> +	 * stop the percpu task accounting in each psi_group_cpu;
>> +	 * we only stop the test_state() loop, record_times()
>> +	 * and the averaging worker, see psi_group_change() for details.
>> +	 *
>> +	 * When disabling cgroup PSI, this function has nothing to sync
>> +	 * since cgroup pressure files are hidden and percpu psi_group_cpu
>> +	 * would see !psi_group->enabled and only do task accounting.
>> +	 *
>> +	 * When re-enabling cgroup PSI, this function uses psi_group_change()
>> +	 * to get the correct state mask from the test_state() loop on
>> +	 * tasks[], and restarts groupc->state_start from now; use
>> +	 * .clear = .set = 0 here since no task status really changed.
>> +	 */
>> +	if (!group->enabled)
>> +		return;
> 
> Thanks for adding the comment, that's helpful.
> 
> I think the function would be a tad clearer and self-documenting if
> you called it psi_cgroup_restart(), and only call it on enabling.

Ok, it's better, will do.

Thanks for your review!

> 
>> +	for_each_possible_cpu(cpu) {
>> +		struct rq *rq = cpu_rq(cpu);
>> +		struct rq_flags rf;
>> +		u64 now;
>> +
>> +		rq_lock_irq(rq, &rf);
>> +		now = cpu_clock(cpu);
>> +		psi_group_change(group, cpu, 0, 0, now, true);
>> +		rq_unlock_irq(rq, &rf);
>> +	}
>> +}
>>  #endif /* CONFIG_CGROUPS */
> 
> Thanks,
> Johannes

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 07/10] sched/psi: add PSI_IRQ to track IRQ/SOFTIRQ pressure
  2022-08-24 10:46   ` Johannes Weiner
@ 2022-08-24 11:53     ` Chengming Zhou
  0 siblings, 0 replies; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24 11:53 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: tj, mkoutny, surenb, gregkh, corbet, mingo, peterz, songmuchun,
	cgroups, linux-doc, linux-kernel

On 2022/8/24 18:46, Johannes Weiner wrote:
> On Wed, Aug 24, 2022 at 04:18:26PM +0800, Chengming Zhou wrote:
>> @@ -903,6 +903,36 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>>  	}
>>  }
>>  
>> +#ifdef CONFIG_IRQ_TIME_ACCOUNTING
>> +void psi_account_irqtime(struct task_struct *task, u32 delta)
>> +{
>> +	int cpu = task_cpu(task);
>> +	void *iter = NULL;
>> +	struct psi_group *group;
>> +	struct psi_group_cpu *groupc;
>> +	u64 now;
>> +
>> +	if (!task->pid)
>> +		return;
>> +
>> +	now = cpu_clock(cpu);
>> +
>> +	while ((group = iterate_groups(task, &iter))) {
>> +		groupc = per_cpu_ptr(group->pcpu, cpu);
>> +
>> +		write_seqcount_begin(&groupc->seq);
>> +
>> +		record_times(groupc, now);
>> +		groupc->times[PSI_IRQ_FULL] += delta;
>> +
>> +		write_seqcount_end(&groupc->seq);
>> +
>> +		if (group->poll_states & (1 << PSI_IRQ_FULL))
>> +			psi_schedule_poll_work(group, 1);
>> +	}
> 
> Shouldn't this kick avgs_work too? If the CPU is otherwise idle,
> times[PSI_IRQ_FULL] would overflow after two missed averaging runs.

If the CPU is idle, task->pid == 0, so no times[PSI_IRQ_FULL] would accumulate, right?
I was thinking that if task->pid != 0, avgs_work should already be active.

Not sure, maybe I missed something.

> 
> avgs_work should probably also self-perpetuate when PSI_IRQ_FULL is in
> changed_states. (Looking at that code, I think it can be simplified:
> delete nonidle and do `if (changed_states) schedule_delayed_work()`.)

```
collect_percpu_times(group, PSI_AVGS, &changed_states);
nonidle = changed_states & (1 << PSI_NONIDLE);

if (nonidle) {
	schedule_delayed_work(dwork, nsecs_to_jiffies(
			group->avg_next_update - now) + 1);
}
```

Yes, changed_states includes PSI_IRQ_FULL, but here we only check nonidle
= changed_states & (1 << PSI_NONIDLE), so it will not restart
if only PSI_IRQ_FULL changed?

If we use `if (changed_states) schedule_delayed_work()`, avgs_work will
self-restart when only PSI_IRQ_FULL changes?

Thanks!


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 05/10] sched/psi: optimize task switch inside shared cgroups again
  2022-08-24  8:18 ` [PATCH v3 05/10] sched/psi: optimize task switch inside shared cgroups again Chengming Zhou
@ 2022-08-24 14:06   ` Johannes Weiner
  2022-08-24 15:30     ` Chengming Zhou
  0 siblings, 1 reply; 23+ messages in thread
From: Johannes Weiner @ 2022-08-24 14:06 UTC (permalink / raw)
  To: Chengming Zhou
  Cc: tj, mkoutny, surenb, gregkh, corbet, mingo, peterz, songmuchun,
	cgroups, linux-doc, linux-kernel

On Wed, Aug 24, 2022 at 04:18:24PM +0800, Chengming Zhou wrote:
> commit 4117cebf1a9f ("psi: Optimize task switch inside shared cgroups")
> defers prev task sleep handling to psi_task_switch(), so we don't need
> to clear and set TSK_ONCPU state for common cgroups.
> 
>     A
>     |
>     B
>    / \
>   C   D
>  /     \
> prev   next
> 
> After that commit, psi_task_switch() does:
> 1. psi_group_change(next, .set=TSK_ONCPU) for D
> 2. psi_group_change(prev, .clear=TSK_ONCPU | TSK_RUNNING) for C
> 3. psi_group_change(prev, .clear=TSK_RUNNING) for B, A
> 
> But there is a limitation "prev->psi_flags == next->psi_flags" that,
> if not satisfied, makes this cgroup optimization unusable for both
> the sleep switch and running switch cases. For example:
> 
> prev->in_memstall != next->in_memstall when sleep switch:
> 1. psi_group_change(next, .set=TSK_ONCPU) for D, B, A
> 2. psi_group_change(prev, .clear=TSK_ONCPU | TSK_RUNNING) for C, B, A
> 
> prev->in_memstall != next->in_memstall when running switch:
> 1. psi_group_change(next, .set=TSK_ONCPU) for D, B, A
> 2. psi_group_change(prev, .clear=TSK_ONCPU) for C, B, A
> 
> The reason why this limitation exists is that we consider a group to be
> PSI_MEM_FULL if the CPU is actively reclaiming and nothing productive
> could run even if it were runnable. So when CPU curr changes from prev
> to next and their in_memstall status is different, we have to change
> the PSI_MEM_FULL status for their common cgroups.
> 
> This patch removes this limitation by making psi_group_change() change
> PSI_MEM_FULL status depend on CPU curr->in_memstall status.
> 
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>

Hoo boy, that took me a second.


Way back when PSI_MEM_FULL was accounted from the timer tick, task
switching could simply iterate next and prev to the common ancestor to
update TSK_ONCPU and be done.

Then memstall ticks were replaced with checking curr->in_memstall
directly in psi_group_change(). That meant that now if the task switch
was between a memstall and a !memstall task, we had to iterate through
the common ancestors at least ONCE to fix up their state_masks.

We added the identical_state filter to make sure the common ancestor
elimination was skipped in that case. It seems that was always a
little too eager, because it caused us to walk the common ancestors
*twice* instead of the required once: the iteration for next could
have stopped at the common ancestor; prev could have updated TSK_ONCPU
up to the common ancestor, then finish to the root without changing
any flags, just to get the new curr->in_memstall into the state_masks.

This patch recognizes this and makes it so that we walk to the root
exactly once if state_mask needs updating.


Unless I missed anything, would you mind adding this to the changelog?

I'm not quite sure how 4117cebf1a9f ("psi: Optimize task switch inside
shared cgroups") fits into the picture. That optimized the sleep case,
but the sleep case never had the common ancestor optimization (the dq
would have already cleared TSK_ONCPU up to the root). Let me know if I
am mistaken.

AFAICS, this patch is simply catching up on a missed
optimization that could have been done in 7fae6c8171d2 ("psi: Use
ONCPU state tracking machinery to detect reclaim") directly already.

So I think it all makes sense. I have just two notes on the diff:

> @@ -820,8 +820,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>  	u64 now = cpu_clock(cpu);
>  
>  	if (next->pid) {
> -		bool identical_state;
> -
>  		psi_flags_change(next, 0, TSK_ONCPU);
>  		/*
>  		 * When switching between tasks that have an identical
> @@ -829,11 +827,9 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>  		 * we reach the first common ancestor. Iterate @next's
>  		 * ancestors only until we encounter @prev's ONCPU.
>  		 */

The comment is rather stale now. Could you change it to this?

		/*
		 * Set TSK_ONCPU on @next's cgroups. If @next shares any
		 * ancestors with @prev, those will already have @prev's
		 * TSK_ONCPU bit set, and we can stop the iteration there.
		 */

> -		identical_state = prev->psi_flags == next->psi_flags;
>  		iter = NULL;
>  		while ((group = iterate_groups(next, &iter))) {
> -			if (identical_state &&
> -			    per_cpu_ptr(group->pcpu, cpu)->tasks[NR_ONCPU]) {
> +			if (per_cpu_ptr(group->pcpu, cpu)->tasks[NR_ONCPU]) {
>  				common = group;
>  				break;
>  			}
> @@ -880,7 +876,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>  		 * TSK_ONCPU is handled up to the common ancestor. If we're tasked
>  		 * with dequeuing too, finish that for the rest of the hierarchy.
>  		 */
> -		if (sleep) {
> +		if (sleep || unlikely(prev->in_memstall != next->in_memstall)) {
>  			clear &= ~TSK_ONCPU;
>  			for (; group; group = iterate_groups(prev, &iter))
>  				psi_group_change(group, cpu, clear, set, now, wake_clock);

Okay, this computes too. But it is somewhat special-cased, without
explaining why the memstall state in particular matters. Instead of
focusing on the exceptions though, can we just generalize this a bit?

		/*
		 * TSK_ONCPU is handled up to the common ancestor. If there are
		 * any other differences between the two tasks (e.g. prev goes
		 * to sleep, or only one task is memstall), finish propagating
		 * those differences all the way up to the root.
		 */
		if ((prev->psi_flags ^ next->psi_flags) & ~TSK_ONCPU) {
			clear &= ~TSK_ONCPU;
			for (; group; group = iterate_groups(prev, &iter))
				psi_group_change(group, cpu, clear, set, now, wake_clock);
		}

Thanks
Johannes

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 05/10] sched/psi: optimize task switch inside shared cgroups again
  2022-08-24 14:06   ` Johannes Weiner
@ 2022-08-24 15:30     ` Chengming Zhou
  0 siblings, 0 replies; 23+ messages in thread
From: Chengming Zhou @ 2022-08-24 15:30 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: tj, mkoutny, surenb, gregkh, corbet, mingo, peterz, songmuchun,
	cgroups, linux-doc, linux-kernel

On 2022/8/24 22:06, Johannes Weiner wrote:
> On Wed, Aug 24, 2022 at 04:18:24PM +0800, Chengming Zhou wrote:
>> commit 4117cebf1a9f ("psi: Optimize task switch inside shared cgroups")
>> defers prev task sleep handling to psi_task_switch(), so we don't need
>> to clear and set TSK_ONCPU state for common cgroups.
>>
>>     A
>>     |
>>     B
>>    / \
>>   C   D
>>  /     \
>> prev   next
>>
>> After that commit, psi_task_switch() does:
>> 1. psi_group_change(next, .set=TSK_ONCPU) for D
>> 2. psi_group_change(prev, .clear=TSK_ONCPU | TSK_RUNNING) for C
>> 3. psi_group_change(prev, .clear=TSK_RUNNING) for B, A
>>
>> But there is a limitation "prev->psi_flags == next->psi_flags" that,
>> if not satisfied, makes this cgroup optimization unusable for both
>> the sleep switch and running switch cases. For example:
>>
>> prev->in_memstall != next->in_memstall when sleep switch:
>> 1. psi_group_change(next, .set=TSK_ONCPU) for D, B, A
>> 2. psi_group_change(prev, .clear=TSK_ONCPU | TSK_RUNNING) for C, B, A
>>
>> prev->in_memstall != next->in_memstall when running switch:
>> 1. psi_group_change(next, .set=TSK_ONCPU) for D, B, A
>> 2. psi_group_change(prev, .clear=TSK_ONCPU) for C, B, A
>>
>> The reason why this limitation exists is that we consider a group to be
>> PSI_MEM_FULL if the CPU is actively reclaiming and nothing productive
>> could run even if it were runnable. So when CPU curr changes from prev
>> to next and their in_memstall status is different, we have to change
>> the PSI_MEM_FULL status for their common cgroups.
>>
>> This patch removes this limitation by making psi_group_change() change
>> PSI_MEM_FULL status depend on CPU curr->in_memstall status.
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
> 
> Hoo boy, that took me a second.
> 

Thanks for your time. :-)

> 
> Way back when PSI_MEM_FULL was accounted from the timer tick, task
> switching could simply iterate next and prev to the common ancestor to
> update TSK_ONCPU and be done.
> 
> Then memstall ticks were replaced with checking curr->in_memstall
> directly in psi_group_change(). That meant that now if the task switch
> was between a memstall and a !memstall task, we had to iterate through
> the common ancestors at least ONCE to fix up their state_masks.
> 
> We added the identical_state filter to make sure the common ancestor
> elimination was skipped in that case. It seems that was always a
> little too eager, because it caused us to walk the common ancestors
> *twice* instead of the required once: the iteration for next could
> have stopped at the common ancestor; prev could have updated TSK_ONCPU
> up to the common ancestor, then finish to the root without changing
> any flags, just to get the new curr->in_memstall into the state_masks.
> 
> This patch recognizes this and makes it so that we walk to the root
> exactly once if state_mask needs updating.
> 
> 
> Unless I missed anything, would you mind adding this to the changelog?

Your explanation is very clear and accurate, will add it.

> 
> I'm not quite sure how 4117cebf1a9f ("psi: Optimize task switch inside
> shared cgroups") fits into the picture. That optimized the sleep case,
> but the sleep case never had the common ancestor optimization (the dq
> would have already cleared TSK_ONCPU up to the root). Let me know if I
> am mistaken.

That commit skipped clearing TSK_ONCPU at dequeue time when sleeping, so it
also has the common ancestor optimization.

> 
> AFAICS, this patch is simply catching up on a missed
> optimization that could have been done in 7fae6c8171d2 ("psi: Use
> ONCPU state tracking machinery to detect reclaim") directly already.

Yes, apart from catching up on a missed optimization, I later found in testing
that this patch is necessary for the next patch 06/10.

Imagine we walk the common ancestors twice:
(1) psi_group_change(.clear = 0, .set = TSK_ONCPU)
(2) psi_group_change(.clear = TSK_ONCPU, .set = 0)

We previously used tasks[NR_ONCPU] to record TSK_ONCPU, so tasks[NR_ONCPU]++
in (1) and then tasks[NR_ONCPU]-- in (2); tasks[NR_ONCPU] is still correct.

Patch 06/10 changes to using one bit in the state mask to record TSK_ONCPU,
so the PSI_ONCPU bit will be set in (1) but then cleared in (2), which leaves
the psi_group_cpu with tasks running but without the PSI_ONCPU bit set!

With this patch, we never walk the common ancestors twice, so we don't
have the above problem anymore.
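
To spell that out, here is an illustrative userspace sketch (not kernel
code) of why the counter survives the double walk but the one bit does not:

```
#define PSI_ONCPU	(1 << 0)

int main(void)
{
	unsigned int state_mask = PSI_ONCPU;	/* prev is on the CPU */
	int tasks_oncpu = 1;			/* old tasks[NR_ONCPU] */

	/* (1) set TSK_ONCPU for next on a common ancestor: */
	state_mask |= PSI_ONCPU;
	tasks_oncpu++;				/* bit == 1, count == 2 */

	/* (2) clear TSK_ONCPU for prev on the same ancestor: */
	state_mask &= ~PSI_ONCPU;
	tasks_oncpu--;				/* bit == 0, count == 1 */

	/*
	 * next is still on the CPU: the old counter still shows it,
	 * but the one-bit PSI_ONCPU encoding was lost.
	 */
	return 0;
}
```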

> 
> So I think it all makes sense. I have just two notes on the diff:
> 
>> @@ -820,8 +820,6 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>>  	u64 now = cpu_clock(cpu);
>>  
>>  	if (next->pid) {
>> -		bool identical_state;
>> -
>>  		psi_flags_change(next, 0, TSK_ONCPU);
>>  		/*
>>  		 * When switching between tasks that have an identical
>> @@ -829,11 +827,9 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>>  		 * we reach the first common ancestor. Iterate @next's
>>  		 * ancestors only until we encounter @prev's ONCPU.
>>  		 */
> 
> The comment is rather stale now. Could you change it to this?

Good, will update the comment.

> 
> 		/*
> 		 * Set TSK_ONCPU on @next's cgroups. If @next shares any
> 		 * ancestors with @prev, those will already have @prev's
> 		 * TSK_ONCPU bit set, and we can stop the iteration there.
> 		 */
> 
>> -		identical_state = prev->psi_flags == next->psi_flags;
>>  		iter = NULL;
>>  		while ((group = iterate_groups(next, &iter))) {
>> -			if (identical_state &&
>> -			    per_cpu_ptr(group->pcpu, cpu)->tasks[NR_ONCPU]) {
>> +			if (per_cpu_ptr(group->pcpu, cpu)->tasks[NR_ONCPU]) {
>>  				common = group;
>>  				break;
>>  			}
>> @@ -880,7 +876,7 @@ void psi_task_switch(struct task_struct *prev, struct task_struct *next,
>>  		 * TSK_ONCPU is handled up to the common ancestor. If we're tasked
>>  		 * with dequeuing too, finish that for the rest of the hierarchy.
>>  		 */
>> -		if (sleep) {
>> +		if (sleep || unlikely(prev->in_memstall != next->in_memstall)) {
>>  			clear &= ~TSK_ONCPU;
>>  			for (; group; group = iterate_groups(prev, &iter))
>>  				psi_group_change(group, cpu, clear, set, now, wake_clock);
> 
> Okay, this computes too. But it is somewhat special-cased, without
> explaining why the memstall state in particular matters. Instead of
> focusing on the exceptions though, can we just generalize this a bit?
> 
> 		/*
> 		 * TSK_ONCPU is handled up to the common ancestor. If there are
> 		 * any other differences between the two tasks (e.g. prev goes
> 		 * to sleep, or only one task is memstall), finish propagating
> 		 * those differences all the way up to the root.
> 		 */
> 		if ((prev->psi_flags ^ next->psi_flags) & ~TSK_ONCPU) {
> 			clear &= ~TSK_ONCPU;
> 			for (; group; group = iterate_groups(prev, &iter))
> 				psi_group_change(group, cpu, clear, set, now, wake_clock);
> 		}

I think this is much better and the comment is very clear!

Thanks.


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 10/10] sched/psi: per-cgroup PSI accounting disable/re-enable interface
  2022-08-24  9:59   ` Johannes Weiner
  2022-08-24 10:55     ` Chengming Zhou
@ 2022-08-25 12:28     ` Chengming Zhou
  2022-08-25 13:20       ` Johannes Weiner
  1 sibling, 1 reply; 23+ messages in thread
From: Chengming Zhou @ 2022-08-25 12:28 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: tj, mkoutny, surenb, gregkh, corbet, mingo, peterz, songmuchun,
	cgroups, linux-doc, linux-kernel

On 2022/8/24 17:59, Johannes Weiner wrote:
> Hi Chengming,
> 
> Thanks for incorporating all the feedback. I have a few nitpicks
> below, but with those considered, please add:
> 
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> 
> On Wed, Aug 24, 2022 at 04:18:29PM +0800, Chengming Zhou wrote:
>> @@ -5171,12 +5220,19 @@ static struct cftype cgroup_base_files[] = {
>>  	{
>>  		.name = "irq.pressure",
>>  		.flags = CFTYPE_PRESSURE,
>> +		.file_offset = offsetof(struct cgroup, psi_files[PSI_IRQ]),
>>  		.seq_show = cgroup_irq_pressure_show,
>>  		.write = cgroup_irq_pressure_write,
>>  		.poll = cgroup_pressure_poll,
>>  		.release = cgroup_pressure_release,
>>  	},
>>  #endif
>> +	{
>> +		.name = "cgroup.pressure",
>> +		.flags = CFTYPE_PRESSURE,
>> +		.seq_show = cgroup_psi_show,
>> +		.write = cgroup_psi_write,
> 
> To match the naming convention, these should be called
> cgroup_pressure_show() and cgroup_pressure_write().

I just found that cgroup_pressure_write() already exists, so I changed the
names to cgroup_pressure_enable_show() and cgroup_pressure_enable_write(),
since this file name is simplified from "cgroup.pressure.enable".

Thanks.

> 
>> @@ -745,6 +745,14 @@ static void psi_group_change(struct psi_group *group, int cpu,
>>  		if (set & (1 << t))
>>  			groupc->tasks[t]++;
>>  
>> +	if (!group->enabled) {
>> +		if (groupc->state_mask & (1 << PSI_NONIDLE))
>> +			record_times(groupc, now);
> 
> Thanks for the explanation in the other thread, it made sense. But can
> you please add a comment to document it? Something like:
> 
> 	/*
> 	 * On the first group change after disabling PSI, conclude
> 	 * the current state and flush its time. This is unlikely
> 	 * to matter to the user, but aggregation (get_recent_times)
> 	 * may have already incorporated the live state into times_prev;
> 	 * avoid a delta sample underflow when PSI is later re-enabled.
> 	 */
> 
> An unlikely() would also make sense on that branch.
> 
>> @@ -1081,6 +1092,40 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to)
>>  
>>  	task_rq_unlock(rq, task, &rf);
>>  }
>> +
>> +void psi_cgroup_enabled_sync(struct psi_group *group)
>> +{
>> +	int cpu;
>> +
>> +	/*
>> +	 * After we disable psi_group->enabled, we don't actually
>> +	 * stop the percpu task accounting in each psi_group_cpu;
>> +	 * we only stop the test_state() loop, record_times()
>> +	 * and the averaging worker, see psi_group_change() for details.
>> +	 *
>> +	 * When disabling cgroup PSI, this function has nothing to sync
>> +	 * since cgroup pressure files are hidden and percpu psi_group_cpu
>> +	 * would see !psi_group->enabled and only do task accounting.
>> +	 *
>> +	 * When re-enabling cgroup PSI, this function uses psi_group_change()
>> +	 * to get the correct state mask from the test_state() loop on
>> +	 * tasks[], and restarts groupc->state_start from now; use
>> +	 * .clear = .set = 0 here since no task status really changed.
>> +	 */
>> +	if (!group->enabled)
>> +		return;
> 
> Thanks for adding the comment, that's helpful.
> 
> I think the function would be a tad clearer and self-documenting if
> you called it psi_cgroup_restart(), and only call it on enabling.
> 
>> +	for_each_possible_cpu(cpu) {
>> +		struct rq *rq = cpu_rq(cpu);
>> +		struct rq_flags rf;
>> +		u64 now;
>> +
>> +		rq_lock_irq(rq, &rf);
>> +		now = cpu_clock(cpu);
>> +		psi_group_change(group, cpu, 0, 0, now, true);
>> +		rq_unlock_irq(rq, &rf);
>> +	}
>> +}
>>  #endif /* CONFIG_CGROUPS */
> 
> Thanks,
> Johannes

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 10/10] sched/psi: per-cgroup PSI accounting disable/re-enable interface
  2022-08-25 12:28     ` Chengming Zhou
@ 2022-08-25 13:20       ` Johannes Weiner
  2022-08-25 13:29         ` Chengming Zhou
  0 siblings, 1 reply; 23+ messages in thread
From: Johannes Weiner @ 2022-08-25 13:20 UTC (permalink / raw)
  To: Chengming Zhou
  Cc: tj, mkoutny, surenb, gregkh, corbet, mingo, peterz, songmuchun,
	cgroups, linux-doc, linux-kernel

On Thu, Aug 25, 2022 at 08:28:39PM +0800, Chengming Zhou wrote:
> On 2022/8/24 17:59, Johannes Weiner wrote:
> > Hi Chengming,
> > 
> > Thanks for incorporating all the feedback. I have a few nitpicks
> > below, but with those considered, please add:
> > 
> > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> > 
> > On Wed, Aug 24, 2022 at 04:18:29PM +0800, Chengming Zhou wrote:
> >> @@ -5171,12 +5220,19 @@ static struct cftype cgroup_base_files[] = {
> >>  	{
> >>  		.name = "irq.pressure",
> >>  		.flags = CFTYPE_PRESSURE,
> >> +		.file_offset = offsetof(struct cgroup, psi_files[PSI_IRQ]),
> >>  		.seq_show = cgroup_irq_pressure_show,
> >>  		.write = cgroup_irq_pressure_write,
> >>  		.poll = cgroup_pressure_poll,
> >>  		.release = cgroup_pressure_release,
> >>  	},
> >>  #endif
> >> +	{
> >> +		.name = "cgroup.pressure",
> >> +		.flags = CFTYPE_PRESSURE,
> >> +		.seq_show = cgroup_psi_show,
> >> +		.write = cgroup_psi_write,
> > 
> > To match the naming convention, these should be called
> > cgroup_pressure_show() and cgroup_pressure_write().
> 
>> I just found that cgroup_pressure_write() already exists, so I changed the
>> names to cgroup_pressure_enable_show() and cgroup_pressure_enable_write(),
>> since this file name is simplified from "cgroup.pressure.enable".

That makes two outliers instead of one. It's probably better to steal
cgroup_pressure_write for cgroup.pressure, and rename the currently
misnamed helper. How about do_pressure_write()? pressure_write()?
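
Something like this, perhaps (sketch, untested):

	static ssize_t pressure_write(struct kernfs_open_file *of, char *buf,
				      size_t nbytes, enum psi_res res)
	{
		/* body of today's cgroup_pressure_write() helper */
		...
	}

	static ssize_t cgroup_cpu_pressure_write(struct kernfs_open_file *of,
						 char *buf, size_t nbytes,
						 loff_t off)
	{
		return pressure_write(of, buf, nbytes, PSI_CPU);
	}

which frees up the cgroup_pressure_write() name for cgroup.pressure.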

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 10/10] sched/psi: per-cgroup PSI accounting disable/re-enable interface
  2022-08-25 13:20       ` Johannes Weiner
@ 2022-08-25 13:29         ` Chengming Zhou
  0 siblings, 0 replies; 23+ messages in thread
From: Chengming Zhou @ 2022-08-25 13:29 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: tj, mkoutny, surenb, gregkh, corbet, mingo, peterz, songmuchun,
	cgroups, linux-doc, linux-kernel

On 2022/8/25 21:20, Johannes Weiner wrote:
> On Thu, Aug 25, 2022 at 08:28:39PM +0800, Chengming Zhou wrote:
>> On 2022/8/24 17:59, Johannes Weiner wrote:
>>> Hi Chengming,
>>>
>>> Thanks for incorporating all the feedback. I have a few nitpicks
>>> below, but with those considered, please add:
>>>
>>> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
>>>
>>> On Wed, Aug 24, 2022 at 04:18:29PM +0800, Chengming Zhou wrote:
>>>> @@ -5171,12 +5220,19 @@ static struct cftype cgroup_base_files[] = {
>>>>  	{
>>>>  		.name = "irq.pressure",
>>>>  		.flags = CFTYPE_PRESSURE,
>>>> +		.file_offset = offsetof(struct cgroup, psi_files[PSI_IRQ]),
>>>>  		.seq_show = cgroup_irq_pressure_show,
>>>>  		.write = cgroup_irq_pressure_write,
>>>>  		.poll = cgroup_pressure_poll,
>>>>  		.release = cgroup_pressure_release,
>>>>  	},
>>>>  #endif
>>>> +	{
>>>> +		.name = "cgroup.pressure",
>>>> +		.flags = CFTYPE_PRESSURE,
>>>> +		.seq_show = cgroup_psi_show,
>>>> +		.write = cgroup_psi_write,
>>>
>>> To match the naming convention, these should be called
>>> cgroup_pressure_show() and cgroup_pressure_write().
>>
>> I just find cgroup_pressure_write() already exists, so I change the names
>> to cgroup_pressure_enable_show() and cgroup_pressure_enable_write(),
>> since this file name is simplified from "cgroup.pressure.enable".
> 
> That makes two outliers instead of one. It's probably better to steal
> cgroup_pressure_write for cgroup.pressure, and rename the currently
> misnamed helper. How about do_pressure_write()? pressure_write()?

Ok, I will change that helper to pressure_write().

Thanks.


^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2022-08-25 13:30 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-24  8:18 [PATCH v3 00/10] sched/psi: some optimization and extension Chengming Zhou
2022-08-24  8:18 ` [PATCH v3 01/10] sched/psi: fix periodic aggregation shut off Chengming Zhou
2022-08-24  8:18 ` [PATCH v3 02/10] sched/psi: don't create cgroup PSI files when psi_disabled Chengming Zhou
2022-08-24  8:18 ` [PATCH v3 03/10] sched/psi: save percpu memory when !psi_cgroups_enabled Chengming Zhou
2022-08-24  8:18 ` [PATCH v3 04/10] sched/psi: move private helpers to sched/stats.h Chengming Zhou
2022-08-24  8:18 ` [PATCH v3 05/10] sched/psi: optimize task switch inside shared cgroups again Chengming Zhou
2022-08-24 14:06   ` Johannes Weiner
2022-08-24 15:30     ` Chengming Zhou
2022-08-24  8:18 ` [PATCH v3 06/10] sched/psi: remove NR_ONCPU task accounting Chengming Zhou
2022-08-24  8:18 ` [PATCH v3 07/10] sched/psi: add PSI_IRQ to track IRQ/SOFTIRQ pressure Chengming Zhou
2022-08-24 10:46   ` Johannes Weiner
2022-08-24 11:53     ` Chengming Zhou
2022-08-24  8:18 ` [PATCH v3 08/10] sched/psi: consolidate cgroup_psi() Chengming Zhou
2022-08-24 10:19   ` Johannes Weiner
2022-08-24  8:18 ` [PATCH v3 09/10] sched/psi: cache parent psi_group to speed up groups iterate Chengming Zhou
2022-08-24 10:18   ` Johannes Weiner
2022-08-24 10:48     ` Chengming Zhou
2022-08-24  8:18 ` [PATCH v3 10/10] sched/psi: per-cgroup PSI accounting disable/re-enable interface Chengming Zhou
2022-08-24  9:59   ` Johannes Weiner
2022-08-24 10:55     ` Chengming Zhou
2022-08-25 12:28     ` Chengming Zhou
2022-08-25 13:20       ` Johannes Weiner
2022-08-25 13:29         ` Chengming Zhou
