* [PATCH] sched/uclamp: Fix a bug in propagating uclamp value in new cgroups
From: Qais Yousef @ 2019-12-24 11:54 UTC
  To: Ingo Molnar, Peter Zijlstra
  Cc: Vincent Guittot, Dietmar Eggemann, Tejun Heo, surenb,
	Patrick Bellasi, Doug Smythies, Juri Lelli, Ben Segall,
	Mel Gorman, linux-kernel, Qais Yousef

When a new cgroup is created, the effective uclamp values aren't updated
by a call to cpu_util_update_eff(), which looks at the hierarchy and
updates to the most restrictive values.
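
A minimal standalone sketch of the "most restrictive" rule (illustrative
only -- the struct and field names are made up, this is not the kernel
implementation):

	/*
	 * Each group's effective value is its requested value capped by
	 * the parent's effective value.
	 */
	#include <stdio.h>

	struct group {
		unsigned int req_min, req_max;	/* requested uclamp.{min,max} */
		unsigned int eff_min, eff_max;	/* effective values */
		struct group *parent;
	};

	static void update_eff(struct group *g)
	{
		g->eff_min = g->req_min;
		g->eff_max = g->req_max;

		if (g->parent) {
			if (g->eff_min > g->parent->eff_min)
				g->eff_min = g->parent->eff_min;
			if (g->eff_max > g->parent->eff_max)
				g->eff_max = g->parent->eff_max;
		}
	}

	int main(void)
	{
		/* root defaults to 1024/1024, like root_task_group */
		struct group root   = { 1024, 1024, 1024, 1024, NULL };
		/* a parent group restricted to uclamp.max = 512 */
		struct group parent = { .req_max = 512, .parent = &root };
		/* a newly created child, default requests of 1024/1024 */
		struct group child  = { .req_min = 1024, .req_max = 1024,
					.parent = &parent };

		update_eff(&parent);
		update_eff(&child);	/* skipping this is the bug: child keeps 1024/1024 */

		printf("child effective uclamp: min=%u max=%u\n",
		       child.eff_min, child.eff_max);
		return 0;
	}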

Fix it by calling cpu_util_update_eff() when a new cgroup becomes
online.

Without this change, the newly created cgroup uses the default
root_task_group uclamp values, which are 1024 for both uclamp_{min, max}.
This causes the rq to be clamped to max, hence causing the system to run
at max frequency.

The problem was observed on Ubuntu server and was reproduced on Debian
and Buildroot rootfs.

By default, Ubuntu and Debian create a cpu controller cgroup hierarchy
and add all tasks to it, which creates enough noise to keep the rq
uclamp value at max most of the time. Imitating this behavior makes the
problem visible in Buildroot too, which otherwise looks fine since it's a
minimal userspace.

Reported-by: Doug Smythies <dsmythies@telus.net>
Tested-by: Doug Smythies <dsmythies@telus.net>
Fixes: 0b60ba2dd342 ("sched/uclamp: Propagate parent clamps")
Link: https://lore.kernel.org/lkml/000701d5b965$361b6c60$a2524520$@net/
Signed-off-by: Qais Yousef <qais.yousef@arm.com>
---
 kernel/sched/core.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 90e4b00ace89..bfe756dee129 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7100,6 +7100,12 @@ static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
 
 	if (parent)
 		sched_online_group(tg, parent);
+
+#ifdef CONFIG_UCLAMP_TASK_GROUP
+	/* Propagate the effective uclamp value for the new group */
+	cpu_util_update_eff(css);
+#endif
+
 	return 0;
 }
 
-- 
2.17.1



* Re: [PATCH] sched/uclamp: Fix a bug in propagating uclamp value in new cgroups
From: Peter Zijlstra @ 2020-01-07 10:10 UTC
  To: Qais Yousef
  Cc: Ingo Molnar, Vincent Guittot, Dietmar Eggemann, Tejun Heo,
	surenb, Patrick Bellasi, Doug Smythies, Juri Lelli, Ben Segall,
	Mel Gorman, linux-kernel

On Tue, Dec 24, 2019 at 11:54:04AM +0000, Qais Yousef wrote:
> When a new cgroup is created, the effective uclamp values aren't updated
> by a call to cpu_util_update_eff(), which looks at the hierarchy and
> updates to the most restrictive values.
> 
> Fix it by calling cpu_util_update_eff() when a new cgroup becomes
> online.
> 
> Without this change, the newly created cgroup uses the default
> root_task_group uclamp values, which are 1024 for both uclamp_{min, max}.
> This causes the rq to be clamped to max, hence causing the system to run
> at max frequency.
> 
> The problem was observed on Ubuntu server and was reproduced on Debian
> and Buildroot rootfs.
> 
> By default, Ubuntu and Debian create a cpu controller cgroup hierarchy
> and add all tasks to it, which creates enough noise to keep the rq
> uclamp value at max most of the time. Imitating this behavior makes the
> problem visible in Buildroot too, which otherwise looks fine since it's
> a minimal userspace.
> 
> Reported-by: Doug Smythies <dsmythies@telus.net>
> Tested-by: Doug Smythies <dsmythies@telus.net>
> Fixes: 0b60ba2dd342 ("sched/uclamp: Propagate parent clamps")
> Link: https://lore.kernel.org/lkml/000701d5b965$361b6c60$a2524520$@net/
> Signed-off-by: Qais Yousef <qais.yousef@arm.com>

Thanks!

