* [LTP] [PATCH 0/6] cfs_bandwidth01 and CGroup API
@ 2021-05-13 15:21 Richard Palethorpe
  2021-05-13 15:21 ` [LTP] [PATCH 1/6] API/cgroups: Allow fetching of CGroup name Richard Palethorpe
                   ` (5 more replies)
  0 siblings, 6 replies; 20+ messages in thread
From: Richard Palethorpe @ 2021-05-13 15:21 UTC (permalink / raw)
  To: ltp

Hello,

This adds a scheduler test which uses the cpu controller.

Also included are some additions and fixes to the CGroup API related
to the test.

Richard Palethorpe (6):
  API/cgroups: Allow fetching of CGroup name
  API/cgroups: Remove obsolete function in API
  API/cgroups: Check for unknown controller name
  API/cgroups: Add cpu controller
  API/cgroups: Auto add controllers to subtree_control in new subgroup
  sched/cgroup: Add cfs_bandwidth01

 include/tst_cgroup.h                          |   4 +
 lib/tst_cgroup.c                              |  94 ++++++----
 runtest/sched                                 |   1 +
 .../kernel/sched/cfs-scheduler/.gitignore     |   1 +
 testcases/kernel/sched/cfs-scheduler/Makefile |   4 +-
 .../sched/cfs-scheduler/cfs_bandwidth01.c     | 175 ++++++++++++++++++
 6 files changed, 245 insertions(+), 34 deletions(-)
 create mode 100644 testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c

-- 
2.31.1



* [LTP] [PATCH 1/6] API/cgroups: Allow fetching of CGroup name
  2021-05-13 15:21 [LTP] [PATCH 0/6] cfs_bandwidth01 and CGroup API Richard Palethorpe
@ 2021-05-13 15:21 ` Richard Palethorpe
  2021-05-19 11:14   ` Cyril Hrubis
  2021-05-13 15:21 ` [LTP] [PATCH 2/6] API/cgroups: Remove obsolete function in API Richard Palethorpe
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Richard Palethorpe @ 2021-05-13 15:21 UTC (permalink / raw)
  To: ltp

Lets the test author get the name of a CGroup.

Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
---
 include/tst_cgroup.h | 4 ++++
 lib/tst_cgroup.c     | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/include/tst_cgroup.h b/include/tst_cgroup.h
index bcf465a91..0df989bfd 100644
--- a/include/tst_cgroup.h
+++ b/include/tst_cgroup.h
@@ -133,6 +133,10 @@ struct tst_cgroup_group *
 tst_cgroup_group_mk(const struct tst_cgroup_group *const parent,
 		    const char *const group_name)
 		    __attribute__ ((nonnull, warn_unused_result));
+const char *
+tst_cgroup_group_name(const struct tst_cgroup_group *const cg)
+		      __attribute__ ((nonnull, warn_unused_result));
+
 /* Remove a descendant CGroup */
 struct tst_cgroup_group *
 tst_cgroup_group_rm(struct tst_cgroup_group *const cg)
diff --git a/lib/tst_cgroup.c b/lib/tst_cgroup.c
index 9b1270331..d3f1a8ba4 100644
--- a/lib/tst_cgroup.c
+++ b/lib/tst_cgroup.c
@@ -855,6 +855,11 @@ tst_cgroup_group_mk(const struct tst_cgroup_group *const parent,
 	return cg;
 }
 
+const char *tst_cgroup_group_name(const struct tst_cgroup_group *const cg)
+{
+	return cg->group_name;
+}
+
 struct tst_cgroup_group *tst_cgroup_group_rm(struct tst_cgroup_group *const cg)
 {
 	struct cgroup_dir **dir;
-- 
2.31.1



* [LTP] [PATCH 2/6] API/cgroups: Remove obsolete function in API
  2021-05-13 15:21 [LTP] [PATCH 0/6] cfs_bandwidth01 and CGroup API Richard Palethorpe
  2021-05-13 15:21 ` [LTP] [PATCH 1/6] API/cgroups: Allow fetching of CGroup name Richard Palethorpe
@ 2021-05-13 15:21 ` Richard Palethorpe
  2021-05-19 11:15   ` Cyril Hrubis
  2021-05-13 15:21 ` [LTP] [PATCH 3/6] API/cgroups: Check for unknown controller name Richard Palethorpe
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Richard Palethorpe @ 2021-05-13 15:21 UTC (permalink / raw)
  To: ltp

Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
---
 lib/tst_cgroup.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/lib/tst_cgroup.c b/lib/tst_cgroup.c
index d3f1a8ba4..316dddde5 100644
--- a/lib/tst_cgroup.c
+++ b/lib/tst_cgroup.c
@@ -225,12 +225,6 @@ static void add_ctrl(uint32_t *const ctrl_field,
 	*ctrl_field |= 1 << ctrl->ctrl_indx;
 }
 
-__attribute__ ((warn_unused_result))
-struct cgroup_root *tst_cgroup_root_get(void)
-{
-	return roots[0].ver ? roots : roots + 1;
-}
-
 static int cgroup_v2_mounted(void)
 {
 	return !!roots[0].ver;
-- 
2.31.1



* [LTP] [PATCH 3/6] API/cgroups: Check for unknown controller name
  2021-05-13 15:21 [LTP] [PATCH 0/6] cfs_bandwidth01 and CGroup API Richard Palethorpe
  2021-05-13 15:21 ` [LTP] [PATCH 1/6] API/cgroups: Allow fetching of CGroup name Richard Palethorpe
  2021-05-13 15:21 ` [LTP] [PATCH 2/6] API/cgroups: Remove obsolete function in API Richard Palethorpe
@ 2021-05-13 15:21 ` Richard Palethorpe
  2021-05-19 11:20   ` Cyril Hrubis
  2021-05-13 15:21 ` [LTP] [PATCH 4/6] API/cgroups: Add cpu controller Richard Palethorpe
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Richard Palethorpe @ 2021-05-13 15:21 UTC (permalink / raw)
  To: ltp

Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
---
 lib/tst_cgroup.c | 42 ++++++++++++++++++++++++------------------
 1 file changed, 24 insertions(+), 18 deletions(-)

diff --git a/lib/tst_cgroup.c b/lib/tst_cgroup.c
index 316dddde5..54636fd7e 100644
--- a/lib/tst_cgroup.c
+++ b/lib/tst_cgroup.c
@@ -306,7 +306,7 @@ void tst_cgroup_print_config(void)
 }
 
 __attribute__ ((nonnull, warn_unused_result))
-static struct cgroup_ctrl *cgroup_find_ctrl(const char *const ctrl_name)
+static struct cgroup_ctrl *cgroup_try_find_ctrl(const char *const ctrl_name)
 {
 	struct cgroup_ctrl *ctrl = controllers;
 
@@ -319,6 +319,22 @@ static struct cgroup_ctrl *cgroup_find_ctrl(const char *const ctrl_name)
 	return ctrl;
 }
 
+__attribute__ ((nonnull, returns_nonnull, warn_unused_result))
+static struct cgroup_ctrl *cgroup_find_ctrl(const char *const file,
+					    const int lineno,
+					    const char *const ctrl_name)
+{
+	struct cgroup_ctrl *const ctrl = cgroup_try_find_ctrl(ctrl_name);
+
+	if (!ctrl) {
+		tst_brk_(file, lineno, TBROK,
+			 "Did not find controller '%s'\n", ctrl_name);
+	}
+
+	return ctrl;
+}
+
+
 /* Determine if a mounted cgroup hierarchy is unique and record it if so.
  *
  * For CGroups V2 this is very simple as there is only one
@@ -355,7 +371,7 @@ static void cgroup_root_scan(const char *const mnt_type,
 	SAFE_FILE_READAT(mnt_dfd, "cgroup.controllers", buf, sizeof(buf));
 
 	for (tok = strtok(buf, " "); tok; tok = strtok(NULL, " ")) {
-		if ((const_ctrl = cgroup_find_ctrl(tok)))
+		if ((const_ctrl = cgroup_try_find_ctrl(tok)))
 			add_ctrl(&ctrl_field, const_ctrl);
 	}
 
@@ -371,7 +387,7 @@ static void cgroup_root_scan(const char *const mnt_type,
 
 v1:
 	for (tok = strtok(mnt_opts, ","); tok; tok = strtok(NULL, ",")) {
-		if ((const_ctrl = cgroup_find_ctrl(tok)))
+		if ((const_ctrl = cgroup_try_find_ctrl(tok)))
 			add_ctrl(&ctrl_field, const_ctrl);
 
 		no_prefix |= !strcmp("noprefix", tok);
@@ -580,7 +596,8 @@ void tst_cgroup_require(const char *const ctrl_name,
 			const struct tst_cgroup_opts *options)
 {
 	const char *const cgsc = "cgroup.subtree_control";
-	struct cgroup_ctrl *const ctrl = cgroup_find_ctrl(ctrl_name);
+	struct cgroup_ctrl *const ctrl =
+		cgroup_find_ctrl(__FILE__, __LINE__, ctrl_name);
 	struct cgroup_root *root;
 
 	if (!options)
@@ -892,13 +909,7 @@ static const struct cgroup_file *cgroup_file_find(const char *const file,
 	memcpy(ctrl_name, file_name, len);
 	ctrl_name[len] = '\0';
 
-        ctrl = cgroup_find_ctrl(ctrl_name);
-
-	if (!ctrl) {
-		tst_brk_(file, lineno, TBROK,
-			 "Did not find controller '%s'\n", ctrl_name);
-		return NULL;
-	}
+	ctrl = cgroup_find_ctrl(file, lineno, ctrl_name);
 
 	for (cfile = ctrl->files; cfile->file_name; cfile++) {
 		if (!strcmp(file_name, cfile->file_name))
@@ -919,7 +930,8 @@ enum tst_cgroup_ver tst_cgroup_ver(const char *const file, const int lineno,
 				    const struct tst_cgroup_group *const cg,
 				    const char *const ctrl_name)
 {
-	const struct cgroup_ctrl *const ctrl = cgroup_find_ctrl(ctrl_name);
+	const struct cgroup_ctrl *const ctrl = cgroup_find_ctrl(file, lineno,
+								ctrl_name);
 	const struct cgroup_dir *dir;
 
 	if (!strcmp(ctrl_name, "cgroup")) {
@@ -929,12 +941,6 @@ enum tst_cgroup_ver tst_cgroup_ver(const char *const file, const int lineno,
 		return 0;
 	}
 
-	if (!ctrl) {
-		tst_brk_(file, lineno,
-			 TBROK, "Unknown controller '%s'", ctrl_name);
-		return 0;
-	}
-
 	dir = cg->dirs_by_ctrl[ctrl->ctrl_indx];
 
 	if (!dir) {
-- 
2.31.1



* [LTP] [PATCH 4/6] API/cgroups: Add cpu controller
  2021-05-13 15:21 [LTP] [PATCH 0/6] cfs_bandwidth01 and CGroup API Richard Palethorpe
                   ` (2 preceding siblings ...)
  2021-05-13 15:21 ` [LTP] [PATCH 3/6] API/cgroups: Check for unknown controller name Richard Palethorpe
@ 2021-05-13 15:21 ` Richard Palethorpe
  2021-05-19 11:30   ` Cyril Hrubis
  2021-05-13 15:21 ` [LTP] [PATCH 5/6] API/cgroups: Auto add controllers to subtree_control in new subgroup Richard Palethorpe
  2021-05-13 15:21 ` [LTP] [PATCH 6/6] sched/cgroup: Add cfs_bandwidth01 Richard Palethorpe
  5 siblings, 1 reply; 20+ messages in thread
From: Richard Palethorpe @ 2021-05-13 15:21 UTC (permalink / raw)
  To: ltp

Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
---
 lib/tst_cgroup.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/lib/tst_cgroup.c b/lib/tst_cgroup.c
index 54636fd7e..da177a1ad 100644
--- a/lib/tst_cgroup.c
+++ b/lib/tst_cgroup.c
@@ -82,7 +82,8 @@ struct cgroup_root {
 /* Controller sub-systems */
 enum cgroup_ctrl_indx {
 	CTRL_MEMORY = 1,
-	CTRL_CPUSET = 2,
+	CTRL_CPU,
+	CTRL_CPUSET,
 };
 #define CTRLS_MAX CTRL_CPUSET
 
@@ -162,6 +163,18 @@ static const files_t memory_ctrl_files = {
 	{ }
 };
 
+static const files_t cpu_ctrl_files = {
+	/* The V1 quota and period files were combined in the V2 max
+	 * file. The quota is in the first column and if we just print
+	 * a single value to the file, it will be treated as the
+	 * quota. To get or set the period we need to branch on the
+	 * API version.
+	 */
+	{ "cpu.max", "cpu.cfs_quota_us", CTRL_CPU },
+	{ "cpu.cfs_period_us", "cpu.cfs_period_us", CTRL_CPU },
+	{ }
+};
+
 static const files_t cpuset_ctrl_files = {
 	{ "cpuset.cpus", "cpuset.cpus", CTRL_CPUSET },
 	{ "cpuset.mems", "cpuset.mems", CTRL_CPUSET },
@@ -174,6 +187,9 @@ static struct cgroup_ctrl controllers[] = {
 	[CTRL_MEMORY] = {
 		"memory", memory_ctrl_files, CTRL_MEMORY, NULL, 0
 	},
+	[CTRL_CPU] = {
+		"cpu", cpu_ctrl_files, CTRL_CPU, NULL, 0
+	},
 	[CTRL_CPUSET] = {
 		"cpuset", cpuset_ctrl_files, CTRL_CPUSET, NULL, 0
 	},
-- 
2.31.1



* [LTP] [PATCH 5/6] API/cgroups: Auto add controllers to subtree_control in new subgroup
  2021-05-13 15:21 [LTP] [PATCH 0/6] cfs_bandwidth01 and CGroup API Richard Palethorpe
                   ` (3 preceding siblings ...)
  2021-05-13 15:21 ` [LTP] [PATCH 4/6] API/cgroups: Add cpu controller Richard Palethorpe
@ 2021-05-13 15:21 ` Richard Palethorpe
  2021-05-19 11:41   ` Cyril Hrubis
  2021-05-13 15:21 ` [LTP] [PATCH 6/6] sched/cgroup: Add cfs_bandwidth01 Richard Palethorpe
  5 siblings, 1 reply; 20+ messages in thread
From: Richard Palethorpe @ 2021-05-13 15:21 UTC (permalink / raw)
  To: ltp

This is what we have always wanted so far. If some new test does not
want this behaviour, then a new config option can be added to the
require options.
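
For example (hypothetical, not part of this patch), a test could opt
out with something along these lines:

	/* '.no_subtree_control' is a made up name for the proposed option */
	static const struct tst_cgroup_opts opts = {
		.no_subtree_control = 1,
	};

	tst_cgroup_require("cpu", &opts);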

Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
---
 lib/tst_cgroup.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/lib/tst_cgroup.c b/lib/tst_cgroup.c
index da177a1ad..279617297 100644
--- a/lib/tst_cgroup.c
+++ b/lib/tst_cgroup.c
@@ -843,19 +843,28 @@ static void cgroup_group_init(struct tst_cgroup_group *const cg,
 	strcpy(cg->group_name, group_name);
 }
 
-__attribute__ ((nonnull))
-static void cgroup_group_add_dir(struct tst_cgroup_group *const cg,
+__attribute__((nonnull (2, 3)))
+static void cgroup_group_add_dir(const struct tst_cgroup_group *const parent,
+				 struct tst_cgroup_group *const cg,
 				 struct cgroup_dir *const dir)
 {
 	const struct cgroup_ctrl *ctrl;
 	int i;
 
-	if (dir->dir_root->ver == TST_CGROUP_V2)
+	if (dir->dir_root->ver != TST_CGROUP_V1)
 		cg->dirs_by_ctrl[0] = dir;
 
 	for_each_ctrl(ctrl) {
-		if (has_ctrl(dir->ctrl_field, ctrl))
-			cg->dirs_by_ctrl[ctrl->ctrl_indx] = dir;
+		if (!has_ctrl(dir->ctrl_field, ctrl))
+			continue;
+
+		cg->dirs_by_ctrl[ctrl->ctrl_indx] = dir;
+
+		if (!parent || dir->dir_root->ver == TST_CGROUP_V1)
+			continue;
+
+		SAFE_CGROUP_PRINTF(parent, "cgroup.subtree_control",
+				   "+%s", ctrl->ctrl_name);
 	}
 
 	for (i = 0; cg->dirs[i]; i++);
@@ -876,7 +885,7 @@ tst_cgroup_group_mk(const struct tst_cgroup_group *const parent,
 	for_each_dir(parent, 0, dir) {
 		new_dir = SAFE_MALLOC(sizeof(*new_dir));
 		cgroup_dir_mk(*dir, group_name, new_dir);
-		cgroup_group_add_dir(cg, new_dir);
+		cgroup_group_add_dir(parent, cg, new_dir);
 	}
 
 	return cg;
@@ -1029,7 +1038,7 @@ static struct tst_cgroup_group *cgroup_group_from_roots(const size_t tree_off)
 		dir = (typeof(dir))(((char *)root) + tree_off);
 
 		if (dir->ctrl_field)
-			cgroup_group_add_dir(cg, dir);
+			cgroup_group_add_dir(NULL, cg, dir);
 	}
 
 	if (cg->dirs[0]) {
-- 
2.31.1



* [LTP] [PATCH 6/6] sched/cgroup: Add cfs_bandwidth01
  2021-05-13 15:21 [LTP] [PATCH 0/6] cfs_bandwidth01 and CGroup API Richard Palethorpe
                   ` (4 preceding siblings ...)
  2021-05-13 15:21 ` [LTP] [PATCH 5/6] API/cgroups: Auto add controllers to subtree_control in new subgroup Richard Palethorpe
@ 2021-05-13 15:21 ` Richard Palethorpe
  2021-05-19 11:51   ` Cyril Hrubis
  5 siblings, 1 reply; 20+ messages in thread
From: Richard Palethorpe @ 2021-05-13 15:21 UTC (permalink / raw)
  To: ltp

Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
---
 runtest/sched                                 |   1 +
 .../kernel/sched/cfs-scheduler/.gitignore     |   1 +
 testcases/kernel/sched/cfs-scheduler/Makefile |   4 +-
 .../sched/cfs-scheduler/cfs_bandwidth01.c     | 175 ++++++++++++++++++
 4 files changed, 179 insertions(+), 2 deletions(-)
 create mode 100644 testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c

diff --git a/runtest/sched b/runtest/sched
index bfc4f2711..592898723 100644
--- a/runtest/sched
+++ b/runtest/sched
@@ -6,6 +6,7 @@ pth_str03 pth_str03
 time-schedule01		time-schedule
 trace_sched01		trace_sched -c 1
 
+cfs_bandwidth01 cfs_bandwidth01 -i 5
 hackbench01 hackbench 50 process 1000
 hackbench02 hackbench 20 thread 1000
 
diff --git a/testcases/kernel/sched/cfs-scheduler/.gitignore b/testcases/kernel/sched/cfs-scheduler/.gitignore
index db2759e4f..c5dacd6ef 100644
--- a/testcases/kernel/sched/cfs-scheduler/.gitignore
+++ b/testcases/kernel/sched/cfs-scheduler/.gitignore
@@ -1 +1,2 @@
 /hackbench
+cfs_bandwidth01
diff --git a/testcases/kernel/sched/cfs-scheduler/Makefile b/testcases/kernel/sched/cfs-scheduler/Makefile
index aa3bf8459..2ffe1f7f9 100644
--- a/testcases/kernel/sched/cfs-scheduler/Makefile
+++ b/testcases/kernel/sched/cfs-scheduler/Makefile
@@ -18,8 +18,8 @@
 
 top_srcdir		?= ../../../..
 
-include $(top_srcdir)/include/mk/env_pre.mk
+include $(top_srcdir)/include/mk/testcases.mk
 
-LDLIBS			+= -lpthread
+hackbench: LDLIBS			+= -lpthread
 
 include $(top_srcdir)/include/mk/generic_leaf_target.mk
diff --git a/testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c b/testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c
new file mode 100644
index 000000000..b1f98d50f
--- /dev/null
+++ b/testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c
@@ -0,0 +1,175 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* Copyright (c) 2021 SUSE LLC <rpalethorpe@suse.com> */
+/*\
+ *
+ * [Description]
+ *
+ * Creates a multi-level CGroup hierarchy with the cpu controller
+ * enabled. The leaf groups are populated with "busy" processes which
+ * simulate intermittent cpu load. They spin for some time then sleep
+ * then repeat.
+ *
+ * Both the trunk and leaf groups have cpu bandwidth limits set. The
+ * busy processes will intermittently exceed these limits, causing
+ * them to be throttled. When they begin sleeping, this will then
+ * cause them to be unthrottled.
+ *
+ * The test is known to reproduce an issue with an update to
+ * SLE-15-SP1 (kernel 4.12.14-197.64, bsc#1179093).
+ */
+
+#include <stdlib.h>
+
+#include "tst_test.h"
+#include "tst_cgroup.h"
+#include "tst_timer.h"
+
+static const struct tst_cgroup_group *cg_test;
+static struct tst_cgroup_group *cg_level2, *cg_level3a, *cg_level3b;
+static struct tst_cgroup_group *cg_workers[3];
+
+static void set_cpu_quota(const struct tst_cgroup_group *const cg,
+			  const float quota_percent)
+{
+	const unsigned int period_us = 10000;
+	const unsigned int quota_us = (quota_percent / 100) * (float)period_us;
+
+	if (TST_CGROUP_VER(cg, "cpu") != TST_CGROUP_V1) {
+		SAFE_CGROUP_PRINTF(cg, "cpu.max",
+				   "%u %u", quota_us, period_us);
+	} else {
+		SAFE_CGROUP_PRINTF(cg, "cpu.max",
+				   "%u", quota_us);
+		SAFE_CGROUP_PRINTF(cg, "cpu.cfs_period_us",
+				  "%u", period_us);
+	}
+
+	tst_res(TINFO, "Set '%s/cpu.max' = '%d %d'",
+		tst_cgroup_group_name(cg), quota_us, period_us);
+}
+
+static struct tst_cgroup_group *
+mk_cpu_cgroup(const struct tst_cgroup_group *const cg_parent,
+	      const char *const cg_child_name,
+	      const float quota_percent)
+{
+	struct tst_cgroup_group *const cg =
+		tst_cgroup_group_mk(cg_parent, cg_child_name);
+
+	set_cpu_quota(cg, quota_percent);
+
+	return cg;
+}
+
+static void busy_loop(const unsigned int sleep_ms)
+{
+	for (;;) {
+		tst_timer_start(CLOCK_MONOTONIC_RAW);
+		while (!tst_timer_expired_ms(20))
+			;
+
+		const int ret = tst_checkpoint_wait(0, sleep_ms);
+
+		if (!ret)
+			exit(0);
+
+		if (errno != ETIMEDOUT)
+			tst_brk(TBROK | TERRNO, "tst_checkpoint_wait");
+	}
+}
+
+static void fork_busy_procs_in_cgroup(const struct tst_cgroup_group *const cg)
+{
+	const unsigned int sleeps_ms[] = {3000, 1000, 10};
+	const pid_t worker_pid = SAFE_FORK();
+	size_t i;
+
+	if (worker_pid)
+		return;
+
+	for (i = 0; i < ARRAY_SIZE(sleeps_ms); i++) {
+		const pid_t busy_pid = SAFE_FORK();
+
+		if (!busy_pid)
+			busy_loop(sleeps_ms[i]);
+
+		SAFE_CGROUP_PRINTF(cg, "cgroup.procs", "%d", busy_pid);
+	}
+
+	tst_reap_children();
+
+	exit(0);
+}
+
+static void do_test(void)
+{
+	size_t i;
+
+	cg_level2 = tst_cgroup_group_mk(cg_test, "level2");
+
+	cg_level3a = tst_cgroup_group_mk(cg_level2, "level3a");
+	cg_workers[0] = mk_cpu_cgroup(cg_level3a, "worker1", 30);
+	cg_workers[1] = mk_cpu_cgroup(cg_level3a, "worker2", 20);
+
+	cg_level3b = tst_cgroup_group_mk(cg_level2, "level3b");
+	cg_workers[2] = mk_cpu_cgroup(cg_level3b, "worker3", 30);
+
+	for (i = 0; i < ARRAY_SIZE(cg_workers); i++)
+		fork_busy_procs_in_cgroup(cg_workers[i]);
+
+	tst_res(TPASS, "Scheduled bandwidth constrained workers");
+
+	sleep(1);
+
+	set_cpu_quota(cg_level2, 50);
+
+	sleep(2);
+
+	TST_CHECKPOINT_WAKE2(0, 3 * 3);
+	tst_reap_children();
+
+	tst_res(TPASS, "Workers exited");
+}
+
+static void setup(void)
+{
+	tst_cgroup_require("cpu", NULL);
+
+	cg_test = tst_cgroup_get_test_group();
+}
+
+static void cleanup(void)
+{
+	size_t i;
+
+	for (i = 0; i < ARRAY_SIZE(cg_workers); i++) {
+		if (cg_workers[i])
+			cg_workers[i] = tst_cgroup_group_rm(cg_workers[i]);
+	}
+
+	if (cg_level3a)
+		cg_level3a = tst_cgroup_group_rm(cg_level3a);
+	if (cg_level3b)
+		cg_level3b = tst_cgroup_group_rm(cg_level3b);
+	if (cg_level2)
+		cg_level2 = tst_cgroup_group_rm(cg_level2);
+
+	tst_cgroup_cleanup();
+}
+
+static struct tst_test test = {
+	.test_all = do_test,
+	.setup = setup,
+	.cleanup = cleanup,
+	.forks_child = 1,
+	.needs_checkpoints = 1,
+	.taint_check = TST_TAINT_W | TST_TAINT_D,
+	.tags = (const struct tst_tag[]) {
+		{"linux-git", "39f23ce07b93"},
+		{"linux-git", "b34cb07dde7c"},
+		{"linux-git", "fe61468b2cbc"},
+		{"linux-git", "5ab297bab984"},
+		{"linux-git", "6d4d22468dae"},
+		{ }
+	}
+};
-- 
2.31.1



* [LTP] [PATCH 1/6] API/cgroups: Allow fetching of CGroup name
  2021-05-13 15:21 ` [LTP] [PATCH 1/6] API/cgroups: Allow fetching of CGroup name Richard Palethorpe
@ 2021-05-19 11:14   ` Cyril Hrubis
  0 siblings, 0 replies; 20+ messages in thread
From: Cyril Hrubis @ 2021-05-19 11:14 UTC (permalink / raw)
  To: ltp

Hi!
Reviewed-by: Cyril Hrubis <chrubis@suse.cz>

-- 
Cyril Hrubis
chrubis@suse.cz


* [LTP] [PATCH 2/6] API/cgroups: Remove obsolete function in API
  2021-05-13 15:21 ` [LTP] [PATCH 2/6] API/cgroups: Remove obsolete function in API Richard Palethorpe
@ 2021-05-19 11:15   ` Cyril Hrubis
  0 siblings, 0 replies; 20+ messages in thread
From: Cyril Hrubis @ 2021-05-19 11:15 UTC (permalink / raw)
  To: ltp

Hi!
Reviewed-by: Cyril Hrubis <chrubis@suse.cz>

-- 
Cyril Hrubis
chrubis@suse.cz


* [LTP] [PATCH 3/6] API/cgroups: Check for unknown controller name
  2021-05-13 15:21 ` [LTP] [PATCH 3/6] API/cgroups: Check for unknown controller name Richard Palethorpe
@ 2021-05-19 11:20   ` Cyril Hrubis
  2021-05-20  8:13     ` Richard Palethorpe
  0 siblings, 1 reply; 20+ messages in thread
From: Cyril Hrubis @ 2021-05-19 11:20 UTC (permalink / raw)
  To: ltp

Hi!
So as far as I can tell this only shuffles code around but the
functionality is the same since we moved the controller check from the
tst_cgroup_ver() and tst_cgroup_require() to a common function.

I'm not sure that this is worth the trouble.

-- 
Cyril Hrubis
chrubis@suse.cz


* [LTP] [PATCH 4/6] API/cgroups: Add cpu controller
  2021-05-13 15:21 ` [LTP] [PATCH 4/6] API/cgroups: Add cpu controller Richard Palethorpe
@ 2021-05-19 11:30   ` Cyril Hrubis
  2021-05-20  8:35     ` Richard Palethorpe
  0 siblings, 1 reply; 20+ messages in thread
From: Cyril Hrubis @ 2021-05-19 11:30 UTC (permalink / raw)
  To: ltp

Hi!
>  lib/tst_cgroup.c | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/tst_cgroup.c b/lib/tst_cgroup.c
> index 54636fd7e..da177a1ad 100644
> --- a/lib/tst_cgroup.c
> +++ b/lib/tst_cgroup.c
> @@ -82,7 +82,8 @@ struct cgroup_root {
>  /* Controller sub-systems */
>  enum cgroup_ctrl_indx {
>  	CTRL_MEMORY = 1,
> -	CTRL_CPUSET = 2,
> +	CTRL_CPU,
> +	CTRL_CPUSET,
>  };
>  #define CTRLS_MAX CTRL_CPUSET
>  
> @@ -162,6 +163,18 @@ static const files_t memory_ctrl_files = {
>  	{ }
>  };
>  
> +static const files_t cpu_ctrl_files = {
> +	/* The V1 quota and period files were combined in the V2 max
> +	 * file. The quota is in the first column and if we just print
> +	 * a single value to the file, it will be treated as the
> +	 * quota. To get or set the period we need to branch on the
> +	 * API version.
> +	 */

I wonder if this is worth a helper function, something as:

#define SAFE_CGROUP_CPU_SET_MAX(cg, quota_us, period_us) \
	tst_cgroup_cpu_set_max(__FILE__, __LINENO__, cg, quota_us, period_us)

void tst_cgroup_cpu_set_max(const char *const file, const int lineno,
                            const struct tst_cgroup_group *const cg,
                            unsigned int quota_us, unsigned int period_us);

#define SAFE_CGROUP_CPU_GET_MAX(cg, quota_us, period_us) \
	tst_cgroup_cpu_get_max(__FILE__, __LINENO__, cg, quota_us, period_us)

void tst_cgroup_cpu_get_max(const char *const file, const int lineno,
                            const struct tst_cgroup_group *const cg,
                            unsigned int *quota_us, unsigned int *period_us);

I guess that if we are going to add more tests we would end up with such
functions somewhere anyways.
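
The body would presumably just wrap the branching the test already
does, something like this (untested sketch on top of the existing
macros; a real library helper would of course forward the caller's
__FILE__/__LINE__ instead of using the SAFE_* macros directly):

static void cgroup_cpu_set_max(const struct tst_cgroup_group *const cg,
                               const unsigned int quota_us,
                               const unsigned int period_us)
{
        /* V2: a single "cpu.max" file takes "<quota> <period>" */
        if (TST_CGROUP_VER(cg, "cpu") != TST_CGROUP_V1) {
                SAFE_CGROUP_PRINTF(cg, "cpu.max", "%u %u",
                                   quota_us, period_us);
                return;
        }

        /* V1: "cpu.max" is aliased to cpu.cfs_quota_us, the period
         * lives in its own file
         */
        SAFE_CGROUP_PRINTF(cg, "cpu.max", "%u", quota_us);
        SAFE_CGROUP_PRINTF(cg, "cpu.cfs_period_us", "%u", period_us);
}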

-- 
Cyril Hrubis
chrubis@suse.cz


* [LTP] [PATCH 5/6] API/cgroups: Auto add controllers to subtree_control in new subgroup
  2021-05-13 15:21 ` [LTP] [PATCH 5/6] API/cgroups: Auto add controllers to subtree_control in new subgroup Richard Palethorpe
@ 2021-05-19 11:41   ` Cyril Hrubis
  2021-05-20  8:40     ` Richard Palethorpe
  0 siblings, 1 reply; 20+ messages in thread
From: Cyril Hrubis @ 2021-05-19 11:41 UTC (permalink / raw)
  To: ltp

Hi!
> -	if (dir->dir_root->ver == TST_CGROUP_V2)
> +	if (dir->dir_root->ver != TST_CGROUP_V1)
>  		cg->dirs_by_ctrl[0] = dir;

This change is useless, isn't it?

>  	for_each_ctrl(ctrl) {
> -		if (has_ctrl(dir->ctrl_field, ctrl))
> -			cg->dirs_by_ctrl[ctrl->ctrl_indx] = dir;
> +		if (!has_ctrl(dir->ctrl_field, ctrl))
> +			continue;
> +
> +		cg->dirs_by_ctrl[ctrl->ctrl_indx] = dir;
> +
> +		if (!parent || dir->dir_root->ver == TST_CGROUP_V1)
> +			continue;
> +
> +		SAFE_CGROUP_PRINTF(parent, "cgroup.subtree_control",
> +				   "+%s", ctrl->ctrl_name);
>  	}

Looks good. Agree that we should copy the controllers from parent here
for V2.

>  	for (i = 0; cg->dirs[i]; i++);
> @@ -876,7 +885,7 @@ tst_cgroup_group_mk(const struct tst_cgroup_group *const parent,
>  	for_each_dir(parent, 0, dir) {
>  		new_dir = SAFE_MALLOC(sizeof(*new_dir));
>  		cgroup_dir_mk(*dir, group_name, new_dir);
> -		cgroup_group_add_dir(cg, new_dir);
> +		cgroup_group_add_dir(parent, cg, new_dir);
>  	}
>  
>  	return cg;
> @@ -1029,7 +1038,7 @@ static struct tst_cgroup_group *cgroup_group_from_roots(const size_t tree_off)
>  		dir = (typeof(dir))(((char *)root) + tree_off);
>  
>  		if (dir->ctrl_field)
> -			cgroup_group_add_dir(cg, dir);
> +			cgroup_group_add_dir(NULL, cg, dir);
>  	}
>  
>  	if (cg->dirs[0]) {

Reviewed-by: Cyril Hrubis <chrubis@suse.cz>

-- 
Cyril Hrubis
chrubis@suse.cz


* [LTP] [PATCH 6/6] sched/cgroup: Add cfs_bandwidth01
  2021-05-13 15:21 ` [LTP] [PATCH 6/6] sched/cgroup: Add cfs_bandwidth01 Richard Palethorpe
@ 2021-05-19 11:51   ` Cyril Hrubis
  2021-05-20  8:10     ` Cyril Hrubis
  2021-05-20  8:50     ` Richard Palethorpe
  0 siblings, 2 replies; 20+ messages in thread
From: Cyril Hrubis @ 2021-05-19 11:51 UTC (permalink / raw)
  To: ltp

On Thu, May 13, 2021 at 04:21:25PM +0100, Richard Palethorpe via ltp wrote:
> Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
> ---
>  runtest/sched                                 |   1 +
>  .../kernel/sched/cfs-scheduler/.gitignore     |   1 +
>  testcases/kernel/sched/cfs-scheduler/Makefile |   4 +-
>  .../sched/cfs-scheduler/cfs_bandwidth01.c     | 175 ++++++++++++++++++
>  4 files changed, 179 insertions(+), 2 deletions(-)
>  create mode 100644 testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c
> 
> diff --git a/runtest/sched b/runtest/sched
> index bfc4f2711..592898723 100644
> --- a/runtest/sched
> +++ b/runtest/sched
> @@ -6,6 +6,7 @@ pth_str03 pth_str03
>  time-schedule01		time-schedule
>  trace_sched01		trace_sched -c 1
>  
> +cfs_bandwidth01 cfs_bandwidth01 -i 5
>  hackbench01 hackbench 50 process 1000
>  hackbench02 hackbench 20 thread 1000
>  
> diff --git a/testcases/kernel/sched/cfs-scheduler/.gitignore b/testcases/kernel/sched/cfs-scheduler/.gitignore
> index db2759e4f..c5dacd6ef 100644
> --- a/testcases/kernel/sched/cfs-scheduler/.gitignore
> +++ b/testcases/kernel/sched/cfs-scheduler/.gitignore
> @@ -1 +1,2 @@
>  /hackbench
> +cfs_bandwidth01
> diff --git a/testcases/kernel/sched/cfs-scheduler/Makefile b/testcases/kernel/sched/cfs-scheduler/Makefile
> index aa3bf8459..2ffe1f7f9 100644
> --- a/testcases/kernel/sched/cfs-scheduler/Makefile
> +++ b/testcases/kernel/sched/cfs-scheduler/Makefile
> @@ -18,8 +18,8 @@
>  
>  top_srcdir		?= ../../../..
>  
> -include $(top_srcdir)/include/mk/env_pre.mk
> +include $(top_srcdir)/include/mk/testcases.mk
>  
> -LDLIBS			+= -lpthread
> +hackbench: LDLIBS			+= -lpthread
>  
>  include $(top_srcdir)/include/mk/generic_leaf_target.mk
> diff --git a/testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c b/testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c
> new file mode 100644
> index 000000000..b1f98d50f
> --- /dev/null
> +++ b/testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c
> @@ -0,0 +1,175 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/* Copyright (c) 2021 SUSE LLC <rpalethorpe@suse.com> */
> +/*\
> + *
> + * [Description]
> + *
> + * Creates a multi-level CGroup hierarchy with the cpu controller
> + * enabled. The leaf groups are populated with "busy" processes which
> + * simulate intermittent cpu load. They spin for some time then sleep
> + * then repeat.
> + *
> + * Both the trunk and leaf groups are set cpu bandwidth limits. The
> + * busy processes will intermittently exceed these limits. Causing
> + * them to be throttled. When they begin sleeping this will then cause
> + * them to be unthrottle.
> + *
> + * The test is known to reproduce an issue with an update to
> + * SLE-15-SP1 (kernel 4.12.14-197.64, bsc#1179093).
> + */
> +
> +#include <stdlib.h>
> +
> +#include "tst_test.h"
> +#include "tst_cgroup.h"
> +#include "tst_timer.h"
> +
> +static const struct tst_cgroup_group *cg_test;
> +static struct tst_cgroup_group *cg_level2, *cg_level3a, *cg_level3b;
> +static struct tst_cgroup_group *cg_workers[3];
> +
> +static void set_cpu_quota(const struct tst_cgroup_group *const cg,
> +			  const float quota_percent)
> +{
> +	const unsigned int period_us = 10000;
> +	const unsigned int quota_us = (quota_percent / 100) * (float)period_us;
> +
> +	if (TST_CGROUP_VER(cg, "cpu") != TST_CGROUP_V1) {
> +		SAFE_CGROUP_PRINTF(cg, "cpu.max",
> +				   "%u %u", quota_us, period_us);
> +	} else {
> +		SAFE_CGROUP_PRINTF(cg, "cpu.max",
> +				   "%u", quota_us);
> +		SAFE_CGROUP_PRINTF(cg, "cpu.cfs_period_us",
> +				  "%u", period_us);
> +	}
> +
> +	tst_res(TINFO, "Set '%s/cpu.max' = '%d %d'",
> +		tst_cgroup_group_name(cg), quota_us, period_us);
> +}
> +
> +static struct tst_cgroup_group *
> +mk_cpu_cgroup(const struct tst_cgroup_group *const cg_parent,
> +	      const char *const cg_child_name,
> +	      const float quota_percent)
> +{
> +	struct tst_cgroup_group *const cg =
> +		tst_cgroup_group_mk(cg_parent, cg_child_name);
> +
> +	set_cpu_quota(cg, quota_percent);
> +
> +	return cg;
> +}
> +
> +static void busy_loop(const unsigned int sleep_ms)
> +{
> +	for (;;) {
> +		tst_timer_start(CLOCK_MONOTONIC_RAW);
> +		while (!tst_timer_expired_ms(20))
> +			;
> +
> +		const int ret = tst_checkpoint_wait(0, sleep_ms);
> +
> +		if (!ret)
> +			exit(0);
> +
> +		if (errno != ETIMEDOUT)
> +			tst_brk(TBROK | TERRNO, "tst_checkpoint_wait");
> +	}
> +}
> +
> +static void fork_busy_procs_in_cgroup(const struct tst_cgroup_group *const cg)
> +{
> +	const unsigned int sleeps_ms[] = {3000, 1000, 10};
> +	const pid_t worker_pid = SAFE_FORK();
> +	size_t i;
> +
> +	if (worker_pid)
> +		return;
> +
> +	for (i = 0; i < ARRAY_SIZE(sleeps_ms); i++) {
> +		const pid_t busy_pid = SAFE_FORK();
> +
> +		if (!busy_pid)
> +			busy_loop(sleeps_ms[i]);
> +
> +		SAFE_CGROUP_PRINTF(cg, "cgroup.procs", "%d", busy_pid);
> +	}
> +
> +	tst_reap_children();
> +
> +	exit(0);
> +}
> +
> +static void do_test(void)
> +{
> +	size_t i;
> +
> +	cg_level2 = tst_cgroup_group_mk(cg_test, "level2");
> +
> +	cg_level3a = tst_cgroup_group_mk(cg_level2, "level3a");
> +	cg_workers[0] = mk_cpu_cgroup(cg_level3a, "worker1", 30);
> +	cg_workers[1] = mk_cpu_cgroup(cg_level3a, "worker2", 20);
> +
> +	cg_level3b = tst_cgroup_group_mk(cg_level2, "level3b");
> +	cg_workers[2] = mk_cpu_cgroup(cg_level3b, "worker3", 30);
> +
> +	for (i = 0; i < ARRAY_SIZE(cg_workers); i++)
> +		fork_busy_procs_in_cgroup(cg_workers[i]);
> +
> +	tst_res(TPASS, "Scheduled bandwidth constrained workers");
> +
> +	sleep(1);
> +
> +	set_cpu_quota(cg_level2, 50);
> +
> +	sleep(2);
> +
> +	TST_CHECKPOINT_WAKE2(0, 3 * 3);
> +	tst_reap_children();
> +
> +	tst_res(TPASS, "Workers exited");
> +}
> +
> +static void setup(void)
> +{
> +	tst_cgroup_require("cpu", NULL);
> +
> +	cg_test = tst_cgroup_get_test_group();
> +}
> +
> +static void cleanup(void)
> +{
> +	size_t i;
> +
> +	for (i = 0; i < ARRAY_SIZE(cg_workers); i++) {
> +		if (cg_workers[i])
> +			cg_workers[i] = tst_cgroup_group_rm(cg_workers[i]);
> +	}
> +
> +	if (cg_level3a)
> +		cg_level3a = tst_cgroup_group_rm(cg_level3a);
> +	if (cg_level3b)
> +		cg_level3b = tst_cgroup_group_rm(cg_level3b);
> +	if (cg_level2)
> +		cg_level2 = tst_cgroup_group_rm(cg_level2);

Hmm, I wonder if we can move this part of the cleanup to the test
library as well. If we add all cgroups the user has created into a FIFO
linked list then this could be implemented as a single loop in the
tst_cgroup_clean().

We would have to loop over the list in the tst_cgroup_group_rm() in
order to remove the about to be removed group from the list as well, but
I guess that this is still worth the trouble.
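
Something like this maybe (completely untested sketch, all the names
are made up; tst_cgroup_group_mk() would push onto the list and
tst_cgroup_cleanup() would drain it):

struct cgroup_mk_node {
        struct tst_cgroup_group *cg;
        struct cgroup_mk_node *next;
};

static struct cgroup_mk_node *cgroup_mk_list;

static void cgroup_mk_remember(struct tst_cgroup_group *const cg)
{
        struct cgroup_mk_node *const n = SAFE_MALLOC(sizeof(*n));

        /* Push onto the head so removal happens in reverse order of
         * creation, i.e. children go before their parents
         */
        n->cg = cg;
        n->next = cgroup_mk_list;
        cgroup_mk_list = n;
}

static void cgroup_mk_remove_all(void)
{
        struct cgroup_mk_node *n;

        while ((n = cgroup_mk_list)) {
                cgroup_mk_list = n->next;
                n->cg = tst_cgroup_group_rm(n->cg);
                free(n);
        }
}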

Other than that the test looks nice and clean.

> +	tst_cgroup_cleanup();
> +}
> +
> +static struct tst_test test = {
> +	.test_all = do_test,
> +	.setup = setup,
> +	.cleanup = cleanup,
> +	.forks_child = 1,
> +	.needs_checkpoints = 1,
> +	.taint_check = TST_TAINT_W | TST_TAINT_D,
> +	.tags = (const struct tst_tag[]) {
> +		{"linux-git", "39f23ce07b93"},
> +		{"linux-git", "b34cb07dde7c"},
> +		{"linux-git", "fe61468b2cbc"},
> +		{"linux-git", "5ab297bab984"},
> +		{"linux-git", "6d4d22468dae"},
> +		{ }
> +	}
> +};
> -- 
> 2.31.1
> 
> 
> -- 
> Mailing list info: https://lists.linux.it/listinfo/ltp

-- 
Cyril Hrubis
chrubis@suse.cz


* [LTP] [PATCH 6/6] sched/cgroup: Add cfs_bandwidth01
  2021-05-19 11:51   ` Cyril Hrubis
@ 2021-05-20  8:10     ` Cyril Hrubis
  2021-05-20  8:50     ` Richard Palethorpe
  1 sibling, 0 replies; 20+ messages in thread
From: Cyril Hrubis @ 2021-05-20  8:10 UTC (permalink / raw)
  To: ltp

Hi!
> Hmm, I wonder if we can move this part of the cleanup to the test
> library as well. If we add all cgroups the user has created into a FIFO
                                                                     ^
								     LIFO
As we have to remove the last inserted first.

-- 
Cyril Hrubis
chrubis@suse.cz


* [LTP] [PATCH 3/6] API/cgroups: Check for unknown controller name
  2021-05-19 11:20   ` Cyril Hrubis
@ 2021-05-20  8:13     ` Richard Palethorpe
  0 siblings, 0 replies; 20+ messages in thread
From: Richard Palethorpe @ 2021-05-20  8:13 UTC (permalink / raw)
  To: ltp

hello,

Cyril Hrubis <chrubis@suse.cz> writes:

> Hi!
> So as far as I can tell this only shuffles code around but the
> functionality is the same since we moved the controller check from the
> tst_cgroup_ver() and tst_cgroup_require() to a common function.
>
> I'm not sure that this is worth the trouble.

It causes tst_brk to be called when the controller name is not found
instead of a null deref.

Although it may not be worth creating the function because tst_brk can
return as far as the compiler is concerned (although tst_require should
never be called from cleanup context). So I need to look at this again.

-- 
Thank you,
Richard.


* [LTP] [PATCH 4/6] API/cgroups: Add cpu controller
  2021-05-19 11:30   ` Cyril Hrubis
@ 2021-05-20  8:35     ` Richard Palethorpe
  0 siblings, 0 replies; 20+ messages in thread
From: Richard Palethorpe @ 2021-05-20  8:35 UTC (permalink / raw)
  To: ltp

Hello,

Cyril Hrubis <chrubis@suse.cz> writes:

> Hi!
>>  lib/tst_cgroup.c | 18 +++++++++++++++++-
>>  1 file changed, 17 insertions(+), 1 deletion(-)
>> 
>> diff --git a/lib/tst_cgroup.c b/lib/tst_cgroup.c
>> index 54636fd7e..da177a1ad 100644
>> --- a/lib/tst_cgroup.c
>> +++ b/lib/tst_cgroup.c
>> @@ -82,7 +82,8 @@ struct cgroup_root {
>>  /* Controller sub-systems */
>>  enum cgroup_ctrl_indx {
>>  	CTRL_MEMORY = 1,
>> -	CTRL_CPUSET = 2,
>> +	CTRL_CPU,
>> +	CTRL_CPUSET,
>>  };
>>  #define CTRLS_MAX CTRL_CPUSET
>>  
>> @@ -162,6 +163,18 @@ static const files_t memory_ctrl_files = {
>>  	{ }
>>  };
>>  
>> +static const files_t cpu_ctrl_files = {
>> +	/* The V1 quota and period files were combined in the V2 max
>> +	 * file. The quota is in the first column and if we just print
>> +	 * a single value to the file, it will be treated as the
>> +	 * quota. To get or set the period we need to branch on the
>> +	 * API version.
>> +	 */
>
> I wonder if this is worth a helper function, something as:
>
> #define SAFE_CGROUP_CPU_SET_MAX(cg, quota_us, period_us) \
> 	tst_cgroup_cpu_set_max(__FILE__, __LINENO__, cg, quota_us, period_us)
>
> void tst_cgroup_cpu_set_max(const char *const file, const int lineno,
>                             const struct tst_cgroup_group *const cg,
>                             unsigned int quota_us, unsigned int period_us);
>
> #define SAFE_CGROUP_CPU_GET_MAX(cg, quota_us, period_us) \
> 	tst_cgroup_cpu_get_max(__FILE__, __LINENO__, cg, quota_us, period_us)
>
> void tst_cgroup_cpu_get_max(const char *const file, const int lineno,
>                             const struct tst_cgroup_group *const cg,
>                             unsigned int *quota_us, unsigned int *period_us);
>
> I guess that if we are going to add more tests we would end up with such
> functions somewhere anyways.

Yes, when we have more tests. There is also an alternative: we could
add a mapping/filter layer to transform V2 writes into V1 writes, e.g.

SAFE_CGROUP_PRINTF(cg, "cpu.max", "%d %d", 50, 100);

When done on V1 it will write to a buffer with sprintf and then split
that into two writes to the V1 files.

This is much more complicated for this case, but there are hundreds of
these knobs. In some cases we just need to convert a string to an
integer e.g. "max" => "-1" or whatever memory.max takes.

In most cases it will just be a case of sprintf into a buffer and then
do something with that.
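
Roughly like this (very rough sketch, none of it is real API code, just
to illustrate the splitting; the cgroup_file_write() stub stands in for
whatever low level write path the library would actually use):

static void cgroup_file_write(const struct tst_cgroup_group *const cg,
                              const char *const file, const char *const val)
{
        tst_res(TINFO, "would write '%s' to '%s/%s'",
                val, tst_cgroup_group_name(cg), file);
}

static void cgroup_cpu_max_to_v1(const struct tst_cgroup_group *const cg,
                                 const char *const v2_val)
{
        char buf[BUFSIZ];
        char *quota, *period;

        strncpy(buf, v2_val, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        quota = strtok(buf, " ");
        period = strtok(NULL, " ");

        /* Value translation (e.g. "max" => "-1") would also live here */
        cgroup_file_write(cg, "cpu.cfs_quota_us", quota);
        if (period)
                cgroup_file_write(cg, "cpu.cfs_period_us", period);
}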

I can think of problems with both approaches. :-)

-- 
Thank you,
Richard.


* [LTP] [PATCH 5/6] API/cgroups: Auto add controllers to subtree_control in new subgroup
  2021-05-19 11:41   ` Cyril Hrubis
@ 2021-05-20  8:40     ` Richard Palethorpe
  0 siblings, 0 replies; 20+ messages in thread
From: Richard Palethorpe @ 2021-05-20  8:40 UTC (permalink / raw)
  To: ltp

Hello,

Cyril Hrubis <chrubis@suse.cz> writes:

> Hi!
>> -	if (dir->dir_root->ver == TST_CGROUP_V2)
>> +	if (dir->dir_root->ver != TST_CGROUP_V1)
>>  		cg->dirs_by_ctrl[0] = dir;
>
> This change is useless, isn't it?

Hopefully there will be no V3, but if there is it will most likely be a
unified hierarchy like V2.

Maybe this should have been in a separate patch though?

>
>>  	for_each_ctrl(ctrl) {
>> -		if (has_ctrl(dir->ctrl_field, ctrl))
>> -			cg->dirs_by_ctrl[ctrl->ctrl_indx] = dir;
>> +		if (!has_ctrl(dir->ctrl_field, ctrl))
>> +			continue;
>> +
>> +		cg->dirs_by_ctrl[ctrl->ctrl_indx] = dir;
>> +
>> +		if (!parent || dir->dir_root->ver == TST_CGROUP_V1)
>> +			continue;
>> +
>> +		SAFE_CGROUP_PRINTF(parent, "cgroup.subtree_control",
>> +				   "+%s", ctrl->ctrl_name);
>>  	}
>
> Looks good. Agree that we should copy the controllers from parent here
> for V2.

Yup, as suggested by Li Wang.

>
>>  	for (i = 0; cg->dirs[i]; i++);
>> @@ -876,7 +885,7 @@ tst_cgroup_group_mk(const struct tst_cgroup_group *const parent,
>>  	for_each_dir(parent, 0, dir) {
>>  		new_dir = SAFE_MALLOC(sizeof(*new_dir));
>>  		cgroup_dir_mk(*dir, group_name, new_dir);
>> -		cgroup_group_add_dir(cg, new_dir);
>> +		cgroup_group_add_dir(parent, cg, new_dir);
>>  	}
>>  
>>  	return cg;
>> @@ -1029,7 +1038,7 @@ static struct tst_cgroup_group *cgroup_group_from_roots(const size_t tree_off)
>>  		dir = (typeof(dir))(((char *)root) + tree_off);
>>  
>>  		if (dir->ctrl_field)
>> -			cgroup_group_add_dir(cg, dir);
>> +			cgroup_group_add_dir(NULL, cg, dir);
>>  	}
>>  
>>  	if (cg->dirs[0]) {
>
> Reviewed-by: Cyril Hrubis <chrubis@suse.cz>


-- 
Thank you,
Richard.


* [LTP] [PATCH 6/6] sched/cgroup: Add cfs_bandwidth01
  2021-05-19 11:51   ` Cyril Hrubis
  2021-05-20  8:10     ` Cyril Hrubis
@ 2021-05-20  8:50     ` Richard Palethorpe
  2021-05-21  9:29       ` Richard Palethorpe
  1 sibling, 1 reply; 20+ messages in thread
From: Richard Palethorpe @ 2021-05-20  8:50 UTC (permalink / raw)
  To: ltp

Hello,

Cyril Hrubis <chrubis@suse.cz> writes:

> On Thu, May 13, 2021 at 04:21:25PM +0100, Richard Palethorpe via ltp wrote:
>> Signed-off-by: Richard Palethorpe <rpalethorpe@suse.com>
>> ---
>>  runtest/sched                                 |   1 +
>>  .../kernel/sched/cfs-scheduler/.gitignore     |   1 +
>>  testcases/kernel/sched/cfs-scheduler/Makefile |   4 +-
>>  .../sched/cfs-scheduler/cfs_bandwidth01.c     | 175 ++++++++++++++++++
>>  4 files changed, 179 insertions(+), 2 deletions(-)
>>  create mode 100644 testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c
>> 
>> diff --git a/runtest/sched b/runtest/sched
>> index bfc4f2711..592898723 100644
>> --- a/runtest/sched
>> +++ b/runtest/sched
>> @@ -6,6 +6,7 @@ pth_str03 pth_str03
>>  time-schedule01		time-schedule
>>  trace_sched01		trace_sched -c 1
>>  
>> +cfs_bandwidth01 cfs_bandwidth01 -i 5
>>  hackbench01 hackbench 50 process 1000
>>  hackbench02 hackbench 20 thread 1000
>>  
>> diff --git a/testcases/kernel/sched/cfs-scheduler/.gitignore b/testcases/kernel/sched/cfs-scheduler/.gitignore
>> index db2759e4f..c5dacd6ef 100644
>> --- a/testcases/kernel/sched/cfs-scheduler/.gitignore
>> +++ b/testcases/kernel/sched/cfs-scheduler/.gitignore
>> @@ -1 +1,2 @@
>>  /hackbench
>> +cfs_bandwidth01
>> diff --git a/testcases/kernel/sched/cfs-scheduler/Makefile b/testcases/kernel/sched/cfs-scheduler/Makefile
>> index aa3bf8459..2ffe1f7f9 100644
>> --- a/testcases/kernel/sched/cfs-scheduler/Makefile
>> +++ b/testcases/kernel/sched/cfs-scheduler/Makefile
>> @@ -18,8 +18,8 @@
>>  
>>  top_srcdir		?= ../../../..
>>  
>> -include $(top_srcdir)/include/mk/env_pre.mk
>> +include $(top_srcdir)/include/mk/testcases.mk
>>  
>> -LDLIBS			+= -lpthread
>> +hackbench: LDLIBS			+= -lpthread
>>  
>>  include $(top_srcdir)/include/mk/generic_leaf_target.mk
>> diff --git a/testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c b/testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c
>> new file mode 100644
>> index 000000000..b1f98d50f
>> --- /dev/null
>> +++ b/testcases/kernel/sched/cfs-scheduler/cfs_bandwidth01.c
>> @@ -0,0 +1,175 @@
>> +// SPDX-License-Identifier: GPL-2.0-or-later
>> +/* Copyright (c) 2021 SUSE LLC <rpalethorpe@suse.com> */
>> +/*\
>> + *
>> + * [Description]
>> + *
>> + * Creates a multi-level CGroup hierarchy with the cpu controller
>> + * enabled. The leaf groups are populated with "busy" processes which
>> + * simulate intermittent cpu load. They spin for some time then sleep
>> + * then repeat.
>> + *
>> + * Both the trunk and leaf groups are set cpu bandwidth limits. The
>> + * busy processes will intermittently exceed these limits. Causing
>> + * them to be throttled. When they begin sleeping this will then cause
>> + * them to be unthrottle.
>> + *
>> + * The test is known to reproduce an issue with an update to
>> + * SLE-15-SP1 (kernel 4.12.14-197.64, bsc#1179093).
>> + */
>> +
>> +#include <stdlib.h>
>> +
>> +#include "tst_test.h"
>> +#include "tst_cgroup.h"
>> +#include "tst_timer.h"
>> +
>> +static const struct tst_cgroup_group *cg_test;
>> +static struct tst_cgroup_group *cg_level2, *cg_level3a, *cg_level3b;
>> +static struct tst_cgroup_group *cg_workers[3];
>> +
>> +static void set_cpu_quota(const struct tst_cgroup_group *const cg,
>> +			  const float quota_percent)
>> +{
>> +	const unsigned int period_us = 10000;
>> +	const unsigned int quota_us = (quota_percent / 100) * (float)period_us;
>> +
>> +	if (TST_CGROUP_VER(cg, "cpu") != TST_CGROUP_V1) {
>> +		SAFE_CGROUP_PRINTF(cg, "cpu.max",
>> +				   "%u %u", quota_us, period_us);
>> +	} else {
>> +		SAFE_CGROUP_PRINTF(cg, "cpu.max",
>> +				   "%u", quota_us);
>> +		SAFE_CGROUP_PRINTF(cg, "cpu.cfs_period_us",
>> +				  "%u", period_us);
>> +	}
>> +
>> +	tst_res(TINFO, "Set '%s/cpu.max' = '%d %d'",
>> +		tst_cgroup_group_name(cg), quota_us, period_us);
>> +}
>> +
>> +static struct tst_cgroup_group *
>> +mk_cpu_cgroup(const struct tst_cgroup_group *const cg_parent,
>> +	      const char *const cg_child_name,
>> +	      const float quota_percent)
>> +{
>> +	struct tst_cgroup_group *const cg =
>> +		tst_cgroup_group_mk(cg_parent, cg_child_name);
>> +
>> +	set_cpu_quota(cg, quota_percent);
>> +
>> +	return cg;
>> +}
>> +
>> +static void busy_loop(const unsigned int sleep_ms)
>> +{
>> +	for (;;) {
>> +		tst_timer_start(CLOCK_MONOTONIC_RAW);
>> +		while (!tst_timer_expired_ms(20))
>> +			;
>> +
>> +		const int ret = tst_checkpoint_wait(0, sleep_ms);
>> +
>> +		if (!ret)
>> +			exit(0);
>> +
>> +		if (errno != ETIMEDOUT)
>> +			tst_brk(TBROK | TERRNO, "tst_checkpoint_wait");
>> +	}
>> +}
>> +
>> +static void fork_busy_procs_in_cgroup(const struct tst_cgroup_group *const cg)
>> +{
>> +	const unsigned int sleeps_ms[] = {3000, 1000, 10};
>> +	const pid_t worker_pid = SAFE_FORK();
>> +	size_t i;
>> +
>> +	if (worker_pid)
>> +		return;
>> +
>> +	for (i = 0; i < ARRAY_SIZE(sleeps_ms); i++) {
>> +		const pid_t busy_pid = SAFE_FORK();
>> +
>> +		if (!busy_pid)
>> +			busy_loop(sleeps_ms[i]);
>> +
>> +		SAFE_CGROUP_PRINTF(cg, "cgroup.procs", "%d", busy_pid);
>> +	}
>> +
>> +	tst_reap_children();
>> +
>> +	exit(0);
>> +}
>> +
>> +static void do_test(void)
>> +{
>> +	size_t i;
>> +
>> +	cg_level2 = tst_cgroup_group_mk(cg_test, "level2");
>> +
>> +	cg_level3a = tst_cgroup_group_mk(cg_level2, "level3a");
>> +	cg_workers[0] = mk_cpu_cgroup(cg_level3a, "worker1", 30);
>> +	cg_workers[1] = mk_cpu_cgroup(cg_level3a, "worker2", 20);
>> +
>> +	cg_level3b = tst_cgroup_group_mk(cg_level2, "level3b");
>> +	cg_workers[2] = mk_cpu_cgroup(cg_level3b, "worker3", 30);
>> +
>> +	for (i = 0; i < ARRAY_SIZE(cg_workers); i++)
>> +		fork_busy_procs_in_cgroup(cg_workers[i]);
>> +
>> +	tst_res(TPASS, "Scheduled bandwidth constrained workers");
>> +
>> +	sleep(1);
>> +
>> +	set_cpu_quota(cg_level2, 50);
>> +
>> +	sleep(2);
>> +
>> +	TST_CHECKPOINT_WAKE2(0, 3 * 3);
>> +	tst_reap_children();
>> +
>> +	tst_res(TPASS, "Workers exited");
>> +}
>> +
>> +static void setup(void)
>> +{
>> +	tst_cgroup_require("cpu", NULL);
>> +
>> +	cg_test = tst_cgroup_get_test_group();
>> +}
>> +
>> +static void cleanup(void)
>> +{
>> +	size_t i;
>> +
>> +	for (i = 0; i < ARRAY_SIZE(cg_workers); i++) {
>> +		if (cg_workers[i])
>> +			cg_workers[i] = tst_cgroup_group_rm(cg_workers[i]);
>> +	}
>> +
>> +	if (cg_level3a)
>> +		cg_level3a = tst_cgroup_group_rm(cg_level3a);
>> +	if (cg_level3b)
>> +		cg_level3b = tst_cgroup_group_rm(cg_level3b);
>> +	if (cg_level2)
>> +		cg_level2 = tst_cgroup_group_rm(cg_level2);
>
> Hmm, I wonder if we can move this part of the cleanup to the test
> library as well. If we add all cgroups the user has created into a FIFO
> linked list then this could be implemented as a single loop in the
> tst_cgroup_clean().
>
> We would have to loop over the list in the tst_cgroup_group_rm() in
> order to remove the about to be removed group from the list as well, but
> I guess that this is still worth the trouble.

This sounds good. We probably need to check if the groups have processes
in them to print a nice error message. My main concern with automatic
cleanup is confusing errors from deep in the lib.
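
Roughly, before rmdir'ing a group the lib could do something like the
below to produce a nicer message (SAFE_CGROUP_READ is assumed here as a
hypothetical read helper, adjust to whatever the API actually provides):

static void cgroup_warn_if_populated(const struct tst_cgroup_group *const cg)
{
        char buf[BUFSIZ];

        /* SAFE_CGROUP_READ is an assumed helper, not confirmed API */
        SAFE_CGROUP_READ(cg, "cgroup.procs", buf, sizeof(buf));

        if (buf[0]) {
                tst_res(TWARN, "'%s' still contains processes: %s",
                        tst_cgroup_group_name(cg), buf);
        }
}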

>
> Other than that the test looks nice and clean.
>
>> +	tst_cgroup_cleanup();
>> +}
>> +
>> +static struct tst_test test = {
>> +	.test_all = do_test,
>> +	.setup = setup,
>> +	.cleanup = cleanup,
>> +	.forks_child = 1,
>> +	.needs_checkpoints = 1,
>> +	.taint_check = TST_TAINT_W | TST_TAINT_D,
>> +	.tags = (const struct tst_tag[]) {
>> +		{"linux-git", "39f23ce07b93"},
>> +		{"linux-git", "b34cb07dde7c"},
>> +		{"linux-git", "fe61468b2cbc"},
>> +		{"linux-git", "5ab297bab984"},
>> +		{"linux-git", "6d4d22468dae"},
>> +		{ }
>> +	}
>> +};
>> -- 
>> 2.31.1
>> 
>> 
>> -- 
>> Mailing list info: https://lists.linux.it/listinfo/ltp


-- 
Thank you,
Richard.


* [LTP] [PATCH 6/6] sched/cgroup: Add cfs_bandwidth01
  2021-05-20  8:50     ` Richard Palethorpe
@ 2021-05-21  9:29       ` Richard Palethorpe
  2021-05-25  9:06         ` Cyril Hrubis
  0 siblings, 1 reply; 20+ messages in thread
From: Richard Palethorpe @ 2021-05-21  9:29 UTC (permalink / raw)
  To: ltp

Hello,

>>> +
>>> +static void cleanup(void)
>>> +{
>>> +	size_t i;
>>> +
>>> +	for (i = 0; i < ARRAY_SIZE(cg_workers); i++) {
>>> +		if (cg_workers[i])
>>> +			cg_workers[i] = tst_cgroup_group_rm(cg_workers[i]);
>>> +	}
>>> +
>>> +	if (cg_level3a)
>>> +		cg_level3a = tst_cgroup_group_rm(cg_level3a);
>>> +	if (cg_level3b)
>>> +		cg_level3b = tst_cgroup_group_rm(cg_level3b);
>>> +	if (cg_level2)
>>> +		cg_level2 = tst_cgroup_group_rm(cg_level2);
>>
>> Hmm, I wonder if we can move this part of the cleanup to the test
>> library as well. If we add all cgroups the user has created into a FIFO
>> linked list then this could be implemented as a single loop in the
>> tst_cgroup_clean().
>>
>> We would have to loop over the list in the tst_cgroup_group_rm() in
>> order to remove the about to be removed group from the list as well, but
>> I guess that this is still worth the trouble.
>
> This sounds good. We probably need to check if the groups have processes
> in them to print a nice error message. My main concern with automatic
> cleanup is confusing errors from deep in the lib.
>

I think maybe this API makes a fundamental mistake of mixing memory/object
management with actual creation and deletion of CGroups. OTOH that is
not really clear either.

But if a child process starts deleting CGroups, which might be a
reasonable thing to do, then we will get a mismatch between child and
parent. Then the cleanup will be wrong.

Also any kind of linked list or array implementation uses more lines of
code than the cleanup function and is more complex for sure... even if
we have 10 test cases like this, is it really worth saving a few lines
in each case?

I don't know. But I think we need to see a few more cases.

-- 
Thank you,
Richard.


* [LTP] [PATCH 6/6] sched/cgroup: Add cfs_bandwidth01
  2021-05-21  9:29       ` Richard Palethorpe
@ 2021-05-25  9:06         ` Cyril Hrubis
  0 siblings, 0 replies; 20+ messages in thread
From: Cyril Hrubis @ 2021-05-25  9:06 UTC (permalink / raw)
  To: ltp

Hi!
> I think maybe this API makes a fundamental mistake of mixing memory/object
> management with actual creation and deletion of CGroups. OTOH that is
> not really clear either.
> 
> But if a child process starts deleting CGroups, which might be a
> reasonable thing to do, then we will get a mismatch between child and
> parent. Then the cleanup will be wrong.

Good point. I guess that we can make up rules that would make sure we do
not run cleanup both in the parent and child, but that would probably
overcomplicate the library.

> Also any kind of linked list or array implementation uses more lines of
> code than the cleanup function and more complex for sure... even if we
> have 10 test cases like this is it really work saving a few lines in
> each case?
> 
> I don't know. But I think we need to see a few more cases.

Makes sense.

-- 
Cyril Hrubis
chrubis@suse.cz

