cgroups.vger.kernel.org archive mirror
* [PATCH 0/4] cgroup: Introduce cpu controller test suite
@ 2022-04-22 17:33 David Vernet
       [not found] ` <20220422173349.3394844-1-void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
  0 siblings, 1 reply; 9+ messages in thread
From: David Vernet @ 2022-04-22 17:33 UTC (permalink / raw)
  To: tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, peterz-wEGCiKHe2LqWVfeAwA7xHQ,
	mingo-H+wXaHxf7aLQT0dZR+AlfA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, kernel-team-b10kYP2dOMg

This patchset introduces a new test_cpu.c test suite as part of
tools/testing/selftests/cgroup. test_cpu.c will contain testcases that
validate the cgroup v2 cpu controller.

This patchset only contains testcases that validate cpu.stat and
cpu.weight, but I'm expecting to send further patchsets after this that
also include testcases that validate other knobs such as cpu.max.
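
For anyone less familiar with the cgroup v2 cpu interface, here is a rough
sketch of the knobs these tests poke, using the existing cgroup_util.h
helpers. The function and its cgroup argument are illustrative only and are
not part of this series:

/*
 * Illustrative sketch -- not part of this patchset. Assumes an
 * already-created cgroup with the cpu controller enabled, plus the helpers
 * from tools/testing/selftests/cgroup/cgroup_util.h.
 */
static int demo_cpu_knobs(const char *cgroup)
{
        long usage_usec;

        /* cpu.stat: cumulative usage statistics, in microseconds. */
        usage_usec = cg_read_key_long(cgroup, "cpu.stat", "usage_usec");
        if (usage_usec < 0)
                return -1;

        /* cpu.weight: relative share under contention (default 100). */
        if (cg_write(cgroup, "cpu.weight", "200"))
                return -1;

        /* cpu.max: "$MAX $PERIOD" bandwidth limit in usec (later series). */
        return cg_write(cgroup, "cpu.max", "50000 100000");
}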

Note that checkpatch complains about a missing MAINTAINERS file entry for
[PATCH 1/4], but Roman Gushchin added that entry in a separate patchset:
https://lore.kernel.org/all/20220415000133.3955987-4-roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org/.

Changelog:
v2:
  - s/cgcpu/cpucg for variable names and test names.
  - Pass struct timespec as part of struct cpu_hog_func_param rather than
    stuffing the whole time as nanoseconds in a single long.

David Vernet (4):
  cgroup: Add new test_cpu.c test suite in cgroup selftests
  cgroup: Add test_cpucg_stats() testcase to cgroup cpu selftests
  cgroup: Add test_cpucg_weight_overprovisioned() testcase
  cgroup: Add test_cpucg_weight_underprovisioned() testcase

 tools/testing/selftests/cgroup/.gitignore    |   1 +
 tools/testing/selftests/cgroup/Makefile      |   2 +
 tools/testing/selftests/cgroup/cgroup_util.c |  12 +
 tools/testing/selftests/cgroup/cgroup_util.h |   4 +
 tools/testing/selftests/cgroup/test_cpu.c    | 446 +++++++++++++++++++
 5 files changed, 465 insertions(+)
 create mode 100644 tools/testing/selftests/cgroup/test_cpu.c

-- 
2.30.2



* [PATCH v2 1/4] cgroup: Add new test_cpu.c test suite in cgroup selftests
       [not found] ` <20220422173349.3394844-1-void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
@ 2022-04-22 17:33   ` David Vernet
  2022-04-22 17:33   ` [PATCH v2 2/4] cgroup: Add test_cpucg_stats() testcase to cgroup cpu selftests David Vernet
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: David Vernet @ 2022-04-22 17:33 UTC (permalink / raw)
  To: tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, peterz-wEGCiKHe2LqWVfeAwA7xHQ,
	mingo-H+wXaHxf7aLQT0dZR+AlfA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, kernel-team-b10kYP2dOMg

The cgroup selftests suite currently contains tests that validate various
aspects of cgroup, such as the expected behavior of the memory controller,
cgroup.procs, and so on. There are, however, no tests that validate the
expected behavior of the cgroup cpu controller.

This patch therefore adds a new test_cpu.c file that will contain cpu
controller testcases. The file currently contains a single testcase that
validates creating nested cgroups both with and without cpu enabled in
cgroup.subtree_control. Future patches will add more sophisticated
testcases that validate functional aspects of the cpu controller.

Signed-off-by: David Vernet <void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
---
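As background on the semantics the testcase checks (a C-comment summary, not
part of the patch itself):

/*
 * Writing "+cpu" to a parent's cgroup.subtree_control asks the kernel to
 * distribute the cpu controller to that parent's children, which makes
 * "cpu" appear in each child's cgroup.controllers file. The testcase
 * asserts both directions: with "+cpu" written the child reports "cpu",
 * and without it the child does not.
 */
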
 tools/testing/selftests/cgroup/.gitignore |   1 +
 tools/testing/selftests/cgroup/Makefile   |   2 +
 tools/testing/selftests/cgroup/test_cpu.c | 110 ++++++++++++++++++++++
 3 files changed, 113 insertions(+)
 create mode 100644 tools/testing/selftests/cgroup/test_cpu.c

diff --git a/tools/testing/selftests/cgroup/.gitignore b/tools/testing/selftests/cgroup/.gitignore
index be9643ef6285..306ee1b01e72 100644
--- a/tools/testing/selftests/cgroup/.gitignore
+++ b/tools/testing/selftests/cgroup/.gitignore
@@ -4,3 +4,4 @@ test_core
 test_freezer
 test_kmem
 test_kill
+test_cpu
diff --git a/tools/testing/selftests/cgroup/Makefile b/tools/testing/selftests/cgroup/Makefile
index 745fe25fa0b9..478217cc1371 100644
--- a/tools/testing/selftests/cgroup/Makefile
+++ b/tools/testing/selftests/cgroup/Makefile
@@ -10,6 +10,7 @@ TEST_GEN_PROGS += test_kmem
 TEST_GEN_PROGS += test_core
 TEST_GEN_PROGS += test_freezer
 TEST_GEN_PROGS += test_kill
+TEST_GEN_PROGS += test_cpu
 
 LOCAL_HDRS += $(selfdir)/clone3/clone3_selftests.h $(selfdir)/pidfd/pidfd.h
 
@@ -20,3 +21,4 @@ $(OUTPUT)/test_kmem: cgroup_util.c
 $(OUTPUT)/test_core: cgroup_util.c
 $(OUTPUT)/test_freezer: cgroup_util.c
 $(OUTPUT)/test_kill: cgroup_util.c
+$(OUTPUT)/test_cpu: cgroup_util.c
diff --git a/tools/testing/selftests/cgroup/test_cpu.c b/tools/testing/selftests/cgroup/test_cpu.c
new file mode 100644
index 000000000000..a724bff00d07
--- /dev/null
+++ b/tools/testing/selftests/cgroup/test_cpu.c
@@ -0,0 +1,110 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define _GNU_SOURCE
+#include <linux/limits.h>
+#include <stdio.h>
+
+#include "../kselftest.h"
+#include "cgroup_util.h"
+
+/*
+ * This test creates two nested cgroups with and without enabling
+ * the cpu controller.
+ */
+static int test_cpucg_subtree_control(const char *root)
+{
+	char *parent = NULL, *child = NULL, *parent2 = NULL, *child2 = NULL;
+	int ret = KSFT_FAIL;
+
+	// Create two nested cgroups with the cpu controller enabled.
+	parent = cg_name(root, "cpucg_test_0");
+	if (!parent)
+		goto cleanup;
+
+	if (cg_create(parent))
+		goto cleanup;
+
+	if (cg_write(parent, "cgroup.subtree_control", "+cpu"))
+		goto cleanup;
+
+	child = cg_name(parent, "cpucg_test_child");
+	if (!child)
+		goto cleanup;
+
+	if (cg_create(child))
+		goto cleanup;
+
+	if (cg_read_strstr(child, "cgroup.controllers", "cpu"))
+		goto cleanup;
+
+	// Create two nested cgroups without enabling the cpu controller.
+	parent2 = cg_name(root, "cpucg_test_1");
+	if (!parent2)
+		goto cleanup;
+
+	if (cg_create(parent2))
+		goto cleanup;
+
+	child2 = cg_name(parent2, "cpucg_test_child");
+	if (!child2)
+		goto cleanup;
+
+	if (cg_create(child2))
+		goto cleanup;
+
+	if (!cg_read_strstr(child2, "cgroup.controllers", "cpu"))
+		goto cleanup;
+
+	ret = KSFT_PASS;
+
+cleanup:
+	cg_destroy(child);
+	free(child);
+	cg_destroy(child2);
+	free(child2);
+	cg_destroy(parent);
+	free(parent);
+	cg_destroy(parent2);
+	free(parent2);
+
+	return ret;
+}
+
+#define T(x) { x, #x }
+struct cpucg_test {
+	int (*fn)(const char *root);
+	const char *name;
+} tests[] = {
+	T(test_cpucg_subtree_control),
+};
+#undef T
+
+int main(int argc, char *argv[])
+{
+	char root[PATH_MAX];
+	int i, ret = EXIT_SUCCESS;
+
+	if (cg_find_unified_root(root, sizeof(root)))
+		ksft_exit_skip("cgroup v2 isn't mounted\n");
+
+	if (cg_read_strstr(root, "cgroup.subtree_control", "cpu"))
+		if (cg_write(root, "cgroup.subtree_control", "+cpu"))
+			ksft_exit_skip("Failed to set cpu controller\n");
+
+	for (i = 0; i < ARRAY_SIZE(tests); i++) {
+		switch (tests[i].fn(root)) {
+		case KSFT_PASS:
+			ksft_test_result_pass("%s\n", tests[i].name);
+			break;
+		case KSFT_SKIP:
+			ksft_test_result_skip("%s\n", tests[i].name);
+			break;
+		default:
+			ret = EXIT_FAILURE;
+			ksft_test_result_fail("%s\n", tests[i].name);
+			break;
+		}
+	}
+
+	return ret;
+}
-- 
2.30.2



* [PATCH v2 2/4] cgroup: Add test_cpucg_stats() testcase to cgroup cpu selftests
       [not found] ` <20220422173349.3394844-1-void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
  2022-04-22 17:33   ` [PATCH v2 1/4] cgroup: Add new test_cpu.c test suite in cgroup selftests David Vernet
@ 2022-04-22 17:33   ` David Vernet
  2022-04-22 17:33   ` [PATCH v2 3/4] cgroup: Add test_cpucg_weight_overprovisioned() testcase David Vernet
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: David Vernet @ 2022-04-22 17:33 UTC (permalink / raw)
  To: tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, peterz-wEGCiKHe2LqWVfeAwA7xHQ,
	mingo-H+wXaHxf7aLQT0dZR+AlfA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, kernel-team-b10kYP2dOMg

test_cpu.c includes testcases that validate the cgroup cpu controller.
This patch adds a new testcase called test_cpucg_stats() that verifies the
expected behavior of the cpu.stat interface. In doing so, we define a
new hog_cpus_timed() function which takes a cpu_hog_func_param struct
that configures how many CPUs it uses and how long it runs. Future
patches will also spawn threads that hog CPUs, so this function will
eventually serve those use-cases as well.

Signed-off-by: David Vernet <void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
---
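As a worked illustration of the tolerance applied to usage_usec below (a
sketch only, not part of the patch; the numbers are made up):

/*
 * values_close(a, b, err), from cgroup_util.h, accepts the pair when the
 * two values differ by less than err% of their sum.
 */
static int stats_within_tolerance(long usage_usec, long usage_seconds)
{
        long expected = usage_seconds * USEC_PER_SEC;

        /*
         * With usage_seconds == 2, expected == 2000000. A measured
         * usage_usec of 2035000 passes (35000 < 1% of 4035000, i.e. 40350),
         * while 2100000 fails (100000 > 1% of 4100000, i.e. 41000).
         */
        return values_close(usage_usec, expected, 1);
}
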
 tools/testing/selftests/cgroup/cgroup_util.h |   3 +
 tools/testing/selftests/cgroup/test_cpu.c    | 128 +++++++++++++++++++
 2 files changed, 131 insertions(+)

diff --git a/tools/testing/selftests/cgroup/cgroup_util.h b/tools/testing/selftests/cgroup/cgroup_util.h
index 4f66d10626d2..1df13dc8b8aa 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.h
+++ b/tools/testing/selftests/cgroup/cgroup_util.h
@@ -8,6 +8,9 @@
 
 #define MB(x) (x << 20)
 
+#define USEC_PER_SEC	1000000L
+#define NSEC_PER_SEC	1000000000L
+
 /*
  * Checks if two given values differ by less than err% of their sum.
  */
diff --git a/tools/testing/selftests/cgroup/test_cpu.c b/tools/testing/selftests/cgroup/test_cpu.c
index a724bff00d07..3bd61964a262 100644
--- a/tools/testing/selftests/cgroup/test_cpu.c
+++ b/tools/testing/selftests/cgroup/test_cpu.c
@@ -2,11 +2,19 @@
 
 #define _GNU_SOURCE
 #include <linux/limits.h>
+#include <errno.h>
+#include <pthread.h>
 #include <stdio.h>
+#include <time.h>
 
 #include "../kselftest.h"
 #include "cgroup_util.h"
 
+struct cpu_hog_func_param {
+	int nprocs;
+	struct timespec ts;
+};
+
 /*
  * This test creates two nested cgroups with and without enabling
  * the cpu controller.
@@ -70,12 +78,132 @@ static int test_cpucg_subtree_control(const char *root)
 	return ret;
 }
 
+static void *hog_cpu_thread_func(void *arg)
+{
+	while (1)
+		;
+
+	return NULL;
+}
+
+static struct timespec
+timespec_sub(const struct timespec *lhs, const struct timespec *rhs)
+{
+	struct timespec zero = {
+		.tv_sec = 0,
+		.tv_nsec = 0,
+	};
+	struct timespec ret;
+
+	if (lhs->tv_sec < rhs->tv_sec)
+		return zero;
+
+	ret.tv_sec = lhs->tv_sec - rhs->tv_sec;
+
+	if (lhs->tv_nsec < rhs->tv_nsec) {
+		if (ret.tv_sec == 0)
+			return zero;
+
+		ret.tv_sec--;
+		ret.tv_nsec = NSEC_PER_SEC - rhs->tv_nsec + lhs->tv_nsec;
+	} else
+		ret.tv_nsec = lhs->tv_nsec - rhs->tv_nsec;
+
+	return ret;
+}
+
+static int hog_cpus_timed(const char *cgroup, void *arg)
+{
+	const struct cpu_hog_func_param *param =
+		(struct cpu_hog_func_param *)arg;
+	struct timespec ts_run = param->ts;
+	struct timespec ts_remaining = ts_run;
+	int i, ret;
+
+	for (i = 0; i < param->nprocs; i++) {
+		pthread_t tid;
+
+		ret = pthread_create(&tid, NULL, &hog_cpu_thread_func, NULL);
+		if (ret != 0)
+			return ret;
+	}
+
+	while (ts_remaining.tv_sec > 0 || ts_remaining.tv_nsec > 0) {
+		struct timespec ts_total;
+
+		ret = nanosleep(&ts_remaining, NULL);
+		if (ret && errno != EINTR)
+			return ret;
+
+		ret = clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_total);
+		if (ret != 0)
+			return ret;
+
+		ts_remaining = timespec_sub(&ts_run, &ts_total);
+	}
+
+	return 0;
+}
+
+/*
+ * Creates a cpu cgroup, burns a CPU for a few quanta, and verifies that
+ * cpu.stat shows the expected output.
+ */
+static int test_cpucg_stats(const char *root)
+{
+	int ret = KSFT_FAIL;
+	long usage_usec, user_usec, system_usec;
+	long usage_seconds = 2;
+	long expected_usage_usec = usage_seconds * USEC_PER_SEC;
+	char *cpucg;
+
+	cpucg = cg_name(root, "cpucg_test");
+	if (!cpucg)
+		goto cleanup;
+
+	if (cg_create(cpucg))
+		goto cleanup;
+
+	usage_usec = cg_read_key_long(cpucg, "cpu.stat", "usage_usec");
+	user_usec = cg_read_key_long(cpucg, "cpu.stat", "user_usec");
+	system_usec = cg_read_key_long(cpucg, "cpu.stat", "system_usec");
+	if (usage_usec != 0 || user_usec != 0 || system_usec != 0)
+		goto cleanup;
+
+	struct cpu_hog_func_param param = {
+		.nprocs = 1,
+		.ts = {
+			.tv_sec = usage_seconds,
+			.tv_nsec = 0,
+		},
+	};
+	if (cg_run(cpucg, hog_cpus_timed, (void *)&param))
+		goto cleanup;
+
+	usage_usec = cg_read_key_long(cpucg, "cpu.stat", "usage_usec");
+	user_usec = cg_read_key_long(cpucg, "cpu.stat", "user_usec");
+	if (user_usec <= 0)
+		goto cleanup;
+
+	if (!values_close(usage_usec, expected_usage_usec, 1))
+		goto cleanup;
+
+	ret = KSFT_PASS;
+
+cleanup:
+	cg_destroy(cpucg);
+	free(cpucg);
+
+	return ret;
+}
+
 #define T(x) { x, #x }
 struct cpucg_test {
 	int (*fn)(const char *root);
 	const char *name;
 } tests[] = {
 	T(test_cpucg_subtree_control),
+	T(test_cpucg_stats),
 };
 #undef T
 
-- 
2.30.2



* [PATCH v2 3/4] cgroup: Add test_cpucg_weight_overprovisioned() testcase
       [not found] ` <20220422173349.3394844-1-void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
  2022-04-22 17:33   ` [PATCH v2 1/4] cgroup: Add new test_cpu.c test suite in cgroup selftests David Vernet
  2022-04-22 17:33   ` [PATCH v2 2/4] cgroup: Add test_cpucg_stats() testcase to cgroup cpu selftests David Vernet
@ 2022-04-22 17:33   ` David Vernet
  2024-04-29  6:29     ` Pengfei Xu
  2022-04-22 17:33   ` [PATCH v2 4/4] cgroup: Add test_cpucg_weight_underprovisioned() testcase David Vernet
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 9+ messages in thread
From: David Vernet @ 2022-04-22 17:33 UTC (permalink / raw)
  To: tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, peterz-wEGCiKHe2LqWVfeAwA7xHQ,
	mingo-H+wXaHxf7aLQT0dZR+AlfA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, kernel-team-b10kYP2dOMg

test_cpu.c includes testcases that validate the cgroup cpu controller.
This patch adds a new testcase called test_cpucg_weight_overprovisioned()
that verifies the expected behavior of running multiple processes with
different cpu.weight values on an overprovisioned system.

To avoid code duplication, this patch also updates cpu_hog_func_param to
take a new hog_clock_type enum which specifies how time is measured in
hog_cpus_timed() (either process CPU time or wall-clock time).

Signed-off-by: David Vernet <void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
---
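As a worked illustration of the proportionality check below (a sketch only,
not part of the patch; the usage numbers are made up):

/*
 * With cpu.weight values of 50, 100 and 150 on a fully contended machine,
 * the children should receive CPU time in roughly a 1:2:3 ratio, so each
 * step up in weight should add about children[0].usage worth of runtime,
 * which is what the validation loop checks.
 */
static int overprovision_example(void)
{
        /* Hypothetical usage_usec readings for weights 50, 100, 150. */
        long usage[3] = { 5000000, 10300000, 14800000 };
        int i;

        for (i = 0; i < 2; i++) {
                long delta = usage[i + 1] - usage[i];

                /* Deltas of 5300000 and 4500000 are within 35% of 5000000. */
                if (usage[i + 1] <= usage[i] ||
                    !values_close(delta, usage[0], 35))
                        return KSFT_FAIL;
        }

        return KSFT_PASS;
}
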
 tools/testing/selftests/cgroup/cgroup_util.c |  12 ++
 tools/testing/selftests/cgroup/cgroup_util.h |   1 +
 tools/testing/selftests/cgroup/test_cpu.c    | 135 ++++++++++++++++++-
 3 files changed, 145 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
index 0cf7e90c0052..b690fdc8b4cd 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.c
+++ b/tools/testing/selftests/cgroup/cgroup_util.c
@@ -190,6 +190,18 @@ int cg_write(const char *cgroup, const char *control, char *buf)
 	return -1;
 }
 
+int cg_write_numeric(const char *cgroup, const char *control, long value)
+{
+	char buf[64];
+	int ret;
+
+	ret = sprintf(buf, "%lu", value);
+	if (ret < 0)
+		return ret;
+
+	return cg_write(cgroup, control, buf);
+}
+
 int cg_find_unified_root(char *root, size_t len)
 {
 	char buf[10 * PAGE_SIZE];
diff --git a/tools/testing/selftests/cgroup/cgroup_util.h b/tools/testing/selftests/cgroup/cgroup_util.h
index 1df13dc8b8aa..0f79156697cf 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.h
+++ b/tools/testing/selftests/cgroup/cgroup_util.h
@@ -35,6 +35,7 @@ extern long cg_read_long(const char *cgroup, const char *control);
 long cg_read_key_long(const char *cgroup, const char *control, const char *key);
 extern long cg_read_lc(const char *cgroup, const char *control);
 extern int cg_write(const char *cgroup, const char *control, char *buf);
+int cg_write_numeric(const char *cgroup, const char *control, long value);
 extern int cg_run(const char *cgroup,
 		  int (*fn)(const char *cgroup, void *arg),
 		  void *arg);
diff --git a/tools/testing/selftests/cgroup/test_cpu.c b/tools/testing/selftests/cgroup/test_cpu.c
index 3bd61964a262..8d901c06c79d 100644
--- a/tools/testing/selftests/cgroup/test_cpu.c
+++ b/tools/testing/selftests/cgroup/test_cpu.c
@@ -2,6 +2,8 @@
 
 #define _GNU_SOURCE
 #include <linux/limits.h>
+#include <sys/sysinfo.h>
+#include <sys/wait.h>
 #include <errno.h>
 #include <pthread.h>
 #include <stdio.h>
@@ -10,9 +12,17 @@
 #include "../kselftest.h"
 #include "cgroup_util.h"
 
+enum hog_clock_type {
+	// Count elapsed time using the CLOCK_PROCESS_CPUTIME_ID clock.
+	CPU_HOG_CLOCK_PROCESS,
+	// Count elapsed time using system wallclock time.
+	CPU_HOG_CLOCK_WALL,
+};
+
 struct cpu_hog_func_param {
 	int nprocs;
 	struct timespec ts;
+	enum hog_clock_type clock_type;
 };
 
 /*
@@ -118,8 +128,13 @@ static int hog_cpus_timed(const char *cgroup, void *arg)
 		(struct cpu_hog_func_param *)arg;
 	struct timespec ts_run = param->ts;
 	struct timespec ts_remaining = ts_run;
+	struct timespec ts_start;
 	int i, ret;
 
+	ret = clock_gettime(CLOCK_MONOTONIC, &ts_start);
+	if (ret != 0)
+		return ret;
+
 	for (i = 0; i < param->nprocs; i++) {
 		pthread_t tid;
 
@@ -135,9 +150,19 @@ static int hog_cpus_timed(const char *cgroup, void *arg)
 		if (ret && errno != EINTR)
 			return ret;
 
-		ret = clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_total);
-		if (ret != 0)
-			return ret;
+		if (param->clock_type == CPU_HOG_CLOCK_PROCESS) {
+			ret = clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_total);
+			if (ret != 0)
+				return ret;
+		} else {
+			struct timespec ts_current;
+
+			ret = clock_gettime(CLOCK_MONOTONIC, &ts_current);
+			if (ret != 0)
+				return ret;
+
+			ts_total = timespec_sub(&ts_current, &ts_start);
+		}
 
 		ts_remaining = timespec_sub(&ts_run, &ts_total);
 	}
@@ -176,6 +201,7 @@ static int test_cpucg_stats(const char *root)
 			.tv_sec = usage_seconds,
 			.tv_nsec = 0,
 		},
+		.clock_type = CPU_HOG_CLOCK_PROCESS,
 	};
 	if (cg_run(cpucg, hog_cpus_timed, (void *)&param))
 		goto cleanup;
@@ -197,6 +223,108 @@ static int test_cpucg_stats(const char *root)
 	return ret;
 }
 
+/*
+ * First, this test creates the following hierarchy:
+ * A
+ * A/B     cpu.weight = 50
+ * A/C     cpu.weight = 100
+ * A/D     cpu.weight = 150
+ *
+ * A separate process is then created for each child cgroup which spawns as
+ * many threads as there are cores, and hogs each CPU as much as possible
+ * for some time interval.
+ *
+ * Once all of the children have exited, we verify that each child cgroup
+ * was given proportional runtime as informed by their cpu.weight.
+ */
+static int test_cpucg_weight_overprovisioned(const char *root)
+{
+	struct child {
+		char *cgroup;
+		pid_t pid;
+		long usage;
+	};
+	int ret = KSFT_FAIL, i;
+	char *parent = NULL;
+	struct child children[3] = {NULL};
+	long usage_seconds = 10;
+
+	parent = cg_name(root, "cpucg_test_0");
+	if (!parent)
+		goto cleanup;
+
+	if (cg_create(parent))
+		goto cleanup;
+
+	if (cg_write(parent, "cgroup.subtree_control", "+cpu"))
+		goto cleanup;
+
+	for (i = 0; i < ARRAY_SIZE(children); i++) {
+		children[i].cgroup = cg_name_indexed(parent, "cpucg_child", i);
+		if (!children[i].cgroup)
+			goto cleanup;
+
+		if (cg_create(children[i].cgroup))
+			goto cleanup;
+
+		if (cg_write_numeric(children[i].cgroup, "cpu.weight",
+					50 * (i + 1)))
+			goto cleanup;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(children); i++) {
+		struct cpu_hog_func_param param = {
+			.nprocs = get_nprocs(),
+			.ts = {
+				.tv_sec = usage_seconds,
+				.tv_nsec = 0,
+			},
+			.clock_type = CPU_HOG_CLOCK_WALL,
+		};
+		pid_t pid = cg_run_nowait(children[i].cgroup, hog_cpus_timed,
+				(void *)&param);
+		if (pid <= 0)
+			goto cleanup;
+		children[i].pid = pid;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(children); i++) {
+		int retcode;
+
+		waitpid(children[i].pid, &retcode, 0);
+		if (!WIFEXITED(retcode))
+			goto cleanup;
+		if (WEXITSTATUS(retcode))
+			goto cleanup;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(children); i++)
+		children[i].usage = cg_read_key_long(children[i].cgroup,
+				"cpu.stat", "usage_usec");
+
+	for (i = 0; i < ARRAY_SIZE(children) - 1; i++) {
+		long delta;
+
+		if (children[i + 1].usage <= children[i].usage)
+			goto cleanup;
+
+		delta = children[i + 1].usage - children[i].usage;
+		if (!values_close(delta, children[0].usage, 35))
+			goto cleanup;
+	}
+
+	ret = KSFT_PASS;
+cleanup:
+	for (i = 0; i < ARRAY_SIZE(children); i++) {
+		cg_destroy(children[i].cgroup);
+		free(children[i].cgroup);
+	}
+	cg_destroy(parent);
+	free(parent);
+
+	return ret;
+}
+
 #define T(x) { x, #x }
 struct cpucg_test {
 	int (*fn)(const char *root);
@@ -204,6 +332,7 @@ struct cpucg_test {
 } tests[] = {
 	T(test_cpucg_subtree_control),
 	T(test_cpucg_stats),
+	T(test_cpucg_weight_overprovisioned),
 };
 #undef T
 
-- 
2.30.2



* [PATCH v2 4/4] cgroup: Add test_cpucg_weight_underprovisioned() testcase
       [not found] ` <20220422173349.3394844-1-void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
                     ` (2 preceding siblings ...)
  2022-04-22 17:33   ` [PATCH v2 3/4] cgroup: Add test_cpucg_weight_overprovisioned() testcase David Vernet
@ 2022-04-22 17:33   ` David Vernet
  2022-04-22 17:48   ` [PATCH 0/4] cgroup: Introduce cpu controller test suite Tejun Heo
  2022-04-22 18:40   ` Tejun Heo
  5 siblings, 0 replies; 9+ messages in thread
From: David Vernet @ 2022-04-22 17:33 UTC (permalink / raw)
  To: tj-DgEjT+Ai2ygdnm+yROfE0A, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA, peterz-wEGCiKHe2LqWVfeAwA7xHQ,
	mingo-H+wXaHxf7aLQT0dZR+AlfA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, kernel-team-b10kYP2dOMg

test_cpu.c includes testcases that validate the cgroup cpu controller.
This patch adds a new testcase called test_cpucg_weight_underprovisioned()
that verifies that processes with different cpu.weight values, all running
on an underprovisioned system, still get roughly the same amount of cpu
time.

Because test_cpucg_weight_underprovisioned() is very similar to
test_cpucg_weight_overprovisioned(), this patch also pulls the common logic
into a separate helper function that is invoked from both testcases, and
which uses function pointers for the portions that are unique to each
testcase.

Signed-off-by: David Vernet <void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
---
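For reference, the reasoning and tolerance behind the underprovisioned check
(a C-comment summary with illustrative numbers, not from a real run):

/*
 * With one hogger thread per child and at least 4 CPUs, the three children
 * never contend with each other, so cpu.weight has nothing to arbitrate and
 * each child should burn roughly the full interval on its own CPU.
 * underprovision_validate() therefore requires every child's usage_usec to
 * be within 15% of the first child's: usages of 9.9s, 10.0s and 9.8s pass
 * (e.g. |10.0s - 9.9s| < 15% of 19.9s), whereas 9.9s vs 7.0s fails
 * (2.9s > 15% of 16.9s, i.e. ~2.5s).
 */
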
 tools/testing/selftests/cgroup/test_cpu.c | 155 ++++++++++++++++------
 1 file changed, 117 insertions(+), 38 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_cpu.c b/tools/testing/selftests/cgroup/test_cpu.c
index 8d901c06c79d..64f9ce91c992 100644
--- a/tools/testing/selftests/cgroup/test_cpu.c
+++ b/tools/testing/selftests/cgroup/test_cpu.c
@@ -19,6 +19,12 @@ enum hog_clock_type {
 	CPU_HOG_CLOCK_WALL,
 };
 
+struct cpu_hogger {
+	char *cgroup;
+	pid_t pid;
+	long usage;
+};
+
 struct cpu_hog_func_param {
 	int nprocs;
 	struct timespec ts;
@@ -223,31 +229,15 @@ static int test_cpucg_stats(const char *root)
 	return ret;
 }
 
-/*
- * First, this test creates the following hierarchy:
- * A
- * A/B     cpu.weight = 50
- * A/C     cpu.weight = 100
- * A/D     cpu.weight = 150
- *
- * A separate process is then created for each child cgroup which spawns as
- * many threads as there are cores, and hogs each CPU as much as possible
- * for some time interval.
- *
- * Once all of the children have exited, we verify that each child cgroup
- * was given proportional runtime as informed by their cpu.weight.
- */
-static int test_cpucg_weight_overprovisioned(const char *root)
+static int
+run_cpucg_weight_test(
+		const char *root,
+		pid_t (*spawn_child)(const struct cpu_hogger *child),
+		int (*validate)(const struct cpu_hogger *children, int num_children))
 {
-	struct child {
-		char *cgroup;
-		pid_t pid;
-		long usage;
-	};
 	int ret = KSFT_FAIL, i;
 	char *parent = NULL;
-	struct child children[3] = {NULL};
-	long usage_seconds = 10;
+	struct cpu_hogger children[3] = {NULL};
 
 	parent = cg_name(root, "cpucg_test_0");
 	if (!parent)
@@ -273,16 +263,7 @@ static int test_cpucg_weight_overprovisioned(const char *root)
 	}
 
 	for (i = 0; i < ARRAY_SIZE(children); i++) {
-		struct cpu_hog_func_param param = {
-			.nprocs = get_nprocs(),
-			.ts = {
-				.tv_sec = usage_seconds,
-				.tv_nsec = 0,
-			},
-			.clock_type = CPU_HOG_CLOCK_WALL,
-		};
-		pid_t pid = cg_run_nowait(children[i].cgroup, hog_cpus_timed,
-				(void *)&param);
+		pid_t pid = spawn_child(&children[i]);
 		if (pid <= 0)
 			goto cleanup;
 		children[i].pid = pid;
@@ -302,7 +283,46 @@ static int test_cpucg_weight_overprovisioned(const char *root)
 		children[i].usage = cg_read_key_long(children[i].cgroup,
 				"cpu.stat", "usage_usec");
 
-	for (i = 0; i < ARRAY_SIZE(children) - 1; i++) {
+	if (validate(children, ARRAY_SIZE(children)))
+		goto cleanup;
+
+	ret = KSFT_PASS;
+cleanup:
+	for (i = 0; i < ARRAY_SIZE(children); i++) {
+		cg_destroy(children[i].cgroup);
+		free(children[i].cgroup);
+	}
+	cg_destroy(parent);
+	free(parent);
+
+	return ret;
+}
+
+static pid_t weight_hog_ncpus(const struct cpu_hogger *child, int ncpus)
+{
+	long usage_seconds = 10;
+	struct cpu_hog_func_param param = {
+		.nprocs = ncpus,
+		.ts = {
+			.tv_sec = usage_seconds,
+			.tv_nsec = 0,
+		},
+		.clock_type = CPU_HOG_CLOCK_WALL,
+	};
+	return cg_run_nowait(child->cgroup, hog_cpus_timed, (void *)&param);
+}
+
+static pid_t weight_hog_all_cpus(const struct cpu_hogger *child)
+{
+	return weight_hog_ncpus(child, get_nprocs());
+}
+
+static int
+overprovision_validate(const struct cpu_hogger *children, int num_children)
+{
+	int ret = KSFT_FAIL, i;
+
+	for (i = 0; i < num_children - 1; i++) {
 		long delta;
 
 		if (children[i + 1].usage <= children[i].usage)
@@ -315,16 +335,74 @@ static int test_cpucg_weight_overprovisioned(const char *root)
 
 	ret = KSFT_PASS;
 cleanup:
-	for (i = 0; i < ARRAY_SIZE(children); i++) {
-		cg_destroy(children[i].cgroup);
-		free(children[i].cgroup);
+	return ret;
+}
+
+/*
+ * First, this test creates the following hierarchy:
+ * A
+ * A/B     cpu.weight = 50
+ * A/C     cpu.weight = 100
+ * A/D     cpu.weight = 150
+ *
+ * A separate process is then created for each child cgroup which spawns as
+ * many threads as there are cores, and hogs each CPU as much as possible
+ * for some time interval.
+ *
+ * Once all of the children have exited, we verify that each child cgroup
+ * was given proportional runtime as informed by their cpu.weight.
+ */
+static int test_cpucg_weight_overprovisioned(const char *root)
+{
+	return run_cpucg_weight_test(root, weight_hog_all_cpus,
+			overprovision_validate);
+}
+
+static pid_t weight_hog_one_cpu(const struct cpu_hogger *child)
+{
+	return weight_hog_ncpus(child, 1);
+}
+
+static int
+underprovision_validate(const struct cpu_hogger *children, int num_children)
+{
+	int ret = KSFT_FAIL, i;
+
+	for (i = 0; i < num_children - 1; i++) {
+		if (!values_close(children[i + 1].usage, children[0].usage, 15))
+			goto cleanup;
 	}
-	cg_destroy(parent);
-	free(parent);
 
+	ret = KSFT_PASS;
+cleanup:
 	return ret;
 }
 
+/*
+ * First, this test creates the following hierarchy:
+ * A
+ * A/B     cpu.weight = 50
+ * A/C     cpu.weight = 100
+ * A/D     cpu.weight = 150
+ *
+ * A separate process is then created for each child cgroup which spawns a
+ * single thread that hogs a CPU. The testcase is only run on systems that
+ * have at least one core per-thread in the child processes.
+ *
+ * Once all of the children have exited, we verify that each child cgroup
+ * had roughly the same runtime despite having different cpu.weight.
+ */
+static int test_cpucg_weight_underprovisioned(const char *root)
+{
+	// Only run the test if there are enough cores to avoid overprovisioning
+	// the system.
+	if (get_nprocs() < 4)
+		return KSFT_SKIP;
+
+	return run_cpucg_weight_test(root, weight_hog_one_cpu,
+			underprovision_validate);
+}
+
 #define T(x) { x, #x }
 struct cpucg_test {
 	int (*fn)(const char *root);
@@ -333,6 +411,7 @@ struct cpucg_test {
 	T(test_cpucg_subtree_control),
 	T(test_cpucg_stats),
 	T(test_cpucg_weight_overprovisioned),
+	T(test_cpucg_weight_underprovisioned),
 };
 #undef T
 
-- 
2.30.2



* Re: [PATCH 0/4] cgroup: Introduce cpu controller test suite
       [not found] ` <20220422173349.3394844-1-void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
                     ` (3 preceding siblings ...)
  2022-04-22 17:33   ` [PATCH v2 4/4] cgroup: Add test_cpucg_weight_underprovisioned() testcase David Vernet
@ 2022-04-22 17:48   ` Tejun Heo
       [not found]     ` <YmLqdIiXdpQqcPTd-NiLfg/pYEd1N0TnZuCh8vA@public.gmane.org>
  2022-04-22 18:40   ` Tejun Heo
  5 siblings, 1 reply; 9+ messages in thread
From: Tejun Heo @ 2022-04-22 17:48 UTC (permalink / raw)
  To: David Vernet
  Cc: lizefan.x-EC8Uxl6Npydl57MIdRCFDg, hannes-druUgvl0LCNAfugRpC6u6w,
	cgroups-u79uwXL29TY76Z2rM5mHXA, peterz-wEGCiKHe2LqWVfeAwA7xHQ,
	mingo-H+wXaHxf7aLQT0dZR+AlfA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, kernel-team-b10kYP2dOMg

Hello, David.

On Fri, Apr 22, 2022 at 10:33:47AM -0700, David Vernet wrote:
> This patchset introduces a new test_cpu.c test suite as part of
> tools/testing/selftests/cgroup. test_cpu.c will contain testcases that
> validate the cgroup v2 cpu controller.
> 
> This patchset only contains testcases that validate cpu.stat and
> cpu.weight, but I'm expecting to send further patchsets after this that
> also include testcases that validate other knobs such as cpu.max.
> 
> Note that checkpatch complains about a missing MAINTAINERS file entry for
> [PATCH 1/4], but Roman Gushchin added that entry in a separate patchset:
> https://lore.kernel.org/all/20220415000133.3955987-4-roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org/.

Looks great to me. Thanks for adding the much needed selftests. Peter, if
you're okay with it, imma route it through the cgroup tree.

Thanks.

-- 
tejun


* Re: [PATCH 0/4] cgroup: Introduce cpu controller test suite
       [not found]     ` <YmLqdIiXdpQqcPTd-NiLfg/pYEd1N0TnZuCh8vA@public.gmane.org>
@ 2022-04-22 17:50       ` Peter Zijlstra
  0 siblings, 0 replies; 9+ messages in thread
From: Peter Zijlstra @ 2022-04-22 17:50 UTC (permalink / raw)
  To: Tejun Heo
  Cc: David Vernet, lizefan.x-EC8Uxl6Npydl57MIdRCFDg,
	hannes-druUgvl0LCNAfugRpC6u6w, cgroups-u79uwXL29TY76Z2rM5mHXA,
	mingo-H+wXaHxf7aLQT0dZR+AlfA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, kernel-team-b10kYP2dOMg

On Fri, Apr 22, 2022 at 07:48:36AM -1000, Tejun Heo wrote:
> Hello, David.
> 
> On Fri, Apr 22, 2022 at 10:33:47AM -0700, David Vernet wrote:
> > This patchset introduces a new test_cpu.c test suite as part of
> > tools/testing/selftests/cgroup. test_cpu.c will contain testcases that
> > validate the cgroup v2 cpu controller.
> > 
> > This patchset only contains testcases that validate cpu.stat and
> > cpu.weight, but I'm expecting to send further patchsets after this that
> > also include testcases that validate other knobs such as cpu.max.
> > 
> > Note that checkpatch complains about a missing MAINTAINERS file entry for
> > [PATCH 1/4], but Roman Gushchin added that entry in a separate patchset:
> > https://lore.kernel.org/all/20220415000133.3955987-4-roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org/.
> 
> Looks great to me. Thanks for adding the much needed selftests. Peter, if
> you're okay with it, imma route it through the cgroup tree.

Sure, have at. Thanks!


* Re: [PATCH 0/4] cgroup: Introduce cpu controller test suite
       [not found] ` <20220422173349.3394844-1-void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
                     ` (4 preceding siblings ...)
  2022-04-22 17:48   ` [PATCH 0/4] cgroup: Introduce cpu controller test suite Tejun Heo
@ 2022-04-22 18:40   ` Tejun Heo
  5 siblings, 0 replies; 9+ messages in thread
From: Tejun Heo @ 2022-04-22 18:40 UTC (permalink / raw)
  To: David Vernet
  Cc: lizefan.x-EC8Uxl6Npydl57MIdRCFDg, hannes-druUgvl0LCNAfugRpC6u6w,
	cgroups-u79uwXL29TY76Z2rM5mHXA, peterz-wEGCiKHe2LqWVfeAwA7xHQ,
	mingo-H+wXaHxf7aLQT0dZR+AlfA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, kernel-team-b10kYP2dOMg

On Fri, Apr 22, 2022 at 10:33:47AM -0700, David Vernet wrote:
> This patchset introduces a new test_cpu.c test suite as part of
> tools/testing/selftests/cgroup. test_cpu.c will contain testcases that
> validate the cgroup v2 cpu controller.
> 
> This patchset only contains testcases that validate cpu.stat and
> cpu.weight, but I'm expecting to send further patchsets after this that
> also include testcases that validate other knobs such as cpu.max.
> 
> Note that checkpatch complains about a missing MAINTAINERS file entry for
> [PATCH 1/4], but Roman Gushchin added that entry in a separate patchset:
> https://lore.kernel.org/all/20220415000133.3955987-4-roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org/.

Applied to cgroup/for-5.19.

Thanks.

-- 
tejun


* Re: [PATCH v2 3/4] cgroup: Add test_cpucg_weight_overprovisioned() testcase
  2022-04-22 17:33   ` [PATCH v2 3/4] cgroup: Add test_cpucg_weight_overprovisioned() testcase David Vernet
@ 2024-04-29  6:29     ` Pengfei Xu
  0 siblings, 0 replies; 9+ messages in thread
From: Pengfei Xu @ 2024-04-29  6:29 UTC (permalink / raw)
  To: David Vernet
  Cc: tj, lizefan.x, hannes, cgroups, peterz, mingo, linux-kernel, kernel-team

Hi David Vernet,

Greetings!

On 2022-04-22 at 10:33:52 -0700, David Vernet wrote:
> test_cpu.c includes testcases that validate the cgroup cpu controller.
> This patch adds a new testcase called test_cpucg_weight_overprovisioned()
> that verifies the expected behavior of running multiple processes with
> different cpu.weight values on an overprovisioned system.
> 
> To avoid code duplication, this patch also updates cpu_hog_func_param to
> take a new hog_clock_type enum which specifies how time is measured in
> hog_cpus_timed() (either process CPU time or wall-clock time).
> 
> Signed-off-by: David Vernet <void@manifault.com>
> ---
>  tools/testing/selftests/cgroup/cgroup_util.c |  12 ++
>  tools/testing/selftests/cgroup/cgroup_util.h |   1 +
>  tools/testing/selftests/cgroup/test_cpu.c    | 135 ++++++++++++++++++-
>  3 files changed, 145 insertions(+), 3 deletions(-)
> 

Related commit in kernel:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6376b22cd0a3455a534b6921b816ffab68ddc48f
Kernel: v6.4 ~ v6.9-rc5

I found that the test_cpu testcase "test_cpucg_weight_overprovisioned"
sometimes fails on an SPR (Sapphire Rapids) x86 server:
"
# ./test_cpu
ok 1 test_cpucg_subtree_control
ok 2 test_cpucg_stats
not ok 3 test_cpucg_weight_overprovisioned
ok 4 test_cpucg_weight_underprovisioned
ok 5 test_cpucg_nested_weight_overprovisioned
ok 6 test_cpucg_nested_weight_underprovisioned
ok 7 test_cpucg_max
ok 8 test_cpucg_max_nested
"

If I change "struct child children[3] = {NULL};" to
"struct child children[1] = {NULL};", the test_cpu case
"test_cpucg_weight_overprovisioned" no longer fails on SPR.
I'm not familiar with cgroup, so I'm not sure whether the above change
makes sense; could you take a look at this intermittent test_cpu failure
when you have time?

Best Regards,
Thanks a lot!


> diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
> index 0cf7e90c0052..b690fdc8b4cd 100644
> --- a/tools/testing/selftests/cgroup/cgroup_util.c
> +++ b/tools/testing/selftests/cgroup/cgroup_util.c
> @@ -190,6 +190,18 @@ int cg_write(const char *cgroup, const char *control, char *buf)
>  	return -1;
>  }
>  
> +int cg_write_numeric(const char *cgroup, const char *control, long value)
> +{
> +	char buf[64];
> +	int ret;
> +
> +	ret = sprintf(buf, "%lu", value);
> +	if (ret < 0)
> +		return ret;
> +
> +	return cg_write(cgroup, control, buf);
> +}
> +
>  int cg_find_unified_root(char *root, size_t len)
>  {
>  	char buf[10 * PAGE_SIZE];
> diff --git a/tools/testing/selftests/cgroup/cgroup_util.h b/tools/testing/selftests/cgroup/cgroup_util.h
> index 1df13dc8b8aa..0f79156697cf 100644
> --- a/tools/testing/selftests/cgroup/cgroup_util.h
> +++ b/tools/testing/selftests/cgroup/cgroup_util.h
> @@ -35,6 +35,7 @@ extern long cg_read_long(const char *cgroup, const char *control);
>  long cg_read_key_long(const char *cgroup, const char *control, const char *key);
>  extern long cg_read_lc(const char *cgroup, const char *control);
>  extern int cg_write(const char *cgroup, const char *control, char *buf);
> +int cg_write_numeric(const char *cgroup, const char *control, long value);
>  extern int cg_run(const char *cgroup,
>  		  int (*fn)(const char *cgroup, void *arg),
>  		  void *arg);
> diff --git a/tools/testing/selftests/cgroup/test_cpu.c b/tools/testing/selftests/cgroup/test_cpu.c
> index 3bd61964a262..8d901c06c79d 100644
> --- a/tools/testing/selftests/cgroup/test_cpu.c
> +++ b/tools/testing/selftests/cgroup/test_cpu.c
> @@ -2,6 +2,8 @@
>  
>  #define _GNU_SOURCE
>  #include <linux/limits.h>
> +#include <sys/sysinfo.h>
> +#include <sys/wait.h>
>  #include <errno.h>
>  #include <pthread.h>
>  #include <stdio.h>
> @@ -10,9 +12,17 @@
>  #include "../kselftest.h"
>  #include "cgroup_util.h"
>  
> +enum hog_clock_type {
> +	// Count elapsed time using the CLOCK_PROCESS_CPUTIME_ID clock.
> +	CPU_HOG_CLOCK_PROCESS,
> +	// Count elapsed time using system wallclock time.
> +	CPU_HOG_CLOCK_WALL,
> +};
> +
>  struct cpu_hog_func_param {
>  	int nprocs;
>  	struct timespec ts;
> +	enum hog_clock_type clock_type;
>  };
>  
>  /*
> @@ -118,8 +128,13 @@ static int hog_cpus_timed(const char *cgroup, void *arg)
>  		(struct cpu_hog_func_param *)arg;
>  	struct timespec ts_run = param->ts;
>  	struct timespec ts_remaining = ts_run;
> +	struct timespec ts_start;
>  	int i, ret;
>  
> +	ret = clock_gettime(CLOCK_MONOTONIC, &ts_start);
> +	if (ret != 0)
> +		return ret;
> +
>  	for (i = 0; i < param->nprocs; i++) {
>  		pthread_t tid;
>  
> @@ -135,9 +150,19 @@ static int hog_cpus_timed(const char *cgroup, void *arg)
>  		if (ret && errno != EINTR)
>  			return ret;
>  
> -		ret = clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_total);
> -		if (ret != 0)
> -			return ret;
> +		if (param->clock_type == CPU_HOG_CLOCK_PROCESS) {
> +			ret = clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_total);
> +			if (ret != 0)
> +				return ret;
> +		} else {
> +			struct timespec ts_current;
> +
> +			ret = clock_gettime(CLOCK_MONOTONIC, &ts_current);
> +			if (ret != 0)
> +				return ret;
> +
> +			ts_total = timespec_sub(&ts_current, &ts_start);
> +		}
>  
>  		ts_remaining = timespec_sub(&ts_run, &ts_total);
>  	}
> @@ -176,6 +201,7 @@ static int test_cpucg_stats(const char *root)
>  			.tv_sec = usage_seconds,
>  			.tv_nsec = 0,
>  		},
> +		.clock_type = CPU_HOG_CLOCK_PROCESS,
>  	};
>  	if (cg_run(cpucg, hog_cpus_timed, (void *)&param))
>  		goto cleanup;
> @@ -197,6 +223,108 @@ static int test_cpucg_stats(const char *root)
>  	return ret;
>  }
>  
> +/*
> + * First, this test creates the following hierarchy:
> + * A
> + * A/B     cpu.weight = 50
> + * A/C     cpu.weight = 100
> + * A/D     cpu.weight = 150
> + *
> + * A separate process is then created for each child cgroup which spawns as
> + * many threads as there are cores, and hogs each CPU as much as possible
> + * for some time interval.
> + *
> + * Once all of the children have exited, we verify that each child cgroup
> + * was given proportional runtime as informed by their cpu.weight.
> + */
> +static int test_cpucg_weight_overprovisioned(const char *root)
> +{
> +	struct child {
> +		char *cgroup;
> +		pid_t pid;
> +		long usage;
> +	};
> +	int ret = KSFT_FAIL, i;
> +	char *parent = NULL;
> +	struct child children[3] = {NULL};
> +	long usage_seconds = 10;
> +
> +	parent = cg_name(root, "cpucg_test_0");
> +	if (!parent)
> +		goto cleanup;
> +
> +	if (cg_create(parent))
> +		goto cleanup;
> +
> +	if (cg_write(parent, "cgroup.subtree_control", "+cpu"))
> +		goto cleanup;
> +
> +	for (i = 0; i < ARRAY_SIZE(children); i++) {
> +		children[i].cgroup = cg_name_indexed(parent, "cpucg_child", i);
> +		if (!children[i].cgroup)
> +			goto cleanup;
> +
> +		if (cg_create(children[i].cgroup))
> +			goto cleanup;
> +
> +		if (cg_write_numeric(children[i].cgroup, "cpu.weight",
> +					50 * (i + 1)))
> +			goto cleanup;
> +	}
> +
> +	for (i = 0; i < ARRAY_SIZE(children); i++) {
> +		struct cpu_hog_func_param param = {
> +			.nprocs = get_nprocs(),
> +			.ts = {
> +				.tv_sec = usage_seconds,
> +				.tv_nsec = 0,
> +			},
> +			.clock_type = CPU_HOG_CLOCK_WALL,
> +		};
> +		pid_t pid = cg_run_nowait(children[i].cgroup, hog_cpus_timed,
> +				(void *)&param);
> +		if (pid <= 0)
> +			goto cleanup;
> +		children[i].pid = pid;
> +	}
> +
> +	for (i = 0; i < ARRAY_SIZE(children); i++) {
> +		int retcode;
> +
> +		waitpid(children[i].pid, &retcode, 0);
> +		if (!WIFEXITED(retcode))
> +			goto cleanup;
> +		if (WEXITSTATUS(retcode))
> +			goto cleanup;
> +	}
> +
> +	for (i = 0; i < ARRAY_SIZE(children); i++)
> +		children[i].usage = cg_read_key_long(children[i].cgroup,
> +				"cpu.stat", "usage_usec");
> +
> +	for (i = 0; i < ARRAY_SIZE(children) - 1; i++) {
> +		long delta;
> +
> +		if (children[i + 1].usage <= children[i].usage)
> +			goto cleanup;
> +
> +		delta = children[i + 1].usage - children[i].usage;
> +		if (!values_close(delta, children[0].usage, 35))
> +			goto cleanup;
> +	}
> +
> +	ret = KSFT_PASS;
> +cleanup:
> +	for (i = 0; i < ARRAY_SIZE(children); i++) {
> +		cg_destroy(children[i].cgroup);
> +		free(children[i].cgroup);
> +	}
> +	cg_destroy(parent);
> +	free(parent);
> +
> +	return ret;
> +}
> +
>  #define T(x) { x, #x }
>  struct cpucg_test {
>  	int (*fn)(const char *root);
> @@ -204,6 +332,7 @@ struct cpucg_test {
>  } tests[] = {
>  	T(test_cpucg_subtree_control),
>  	T(test_cpucg_stats),
> +	T(test_cpucg_weight_overprovisioned),
>  };
>  #undef T
>  
> -- 
> 2.30.2
> 

