* [PATCH 0/4] cgroup: Introduce cpu controller test suite
@ 2022-04-19 21:32 David Vernet
  2022-04-19 21:32 ` [PATCH 1/4] cgroup: Add new test_cpu.c test suite in cgroup selftests David Vernet
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: David Vernet @ 2022-04-19 21:32 UTC (permalink / raw)
  To: tj, lizefan.x, hannes, cgroups, linux-kernel; +Cc: kernel-team

This patchset introduces a new test_cpu.c test suite as part of
tools/testing/selftests/cgroup. test_cpu.c will contain testcases that
validate the cgroup v2 cpu controller.

This patchset only contains testcases that validate cpu.stat and
cpu.weight, but I'm expecting to send further patchsets after this that
also include testcases that validate other knobs such as cpu.max.

Note that checkpatch complains about a missing MAINTAINERS file entry for
[PATCH 1/4], but Roman Gushchin added that entry in a separate patchset:
https://lore.kernel.org/all/20220415000133.3955987-4-roman.gushchin@linux.dev/.

David Vernet (4):
  cgroup: Add new test_cpu.c test suite in cgroup selftests
  cgroup: Add test_cgcpu_stats() testcase to cgroup cpu selftests
  cgroup: Add test_cgcpu_weight_overprovisioned() testcase
  cgroup: Add new test_cgcpu_weight_underprovisioned() testcase

 tools/testing/selftests/cgroup/.gitignore    |   1 +
 tools/testing/selftests/cgroup/Makefile      |   2 +
 tools/testing/selftests/cgroup/cgroup_util.c |  12 +
 tools/testing/selftests/cgroup/cgroup_util.h |   4 +
 tools/testing/selftests/cgroup/test_cpu.c    | 416 +++++++++++++++++++
 5 files changed, 435 insertions(+)
 create mode 100644 tools/testing/selftests/cgroup/test_cpu.c

-- 
2.30.2


* [PATCH 1/4] cgroup: Add new test_cpu.c test suite in cgroup selftests
  2022-04-19 21:32 [PATCH 0/4] cgroup: Introduce cpu controller test suite David Vernet
@ 2022-04-19 21:32 ` David Vernet
  2022-04-19 21:32 ` [PATCH 2/4] cgroup: Add test_cgcpu_stats() testcase to cgroup cpu selftests David Vernet
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: David Vernet @ 2022-04-19 21:32 UTC (permalink / raw)
  To: tj, lizefan.x, hannes, cgroups, linux-kernel; +Cc: kernel-team

The cgroup selftests suite currently contains tests that validate various
aspects of cgroup, such as the expected behavior of the memory controller
and of cgroup.procs. There are, however, no tests that validate the
expected behavior of the cgroup cpu controller.

This patch therefore adds a new test_cpu.c file that will contain cpu
controller testcases. The file currently contains only a single testcase,
which validates creating nested cgroups with and without enabling the cpu
controller via cgroup.subtree_control. Future patches will add more
sophisticated testcases that validate functional aspects of the cpu
controller.
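
For context, the cg_*() helpers used in the diff below are thin wrappers
around plain cgroupfs file operations. The propagation rule being tested
can be sketched in raw form roughly as follows (assumptions: cgroup2 is
mounted at /sys/fs/cgroup, the parent and child cgroups already exist and
hold no tasks, and cpu is already enabled in the root's subtree_control;
the paths and the cpu_visible_in_child() name are illustrative and not
part of the patch):

/*
 * Sketch: writing "+cpu" to a parent's cgroup.subtree_control should make
 * "cpu" appear in the child's cgroup.controllers file.
 */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static int cpu_visible_in_child(void)
{
	char buf[256] = { 0 };
	int fd;

	fd = open("/sys/fs/cgroup/parent/cgroup.subtree_control", O_WRONLY);
	if (fd < 0)
		return -1;
	if (write(fd, "+cpu", 4) != 4) {
		close(fd);
		return -1;
	}
	close(fd);

	fd = open("/sys/fs/cgroup/parent/child/cgroup.controllers", O_RDONLY);
	if (fd < 0)
		return -1;
	if (read(fd, buf, sizeof(buf) - 1) < 0) {
		close(fd);
		return -1;
	}
	close(fd);

	return strstr(buf, "cpu") != NULL;
}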

Signed-off-by: David Vernet <void@manifault.com>
---
 tools/testing/selftests/cgroup/.gitignore |   1 +
 tools/testing/selftests/cgroup/Makefile   |   2 +
 tools/testing/selftests/cgroup/test_cpu.c | 110 ++++++++++++++++++++++
 3 files changed, 113 insertions(+)
 create mode 100644 tools/testing/selftests/cgroup/test_cpu.c

diff --git a/tools/testing/selftests/cgroup/.gitignore b/tools/testing/selftests/cgroup/.gitignore
index be9643ef6285..306ee1b01e72 100644
--- a/tools/testing/selftests/cgroup/.gitignore
+++ b/tools/testing/selftests/cgroup/.gitignore
@@ -4,3 +4,4 @@ test_core
 test_freezer
 test_kmem
 test_kill
+test_cpu
diff --git a/tools/testing/selftests/cgroup/Makefile b/tools/testing/selftests/cgroup/Makefile
index 745fe25fa0b9..478217cc1371 100644
--- a/tools/testing/selftests/cgroup/Makefile
+++ b/tools/testing/selftests/cgroup/Makefile
@@ -10,6 +10,7 @@ TEST_GEN_PROGS += test_kmem
 TEST_GEN_PROGS += test_core
 TEST_GEN_PROGS += test_freezer
 TEST_GEN_PROGS += test_kill
+TEST_GEN_PROGS += test_cpu
 
 LOCAL_HDRS += $(selfdir)/clone3/clone3_selftests.h $(selfdir)/pidfd/pidfd.h
 
@@ -20,3 +21,4 @@ $(OUTPUT)/test_kmem: cgroup_util.c
 $(OUTPUT)/test_core: cgroup_util.c
 $(OUTPUT)/test_freezer: cgroup_util.c
 $(OUTPUT)/test_kill: cgroup_util.c
+$(OUTPUT)/test_cpu: cgroup_util.c
diff --git a/tools/testing/selftests/cgroup/test_cpu.c b/tools/testing/selftests/cgroup/test_cpu.c
new file mode 100644
index 000000000000..4faa279bbab3
--- /dev/null
+++ b/tools/testing/selftests/cgroup/test_cpu.c
@@ -0,0 +1,110 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define _GNU_SOURCE
+#include <linux/limits.h>
+#include <stdio.h>
+
+#include "../kselftest.h"
+#include "cgroup_util.h"
+
+/*
+ * This test creates two nested cgroups with and without enabling
+ * the cpu controller.
+ */
+static int test_cgcpu_subtree_control(const char *root)
+{
+	char *parent = NULL, *child = NULL, *parent2 = NULL, *child2 = NULL;
+	int ret = KSFT_FAIL;
+
+	// Create two nested cgroups with the cpu controller enabled.
+	parent = cg_name(root, "cgcpu_test_0");
+	if (!parent)
+		goto cleanup;
+
+	if (cg_create(parent))
+		goto cleanup;
+
+	if (cg_write(parent, "cgroup.subtree_control", "+cpu"))
+		goto cleanup;
+
+	child = cg_name(parent, "cgcpu_test_child");
+	if (!child)
+		goto cleanup;
+
+	if (cg_create(child))
+		goto cleanup;
+
+	if (cg_read_strstr(child, "cgroup.controllers", "cpu"))
+		goto cleanup;
+
+	// Create two nested cgroups without enabling the cpu controller.
+	parent2 = cg_name(root, "cgcpu_test_1");
+	if (!parent2)
+		goto cleanup;
+
+	if (cg_create(parent2))
+		goto cleanup;
+
+	child2 = cg_name(parent2, "cgcpu_test_child");
+	if (!child2)
+		goto cleanup;
+
+	if (cg_create(child2))
+		goto cleanup;
+
+	if (!cg_read_strstr(child2, "cgroup.controllers", "cpu"))
+		goto cleanup;
+
+	ret = KSFT_PASS;
+
+cleanup:
+	cg_destroy(child);
+	free(child);
+	cg_destroy(child2);
+	free(child2);
+	cg_destroy(parent);
+	free(parent);
+	cg_destroy(parent2);
+	free(parent2);
+
+	return ret;
+}
+
+#define T(x) { x, #x }
+struct cgcpu_test {
+	int (*fn)(const char *root);
+	const char *name;
+} tests[] = {
+	T(test_cgcpu_subtree_control),
+};
+#undef T
+
+int main(int argc, char *argv[])
+{
+	char root[PATH_MAX];
+	int i, ret = EXIT_SUCCESS;
+
+	if (cg_find_unified_root(root, sizeof(root)))
+		ksft_exit_skip("cgroup v2 isn't mounted\n");
+
+	if (cg_read_strstr(root, "cgroup.subtree_control", "cpu"))
+		if (cg_write(root, "cgroup.subtree_control", "+cpu"))
+			ksft_exit_skip("Failed to set cpu controller\n");
+
+	for (i = 0; i < ARRAY_SIZE(tests); i++) {
+		switch (tests[i].fn(root)) {
+		case KSFT_PASS:
+			ksft_test_result_pass("%s\n", tests[i].name);
+			break;
+		case KSFT_SKIP:
+			ksft_test_result_skip("%s\n", tests[i].name);
+			break;
+		default:
+			ret = EXIT_FAILURE;
+			ksft_test_result_fail("%s\n", tests[i].name);
+			break;
+		}
+	}
+
+	return ret;
+}
-- 
2.30.2


* [PATCH 2/4] cgroup: Add test_cgcpu_stats() testcase to cgroup cpu selftests
  2022-04-19 21:32 [PATCH 0/4] cgroup: Introduce cpu controller test suite David Vernet
  2022-04-19 21:32 ` [PATCH 1/4] cgroup: Add new test_cpu.c test suite in cgroup selftests David Vernet
@ 2022-04-19 21:32 ` David Vernet
  2022-04-19 21:32 ` [PATCH 3/4] cgroup: Add test_cgcpu_weight_overprovisioned() testcase David Vernet
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: David Vernet @ 2022-04-19 21:32 UTC (permalink / raw)
  To: tj, lizefan.x, hannes, cgroups, linux-kernel; +Cc: kernel-team

test_cpu.c includes testcases that validate the cgroup cpu controller.
This patch adds a new testcase called test_cgcpu_stats() that verifies the
expected behavior of the cpu.stat interface. In doing so, we define a
new hog_cpus_timed() function which takes a cpu_hog_func_param struct
specifying how many CPUs to hog and for how long to run. Future patches
will also spawn threads that hog CPUs, so this function will eventually
serve those use cases as well.
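
The cpu.stat comparison in the testcase below tolerates scheduler and
accounting noise by using the values_close() helper from cgroup_util.h.
Based on that helper's header comment ("differ by less than err% of their
sum"), its behavior is presumably equivalent to the sketch here; the
values_close_sketch() name is illustrative and this is an assumption about
the helper, not its actual source:

/*
 * Sketch: treat a and b as "close" when they differ by no more than err
 * percent of their sum. For a usage_usec of ~2,000,000 usec and err = 1,
 * the allowed difference is roughly 40,000 usec.
 */
static int values_close_sketch(long a, long b, int err)
{
	long diff = a > b ? a - b : b - a;

	return diff <= (a + b) / 100 * err;
}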

Signed-off-by: David Vernet <void@manifault.com>
---
 tools/testing/selftests/cgroup/cgroup_util.h |   3 +
 tools/testing/selftests/cgroup/test_cpu.c    | 105 +++++++++++++++++++
 2 files changed, 108 insertions(+)

diff --git a/tools/testing/selftests/cgroup/cgroup_util.h b/tools/testing/selftests/cgroup/cgroup_util.h
index 4f66d10626d2..1df13dc8b8aa 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.h
+++ b/tools/testing/selftests/cgroup/cgroup_util.h
@@ -8,6 +8,9 @@
 
 #define MB(x) (x << 20)
 
+#define USEC_PER_SEC	1000000L
+#define NSEC_PER_SEC	1000000000L
+
 /*
  * Checks if two given values differ by less than err% of their sum.
  */
diff --git a/tools/testing/selftests/cgroup/test_cpu.c b/tools/testing/selftests/cgroup/test_cpu.c
index 4faa279bbab3..57f6308b1ef4 100644
--- a/tools/testing/selftests/cgroup/test_cpu.c
+++ b/tools/testing/selftests/cgroup/test_cpu.c
@@ -2,11 +2,19 @@
 
 #define _GNU_SOURCE
 #include <linux/limits.h>
+#include <errno.h>
+#include <pthread.h>
 #include <stdio.h>
+#include <time.h>
 
 #include "../kselftest.h"
 #include "cgroup_util.h"
 
+struct cpu_hog_func_param {
+	int nprocs;
+	long runtime_nsec;
+};
+
 /*
  * This test creates two nested cgroups with and without enabling
  * the cpu controller.
@@ -70,12 +78,109 @@ static int test_cgcpu_subtree_control(const char *root)
 	return ret;
 }
 
+static void *hog_cpu_thread_func(void *arg)
+{
+	while (1)
+		;
+
+	return NULL;
+}
+
+static int hog_cpus_timed(const char *cgroup, void *arg)
+{
+	const struct cpu_hog_func_param *param =
+		(struct cpu_hog_func_param *)arg;
+	long nsecs_remaining = param->runtime_nsec;
+	int i, ret;
+
+	for (i = 0; i < param->nprocs; i++) {
+		pthread_t tid;
+
+		ret = pthread_create(&tid, NULL, &hog_cpu_thread_func, NULL);
+		if (ret != 0)
+			return ret;
+	}
+
+	while (nsecs_remaining > 0) {
+		long nsecs_so_far;
+		struct timespec ts = {
+			.tv_sec = nsecs_remaining / NSEC_PER_SEC,
+			.tv_nsec = nsecs_remaining % NSEC_PER_SEC,
+		};
+
+		ret = nanosleep(&ts, NULL);
+		if (ret && errno != EINTR)
+			return ret;
+
+		ret = clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
+		if (ret != 0)
+			return ret;
+
+		nsecs_so_far = ts.tv_sec * NSEC_PER_SEC + ts.tv_nsec;
+		nsecs_remaining = nsecs_so_far > param->runtime_nsec
+			? 0
+			: param->runtime_nsec - nsecs_so_far;
+	}
+
+	return 0;
+}
+
+/*
+ * Creates a cpu cgroup, burns a CPU for a few quanta, and verifies that
+ * cpu.stat shows the expected output.
+ */
+static int test_cgcpu_stats(const char *root)
+{
+	int ret = KSFT_FAIL;
+	long usage_usec, user_usec, system_usec;
+	long usage_seconds = 2;
+	long expected_usage_usec = usage_seconds * USEC_PER_SEC;
+	char *cgcpu;
+
+	cgcpu = cg_name(root, "cgcpu_test");
+	if (!cgcpu)
+		goto cleanup;
+
+	if (cg_create(cgcpu))
+		goto cleanup;
+
+	usage_usec = cg_read_key_long(cgcpu, "cpu.stat", "usage_usec");
+	user_usec = cg_read_key_long(cgcpu, "cpu.stat", "user_usec");
+	system_usec = cg_read_key_long(cgcpu, "cpu.stat", "system_usec");
+	if (usage_usec != 0 || user_usec != 0 || system_usec != 0)
+		goto cleanup;
+
+	struct cpu_hog_func_param param = {
+		.nprocs = 1,
+		.runtime_nsec = usage_seconds * NSEC_PER_SEC,
+	};
+	if (cg_run(cgcpu, hog_cpus_timed, (void *)&param))
+		goto cleanup;
+
+	usage_usec = cg_read_key_long(cgcpu, "cpu.stat", "usage_usec");
+	user_usec = cg_read_key_long(cgcpu, "cpu.stat", "user_usec");
+	if (user_usec <= 0)
+		goto cleanup;
+
+	if (!values_close(usage_usec, expected_usage_usec, 1))
+		goto cleanup;
+
+	ret = KSFT_PASS;
+
+cleanup:
+	cg_destroy(cgcpu);
+	free(cgcpu);
+
+	return ret;
+}
+
 #define T(x) { x, #x }
 struct cgcpu_test {
 	int (*fn)(const char *root);
 	const char *name;
 } tests[] = {
 	T(test_cgcpu_subtree_control),
+	T(test_cgcpu_stats),
 };
 #undef T
 
-- 
2.30.2


* [PATCH 3/4] cgroup: Add test_cgcpu_weight_overprovisioned() testcase
  2022-04-19 21:32 [PATCH 0/4] cgroup: Introduce cpu controller test suite David Vernet
  2022-04-19 21:32 ` [PATCH 1/4] cgroup: Add new test_cpu.c test suite in cgroup selftests David Vernet
  2022-04-19 21:32 ` [PATCH 2/4] cgroup: Add test_cgcpu_stats() testcase to cgroup cpu selftests David Vernet
@ 2022-04-19 21:32 ` David Vernet
  2022-04-19 21:32 ` [PATCH 4/4] cgroup: Add test_cgcpu_weight_underprovisioned() testcase David Vernet
  2022-04-21 22:21 ` [PATCH 0/4] cgroup: Introduce cpu controller test suite Tejun Heo
  4 siblings, 0 replies; 11+ messages in thread
From: David Vernet @ 2022-04-19 21:32 UTC (permalink / raw)
  To: tj, lizefan.x, hannes, cgroups, linux-kernel; +Cc: kernel-team

test_cpu.c includes testcases that validate the cgroup cpu controller.
This patch adds a new testcase called test_cgcpu_weight_overprovisioned()
that verifies the expected behavior when multiple processes with
different cpu.weight values run on an overprovisioned system.

To avoid code duplication, this patch also updates cpu_hog_func_param to
take a new hog_clock_type enum that indicates how elapsed time is measured
in hog_cpus_timed(): either process CPU time or wall-clock time.
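
As a sanity check on the validation logic: with cpu.weight values of 50,
100 and 150, an overprovisioned parent should split its CPU time in a
1:2:3 ratio, so the usage delta between adjacent children should roughly
equal the lowest-weighted child's usage. A minimal sketch of that
arithmetic (expected_split() and total_usec are illustrative names, not
part of the patch):

/*
 * Sketch: expected usage split for weights 50/100/150 when every child
 * wants all of the CPU time. Each child's share is weight / sum(weights),
 * so the step between adjacent children equals the weight-50 child's
 * share, which is what the testcase's
 * values_close(delta, children[0].usage, 35) check encodes, with slack
 * for scheduling noise.
 */
static void expected_split(long total_usec, long usage[3])
{
	const long weights[3] = { 50, 100, 150 };
	const long sum = 50 + 100 + 150;
	int i;

	for (i = 0; i < 3; i++)
		usage[i] = total_usec * weights[i] / sum;
}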

Signed-off-by: David Vernet <void@manifault.com>
---
 tools/testing/selftests/cgroup/cgroup_util.c |  12 ++
 tools/testing/selftests/cgroup/cgroup_util.h |   1 +
 tools/testing/selftests/cgroup/test_cpu.c    | 138 ++++++++++++++++++-
 3 files changed, 144 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
index 0cf7e90c0052..b690fdc8b4cd 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.c
+++ b/tools/testing/selftests/cgroup/cgroup_util.c
@@ -190,6 +190,18 @@ int cg_write(const char *cgroup, const char *control, char *buf)
 	return -1;
 }
 
+int cg_write_numeric(const char *cgroup, const char *control, long value)
+{
+	char buf[64];
+	int ret;
+
+	ret = sprintf(buf, "%lu", value);
+	if (ret < 0)
+		return ret;
+
+	return cg_write(cgroup, control, buf);
+}
+
 int cg_find_unified_root(char *root, size_t len)
 {
 	char buf[10 * PAGE_SIZE];
diff --git a/tools/testing/selftests/cgroup/cgroup_util.h b/tools/testing/selftests/cgroup/cgroup_util.h
index 1df13dc8b8aa..0f79156697cf 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.h
+++ b/tools/testing/selftests/cgroup/cgroup_util.h
@@ -35,6 +35,7 @@ extern long cg_read_long(const char *cgroup, const char *control);
 long cg_read_key_long(const char *cgroup, const char *control, const char *key);
 extern long cg_read_lc(const char *cgroup, const char *control);
 extern int cg_write(const char *cgroup, const char *control, char *buf);
+int cg_write_numeric(const char *cgroup, const char *control, long value);
 extern int cg_run(const char *cgroup,
 		  int (*fn)(const char *cgroup, void *arg),
 		  void *arg);
diff --git a/tools/testing/selftests/cgroup/test_cpu.c b/tools/testing/selftests/cgroup/test_cpu.c
index 57f6308b1ef4..2afac9f9e1e2 100644
--- a/tools/testing/selftests/cgroup/test_cpu.c
+++ b/tools/testing/selftests/cgroup/test_cpu.c
@@ -2,6 +2,8 @@
 
 #define _GNU_SOURCE
 #include <linux/limits.h>
+#include <sys/sysinfo.h>
+#include <sys/wait.h>
 #include <errno.h>
 #include <pthread.h>
 #include <stdio.h>
@@ -10,9 +12,17 @@
 #include "../kselftest.h"
 #include "cgroup_util.h"
 
+enum hog_clock_type {
+	// Count elapsed time using the CLOCK_PROCESS_CPUTIME_ID clock.
+	CPU_HOG_CLOCK_PROCESS,
+	// Count elapsed time using system wallclock time.
+	CPU_HOG_CLOCK_WALL,
+};
+
 struct cpu_hog_func_param {
 	int nprocs;
 	long runtime_nsec;
+	enum hog_clock_type clock_type;
 };
 
 /*
@@ -90,8 +100,14 @@ static int hog_cpus_timed(const char *cgroup, void *arg)
 {
 	const struct cpu_hog_func_param *param =
 		(struct cpu_hog_func_param *)arg;
+	long start_time;
 	long nsecs_remaining = param->runtime_nsec;
 	int i, ret;
+	struct timespec ts;
+
+	ret = clock_gettime(CLOCK_MONOTONIC, &ts);
+	if (ret != 0)
+		return ret;
 
 	for (i = 0; i < param->nprocs; i++) {
 		pthread_t tid;
@@ -101,22 +117,29 @@ static int hog_cpus_timed(const char *cgroup, void *arg)
 			return ret;
 	}
 
+	start_time = ts.tv_nsec + ts.tv_sec * NSEC_PER_SEC;
 	while (nsecs_remaining > 0) {
-		long nsecs_so_far;
-		struct timespec ts = {
-			.tv_sec = nsecs_remaining / NSEC_PER_SEC,
-			.tv_nsec = nsecs_remaining % NSEC_PER_SEC,
-		};
+		long nsecs_so_far, baseline;
+		clockid_t clock_id;
 
+		ts.tv_sec = nsecs_remaining / NSEC_PER_SEC;
+		ts.tv_nsec = nsecs_remaining % NSEC_PER_SEC;
 		ret = nanosleep(&ts, NULL);
 		if (ret && errno != EINTR)
 			return ret;
 
-		ret = clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
+		if (param->clock_type == CPU_HOG_CLOCK_PROCESS) {
+			clock_id = CLOCK_PROCESS_CPUTIME_ID;
+			baseline = 0;
+		} else {
+			clock_id = CLOCK_MONOTONIC;
+			baseline = start_time;
+		}
+		ret = clock_gettime(clock_id, &ts);
 		if (ret != 0)
 			return ret;
 
-		nsecs_so_far = ts.tv_sec * NSEC_PER_SEC + ts.tv_nsec;
+		nsecs_so_far = ts.tv_sec * NSEC_PER_SEC + ts.tv_nsec - baseline;
 		nsecs_remaining = nsecs_so_far > param->runtime_nsec
 			? 0
 			: param->runtime_nsec - nsecs_so_far;
@@ -153,6 +176,7 @@ static int test_cgcpu_stats(const char *root)
 	struct cpu_hog_func_param param = {
 		.nprocs = 1,
 		.runtime_nsec = usage_seconds * NSEC_PER_SEC,
+		.clock_type = CPU_HOG_CLOCK_PROCESS,
 	};
 	if (cg_run(cgcpu, hog_cpus_timed, (void *)&param))
 		goto cleanup;
@@ -174,6 +198,105 @@ static int test_cgcpu_stats(const char *root)
 	return ret;
 }
 
+/*
+ * First, this test creates the following hierarchy:
+ * A
+ * A/B     cpu.weight = 50
+ * A/C     cpu.weight = 100
+ * A/D     cpu.weight = 150
+ *
+ * A separate process is then created for each child cgroup which spawns as
+ * many threads as there are cores, and hogs each CPU as much as possible
+ * for some time interval.
+ *
+ * Once all of the children have exited, we verify that each child cgroup
+ * was given proportional runtime as informed by their cpu.weight.
+ */
+static int test_cgcpu_weight_overprovisioned(const char *root)
+{
+	struct child {
+		char *cgroup;
+		pid_t pid;
+		long usage;
+	};
+	int ret = KSFT_FAIL, i;
+	char *parent = NULL;
+	struct child children[3] = {NULL};
+	long usage_seconds = 10;
+
+	parent = cg_name(root, "cgcpu_test_0");
+	if (!parent)
+		goto cleanup;
+
+	if (cg_create(parent))
+		goto cleanup;
+
+	if (cg_write(parent, "cgroup.subtree_control", "+cpu"))
+		goto cleanup;
+
+	for (i = 0; i < ARRAY_SIZE(children); i++) {
+		children[i].cgroup = cg_name_indexed(parent, "cgcpu_child", i);
+		if (!children[i].cgroup)
+			goto cleanup;
+
+		if (cg_create(children[i].cgroup))
+			goto cleanup;
+
+		if (cg_write_numeric(children[i].cgroup, "cpu.weight",
+					50 * (i + 1)))
+			goto cleanup;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(children); i++) {
+		struct cpu_hog_func_param param = {
+			.nprocs = get_nprocs(),
+			.runtime_nsec = usage_seconds * NSEC_PER_SEC,
+			.clock_type = CPU_HOG_CLOCK_WALL,
+		};
+		pid_t pid = cg_run_nowait(children[i].cgroup, hog_cpus_timed,
+				(void *)&param);
+		if (pid <= 0)
+			goto cleanup;
+		children[i].pid = pid;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(children); i++) {
+		int retcode;
+
+		waitpid(children[i].pid, &retcode, 0);
+		if (!WIFEXITED(retcode))
+			goto cleanup;
+		if (WEXITSTATUS(retcode))
+			goto cleanup;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(children); i++)
+		children[i].usage = cg_read_key_long(children[i].cgroup,
+				"cpu.stat", "usage_usec");
+
+	for (i = 0; i < ARRAY_SIZE(children) - 1; i++) {
+		long delta;
+
+		if (children[i + 1].usage <= children[i].usage)
+			goto cleanup;
+
+		delta = children[i + 1].usage - children[i].usage;
+		if (!values_close(delta, children[0].usage, 35))
+			goto cleanup;
+	}
+
+	ret = KSFT_PASS;
+cleanup:
+	for (i = 0; i < ARRAY_SIZE(children); i++) {
+		cg_destroy(children[i].cgroup);
+		free(children[i].cgroup);
+	}
+	cg_destroy(parent);
+	free(parent);
+
+	return ret;
+}
+
 #define T(x) { x, #x }
 struct cgcpu_test {
 	int (*fn)(const char *root);
@@ -181,6 +304,7 @@ struct cgcpu_test {
 } tests[] = {
 	T(test_cgcpu_subtree_control),
 	T(test_cgcpu_stats),
+	T(test_cgcpu_weight_overprovisioned),
 };
 #undef T
 
-- 
2.30.2


* [PATCH 4/4] cgroup: Add test_cgcpu_weight_underprovisioned() testcase
  2022-04-19 21:32 [PATCH 0/4] cgroup: Introduce cpu controller test suite David Vernet
                   ` (2 preceding siblings ...)
  2022-04-19 21:32 ` [PATCH 3/4] cgroup: Add test_cgcpu_weight_overprovisioned() testcase David Vernet
@ 2022-04-19 21:32 ` David Vernet
  2022-04-21 22:21 ` [PATCH 0/4] cgroup: Introduce cpu controller test suite Tejun Heo
  4 siblings, 0 replies; 11+ messages in thread
From: David Vernet @ 2022-04-19 21:32 UTC (permalink / raw)
  To: tj, lizefan.x, hannes, cgroups, linux-kernel; +Cc: kernel-team

test_cpu.c includes testcases that validate the cgroup cpu controller.
This patch adds a new testcase called test_cgcpu_weight_underprovisioned()
that verifies that processes with different cpu.weight values, all
running on an underprovisioned system, still get roughly the same amount
of CPU time.

Because test_cgcpu_weight_underprovisioned() is very similar to
test_cgcpu_weight_overprovisioned(), this patch also pulls the common logic
into a separate helper function that both testcases invoke, using function
pointers for the portions that are unique to each testcase.
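
For reference, the expectation here is the mirror image of the
overprovisioned case: with at least four CPUs and a single hog thread per
child cgroup, every hog can run unthrottled for the whole interval, so
cpu.weight should have no visible effect on usage_usec. A minimal sketch
of the corresponding check (usages_roughly_equal() is an illustrative
name; the actual validate callback appears in the diff below):

/*
 * Sketch: on an underprovisioned system each child should accumulate
 * roughly the same CPU time, so compare every child's usage_usec against
 * the first child's within a 15% values_close() tolerance.
 */
static int usages_roughly_equal(const long *usage_usec, int nr_children)
{
	int i;

	for (i = 1; i < nr_children; i++)
		if (!values_close(usage_usec[i], usage_usec[0], 15))
			return 0;

	return 1;
}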

Signed-off-by: David Vernet <void@manifault.com>
---
 tools/testing/selftests/cgroup/test_cpu.c | 149 +++++++++++++++++-----
 1 file changed, 114 insertions(+), 35 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_cpu.c b/tools/testing/selftests/cgroup/test_cpu.c
index 2afac9f9e1e2..7adeadba88c4 100644
--- a/tools/testing/selftests/cgroup/test_cpu.c
+++ b/tools/testing/selftests/cgroup/test_cpu.c
@@ -19,6 +19,12 @@ enum hog_clock_type {
 	CPU_HOG_CLOCK_WALL,
 };
 
+struct cpu_hogger {
+	char *cgroup;
+	pid_t pid;
+	long usage;
+};
+
 struct cpu_hog_func_param {
 	int nprocs;
 	long runtime_nsec;
@@ -198,31 +204,15 @@ static int test_cgcpu_stats(const char *root)
 	return ret;
 }
 
-/*
- * First, this test creates the following hierarchy:
- * A
- * A/B     cpu.weight = 50
- * A/C     cpu.weight = 100
- * A/D     cpu.weight = 150
- *
- * A separate process is then created for each child cgroup which spawns as
- * many threads as there are cores, and hogs each CPU as much as possible
- * for some time interval.
- *
- * Once all of the children have exited, we verify that each child cgroup
- * was given proportional runtime as informed by their cpu.weight.
- */
-static int test_cgcpu_weight_overprovisioned(const char *root)
+static int
+run_cgcpu_weight_test(
+		const char *root,
+		pid_t (*spawn_child)(const struct cpu_hogger *child),
+		int (*validate)(const struct cpu_hogger *children, int num_children))
 {
-	struct child {
-		char *cgroup;
-		pid_t pid;
-		long usage;
-	};
 	int ret = KSFT_FAIL, i;
 	char *parent = NULL;
-	struct child children[3] = {NULL};
-	long usage_seconds = 10;
+	struct cpu_hogger children[3] = {NULL};
 
 	parent = cg_name(root, "cgcpu_test_0");
 	if (!parent)
@@ -248,13 +238,7 @@ static int test_cgcpu_weight_overprovisioned(const char *root)
 	}
 
 	for (i = 0; i < ARRAY_SIZE(children); i++) {
-		struct cpu_hog_func_param param = {
-			.nprocs = get_nprocs(),
-			.runtime_nsec = usage_seconds * NSEC_PER_SEC,
-			.clock_type = CPU_HOG_CLOCK_WALL,
-		};
-		pid_t pid = cg_run_nowait(children[i].cgroup, hog_cpus_timed,
-				(void *)&param);
+		pid_t pid = spawn_child(&children[i]);
 		if (pid <= 0)
 			goto cleanup;
 		children[i].pid = pid;
@@ -274,7 +258,43 @@ static int test_cgcpu_weight_overprovisioned(const char *root)
 		children[i].usage = cg_read_key_long(children[i].cgroup,
 				"cpu.stat", "usage_usec");
 
-	for (i = 0; i < ARRAY_SIZE(children) - 1; i++) {
+	if (validate(children, ARRAY_SIZE(children)))
+		goto cleanup;
+
+	ret = KSFT_PASS;
+cleanup:
+	for (i = 0; i < ARRAY_SIZE(children); i++) {
+		cg_destroy(children[i].cgroup);
+		free(children[i].cgroup);
+	}
+	cg_destroy(parent);
+	free(parent);
+
+	return ret;
+}
+
+static pid_t weight_hog_ncpus(const struct cpu_hogger *child, int ncpus)
+{
+	long usage_seconds = 10;
+	struct cpu_hog_func_param param = {
+		.nprocs = ncpus,
+		.runtime_nsec = usage_seconds * NSEC_PER_SEC,
+		.clock_type = CPU_HOG_CLOCK_WALL,
+	};
+	return cg_run_nowait(child->cgroup, hog_cpus_timed, (void *)&param);
+}
+
+static pid_t weight_hog_all_cpus(const struct cpu_hogger *child)
+{
+	return weight_hog_ncpus(child, get_nprocs());
+}
+
+static int
+overprovision_validate(const struct cpu_hogger *children, int num_children)
+{
+	int ret = KSFT_FAIL, i;
+
+	for (i = 0; i < num_children - 1; i++) {
 		long delta;
 
 		if (children[i + 1].usage <= children[i].usage)
@@ -287,16 +307,74 @@ static int test_cgcpu_weight_overprovisioned(const char *root)
 
 	ret = KSFT_PASS;
 cleanup:
-	for (i = 0; i < ARRAY_SIZE(children); i++) {
-		cg_destroy(children[i].cgroup);
-		free(children[i].cgroup);
+	return ret;
+}
+
+/*
+ * First, this test creates the following hierarchy:
+ * A
+ * A/B     cpu.weight = 50
+ * A/C     cpu.weight = 100
+ * A/D     cpu.weight = 150
+ *
+ * A separate process is then created for each child cgroup which spawns as
+ * many threads as there are cores, and hogs each CPU as much as possible
+ * for some time interval.
+ *
+ * Once all of the children have exited, we verify that each child cgroup
+ * was given proportional runtime as informed by their cpu.weight.
+ */
+static int test_cgcpu_weight_overprovisioned(const char *root)
+{
+	return run_cgcpu_weight_test(root, weight_hog_all_cpus,
+			overprovision_validate);
+}
+
+static pid_t weight_hog_one_cpu(const struct cpu_hogger *child)
+{
+	return weight_hog_ncpus(child, 1);
+}
+
+static int
+underprovision_validate(const struct cpu_hogger *children, int num_children)
+{
+	int ret = KSFT_FAIL, i;
+
+	for (i = 0; i < num_children - 1; i++) {
+		if (!values_close(children[i + 1].usage, children[0].usage, 15))
+			goto cleanup;
 	}
-	cg_destroy(parent);
-	free(parent);
 
+	ret = KSFT_PASS;
+cleanup:
 	return ret;
 }
 
+/*
+ * First, this test creates the following hierarchy:
+ * A
+ * A/B     cpu.weight = 50
+ * A/C     cpu.weight = 100
+ * A/D     cpu.weight = 150
+ *
+ * A separate process is then created for each child cgroup which spawns a
+ * single thread that hogs a CPU. The testcase is only run on systems that
+ * have at least one core per-thread in the child processes.
+ *
+ * Once all of the children have exited, we verify that each child cgroup
+ * had roughly the same runtime despite having different cpu.weight.
+ */
+static int test_cgcpu_weight_underprovisioned(const char *root)
+{
+	// Only run the test if there are enough cores to avoid overprovisioning
+	// the system.
+	if (get_nprocs() < 4)
+		return KSFT_SKIP;
+
+	return run_cgcpu_weight_test(root, weight_hog_one_cpu,
+			underprovision_validate);
+}
+
 #define T(x) { x, #x }
 struct cgcpu_test {
 	int (*fn)(const char *root);
@@ -305,6 +383,7 @@ struct cgcpu_test {
 	T(test_cgcpu_subtree_control),
 	T(test_cgcpu_stats),
 	T(test_cgcpu_weight_overprovisioned),
+	T(test_cgcpu_weight_underprovisioned),
 };
 #undef T
 
-- 
2.30.2


* Re: [PATCH 0/4] cgroup: Introduce cpu controller test suite
  2022-04-19 21:32 [PATCH 0/4] cgroup: Introduce cpu controller test suite David Vernet
                   ` (3 preceding siblings ...)
  2022-04-19 21:32 ` [PATCH 4/4] cgroup: Add test_cgcpu_weight_underprovisioned() testcase David Vernet
@ 2022-04-21 22:21 ` Tejun Heo
  2022-04-22 12:32   ` David Vernet
  4 siblings, 1 reply; 11+ messages in thread
From: Tejun Heo @ 2022-04-21 22:21 UTC (permalink / raw)
  To: David Vernet; +Cc: lizefan.x, hannes, cgroups, linux-kernel, kernel-team

Hello,

On Tue, Apr 19, 2022 at 02:32:40PM -0700, David Vernet wrote:
> This patchset introduces a new test_cpu.c test suite as part of
> tools/testing/selftests/cgroup. test_cpu.c will contain testcases that
> validate the cgroup v2 cpu controller.
> 
> This patchset only contains testcases that validate cpu.stat and
> cpu.weight, but I'm expecting to send further patchsets after this that
> also include testcases that validate other knobs such as cpu.max.
> 
> Note that checkpatch complains about a missing MAINTAINERS file entry for
> [PATCH 1/4], but Roman Gushchin added that entry in a separate patchset:
> https://lore.kernel.org/all/20220415000133.3955987-4-roman.gushchin@linux.dev/.

This looks great to me. A few small things:

* Can you please repost w/ Ingo and Peterz cc'd?

* Maybe cpucg instead of cgcpu?

* Single level testing is great but extending the cases to cover deeper
  nesting levels would be great too, ie. a test case with a multi level
  tree w/ both under and over provisioned parts in the tree.

Thanks.

-- 
tejun

* Re: [PATCH 0/4] cgroup: Introduce cpu controller test suite
  2022-04-21 22:21 ` [PATCH 0/4] cgroup: Introduce cpu controller test suite Tejun Heo
@ 2022-04-22 12:32   ` David Vernet
  0 siblings, 0 replies; 11+ messages in thread
From: David Vernet @ 2022-04-22 12:32 UTC (permalink / raw)
  To: Tejun Heo; +Cc: lizefan.x, hannes, cgroups, linux-kernel, kernel-team

Hi Tejun,

On Thu, Apr 21, 2022 at 12:21:18PM -1000, Tejun Heo wrote:
> * Can you please repost w/ Ingo and Peterz cc'd?

Will do, I'll cc them on v2.

> * Maybe cpucg instead of cgcpu?

Agreed, that seems more intuitive. I'll change it in v2.

> * Single level testing is great but extending the case to cover deeper
>   nesting level would be great. ie. a test case with multi level tree w/
>   both under and over provisioned parts in the tree.

This sounds like a great idea. I have another patch set that I was planning
to send out which adds a few more testcases. I can include some testcases
in that set which validate more complicated nesting setups.

Thanks,
David

* Re: [PATCH 0/4] cgroup: Introduce cpu controller test suite
  2022-04-22 17:33 David Vernet
  2022-04-22 17:48 ` Tejun Heo
@ 2022-04-22 18:40 ` Tejun Heo
  1 sibling, 0 replies; 11+ messages in thread
From: Tejun Heo @ 2022-04-22 18:40 UTC (permalink / raw)
  To: David Vernet
  Cc: lizefan.x, hannes, cgroups, peterz, mingo, linux-kernel, kernel-team

On Fri, Apr 22, 2022 at 10:33:47AM -0700, David Vernet wrote:
> This patchset introduces a new test_cpu.c test suite as part of
> tools/testing/selftests/cgroup. test_cpu.c will contain testcases that
> validate the cgroup v2 cpu controller.
> 
> This patchset only contains testcases that validate cpu.stat and
> cpu.weight, but I'm expecting to send further patchsets after this that
> also include testcases that validate other knobs such as cpu.max.
> 
> Note that checkpatch complains about a missing MAINTAINERS file entry for
> [PATCH 1/4], but Roman Gushchin added that entry in a separate patchset:
> https://lore.kernel.org/all/20220415000133.3955987-4-roman.gushchin@linux.dev/.

Applied to cgroup/for-5.19.

Thanks.

-- 
tejun

* Re: [PATCH 0/4] cgroup: Introduce cpu controller test suite
  2022-04-22 17:48 ` Tejun Heo
@ 2022-04-22 17:50   ` Peter Zijlstra
  0 siblings, 0 replies; 11+ messages in thread
From: Peter Zijlstra @ 2022-04-22 17:50 UTC (permalink / raw)
  To: Tejun Heo
  Cc: David Vernet, lizefan.x, hannes, cgroups, mingo, linux-kernel,
	kernel-team

On Fri, Apr 22, 2022 at 07:48:36AM -1000, Tejun Heo wrote:
> Hello, David.
> 
> On Fri, Apr 22, 2022 at 10:33:47AM -0700, David Vernet wrote:
> > This patchset introduces a new test_cpu.c test suite as part of
> > tools/testing/selftests/cgroup. test_cpu.c will contain testcases that
> > validate the cgroup v2 cpu controller.
> > 
> > This patchset only contains testcases that validate cpu.stat and
> > cpu.weight, but I'm expecting to send further patchsets after this that
> > also include testcases that validate other knobs such as cpu.max.
> > 
> > Note that checkpatch complains about a missing MAINTAINERS file entry for
> > [PATCH 1/4], but Roman Gushchin added that entry in a separate patchset:
> > https://lore.kernel.org/all/20220415000133.3955987-4-roman.gushchin@linux.dev/.
> 
> Looks great to me. Thanks for adding the much needed selftests. Peter, if
> you're okay with it, imma route it through the cgroup tree.

Sure, have at. Thanks!

* Re: [PATCH 0/4] cgroup: Introduce cpu controller test suite
  2022-04-22 17:33 David Vernet
@ 2022-04-22 17:48 ` Tejun Heo
  2022-04-22 17:50   ` Peter Zijlstra
  2022-04-22 18:40 ` Tejun Heo
  1 sibling, 1 reply; 11+ messages in thread
From: Tejun Heo @ 2022-04-22 17:48 UTC (permalink / raw)
  To: David Vernet
  Cc: lizefan.x, hannes, cgroups, peterz, mingo, linux-kernel, kernel-team

Hello, David.

On Fri, Apr 22, 2022 at 10:33:47AM -0700, David Vernet wrote:
> This patchset introduces a new test_cpu.c test suite as part of
> tools/testing/selftests/cgroup. test_cpu.c will contain testcases that
> validate the cgroup v2 cpu controller.
> 
> This patchset only contains testcases that validate cpu.stat and
> cpu.weight, but I'm expecting to send further patchsets after this that
> also include testcases that validate other knobs such as cpu.max.
> 
> Note that checkpatch complains about a missing MAINTAINERS file entry for
> [PATCH 1/4], but Roman Gushchin added that entry in a separate patchset:
> https://lore.kernel.org/all/20220415000133.3955987-4-roman.gushchin@linux.dev/.

Looks great to me. Thanks for adding the much needed selftests. Peter, if
you're okay with it, imma route it through the cgroup tree.

Thanks.

-- 
tejun

* [PATCH 0/4] cgroup: Introduce cpu controller test suite
@ 2022-04-22 17:33 David Vernet
  2022-04-22 17:48 ` Tejun Heo
  2022-04-22 18:40 ` Tejun Heo
  0 siblings, 2 replies; 11+ messages in thread
From: David Vernet @ 2022-04-22 17:33 UTC (permalink / raw)
  To: tj, lizefan.x, hannes; +Cc: cgroups, peterz, mingo, linux-kernel, kernel-team

This patchset introduces a new test_cpu.c test suite as part of
tools/testing/selftests/cgroup. test_cpu.c will contain testcases that
validate the cgroup v2 cpu controller.

This patchset only contains testcases that validate cpu.stat and
cpu.weight, but I'm expecting to send further patchsets after this that
also include testcases that validate other knobs such as cpu.max.

Note that checkpatch complains about a missing MAINTAINERS file entry for
[PATCH 1/4], but Roman Gushchin added that entry in a separate patchset:
https://lore.kernel.org/all/20220415000133.3955987-4-roman.gushchin@linux.dev/.

Changelog:
v2:
  - s/cgcpu/cpucg for variable names and test names.
  - Pass struct timespec as part of struct cpu_hog_func_param rather than
    stuffing the whole time as nanoseconds in a single long.
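
A plausible reading of the second item (the v2 series itself is not shown
in this thread, and the ts field name is an assumption) is that the hog
parameter struct now carries the runtime as a struct timespec directly:

/* Assumed v2 shape of the hog parameter struct; field names are guesses. */
struct cpu_hog_func_param {
	int nprocs;
	struct timespec ts;
	enum hog_clock_type clock_type;
};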

David Vernet (4):
  cgroup: Add new test_cpu.c test suite in cgroup selftests
  cgroup: Add test_cpucg_stats() testcase to cgroup cpu selftests
  cgroup: Add test_cpucg_weight_overprovisioned() testcase
  cgroup: Add test_cpucg_weight_underprovisioned() testcase

 tools/testing/selftests/cgroup/.gitignore    |   1 +
 tools/testing/selftests/cgroup/Makefile      |   2 +
 tools/testing/selftests/cgroup/cgroup_util.c |  12 +
 tools/testing/selftests/cgroup/cgroup_util.h |   4 +
 tools/testing/selftests/cgroup/test_cpu.c    | 446 +++++++++++++++++++
 5 files changed, 465 insertions(+)
 create mode 100644 tools/testing/selftests/cgroup/test_cpu.c

-- 
2.30.2


