* [PATCH 0/5] Fix bugs in memcontroller cgroup tests
@ 2022-04-22 15:57 ` David Vernet
  0 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-22 15:57 UTC (permalink / raw)
  To: akpm
  Cc: tj, roman.gushchin, linux-kernel, linux-mm, cgroups, hannes,
	mhocko, shakeelb, kernel-team, void

tools/testing/selftests/cgroup/test_memcontrol.c contains a set of
testcases which validate expected behavior of the cgroup memory controller.
Roman Gushchin recently sent out a patchset that fixed a few issues in the
test. This patchset continues that effort by fixing a few more issues that
were causing non-deterministic failures in the suite. With this patchset,
I'm unable to reproduce any more errors after running the tests in a
continuous loop for many iterations. Before, I was able to reproduce at
least one of the errors fixed in this patchset with just one or two runs.

David Vernet (5):
  cgroups: Refactor children cgroups in memcg tests
  cgroup: Account for memory_recursiveprot in test_memcg_low()
  cgroup: Account for memory_localevents in
    test_memcg_oom_group_leaf_events()
  cgroup: Removing racy check in test_memcg_sock()
  cgroup: Fix racy check in alloc_pagecache_max_30M() helper function

 tools/testing/selftests/cgroup/cgroup_util.c  | 12 ++++
 tools/testing/selftests/cgroup/cgroup_util.h  |  1 +
 .../selftests/cgroup/test_memcontrol.c        | 69 +++++++++++++------
 3 files changed, 61 insertions(+), 21 deletions(-)

-- 
2.30.2


* [PATCH 1/5] cgroups: Refactor children cgroups in memcg tests
@ 2022-04-22 15:57   ` David Vernet
  0 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-22 15:57 UTC (permalink / raw)
  To: akpm
  Cc: tj, roman.gushchin, linux-kernel, linux-mm, cgroups, hannes,
	mhocko, shakeelb, kernel-team, void

In test_memcg_min() and test_memcg_low(), there is an array of four sibling
cgroups. All but one of these sibling groups does a 50MB allocation, and
the group that does no allocation is the third of four in the array.  This
is not a problem per se, but makes it a bit tricky to do some assertions in
test_memcg_low(), as we want to make assertions on the siblings based on
whether or not they performed allocations. Having a static index before
which all groups have performed an allocation makes this cleaner.

This patch therefore reorders the sibling groups so that the group that
performs no allocations is the last in the array. A follow-on patch will
leverage this to fix a bug in the test that incorrectly asserts that a
sibling group that had performed an allocation, but only had protection
from its parent, will not observe any memory.events.low events during
reclaim.
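
To illustrate the "static index" point above, the checks over the four
siblings can then key off a single fixed index instead of special-casing the
middle element. The following is only an illustrative sketch, not part of
the diff below; the array size and helper name are made up for the example:

#include <assert.h>

#define N_CHILDREN 4

/* Sketch: with the non-allocating group moved to the end of the array,
 * every child below a fixed index is known to have performed the 50MB
 * allocation. */
static void check_children_usage(const long c[N_CHILDREN])
{
	int i;

	for (i = 0; i < N_CHILDREN; i++) {
		if (i < N_CHILDREN - 1)		/* allocating children */
			assert(c[i] > 0);
		else				/* trailing, non-allocating child */
			assert(c[i] == 0);
	}
}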

Signed-off-by: David Vernet <void@manifault.com>
---
 tools/testing/selftests/cgroup/test_memcontrol.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index 6b5259394e68..aa50eaa8b157 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -321,7 +321,7 @@ static int test_memcg_min(const char *root)
 		if (cg_create(children[i]))
 			goto cleanup;
 
-		if (i == 2)
+		if (i > 2)
 			continue;
 
 		cg_run_nowait(children[i], alloc_pagecache_50M_noexit,
@@ -336,9 +336,9 @@ static int test_memcg_min(const char *root)
 		goto cleanup;
 	if (cg_write(children[1], "memory.min", "25M"))
 		goto cleanup;
-	if (cg_write(children[2], "memory.min", "500M"))
+	if (cg_write(children[2], "memory.min", "0"))
 		goto cleanup;
-	if (cg_write(children[3], "memory.min", "0"))
+	if (cg_write(children[3], "memory.min", "500M"))
 		goto cleanup;
 
 	attempts = 0;
@@ -364,7 +364,7 @@ static int test_memcg_min(const char *root)
 	if (!values_close(c[1], MB(17), 20))
 		goto cleanup;
 
-	if (!values_close(c[2], 0, 1))
+	if (c[3] != 0)
 		goto cleanup;
 
 	if (!cg_run(parent[2], alloc_anon, (void *)MB(170)))
@@ -476,7 +476,7 @@ static int test_memcg_low(const char *root)
 		if (cg_create(children[i]))
 			goto cleanup;
 
-		if (i == 2)
+		if (i > 2)
 			continue;
 
 		if (cg_run(children[i], alloc_pagecache_50M, (void *)(long)fd))
@@ -491,9 +491,9 @@ static int test_memcg_low(const char *root)
 		goto cleanup;
 	if (cg_write(children[1], "memory.low", "25M"))
 		goto cleanup;
-	if (cg_write(children[2], "memory.low", "500M"))
+	if (cg_write(children[2], "memory.low", "0"))
 		goto cleanup;
-	if (cg_write(children[3], "memory.low", "0"))
+	if (cg_write(children[3], "memory.low", "500M"))
 		goto cleanup;
 
 	if (cg_run(parent[2], alloc_anon, (void *)MB(148)))
@@ -511,7 +511,7 @@ static int test_memcg_low(const char *root)
 	if (!values_close(c[1], MB(17), 20))
 		goto cleanup;
 
-	if (!values_close(c[2], 0, 1))
+	if (c[3] != 0)
 		goto cleanup;
 
 	if (cg_run(parent[2], alloc_anon, (void *)MB(166))) {
-- 
2.30.2


* [PATCH 2/5] cgroup: Account for memory_recursiveprot in test_memcg_low()
@ 2022-04-22 15:57   ` David Vernet
  0 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-22 15:57 UTC (permalink / raw)
  To: akpm
  Cc: tj, roman.gushchin, linux-kernel, linux-mm, cgroups, hannes,
	mhocko, shakeelb, kernel-team, void

The test_memcg_low() testcase in test_memcontrol.c verifies the expected
behavior of groups using the memory.low knob. Part of the testcase verifies
that a group with memory.low that experiences reclaim due to memory
pressure elsewhere in the system observes memory.events.low events as a
result of that reclaim.

In commit 8a931f801340 ("mm: memcontrol: recursive memory.low protection"),
the memory controller was updated to propagate memory.low and memory.min
protection from a parent group to its children via a configurable
memory_recursiveprot mount option. This unfortunately broke the memcg
tests, which assert that a sibling that experienced reclaim but had a
memory.low value of 0 would not observe any memory.low events. This patch
updates test_memcg_low() to account for the new behavior introduced by
memory_recursiveprot.

To make the test resilient to multiple configurations, the patch also adds a
new proc_mount_contains() helper that checks for a string in /proc/mounts,
which is used to toggle behavior based on whether the default
memory_recursiveprot mount option was present.
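
As an illustrative sketch (not taken from the patch; the helper name below
is made up), the expectation the updated loop encodes is roughly the
following, assuming the first three children performed allocations as
arranged by the previous patch:

#include <stdbool.h>

/* Sketch: which children should record "low" events after reclaim. With
 * memory_recursiveprot, the allocating child whose own memory.low is 0
 * (index 2) still inherits protection from the parent and therefore sees
 * events; the non-allocating child (index 3) never should. */
static bool expect_low_events(int child_idx, bool recursiveprot)
{
	int last_protected_idx = recursiveprot ? 2 : 1;

	return child_idx <= last_protected_idx;
}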

Signed-off-by: David Vernet <void@manifault.com>
---
 tools/testing/selftests/cgroup/cgroup_util.c     | 12 ++++++++++++
 tools/testing/selftests/cgroup/cgroup_util.h     |  1 +
 tools/testing/selftests/cgroup/test_memcontrol.c | 16 +++++++++++++---
 3 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
index dbaa7aabbb4a..e5d8d727bdcf 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.c
+++ b/tools/testing/selftests/cgroup/cgroup_util.c
@@ -535,6 +535,18 @@ int set_oom_adj_score(int pid, int score)
 	return 0;
 }
 
+int proc_mount_contains(const char *option)
+{
+	char buf[4 * PAGE_SIZE];
+	ssize_t read;
+
+	read = read_text("/proc/mounts", buf, sizeof(buf));
+	if (read < 0)
+		return read;
+
+	return strstr(buf, option) != NULL;
+}
+
 ssize_t proc_read_text(int pid, bool thread, const char *item, char *buf, size_t size)
 {
 	char path[PATH_MAX];
diff --git a/tools/testing/selftests/cgroup/cgroup_util.h b/tools/testing/selftests/cgroup/cgroup_util.h
index 628738532ac9..756f76052b44 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.h
+++ b/tools/testing/selftests/cgroup/cgroup_util.h
@@ -48,6 +48,7 @@ extern int is_swap_enabled(void);
 extern int set_oom_adj_score(int pid, int score);
 extern int cg_wait_for_proc_count(const char *cgroup, int count);
 extern int cg_killall(const char *cgroup);
+int proc_mount_contains(const char *option);
 extern ssize_t proc_read_text(int pid, bool thread, const char *item, char *buf, size_t size);
 extern int proc_read_strstr(int pid, bool thread, const char *item, const char *needle);
 extern pid_t clone_into_cgroup(int cgroup_fd);
diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index aa50eaa8b157..ea2fd27e52df 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -21,6 +21,8 @@
 #include "../kselftest.h"
 #include "cgroup_util.h"
 
+static bool has_recursiveprot;
+
 /*
  * This test creates two nested cgroups with and without enabling
  * the memory controller.
@@ -521,15 +523,18 @@ static int test_memcg_low(const char *root)
 	}
 
 	for (i = 0; i < ARRAY_SIZE(children); i++) {
+		int no_low_events_index = has_recursiveprot ? 2 : 1;
+
 		oom = cg_read_key_long(children[i], "memory.events", "oom ");
 		low = cg_read_key_long(children[i], "memory.events", "low ");
 
 		if (oom)
 			goto cleanup;
-		if (i < 2 && low <= 0)
+		if (i <= no_low_events_index && low <= 0)
 			goto cleanup;
-		if (i >= 2 && low)
+		if (i > no_low_events_index && low)
 			goto cleanup;
+
 	}
 
 	ret = KSFT_PASS;
@@ -1272,7 +1277,7 @@ struct memcg_test {
 int main(int argc, char **argv)
 {
 	char root[PATH_MAX];
-	int i, ret = EXIT_SUCCESS;
+	int i, proc_status, ret = EXIT_SUCCESS;
 
 	if (cg_find_unified_root(root, sizeof(root)))
 		ksft_exit_skip("cgroup v2 isn't mounted\n");
@@ -1288,6 +1293,11 @@ int main(int argc, char **argv)
 		if (cg_write(root, "cgroup.subtree_control", "+memory"))
 			ksft_exit_skip("Failed to set memory controller\n");
 
+	proc_status = proc_mount_contains("memory_recursiveprot");
+	if (proc_status < 0)
+		ksft_exit_skip("Failed to query cgroup mount option\n");
+	has_recursiveprot = proc_status;
+
 	for (i = 0; i < ARRAY_SIZE(tests); i++) {
 		switch (tests[i].fn(root)) {
 		case KSFT_PASS:
-- 
2.30.2


* [PATCH 3/5] cgroup: Account for memory_localevents in test_memcg_oom_group_leaf_events()
@ 2022-04-22 15:57   ` David Vernet
  0 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-22 15:57 UTC (permalink / raw)
  To: akpm
  Cc: tj, roman.gushchin, linux-kernel, linux-mm, cgroups, hannes,
	mhocko, shakeelb, kernel-team, void

The test_memcg_oom_group_leaf_events() testcase in the cgroup memcg tests
validates that processes in a group that perform allocations exceeding
memory.oom.group are killed. It also validates that the
memory.events.oom_kill events are properly propagated in this case.  Commit
06e11c907ea4 ("kselftests: memcg: update the oom group leaf events test")
fixed test_memcg_oom_group_leaf_events() to account for the fact that the
memory.events.oom_kill events in a child cgroup are propagated up to its
parent. This behavior can actually be configured by the memory_localevents
mount option, so this patch updates the testcase to properly account for
the possible presence of this mount option.

Signed-off-by: David Vernet <void@manifault.com>
---
 .../testing/selftests/cgroup/test_memcontrol.c  | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index ea2fd27e52df..d88e0ca3f3d1 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -21,6 +21,7 @@
 #include "../kselftest.h"
 #include "cgroup_util.h"
 
+static bool has_localevents;
 static bool has_recursiveprot;
 
 /*
@@ -1091,6 +1092,7 @@ static int test_memcg_oom_group_leaf_events(const char *root)
 {
 	int ret = KSFT_FAIL;
 	char *parent, *child;
+	long parent_oom_events;
 
 	parent = cg_name(root, "memcg_test_0");
 	child = cg_name(root, "memcg_test_0/memcg_test_1");
@@ -1128,7 +1130,15 @@ static int test_memcg_oom_group_leaf_events(const char *root)
 	if (cg_read_key_long(child, "memory.events", "oom_kill ") <= 0)
 		goto cleanup;
 
-	if (cg_read_key_long(parent, "memory.events", "oom_kill ") <= 0)
+	parent_oom_events = cg_read_key_long(
+			parent, "memory.events", "oom_kill ");
+	// If memory_localevents is not enabled (the default), the parent should
+	// count OOM events in its children groups. Otherwise, it should not
+	// have observed any events.
+	if (has_localevents) {
+		if (parent_oom_events != 0)
+			goto cleanup;
+	} else if (parent_oom_events <= 0)
 		goto cleanup;
 
 	ret = KSFT_PASS;
@@ -1298,6 +1308,11 @@ int main(int argc, char **argv)
 		ksft_exit_skip("Failed to query cgroup mount option\n");
 	has_recursiveprot = proc_status;
 
+	proc_status = proc_mount_contains("memory_localevents");
+	if (proc_status < 0)
+		ksft_exit_skip("Failed to query cgroup mount option\n");
+	has_localevents = proc_status;
+
 	for (i = 0; i < ARRAY_SIZE(tests); i++) {
 		switch (tests[i].fn(root)) {
 		case KSFT_PASS:
-- 
2.30.2


* [PATCH 4/5] cgroup: Removing racy check in test_memcg_sock()
@ 2022-04-22 15:57   ` David Vernet
  0 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-22 15:57 UTC (permalink / raw)
  To: akpm
  Cc: tj, roman.gushchin, linux-kernel, linux-mm, cgroups, hannes,
	mhocko, shakeelb, kernel-team, void

test_memcg_sock() in the cgroup memcg tests verifies expected memory
accounting for sockets. The test forks a process which functions as a TCP
server, and sends large buffers back and forth between itself (as the TCP
client) and the forked TCP server. While doing so, it verifies that
memory.current and memory.stat.sock look correct.

There is currently a check in tcp_client() which asserts memory.current >=
memory.stat.sock. This check is racy, as between memory.current and
memory.stat.sock being queried, a packet could come in which causes
mem_cgroup_charge_skmem() to be invoked. This could cause memory.stat.sock
to exceed memory.current. Reversing the order of querying doesn't address
the problem either, as memory may be reclaimed between the two calls.
This patch therefore removes that assertion altogether, and instead
relies on the values_close() check that follows to validate the expected
accounting.
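
Roughly, the window looks like the following. This is a hedged
reconstruction rather than a quote of the test, and it leans on the existing
cgroup_util helpers (cg_read_long(), cg_read_key_long(), values_close()):

/* Between the two reads, an incoming packet can be charged via
 * mem_cgroup_charge_skmem(), or memory can be reclaimed, so neither
 * ordering of the two counters is guaranteed. */
static int sock_accounting_looks_sane(const char *cgroup)
{
	long current, sock;

	current = cg_read_long(cgroup, "memory.current");
	/* a charge landing here can push the sock counter past the
	 * memory.current snapshot taken above */
	sock = cg_read_key_long(cgroup, "memory.stat", "sock ");

	/* a tolerance comparison is the only robust check */
	return values_close(current, sock, 10);
}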

Signed-off-by: David Vernet <void@manifault.com>
---
 tools/testing/selftests/cgroup/test_memcontrol.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index d88e0ca3f3d1..c4735fa36a3d 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -993,9 +993,6 @@ static int tcp_client(const char *cgroup, unsigned short port)
 		if (current < 0 || sock < 0)
 			goto close_sk;
 
-		if (current < sock)
-			goto close_sk;
-
 		if (values_close(current, sock, 10)) {
 			ret = KSFT_PASS;
 			break;
-- 
2.30.2


* [PATCH 5/5] cgroup: Fix racy check in alloc_pagecache_max_30M() helper function
@ 2022-04-22 15:57   ` David Vernet
  0 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-22 15:57 UTC (permalink / raw)
  To: akpm
  Cc: tj, roman.gushchin, linux-kernel, linux-mm, cgroups, hannes,
	mhocko, shakeelb, kernel-team, void

alloc_pagecache_max_30M() in the cgroup memcg tests performs a 50MB
pagecache allocation, which it expects to be capped at 30MB due to the
calling process having a memory.high setting of 30MB. After the allocation,
the function contains a check that verifies that MB(29) < memory.current <=
MB(30). This check can actually fail non-deterministically.

The testcases that use this function are test_memcg_high() and
test_memcg_max(), which set memory.high and memory.max to 30MB respectively
for the cgroup under test. The allocation can slightly exceed this number
in both cases, and for memory.max, the process performing the allocation
will not have the OOM killer invoked as it's performing a pagecache
allocation. This patch therefore updates the above check to instead use the
values_close() helper function.
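
For reference, values_close() is a percentage-tolerance comparison. The
snippet below is my reconstruction of its semantics rather than a copy from
cgroup_util.c; under these semantics, values_close(current, MB(30), 5)
accepts a memory.current value within roughly 3MB of the 30MB target:

#include <stdbool.h>
#include <stdlib.h>

/* Sketch: two values are "close" if they differ by at most err percent of
 * their sum. */
static bool values_close_sketch(long a, long b, int err)
{
	return labs(a - b) <= (a + b) / 100 * err;
}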

Signed-off-by: David Vernet <void@manifault.com>
---
 tools/testing/selftests/cgroup/test_memcontrol.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index c4735fa36a3d..088850f01ae7 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -564,9 +564,14 @@ static int alloc_pagecache_max_30M(const char *cgroup, void *arg)
 {
 	size_t size = MB(50);
 	int ret = -1;
-	long current;
+	long current, high, max;
 	int fd;
 
+	high = cg_read_long(cgroup, "memory.high");
+	max = cg_read_long(cgroup, "memory.max");
+	if (high != MB(30) && max != MB(30))
+		goto cleanup;
+
 	fd = get_temp_fd();
 	if (fd < 0)
 		return -1;
@@ -575,7 +580,7 @@ static int alloc_pagecache_max_30M(const char *cgroup, void *arg)
 		goto cleanup;
 
 	current = cg_read_long(cgroup, "memory.current");
-	if (current <= MB(29) || current > MB(30))
+	if (!values_close(current, MB(30), 5))
 		goto cleanup;
 
 	ret = 0;
-- 
2.30.2


* Re: [PATCH 1/5] cgroups: Refactor children cgroups in memcg tests
@ 2022-04-22 23:04     ` Roman Gushchin
  0 siblings, 0 replies; 34+ messages in thread
From: Roman Gushchin @ 2022-04-22 23:04 UTC (permalink / raw)
  To: David Vernet
  Cc: akpm, tj, linux-kernel, linux-mm, cgroups, hannes, mhocko,
	shakeelb, kernel-team


On Fri, Apr 22, 2022 at 08:57:25AM -0700, David Vernet wrote:
> In test_memcg_min() and test_memcg_low(), there is an array of four sibling
> cgroups. All but one of these sibling groups does a 50MB allocation, and
> the group that does no allocation is the third of four in the array.  This
> is not a problem per se, but makes it a bit tricky to do some assertions in
> test_memcg_low(), as we want to make assertions on the siblings based on
> whether or not they performed allocations. Having a static index before
> which all groups have performed an allocation makes this cleaner.
> 
> This patch therefore reorders the sibling groups so that the group that
> performs no allocations is the last in the array.

It makes the comment explaining the test just above the test_memcg_min()
function obsolete. Please, fix it too.

Thanks!

* Re: [PATCH 2/5] cgroup: Account for memory_recursiveprot in test_memcg_low()
@ 2022-04-22 23:06     ` Roman Gushchin
  0 siblings, 0 replies; 34+ messages in thread
From: Roman Gushchin @ 2022-04-22 23:06 UTC (permalink / raw)
  To: David Vernet
  Cc: akpm, tj, linux-kernel, linux-mm, cgroups, hannes, mhocko,
	shakeelb, kernel-team

On Fri, Apr 22, 2022 at 08:57:26AM -0700, David Vernet wrote:
> The test_memcg_low() testcase in test_memcontrol.c verifies the expected
> behavior of groups using the memory.low knob. Part of the testcase verifies
> that a group with memory.low that experiences reclaim due to memory
> pressure elsewhere in the system, observes memory.events.low events as a
> result of that reclaim.
> 
> In commit 8a931f801340 ("mm: memcontrol: recursive memory.low protection"),
> the memory controller was updated to propagate memory.low and memory.min
> protection from a parent group to its children via a configurable
> memory_recursiveprot mount option. This unfortunately broke the memcg
> tests, which asserts that a sibling that experienced reclaim but had a
> memory.low value of 0, would not observe any memory.low events. This patch
> updates test_memcg_low() to account for the new behavior introduced by
> memory_recursiveprot.
> 
> So as to make the test resilient to multiple configurations, the patch also
> adds a new proc_mount_contains() helper that checks for a string in
> /proc/mounts, and is used to toggle behavior based on whether the default
> memory_recursiveprot was present.
> 
> Signed-off-by: David Vernet <void@manifault.com>
> ---
>  tools/testing/selftests/cgroup/cgroup_util.c     | 12 ++++++++++++
>  tools/testing/selftests/cgroup/cgroup_util.h     |  1 +
>  tools/testing/selftests/cgroup/test_memcontrol.c | 16 +++++++++++++---
>  3 files changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
> index dbaa7aabbb4a..e5d8d727bdcf 100644
> --- a/tools/testing/selftests/cgroup/cgroup_util.c
> +++ b/tools/testing/selftests/cgroup/cgroup_util.c
> @@ -535,6 +535,18 @@ int set_oom_adj_score(int pid, int score)
>  	return 0;
>  }
>  
> +int proc_mount_contains(const char *option)
> +{
> +	char buf[4 * PAGE_SIZE];
> +	ssize_t read;
> +
> +	read = read_text("/proc/mounts", buf, sizeof(buf));
> +	if (read < 0)
> +		return read;
> +
> +	return strstr(buf, option) != NULL;
> +}
> +
>  ssize_t proc_read_text(int pid, bool thread, const char *item, char *buf, size_t size)
>  {
>  	char path[PATH_MAX];
> diff --git a/tools/testing/selftests/cgroup/cgroup_util.h b/tools/testing/selftests/cgroup/cgroup_util.h
> index 628738532ac9..756f76052b44 100644
> --- a/tools/testing/selftests/cgroup/cgroup_util.h
> +++ b/tools/testing/selftests/cgroup/cgroup_util.h
> @@ -48,6 +48,7 @@ extern int is_swap_enabled(void);
>  extern int set_oom_adj_score(int pid, int score);
>  extern int cg_wait_for_proc_count(const char *cgroup, int count);
>  extern int cg_killall(const char *cgroup);
> +int proc_mount_contains(const char *option);
>  extern ssize_t proc_read_text(int pid, bool thread, const char *item, char *buf, size_t size);
>  extern int proc_read_strstr(int pid, bool thread, const char *item, const char *needle);
>  extern pid_t clone_into_cgroup(int cgroup_fd);
> diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
> index aa50eaa8b157..ea2fd27e52df 100644
> --- a/tools/testing/selftests/cgroup/test_memcontrol.c
> +++ b/tools/testing/selftests/cgroup/test_memcontrol.c
> @@ -21,6 +21,8 @@
>  #include "../kselftest.h"
>  #include "cgroup_util.h"
>  
> +static bool has_recursiveprot;
> +
>  /*
>   * This test creates two nested cgroups with and without enabling
>   * the memory controller.
> @@ -521,15 +523,18 @@ static int test_memcg_low(const char *root)
>  	}
>  
>  	for (i = 0; i < ARRAY_SIZE(children); i++) {
> +		int no_low_events_index = has_recursiveprot ? 2 : 1;
> +
>  		oom = cg_read_key_long(children[i], "memory.events", "oom ");
>  		low = cg_read_key_long(children[i], "memory.events", "low ");
>  
>  		if (oom)
>  			goto cleanup;
> -		if (i < 2 && low <= 0)
> +		if (i <= no_low_events_index && low <= 0)
>  			goto cleanup;
> -		if (i >= 2 && low)
> +		if (i > no_low_events_index && low)
>  			goto cleanup;
> +
>  	}
>  
>  	ret = KSFT_PASS;
> @@ -1272,7 +1277,7 @@ struct memcg_test {
>  int main(int argc, char **argv)
>  {
>  	char root[PATH_MAX];
> -	int i, ret = EXIT_SUCCESS;
> +	int i, proc_status, ret = EXIT_SUCCESS;
>  
>  	if (cg_find_unified_root(root, sizeof(root)))
>  		ksft_exit_skip("cgroup v2 isn't mounted\n");
> @@ -1288,6 +1293,11 @@ int main(int argc, char **argv)
>  		if (cg_write(root, "cgroup.subtree_control", "+memory"))
>  			ksft_exit_skip("Failed to set memory controller\n");
>  
> +	proc_status = proc_mount_contains("memory_recursiveprot");
> +	if (proc_status < 0)
> +		ksft_exit_skip("Failed to query cgroup mount option\n");

Hopefully no one has a mountpoint with the memory_recursiveprot name :)

Acked-by: Roman Gushchin <roman.gushchin@linux.dev>

Thanks!

* Re: [PATCH 3/5] cgroup: Account for memory_localevents in test_memcg_oom_group_leaf_events()
@ 2022-04-22 23:14     ` Roman Gushchin
  0 siblings, 0 replies; 34+ messages in thread
From: Roman Gushchin @ 2022-04-22 23:14 UTC (permalink / raw)
  To: David Vernet
  Cc: akpm, tj, linux-kernel, linux-mm, cgroups, hannes, mhocko,
	shakeelb, kernel-team

On Fri, Apr 22, 2022 at 08:57:27AM -0700, David Vernet wrote:
> The test_memcg_oom_group_leaf_events() testcase in the cgroup memcg tests
> validates that processes in a group that perform allocations exceeding
> memory.oom.group are killed. It also validates that the
> memory.events.oom_kill events are properly propagated in this case.  Commit
> 06e11c907ea4 ("kselftests: memcg: update the oom group leaf events test")
> fixed test_memcg_oom_group_leaf_events() to account for the fact that the
> memory.events.oom_kill events in a child cgroup is propagated up to its
> parent. This behavior can actually be configured by the memory_localevents
> mount option, so this patch updates the testcase to properly account for
> the possible presence of this mount option.
> 
> Signed-off-by: David Vernet <void@manifault.com>
> ---
>  .../testing/selftests/cgroup/test_memcontrol.c  | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
> index ea2fd27e52df..d88e0ca3f3d1 100644
> --- a/tools/testing/selftests/cgroup/test_memcontrol.c
> +++ b/tools/testing/selftests/cgroup/test_memcontrol.c
> @@ -21,6 +21,7 @@
>  #include "../kselftest.h"
>  #include "cgroup_util.h"
>  
> +static bool has_localevents;
>  static bool has_recursiveprot;
>  
>  /*
> @@ -1091,6 +1092,7 @@ static int test_memcg_oom_group_leaf_events(const char *root)
>  {
>  	int ret = KSFT_FAIL;
>  	char *parent, *child;
> +	long parent_oom_events;
>  
>  	parent = cg_name(root, "memcg_test_0");
>  	child = cg_name(root, "memcg_test_0/memcg_test_1");
> @@ -1128,7 +1130,15 @@ static int test_memcg_oom_group_leaf_events(const char *root)
>  	if (cg_read_key_long(child, "memory.events", "oom_kill ") <= 0)
>  		goto cleanup;
>  
> -	if (cg_read_key_long(parent, "memory.events", "oom_kill ") <= 0)
> +	parent_oom_events = cg_read_key_long(
> +			parent, "memory.events", "oom_kill ");
> +	// If memory_localevents is not enabled (the default), the parent should
> +	// count OOM events in its children groups. Otherwise, it should not
> +	// have observed any events.

Please, use /* */ style comments, it's a generic kernel style.

> +	if (has_localevents) {
> +		if (parent_oom_events != 0)
> +			goto cleanup;
> +	} else if (parent_oom_events <= 0)
>  		goto cleanup;

How about something like this? IMO a bit more clear what's going on.
	if ((has_local_events && parent_oom_events == 0) ||
	    parent_oom_events > 0)
		ret = KSFT_PASS;

Anyway, looks good to me.
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>

>  
>  	ret = KSFT_PASS;
> @@ -1298,6 +1308,11 @@ int main(int argc, char **argv)
>  		ksft_exit_skip("Failed to query cgroup mount option\n");
>  	has_recursiveprot = proc_status;
>  
> +	proc_status = proc_mount_contains("memory_localevents");
> +	if (proc_status < 0)
> +		ksft_exit_skip("Failed to query cgroup mount option\n");
> +	has_localevents = proc_status;
> +
>  	for (i = 0; i < ARRAY_SIZE(tests); i++) {
>  		switch (tests[i].fn(root)) {
>  		case KSFT_PASS:
> -- 
> 2.30.2
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/5] cgroup: Account for memory_localevents in test_memcg_oom_group_leaf_events()
@ 2022-04-22 23:14     ` Roman Gushchin
  0 siblings, 0 replies; 34+ messages in thread
From: Roman Gushchin @ 2022-04-22 23:14 UTC (permalink / raw)
  To: David Vernet
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b, tj-DgEjT+Ai2ygdnm+yROfE0A,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w, mhocko-DgEjT+Ai2ygdnm+yROfE0A,
	shakeelb-hpIqsD4AKlfQT0dZR+AlfA, kernel-team-b10kYP2dOMg

On Fri, Apr 22, 2022 at 08:57:27AM -0700, David Vernet wrote:
> The test_memcg_oom_group_leaf_events() testcase in the cgroup memcg tests
> validates that processes in a group that perform allocations exceeding
> memory.oom.group are killed. It also validates that the
> memory.events.oom_kill events are properly propagated in this case.  Commit
> 06e11c907ea4 ("kselftests: memcg: update the oom group leaf events test")
> fixed test_memcg_oom_group_leaf_events() to account for the fact that the
> memory.events.oom_kill events in a child cgroup is propagated up to its
> parent. This behavior can actually be configured by the memory_localevents
> mount option, so this patch updates the testcase to properly account for
> the possible presence of this mount option.
> 
> Signed-off-by: David Vernet <void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
> ---
>  .../testing/selftests/cgroup/test_memcontrol.c  | 17 ++++++++++++++++-
>  1 file changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
> index ea2fd27e52df..d88e0ca3f3d1 100644
> --- a/tools/testing/selftests/cgroup/test_memcontrol.c
> +++ b/tools/testing/selftests/cgroup/test_memcontrol.c
> @@ -21,6 +21,7 @@
>  #include "../kselftest.h"
>  #include "cgroup_util.h"
>  
> +static bool has_localevents;
>  static bool has_recursiveprot;
>  
>  /*
> @@ -1091,6 +1092,7 @@ static int test_memcg_oom_group_leaf_events(const char *root)
>  {
>  	int ret = KSFT_FAIL;
>  	char *parent, *child;
> +	long parent_oom_events;
>  
>  	parent = cg_name(root, "memcg_test_0");
>  	child = cg_name(root, "memcg_test_0/memcg_test_1");
> @@ -1128,7 +1130,15 @@ static int test_memcg_oom_group_leaf_events(const char *root)
>  	if (cg_read_key_long(child, "memory.events", "oom_kill ") <= 0)
>  		goto cleanup;
>  
> -	if (cg_read_key_long(parent, "memory.events", "oom_kill ") <= 0)
> +	parent_oom_events = cg_read_key_long(
> +			parent, "memory.events", "oom_kill ");
> +	// If memory_localevents is not enabled (the default), the parent should
> +	// count OOM events in its children groups. Otherwise, it should not
> +	// have observed any events.

Please, use /* */ style comments, it's a generic kernel style.

> +	if (has_localevents) {
> +		if (parent_oom_events != 0)
> +			goto cleanup;
> +	} else if (parent_oom_events <= 0)
>  		goto cleanup;

How about something like this? IMO a bit more clear what's going on.
	if ((has_local_events && parent_oom_events == 0) ||
	    parent_oom_events > 0)
		ret = KSFT_PASS;

Anyway, looks good to me.
Acked-by: Roman Gushchin <roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org>

>  
>  	ret = KSFT_PASS;
> @@ -1298,6 +1308,11 @@ int main(int argc, char **argv)
>  		ksft_exit_skip("Failed to query cgroup mount option\n");
>  	has_recursiveprot = proc_status;
>  
> +	proc_status = proc_mount_contains("memory_localevents");
> +	if (proc_status < 0)
> +		ksft_exit_skip("Failed to query cgroup mount option\n");
> +	has_localevents = proc_status;
> +
>  	for (i = 0; i < ARRAY_SIZE(tests); i++) {
>  		switch (tests[i].fn(root)) {
>  		case KSFT_PASS:
> -- 
> 2.30.2
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread
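
A minimal sketch of how the final check could read once both review comments above are folded in (a kernel-style block comment plus a single pass condition). This is illustrative only, not necessarily the exact code that was merged; note the explicit !has_localevents guard so that a parent which did record events while memory_localevents is enabled still fails, and note that the unconditional ret = KSFT_PASS that follows in the original patch would be dropped in this form:

	parent_oom_events = cg_read_key_long(
			parent, "memory.events", "oom_kill ");
	/*
	 * If memory_localevents is not enabled (the default), the parent
	 * should count OOM events from its child groups. Otherwise, it
	 * should not have observed any events.
	 */
	if ((has_localevents && parent_oom_events == 0) ||
	    (!has_localevents && parent_oom_events > 0))
		ret = KSFT_PASS;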

* Re: [PATCH 4/5] cgroup: Removing racy check in test_memcg_sock()
@ 2022-04-22 23:50     ` Roman Gushchin
  0 siblings, 0 replies; 34+ messages in thread
From: Roman Gushchin @ 2022-04-22 23:50 UTC (permalink / raw)
  To: David Vernet
  Cc: akpm, tj, linux-kernel, linux-mm, cgroups, hannes, mhocko,
	shakeelb, kernel-team

On Fri, Apr 22, 2022 at 08:57:28AM -0700, David Vernet wrote:
> test_memcg_sock() in the cgroup memcg tests verifies expected memory
> accounting for sockets. The test forks a process which functions as a TCP
> server, and sends large buffers back and forth between itself (as the TCP
> client) and the forked TCP server. While doing so, it verifies that
> memory.current and memory.stat.sock look correct.
> 
> There is currently a check in tcp_client() which asserts memory.current >=
> memory.stat.sock. This check is racy, as between memory.current and
> memory.stat.sock being queried, a packet could come in which causes
> mem_cgroup_charge_skmem() to be invoked. This could cause memory.stat.sock
> to exceed memory.current. Reversing the order of querying doesn't address
> the problem either, as memory may be reclaimed between the two calls.

But just curious, does it fix the flakiness (assuming there is no memory
pressure)?

> Instead, this patch just removes that assertion altogether, and instead
> relies on the values_close() check that follows to validate the expected
> accounting.

Acked-by: Roman Gushchin <roman.gushchin@linux.dev>


^ permalink raw reply	[flat|nested] 34+ messages in thread
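
For context, a minimal sketch of the values_close()-based check that the commit message refers to, assuming values_close() treats two values as equal within a given error percentage; the exact margin and the surrounding structure of tcp_client() are assumptions here:

	current = cg_read_long(cgroup, "memory.current");
	sock = cg_read_key_long(cgroup, "memory.stat", "sock ");

	/* only require the two counters to be close, not strictly ordered */
	if (current >= 0 && sock >= 0 && values_close(current, sock, 10))
		ret = KSFT_PASS;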

* Re: [PATCH 4/5] cgroup: Removing racy check in test_memcg_sock()
@ 2022-04-22 23:50     ` Roman Gushchin
  0 siblings, 0 replies; 34+ messages in thread
From: Roman Gushchin @ 2022-04-22 23:50 UTC (permalink / raw)
  To: David Vernet
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b, tj-DgEjT+Ai2ygdnm+yROfE0A,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w, mhocko-DgEjT+Ai2ygdnm+yROfE0A,
	shakeelb-hpIqsD4AKlfQT0dZR+AlfA, kernel-team-b10kYP2dOMg

On Fri, Apr 22, 2022 at 08:57:28AM -0700, David Vernet wrote:
> test_memcg_sock() in the cgroup memcg tests verifies expected memory
> accounting for sockets. The test forks a process which functions as a TCP
> server, and sends large buffers back and forth between itself (as the TCP
> client) and the forked TCP server. While doing so, it verifies that
> memory.current and memory.stat.sock look correct.
> 
> There is currently a check in tcp_client() which asserts memory.current >=
> memory.stat.sock. This check is racy, as between memory.current and
> memory.stat.sock being queried, a packet could come in which causes
> mem_cgroup_charge_skmem() to be invoked. This could cause memory.stat.sock
> to exceed memory.current. Reversing the order of querying doesn't address
> the problem either, as memory may be reclaimed between the two calls.

But just curious, does it fix the flakiness (assuming there is no memory
pressure)?

> Instead, this patch just removes that assertion altogether, and instead
> relies on the values_close() check that follows to validate the expected
> accounting.

Acked-by: Roman Gushchin <roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 5/5] cgroup: Fix racy check in alloc_pagecache_max_30M() helper function
@ 2022-04-22 23:56     ` Roman Gushchin
  0 siblings, 0 replies; 34+ messages in thread
From: Roman Gushchin @ 2022-04-22 23:56 UTC (permalink / raw)
  To: David Vernet
  Cc: akpm, tj, linux-kernel, linux-mm, cgroups, hannes, mhocko,
	shakeelb, kernel-team

On Fri, Apr 22, 2022 at 08:57:29AM -0700, David Vernet wrote:
> alloc_pagecache_max_30M() in the cgroup memcg tests performs a 50MB
> pagecache allocation, which it expects to be capped at 30MB due to the
> calling process having a memory.high setting of 30MB. After the allocation,
> the function contains a check that verifies that MB(29) < memory.current <=
> MB(30). This check can actually fail non-deterministically.
> 
> The testcases that use this function are test_memcg_high() and
> test_memcg_max(), which set memory.high and memory.max to 30MB respectively
> for the cgroup under test. The allocation can slightly exceed this number
> in both cases, and for memory.max, the process performing the allocation
> will not have the OOM killer invoked as it's performing a pagecache
> allocation.  This patch therefore updates the above check to instead use
> the values_close() helper function.
> 
> Signed-off-by: David Vernet <void@manifault.com>

Acked-by: Roman Gushchin <roman.gushchin@linux.dev>

Thanks, David!

^ permalink raw reply	[flat|nested] 34+ messages in thread
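
For context, a minimal sketch of the relaxed check in alloc_pagecache_max_30M(), assuming values_close() compares two values within an error percentage; the 5% margin here is an assumption for illustration, not necessarily the merged value:

	current = cg_read_long(cgroup, "memory.current");

	/* allow a small overshoot instead of a hard MB(29) < x <= MB(30) window */
	if (!values_close(current, MB(30), 5))
		goto cleanup;

	ret = 0;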

* Re: [PATCH 5/5] cgroup: Fix racy check in alloc_pagecache_max_30M() helper function
@ 2022-04-22 23:56     ` Roman Gushchin
  0 siblings, 0 replies; 34+ messages in thread
From: Roman Gushchin @ 2022-04-22 23:56 UTC (permalink / raw)
  To: David Vernet
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b, tj-DgEjT+Ai2ygdnm+yROfE0A,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w, mhocko-DgEjT+Ai2ygdnm+yROfE0A,
	shakeelb-hpIqsD4AKlfQT0dZR+AlfA, kernel-team-b10kYP2dOMg

On Fri, Apr 22, 2022 at 08:57:29AM -0700, David Vernet wrote:
> alloc_pagecache_max_30M() in the cgroup memcg tests performs a 50MB
> pagecache allocation, which it expects to be capped at 30MB due to the
> calling process having a memory.high setting of 30MB. After the allocation,
> the function contains a check that verifies that MB(29) < memory.current <=
> MB(30). This check can actually fail non-deterministically.
> 
> The testcases that use this function are test_memcg_high() and
> test_memcg_max(), which set memory.high and memory.max to 30MB respectively
> for the cgroup under test. The allocation can slightly exceed this number
> in both cases, and for memory.max, the process performing the allocation
> will not have the OOM killer invoked as it's performing a pagecache
> allocation.  This patch therefore updates the above check to instead use
> the values_close() helper function.
> 
> Signed-off-by: David Vernet <void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>

Acked-by: Roman Gushchin <roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org>

Thanks, David!

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 1/5] cgroups: Refactor children cgroups in memcg tests
  2022-04-22 23:04     ` Roman Gushchin
@ 2022-04-23 11:30       ` David Vernet
  -1 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-23 11:30 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: akpm, tj, linux-kernel, linux-mm, cgroups, hannes, mhocko,
	shakeelb, kernel-team

On Fri, Apr 22, 2022 at 04:04:15PM -0700, Roman Gushchin wrote:
>

Thanks for the reviews on this patchset, Roman. FYI I think Andrew already
merged these patches to the -mm tree. I'll send out a follow-on patch that
fixes everything you pointed out, both here and on the other patches in the
set.

> On Fri, Apr 22, 2022 at 08:57:25AM -0700, David Vernet wrote:
> > In test_memcg_min() and test_memcg_low(), there is an array of four sibling
> > cgroups. All but one of these sibling groups does a 50MB allocation, and
> > the group that does no allocation is the third of four in the array.  This
> > is not a problem per se, but makes it a bit tricky to do some assertions in
> > test_memcg_low(), as we want to make assertions on the siblings based on
> > whether or not they performed allocations. Having a static index before
> > which all groups have performed an allocation makes this cleaner.
> > 
> > This patch therefore reorders the sibling groups so that the group that
> > performs no allocations is the last in the array.
> 
> It makes the comment explaining the test just above the test_memcg_min()
> function obsolete. Please, fix it too.

Thanks for catching that. I'll fix the comment both in test_memcg_min() and
test_memcg_low() when I send out that follow-on patch.

^ permalink raw reply	[flat|nested] 34+ messages in thread
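
Purely as an illustration of the point about a fixed index (the event key and exact assertions are assumptions, not the real test code), the reorder lets a loop branch on whether a child is before or after the last slot:

	for (i = 0; i < ARRAY_SIZE(children); i++) {
		long low = cg_read_key_long(children[i], "memory.events", "low ");

		if (i < ARRAY_SIZE(children) - 1) {
			/* children that allocated are expected to see low events */
			if (low <= 0)
				goto cleanup;
		} else if (!has_recursiveprot && low != 0) {
			/* the non-allocating child, now last, should see none
			 * unless memory_recursiveprot is enabled */
			goto cleanup;
		}
	}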

* Re: [PATCH 1/5] cgroups: Refactor children cgroups in memcg tests
@ 2022-04-23 11:30       ` David Vernet
  0 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-23 11:30 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b, tj-DgEjT+Ai2ygdnm+yROfE0A,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w, mhocko-DgEjT+Ai2ygdnm+yROfE0A,
	shakeelb-hpIqsD4AKlfQT0dZR+AlfA, kernel-team-b10kYP2dOMg

On Fri, Apr 22, 2022 at 04:04:15PM -0700, Roman Gushchin wrote:
>

Thanks for the reviews on this patchset, Roman. FYI I think Andrew already
merged these patches to the -mm tree. I'll send out a follow-on patch that
fixes everything you pointed out, both here and on the other patches in the
set.

> On Fri, Apr 22, 2022 at 08:57:25AM -0700, David Vernet wrote:
> > In test_memcg_min() and test_memcg_low(), there is an array of four sibling
> > cgroups. All but one of these sibling groups does a 50MB allocation, and
> > the group that does no allocation is the third of four in the array.  This
> > is not a problem per se, but makes it a bit tricky to do some assertions in
> > test_memcg_low(), as we want to make assertions on the siblings based on
> > whether or not they performed allocations. Having a static index before
> > which all groups have performed an allocation makes this cleaner.
> > 
> > This patch therefore reorders the sibling groups so that the group that
> > performs no allocations is the last in the array.
> 
> It makes the comment explaining the test just above the test_memcg_min()
> function obsolete. Please, fix it too.

Thanks for catching that. I'll fix the comment both in test_memcg_min() and
test_memcg_low() when I send out that follow-on patch.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 2/5] cgroup: Account for memory_recursiveprot in test_memcg_low()
  2022-04-22 23:06     ` Roman Gushchin
@ 2022-04-23 11:33       ` David Vernet
  -1 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-23 11:33 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: akpm, tj, linux-kernel, linux-mm, cgroups, hannes, mhocko,
	shakeelb, kernel-team

On Fri, Apr 22, 2022 at 04:06:35PM -0700, Roman Gushchin wrote:
> Hopefully no one has a mountpoint with the memory_recursiveprot name :)

Heh, good point. I considered adding another root-level cgroup.features
file to match the features specified in /sys/kernel/cgroup/features, but
ultimately decided it wasn't warranted given that it just duplicates the
information in /proc/mounts.

^ permalink raw reply	[flat|nested] 34+ messages in thread
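
For reference, a rough sketch of a proc_mount_contains() helper along the lines described above; read_text() and PAGE_SIZE are assumed to come from the existing cgroup_util helpers, and the buffer size is an assumption:

int proc_mount_contains(const char *option)
{
	char buf[4 * PAGE_SIZE];
	ssize_t read;

	/* /proc/mounts lists the cgroup2 mount options, e.g. memory_recursiveprot */
	read = read_text("/proc/mounts", buf, sizeof(buf));
	if (read < 0)
		return read;

	return strstr(buf, option) != NULL;
}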

* Re: [PATCH 2/5] cgroup: Account for memory_recursiveprot in test_memcg_low()
@ 2022-04-23 11:33       ` David Vernet
  0 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-23 11:33 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b, tj-DgEjT+Ai2ygdnm+yROfE0A,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w, mhocko-DgEjT+Ai2ygdnm+yROfE0A,
	shakeelb-hpIqsD4AKlfQT0dZR+AlfA, kernel-team-b10kYP2dOMg

On Fri, Apr 22, 2022 at 04:06:35PM -0700, Roman Gushchin wrote:
> Hopefully no one has a mountpoint with the memory_recursiveprot name :)

Heh, good point. I considered adding another root-level cgroup.features
file to match the features specified in /sys/kernel/cgroup/features, but
ultimately decided it wasn't warranted given that it just duplicates the
information in /proc/mounts.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/5] cgroup: Account for memory_localevents in test_memcg_oom_group_leaf_events()
  2022-04-22 23:14     ` Roman Gushchin
@ 2022-04-23 11:36       ` David Vernet
  -1 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-23 11:36 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: akpm, tj, linux-kernel, linux-mm, cgroups, hannes, mhocko,
	shakeelb, kernel-team

On Fri, Apr 22, 2022 at 04:14:49PM -0700, Roman Gushchin wrote:
> On Fri, Apr 22, 2022 at 08:57:27AM -0700, David Vernet wrote:
> > The test_memcg_oom_group_leaf_events() testcase in the cgroup memcg tests
> > validates that processes in a group that perform allocations exceeding
> > memory.oom.group are killed. It also validates that the
> > memory.events.oom_kill events are properly propagated in this case.  Commit
> > 06e11c907ea4 ("kselftests: memcg: update the oom group leaf events test")
> > fixed test_memcg_oom_group_leaf_events() to account for the fact that the
> > memory.events.oom_kill events in a child cgroup are propagated up to its
> > parent. This behavior can actually be configured by the memory_localevents
> > mount option, so this patch updates the testcase to properly account for
> > the possible presence of this mount option.
> > 
> > Signed-off-by: David Vernet <void@manifault.com>
> > ---
> >  .../testing/selftests/cgroup/test_memcontrol.c  | 17 ++++++++++++++++-
> >  1 file changed, 16 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
> > index ea2fd27e52df..d88e0ca3f3d1 100644
> > --- a/tools/testing/selftests/cgroup/test_memcontrol.c
> > +++ b/tools/testing/selftests/cgroup/test_memcontrol.c
> > @@ -21,6 +21,7 @@
> >  #include "../kselftest.h"
> >  #include "cgroup_util.h"
> >  
> > +static bool has_localevents;
> >  static bool has_recursiveprot;
> >  
> >  /*
> > @@ -1091,6 +1092,7 @@ static int test_memcg_oom_group_leaf_events(const char *root)
> >  {
> >  	int ret = KSFT_FAIL;
> >  	char *parent, *child;
> > +	long parent_oom_events;
> >  
> >  	parent = cg_name(root, "memcg_test_0");
> >  	child = cg_name(root, "memcg_test_0/memcg_test_1");
> > @@ -1128,7 +1130,15 @@ static int test_memcg_oom_group_leaf_events(const char *root)
> >  	if (cg_read_key_long(child, "memory.events", "oom_kill ") <= 0)
> >  		goto cleanup;
> >  
> > -	if (cg_read_key_long(parent, "memory.events", "oom_kill ") <= 0)
> > +	parent_oom_events = cg_read_key_long(
> > +			parent, "memory.events", "oom_kill ");
> > +	// If memory_localevents is not enabled (the default), the parent should
> > +	// count OOM events in its children groups. Otherwise, it should not
> > +	// have observed any events.
> 
> Please use /* */ style comments; it's the generic kernel style.

Ack, will fix in a follow-on patch.

> 
> > +	if (has_localevents) {
> > +		if (parent_oom_events != 0)
> > +			goto cleanup;
> > +	} else if (parent_oom_events <= 0)
> >  		goto cleanup;
> 
> How about something like this? IMO it's a bit clearer what's going on.
> 	if ((has_localevents && parent_oom_events == 0) ||
> 	    parent_oom_events > 0)
> 		ret = KSFT_PASS;

Agreed that's a bit clearer, I'll include this in the follow-on patch.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 3/5] cgroup: Account for memory_localevents in test_memcg_oom_group_leaf_events()
@ 2022-04-23 11:36       ` David Vernet
  0 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-23 11:36 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b, tj-DgEjT+Ai2ygdnm+yROfE0A,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w, mhocko-DgEjT+Ai2ygdnm+yROfE0A,
	shakeelb-hpIqsD4AKlfQT0dZR+AlfA, kernel-team-b10kYP2dOMg

On Fri, Apr 22, 2022 at 04:14:49PM -0700, Roman Gushchin wrote:
> On Fri, Apr 22, 2022 at 08:57:27AM -0700, David Vernet wrote:
> > The test_memcg_oom_group_leaf_events() testcase in the cgroup memcg tests
> > validates that processes in a group that perform allocations exceeding
> > memory.oom.group are killed. It also validates that the
> > memory.events.oom_kill events are properly propagated in this case.  Commit
> > 06e11c907ea4 ("kselftests: memcg: update the oom group leaf events test")
> > fixed test_memcg_oom_group_leaf_events() to account for the fact that the
> > memory.events.oom_kill events in a child cgroup are propagated up to its
> > parent. This behavior can actually be configured by the memory_localevents
> > mount option, so this patch updates the testcase to properly account for
> > the possible presence of this mount option.
> > 
> > Signed-off-by: David Vernet <void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org>
> > ---
> >  .../testing/selftests/cgroup/test_memcontrol.c  | 17 ++++++++++++++++-
> >  1 file changed, 16 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
> > index ea2fd27e52df..d88e0ca3f3d1 100644
> > --- a/tools/testing/selftests/cgroup/test_memcontrol.c
> > +++ b/tools/testing/selftests/cgroup/test_memcontrol.c
> > @@ -21,6 +21,7 @@
> >  #include "../kselftest.h"
> >  #include "cgroup_util.h"
> >  
> > +static bool has_localevents;
> >  static bool has_recursiveprot;
> >  
> >  /*
> > @@ -1091,6 +1092,7 @@ static int test_memcg_oom_group_leaf_events(const char *root)
> >  {
> >  	int ret = KSFT_FAIL;
> >  	char *parent, *child;
> > +	long parent_oom_events;
> >  
> >  	parent = cg_name(root, "memcg_test_0");
> >  	child = cg_name(root, "memcg_test_0/memcg_test_1");
> > @@ -1128,7 +1130,15 @@ static int test_memcg_oom_group_leaf_events(const char *root)
> >  	if (cg_read_key_long(child, "memory.events", "oom_kill ") <= 0)
> >  		goto cleanup;
> >  
> > -	if (cg_read_key_long(parent, "memory.events", "oom_kill ") <= 0)
> > +	parent_oom_events = cg_read_key_long(
> > +			parent, "memory.events", "oom_kill ");
> > +	// If memory_localevents is not enabled (the default), the parent should
> > +	// count OOM events in its children groups. Otherwise, it should not
> > +	// have observed any events.
> 
> Please use /* */ style comments; it's the generic kernel style.

Ack, will fix in a follow-on patch.

> 
> > +	if (has_localevents) {
> > +		if (parent_oom_events != 0)
> > +			goto cleanup;
> > +	} else if (parent_oom_events <= 0)
> >  		goto cleanup;
> 
> How about something like this? IMO it's a bit clearer what's going on.
> 	if ((has_localevents && parent_oom_events == 0) ||
> 	    parent_oom_events > 0)
> 		ret = KSFT_PASS;

Agreed that's a bit clearer, I'll include this in the follow-on patch.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 4/5] cgroup: Removing racy check in test_memcg_sock()
  2022-04-22 23:50     ` Roman Gushchin
@ 2022-04-23 11:50       ` David Vernet
  -1 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-23 11:50 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: akpm, tj, linux-kernel, linux-mm, cgroups, hannes, mhocko,
	shakeelb, kernel-team

On Fri, Apr 22, 2022 at 04:50:12PM -0700, Roman Gushchin wrote:
> On Fri, Apr 22, 2022 at 08:57:28AM -0700, David Vernet wrote:
> > test_memcg_sock() in the cgroup memcg tests verifies expected memory
> > accounting for sockets. The test forks a process which functions as a TCP
> > server, and sends large buffers back and forth between itself (as the TCP
> > client) and the forked TCP server. While doing so, it verifies that
> > memory.current and memory.stat.sock look correct.
> > 
> > There is currently a check in tcp_client() which asserts memory.current >=
> > memory.stat.sock. This check is racy, as between memory.current and
> > memory.stat.sock being queried, a packet could come in which causes
> > mem_cgroup_charge_skmem() to be invoked. This could cause memory.stat.sock
> > to exceed memory.current. Reversing the order of querying doesn't address
> > the problem either, as memory may be reclaimed between the two calls.
> 
> But just curious, does it fix the flakiness (assuming there is no memory
> pressure)?

Yes, it does fix the flakiness. I saw it fail once or twice in my runs, but
to your point that was only in the presence of memory pressure, which could
make many of the tests in the file fail. Let me know if you'd prefer to put
the check back in, and instead reverse the order of querying memory.current
and memory.stat.sock.

> 
> > Instead, this patch just removes that assertion altogether, and instead
> > relies on the values_close() check that follows to validate the expected
> > accounting.
> 
> Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
> 

Thanks!

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 4/5] cgroup: Removing racy check in test_memcg_sock()
@ 2022-04-23 11:50       ` David Vernet
  0 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-23 11:50 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b, tj-DgEjT+Ai2ygdnm+yROfE0A,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w, mhocko-DgEjT+Ai2ygdnm+yROfE0A,
	shakeelb-hpIqsD4AKlfQT0dZR+AlfA, kernel-team-b10kYP2dOMg

On Fri, Apr 22, 2022 at 04:50:12PM -0700, Roman Gushchin wrote:
> On Fri, Apr 22, 2022 at 08:57:28AM -0700, David Vernet wrote:
> > test_memcg_sock() in the cgroup memcg tests verifies expected memory
> > accounting for sockets. The test forks a process which functions as a TCP
> > server, and sends large buffers back and forth between itself (as the TCP
> > client) and the forked TCP server. While doing so, it verifies that
> > memory.current and memory.stat.sock look correct.
> > 
> > There is currently a check in tcp_client() which asserts memory.current >=
> > memory.stat.sock. This check is racy, as between memory.current and
> > memory.stat.sock being queried, a packet could come in which causes
> > mem_cgroup_charge_skmem() to be invoked. This could cause memory.stat.sock
> > to exceed memory.current. Reversing the order of querying doesn't address
> > the problem either, as memory may be reclaimed between the two calls.
> 
> But just curious, does it fix the flakiness (assuming there is no memory
> pressure)?

Yes, it does fix the flakiness. I saw it fail once or twice in my runs, but
to your point that was only in the presence of memory pressure, which could
make many of the tests in the file fail. Let me know if you'd prefer to put
the check back in, and instead reverse the order of querying memory.current
and memory.stat.sock.

> 
> > Instead, this patch just removes that assertion altogether, and instead
> > relies on the values_close() check that follows to validate the expected
> > accounting.
> 
> Acked-by: Roman Gushchin <roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org>
> 

Thanks!

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 1/5] cgroups: Refactor children cgroups in memcg tests
@ 2022-04-23 15:19         ` Roman Gushchin
  0 siblings, 0 replies; 34+ messages in thread
From: Roman Gushchin @ 2022-04-23 15:19 UTC (permalink / raw)
  To: David Vernet
  Cc: akpm, tj, linux-kernel, linux-mm, cgroups, hannes, mhocko,
	shakeelb, kernel-team


> On Apr 23, 2022, at 4:30 AM, David Vernet <void@manifault.com> wrote:
> 
> On Fri, Apr 22, 2022 at 04:04:15PM -0700, Roman Gushchin wrote:
>> 
> 
> Thanks for the reviews on this patchset, Roman. FYI I think Andrew already
> merged these patches to the -mm tree. I'll send out a follow-on patch that
> fixes everything you pointed out, both here and on the other patches in the
> set.

The mm tree isn’t a git tree, but a collection of text patches managed by Andrew. So you can send a new version and Andrew can update it in place. This happens all the time: mostly for adding reviewed-by/acked-by tags etc., but for code updates as well.
It’s not uncommon for a patchset to mature while sitting in the mm tree; this allows it to be included in linux-next and get some more testing without many reverts/fixups (Andrew often squashes fixups into the original patch too). So long story short, you can just send a new version, especially since all the changes are minor.

Thanks!

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 1/5] cgroups: Refactor children cgroups in memcg tests
@ 2022-04-23 15:19         ` Roman Gushchin
  0 siblings, 0 replies; 34+ messages in thread
From: Roman Gushchin @ 2022-04-23 15:19 UTC (permalink / raw)
  To: David Vernet
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b, tj-DgEjT+Ai2ygdnm+yROfE0A,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w, mhocko-DgEjT+Ai2ygdnm+yROfE0A,
	shakeelb-hpIqsD4AKlfQT0dZR+AlfA, kernel-team-b10kYP2dOMg


> On Apr 23, 2022, at 4:30 AM, David Vernet <void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org> wrote:
> 
> On Fri, Apr 22, 2022 at 04:04:15PM -0700, Roman Gushchin wrote:
>> 
> 
> Thanks for the reviews on this patchset, Roman. FYI I think Andrew already
> merged these patches to the -mm tree. I'll send out a follow-on patch that
> fixes everything you pointed out, both here and on the other patches in the
> set.

The mm tree isn’t a git tree, but a collection of text patches managed by Andrew. So you can send a new version and Andrew can update it in place. This happens all the time: mostly for adding reviewed-by/acked-by tags etc., but for code updates as well.
It’s not uncommon for a patchset to mature while sitting in the mm tree; this allows it to be included in linux-next and get some more testing without many reverts/fixups (Andrew often squashes fixups into the original patch too). So long story short, you can just send a new version, especially since all the changes are minor.

Thanks!

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 1/5] cgroups: Refactor children cgroups in memcg tests
@ 2022-04-23 15:33           ` David Vernet
  0 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-23 15:33 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: akpm, tj, linux-kernel, linux-mm, cgroups, hannes, mhocko,
	shakeelb, kernel-team

On Sat, Apr 23, 2022 at 08:19:12AM -0700, Roman Gushchin wrote:
> 
> > On Apr 23, 2022, at 4:30 AM, David Vernet <void@manifault.com> wrote:
> > 
> > On Fri, Apr 22, 2022 at 04:04:15PM -0700, Roman Gushchin wrote:
> >> 
> > 
> > Thanks for the reviews on this patchset, Roman. FYI I think Andrew already
> > merged these patches to the -mm tree. I'll send out a follow-on patch that
> > fixes everything you pointed out, both here and on the other patches in the
> > set.
> 
> The mm tree isn’t a git tree, but a collection of text patches managed by Andrew. So you can send a new version and Andrew can update it in place. This happens all the time: mostly for adding reviewed-by/acked-by tags etc., but for code updates as well.
> It’s not uncommon for a patchset to mature while sitting in the mm tree; this allows it to be included in linux-next and get some more testing without many reverts/fixups (Andrew often squashes fixups into the original patch too). So long story short, you can just send a new version, especially since all the changes are minor.

Ah, that makes sense. Thanks for explaining that. I'll send out a v2 of the
patches shortly, then!

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 1/5] cgroups: Refactor children cgroups in memcg tests
@ 2022-04-23 15:33           ` David Vernet
  0 siblings, 0 replies; 34+ messages in thread
From: David Vernet @ 2022-04-23 15:33 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b, tj-DgEjT+Ai2ygdnm+yROfE0A,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, cgroups-u79uwXL29TY76Z2rM5mHXA,
	hannes-druUgvl0LCNAfugRpC6u6w, mhocko-DgEjT+Ai2ygdnm+yROfE0A,
	shakeelb-hpIqsD4AKlfQT0dZR+AlfA, kernel-team-b10kYP2dOMg

On Sat, Apr 23, 2022 at 08:19:12AM -0700, Roman Gushchin wrote:
> 
> > On Apr 23, 2022, at 4:30 AM, David Vernet <void-gq6j2QGBifHby3iVrkZq2A@public.gmane.org> wrote:
> > 
> > On Fri, Apr 22, 2022 at 04:04:15PM -0700, Roman Gushchin wrote:
> >> 
> > 
> > Thanks for the reviews on this patchset, Roman. FYI I think Andrew already
> > merged these patches to the -mm tree. I'll send out a follow-on patch that
> > fixes everything you pointed out, both here and on the other patches in the
> > set.
> 
> The mm tree isn’t a git tree, but a collection of text patches managed by Andrew. So you can send a new version and Andrew can update it in place. This happens all the time: mostly for adding reviewed-by/acked-by tags etc., but for code updates as well.
> It’s not uncommon for a patchset to mature while sitting in the mm tree; this allows it to be included in linux-next and get some more testing without many reverts/fixups (Andrew often squashes fixups into the original patch too). So long story short, you can just send a new version, especially since all the changes are minor.

Ah, that makes sense. Thanks for explaining that. I'll send out a v2 of the
patches shortly, then!

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2022-04-23 15:33 UTC | newest]

Thread overview: 34+ messages
2022-04-22 15:57 [PATCH 0/5] Fix bugs in memcontroller cgroup tests David Vernet
2022-04-22 15:57 ` David Vernet
2022-04-22 15:57 ` [PATCH 1/5] cgroups: Refactor children cgroups in memcg tests David Vernet
2022-04-22 15:57   ` David Vernet
2022-04-22 23:04   ` Roman Gushchin
2022-04-22 23:04     ` Roman Gushchin
2022-04-23 11:30     ` David Vernet
2022-04-23 11:30       ` David Vernet
2022-04-23 15:19       ` Roman Gushchin
2022-04-23 15:19         ` Roman Gushchin
2022-04-23 15:33         ` David Vernet
2022-04-23 15:33           ` David Vernet
2022-04-22 15:57 ` [PATCH 2/5] cgroup: Account for memory_recursiveprot in test_memcg_low() David Vernet
2022-04-22 15:57   ` David Vernet
2022-04-22 23:06   ` Roman Gushchin
2022-04-22 23:06     ` Roman Gushchin
2022-04-23 11:33     ` David Vernet
2022-04-23 11:33       ` David Vernet
2022-04-22 15:57 ` [PATCH 3/5] cgroup: Account for memory_localevents in test_memcg_oom_group_leaf_events() David Vernet
2022-04-22 15:57   ` David Vernet
2022-04-22 23:14   ` Roman Gushchin
2022-04-22 23:14     ` Roman Gushchin
2022-04-23 11:36     ` David Vernet
2022-04-23 11:36       ` David Vernet
2022-04-22 15:57 ` [PATCH 4/5] cgroup: Removing racy check in test_memcg_sock() David Vernet
2022-04-22 15:57   ` David Vernet
2022-04-22 23:50   ` Roman Gushchin
2022-04-22 23:50     ` Roman Gushchin
2022-04-23 11:50     ` David Vernet
2022-04-23 11:50       ` David Vernet
2022-04-22 15:57 ` [PATCH 5/5] cgroup: Fix racy check in alloc_pagecache_max_30M() helper function David Vernet
2022-04-22 15:57   ` David Vernet
2022-04-22 23:56   ` Roman Gushchin
2022-04-22 23:56     ` Roman Gushchin
