From: David Vernet <void@manifault.com>
To: akpm@linux-foundation.org
Cc: tj@kernel.org, roman.gushchin@linux.dev,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	cgroups@vger.kernel.org, hannes@cmpxchg.org, mhocko@kernel.org,
	shakeelb@google.com, kernel-team@fb.com, void@manifault.com
Subject: [PATCH 5/5] cgroup: Fix racy check in alloc_pagecache_max_30M() helper function
Date: Fri, 22 Apr 2022 08:57:29 -0700
Message-ID: <20220422155728.3055914-6-void@manifault.com>
In-Reply-To: <20220422155728.3055914-1-void@manifault.com>

alloc_pagecache_max_30M() in the cgroup memcg tests performs a 50MB
pagecache allocation, which it expects to be capped at 30MB because the
calling process runs in a cgroup with memory.high or memory.max set to
30MB. After the allocation, the function checks that
MB(29) < memory.current <= MB(30). This check can fail
non-deterministically.

The testcases that use this function are test_memcg_high() and
test_memcg_max(), which set memory.high and memory.max to 30MB
respectively for the cgroup under test. The allocation can slightly
exceed this limit in both cases, and for memory.max, the process
performing the allocation will not have the OOM killer invoked, as it is
performing a pagecache allocation. This patch therefore updates the
above check to instead use the values_close() helper function.
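
For context, values_close() is a tolerance helper provided by the cgroup
selftest utilities (cgroup_util.c in the same directory). The sketch
below is a paraphrase for illustration only, not the exact code from the
tree; it shows the intended semantics: two values are "close" when they
differ by no more than err percent of their combined magnitude.

	#include <stdlib.h>	/* labs() */

	/*
	 * Illustrative sketch of the tolerance check used in the diff
	 * below: a and b are considered close when their difference is
	 * within err percent of their combined size. See cgroup_util.c
	 * for the authoritative helper.
	 */
	static int values_close(long a, long b, int err)
	{
		return labs(a - b) <= (a + b) / 100 * err;
	}

With this semantic, values_close(current, MB(30), 5) accepts
memory.current values within a few MB of the 30MB target, which
tolerates the slight overage described above while still rejecting
grossly wrong values.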

Signed-off-by: David Vernet <void@manifault.com>
---
 tools/testing/selftests/cgroup/test_memcontrol.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index c4735fa36a3d..088850f01ae7 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -564,9 +564,14 @@ static int alloc_pagecache_max_30M(const char *cgroup, void *arg)
 {
 	size_t size = MB(50);
 	int ret = -1;
-	long current;
+	long current, high, max;
 	int fd;
 
+	high = cg_read_long(cgroup, "memory.high");
+	max = cg_read_long(cgroup, "memory.max");
+	if (high != MB(30) && max != MB(30))
+		goto cleanup;
+
 	fd = get_temp_fd();
 	if (fd < 0)
 		return -1;
@@ -575,7 +580,7 @@ static int alloc_pagecache_max_30M(const char *cgroup, void *arg)
 		goto cleanup;
 
 	current = cg_read_long(cgroup, "memory.current");
-	if (current <= MB(29) || current > MB(30))
+	if (!values_close(current, MB(30), 5))
 		goto cleanup;
 
 	ret = 0;
-- 
2.30.2


Thread overview: 34+ messages

2022-04-22 15:57 [PATCH 0/5] Fix bugs in memcontroller cgroup tests David Vernet
2022-04-22 15:57 ` [PATCH 1/5] cgroups: Refactor children cgroups in memcg tests David Vernet
2022-04-22 23:04   ` Roman Gushchin
2022-04-23 11:30     ` David Vernet
2022-04-23 15:19       ` Roman Gushchin
2022-04-23 15:33         ` David Vernet
2022-04-22 15:57 ` [PATCH 2/5] cgroup: Account for memory_recursiveprot in test_memcg_low() David Vernet
2022-04-22 23:06   ` Roman Gushchin
2022-04-23 11:33     ` David Vernet
2022-04-22 15:57 ` [PATCH 3/5] cgroup: Account for memory_localevents in test_memcg_oom_group_leaf_events() David Vernet
2022-04-22 23:14   ` Roman Gushchin
2022-04-23 11:36     ` David Vernet
2022-04-22 15:57 ` [PATCH 4/5] cgroup: Removing racy check in test_memcg_sock() David Vernet
2022-04-22 23:50   ` Roman Gushchin
2022-04-23 11:50     ` David Vernet
2022-04-22 15:57 ` [PATCH 5/5] cgroup: Fix racy check in alloc_pagecache_max_30M() helper function David Vernet [this message]
2022-04-22 23:56   ` Roman Gushchin
