* [PATCH] mm/hugetlb: add mempolicy check in the reservation routine
@ 2020-07-23  7:44 Muchun Song
  2020-07-24  7:39 ` Michal Hocko
  0 siblings, 1 reply; 4+ messages in thread
From: Muchun Song @ 2020-07-23  7:44 UTC (permalink / raw)
  To: mike.kravetz, akpm; +Cc: linux-mm, linux-kernel, Muchun Song, Jianchao Guo

In the reservation routine, we only check whether the cpuset meets
the memory allocation requirements, but we ignore the mempolicy of
the MPOL_BIND case. As a result, an mmap() of hugetlb memory can
succeed while the subsequent page allocation fails because of the
mempolicy restriction, and the process receives SIGBUS. This can be
reproduced with the following steps.

 1) Compile the test case.
    cd tools/testing/selftests/vm/
    gcc map_hugetlb.c -o map_hugetlb

 2) Pre-allocate huge pages. Suppose there are 2 NUMA nodes in the
    system; each node will get one huge page.
    echo 2 > /proc/sys/vm/nr_hugepages

 3) Run the test case (mmap 4MB). We receive the SIGBUS signal.
    numactl --membind=0 ./map_hugetlb 4

With this patch applied, the mmap in step 3) fails with
"mmap: Cannot allocate memory", since (with the default 2MB huge
pages) the 4MB mapping needs two huge pages but only one is
available on node 0.

Reported-by: Jianchao Guo <guojianchao@bytedance.com>
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 589c330df4db..e946f41b4dcb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3463,12 +3463,36 @@ static int __init default_hugepagesz_setup(char *s)
 }
 __setup("default_hugepagesz=", default_hugepagesz_setup);
 
-static unsigned int cpuset_mems_nr(unsigned int *array)
+static nodemask_t *mempolicy_current_bind_nodemask(void)
+{
+	struct mempolicy *mpol;
+	nodemask_t *nodemask;
+
+	mpol = get_task_policy(current);
+	if (mpol->mode == MPOL_BIND)
+		nodemask = &mpol->v.nodes;
+	else
+		nodemask = NULL;
+
+	return nodemask;
+}
+
+static unsigned int allowed_mems_nr(unsigned int *array)
 {
 	int node;
 	unsigned int nr = 0;
+	nodemask_t *mempolicy_allowed, *mems_allowed, nodemask;
+
+	mempolicy_allowed = mempolicy_current_bind_nodemask();
+	if (mempolicy_allowed) {
+		nodes_and(nodemask, cpuset_current_mems_allowed,
+			  *mempolicy_allowed);
+		mems_allowed = &nodemask;
+	} else {
+		mems_allowed = &cpuset_current_mems_allowed;
+	}
 
-	for_each_node_mask(node, cpuset_current_mems_allowed)
+	for_each_node_mask(node, *mems_allowed)
 		nr += array[node];
 
 	return nr;
@@ -3653,7 +3677,7 @@ static int hugetlb_acct_memory(struct hstate *h, long delta)
 		if (gather_surplus_pages(h, delta) < 0)
 			goto out;
 
-		if (delta > cpuset_mems_nr(h->free_huge_pages_node)) {
+		if (delta > allowed_mems_nr(h->free_huge_pages_node)) {
 			return_unused_surplus_pages(h, delta);
 			goto out;
 		}
-- 
2.11.0

