linux-mm.kvack.org archive mirror
From: Feng Tang <feng.tang@intel.com>
To: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Michal Hocko <mhocko@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Ben Widawsky <ben.widawsky@intel.com>
Cc: linux-kernel@vger.kernel.org,
	Andrea Arcangeli <aarcange@redhat.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Vlastimil Babka <vbabka@suse.cz>, Andi Kleen <ak@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	ying.huang@intel.com, Feng Tang <feng.tang@intel.com>
Subject: [PATCH v1 1/4] mm/mempolicy: skip nodemask intersect check for 'interleave' when oom
Date: Wed, 26 May 2021 13:01:39 +0800	[thread overview]
Message-ID: <1622005302-23027-2-git-send-email-feng.tang@intel.com> (raw)
In-Reply-To: <1622005302-23027-1-git-send-email-feng.tang@intel.com>

mempolicy_nodemask_intersects() is used in the OOM path to check whether a
task may have memory allocated on any of the given memory nodes.
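
For reference, a minimal sketch of the caller (based on oom_cpuset_eligible()
in mm/oom_kill.c; the memcg-OOM early return is left out) shows how the
result decides whether a task is even a candidate for a constrained OOM kill:

	static bool oom_cpuset_eligible(struct task_struct *start,
					struct oom_control *oc)
	{
		struct task_struct *tsk;
		bool ret = false;
		const nodemask_t *mask = oc->nodemask;

		rcu_read_lock();
		for_each_thread(start, tsk) {
			if (mask)	/* mempolicy-constrained OOM */
				ret = mempolicy_nodemask_intersects(tsk, mask);
			else		/* cpuset-constrained OOM */
				ret = cpuset_mems_allowed_intersects(current, tsk);
			if (ret)
				break;
		}
		rcu_read_unlock();

		return ret;
	}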

Currently, nodes_intersects() is run for both the 'bind' and 'interleave'
policies. But the two treat the nodemask differently for memory allocation:
it is a hard requirement for 'bind', but only a hint for 'interleave'.
Take alloc_pages_vma() for example:

	nmask = policy_nodemask(gfp, pol);
        preferred_nid = policy_node(gfp, pol, node);
        page = __alloc_pages(gfp, order, preferred_nid, nmask);

In policy_nodemask(), only the 'bind' policy may return its desired nodemask,
while all other policies return NULL.  This NULL is what allows the
'interleave' policy to get memory from nodes outside its nodemask.
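
For reference, a minimal sketch of policy_nodemask() (the extra zone and
cpuset validity checks that also gate the 'bind' case are trimmed here):

	static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
	{
		/* Only MPOL_BIND ever hands a hard nodemask to the allocator */
		if (unlikely(policy->mode == MPOL_BIND))
			return &policy->v.nodes;

		/* 'interleave', 'preferred', 'local': no hard nodemask */
		return NULL;
	}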

So skip the nodemask intersect check for the 'interleave' policy.

Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
---
 mm/mempolicy.c | 24 ++++--------------------
 1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d79fa29..1964cca 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2098,7 +2098,7 @@ bool init_nodemask_of_mempolicy(nodemask_t *mask)
  *
  * If tsk's mempolicy is "default" [NULL], return 'true' to indicate default
  * policy.  Otherwise, check for intersection between mask and the policy
- * nodemask for 'bind' or 'interleave' policy.  For 'preferred' or 'local'
+ * nodemask for 'bind' policy.  For 'interleave', 'preferred' or 'local'
  * policy, always return true since it may allocate elsewhere on fallback.
  *
  * Takes task_lock(tsk) to prevent freeing of its mempolicy.
@@ -2111,29 +2111,13 @@ bool mempolicy_nodemask_intersects(struct task_struct *tsk,
 
 	if (!mask)
 		return ret;
+
 	task_lock(tsk);
 	mempolicy = tsk->mempolicy;
-	if (!mempolicy)
-		goto out;
-
-	switch (mempolicy->mode) {
-	case MPOL_PREFERRED:
-		/*
-		 * MPOL_PREFERRED and MPOL_F_LOCAL are only preferred nodes to
-		 * allocate from, they may fallback to other nodes when oom.
-		 * Thus, it's possible for tsk to have allocated memory from
-		 * nodes in mask.
-		 */
-		break;
-	case MPOL_BIND:
-	case MPOL_INTERLEAVE:
+	if (mempolicy && mempolicy->mode == MPOL_BIND)
 		ret = nodes_intersects(mempolicy->v.nodes, *mask);
-		break;
-	default:
-		BUG();
-	}
-out:
 	task_unlock(tsk);
+
 	return ret;
 }
 
-- 
2.7.4




Thread overview: 24+ messages
2021-05-26  5:01 [PATCH v1 0/4] mm/mempolicy: some fix and semantics cleanup Feng Tang
2021-05-26  5:01 ` Feng Tang [this message]
2021-05-27  7:30   ` [PATCH v1 1/4] mm/mempolicy: skip nodemask intersect check for 'interleave' when oom Michal Hocko
2021-05-27 13:05     ` Feng Tang
2021-05-27 13:15       ` Michal Hocko
2021-05-27 13:22         ` Feng Tang
2021-05-26  5:01 ` [PATCH v1 2/4] mm/mempolicy: unify the preprocessing for mbind and set_mempolicy Feng Tang
2021-05-27  7:39   ` Michal Hocko
2021-05-27 12:31     ` Feng Tang
2021-05-26  5:01 ` [PATCH v1 3/4] mm/mempolicy: don't handle MPOL_LOCAL like a fake MPOL_PREFERRED policy Feng Tang
2021-05-27  8:12   ` Michal Hocko
2021-05-27 12:06     ` Feng Tang
2021-05-27 12:16       ` Michal Hocko
2021-05-26  5:01 ` [PATCH v1 4/4] mm/mempolicy: kill MPOL_F_LOCAL bit Feng Tang
2021-05-27  8:20   ` Michal Hocko
2021-05-27 12:10     ` Feng Tang
2021-05-27 12:26       ` Michal Hocko
2021-05-27 13:34         ` Feng Tang
2021-05-27 15:34           ` Michal Hocko
2021-05-28  4:39             ` Feng Tang
2021-05-31  7:00               ` Michal Hocko
2021-05-31  7:32                 ` Feng Tang
2021-05-31  8:22                   ` Michal Hocko
2021-05-31  8:29                     ` Feng Tang
