From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ben Widawsky
To: linux-mm
Cc: Ben Widawsky, Andrew Morton, Dave Hansen, Li Xinhai, Michal Hocko, Vlastimil Babka
Subject: [PATCH 14/18] mm/mempolicy: Introduce policy_preferred_nodes()
Date: Fri, 19 Jun 2020 09:24:21 -0700
Message-Id: <20200619162425.1052382-15-ben.widawsky@intel.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200619162425.1052382-1-ben.widawsky@intel.com>
References: <20200619162425.1052382-1-ben.widawsky@intel.com>
MIME-Version: 1.0

Current code provides a policy_node() helper which, given a preferred node, flags, and policy, will help determine the preferred node.
Going forward it is desirable to have this same functionality given a set
of nodes, rather than a single node. policy_node is then implemented in
terms of the now more generic policy_preferred_nodes.

I went back and forth as to whether this function should take in a set of
preferred nodes and modify that. Something like:

	policy_preferred_nodes(gfp, *policy, *mask);

That idea was nice as it allowed the policy function to create the mask to
be used. Ultimately, it turns out callers don't need such fanciness, and
those callers would use this mask directly in page allocation functions
that can accept NULL for a preference mask. So having this function return
NULL when there is no ideal mask turns out to be beneficial.

Cc: Andrew Morton
Cc: Dave Hansen
Cc: Li Xinhai
Cc: Michal Hocko
Cc: Vlastimil Babka
Signed-off-by: Ben Widawsky
---
 mm/mempolicy.c | 57 +++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 47 insertions(+), 10 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eb2520d68a04..3c48f299d344 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1946,24 +1946,61 @@ static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
 	return NULL;
 }
 
-/* Return the node id preferred by the given mempolicy, or the given id */
-static int policy_node(gfp_t gfp, struct mempolicy *policy,
-		       int nd)
+/*
+ * Returns a nodemask to be used for preference if the given policy dictates.
+ * Otherwise, returns NULL and the caller should likely use
+ * nodemask_of_node(numa_mem_id());
+ */
+static nodemask_t *policy_preferred_nodes(gfp_t gfp, struct mempolicy *policy)
 {
-	if ((policy->mode == MPOL_PREFERRED ||
-	     policy->mode == MPOL_PREFERRED_MANY) &&
-	    !(policy->flags & MPOL_F_LOCAL)) {
-		nd = first_node(policy->v.preferred_nodes);
-	} else {
+	nodemask_t *pol_pref = &policy->v.preferred_nodes;
+
+	/*
+	 * There are 2 "levels" of policy.  What the callers asked for
+	 * (prefmask), and what the memory policy should be for the given gfp.
+	 * The memory policy takes preference in the case that prefmask isn't a
+	 * subset of the mem policy.
+	 */
+	switch (policy->mode) {
+	case MPOL_PREFERRED:
+		/* local, or buggy policy */
+		if (policy->flags & MPOL_F_LOCAL ||
+		    WARN_ON(nodes_weight(*pol_pref) != 1))
+			return NULL;
+		else
+			return pol_pref;
+		break;
+	case MPOL_PREFERRED_MANY:
+		if (WARN_ON(nodes_weight(*pol_pref) == 0))
+			return NULL;
+		else
+			return pol_pref;
+		break;
+	default:
+	case MPOL_INTERLEAVE:
+	case MPOL_BIND:
 		/*
 		 * __GFP_THISNODE shouldn't even be used with the bind policy
 		 * because we might easily break the expectation to stay on the
 		 * requested node and not break the policy.
 		 */
-		WARN_ON_ONCE(policy->mode == MPOL_BIND && (gfp & __GFP_THISNODE));
+		WARN_ON_ONCE(gfp & __GFP_THISNODE);
+		break;
 	}
 
-	return nd;
+	return NULL;
+}
+
+/* Return the node id preferred by the given mempolicy, or the given id */
+static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
+{
+	nodemask_t *tmp;
+
+	tmp = policy_preferred_nodes(gfp, policy);
+	if (tmp)
+		return first_node(*tmp);
+	else
+		return nd;
 }
 
 /* Do dynamic interleaving for a process */
-- 
2.27.0