From: Ben Widawsky
To: linux-mm
Cc: Ben Widawsky, Andrew Morton, Dave Hansen, Mike Kravetz, Mina Almasry, Vlastimil Babka
Subject: [PATCH 15/18] mm: convert callers of __alloc_pages_nodemask to pmask
Date: Fri, 19 Jun 2020 09:24:22 -0700
Message-Id: <20200619162425.1052382-16-ben.widawsky@intel.com>
In-Reply-To: <20200619162425.1052382-1-ben.widawsky@intel.com>
References: <20200619162425.1052382-1-ben.widawsky@intel.com>

Now that the infrastructure is in place to both select and allocate a set
of preferred nodes as specified by policy (or, perhaps in the future, by
the calling function), start transitioning over the functions that can
benefit from this.

This patch looks stupid: it artificially builds a nodemask on the stack
and then only uses the first node from that mask - in other words, a no-op
that just adds overhead. That is deliberate. It is a preparatory patch for
switching __alloc_pages_nodemask() over to taking a mask of preferred
nodes, and doing the conversion separately helps readability and
bisectability.

Cc: Andrew Morton
Cc: Dave Hansen
Cc: Mike Kravetz
Cc: Mina Almasry
Cc: Vlastimil Babka
Signed-off-by: Ben Widawsky
---
 mm/hugetlb.c   | 11 ++++++++---
 mm/mempolicy.c | 38 +++++++++++++++++++++++---------------
 2 files changed, 31 insertions(+), 18 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 57ece74e3aae..71b6750661df 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1687,6 +1687,12 @@ static struct page *alloc_buddy_huge_page(struct hstate *h,
 	int order = huge_page_order(h);
 	struct page *page;
 	bool alloc_try_hard = true;
+	nodemask_t pmask;
+
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
+
+	pmask = nodemask_of_node(nid);
 
 	/*
 	 * By default we always try hard to allocate the page with
@@ -1700,9 +1706,8 @@ static struct page *alloc_buddy_huge_page(struct hstate *h,
 	gfp_mask |= __GFP_COMP|__GFP_NOWARN;
 	if (alloc_try_hard)
 		gfp_mask |= __GFP_RETRY_MAYFAIL;
-	if (nid == NUMA_NO_NODE)
-		nid = numa_mem_id();
-	page = __alloc_pages_nodemask(gfp_mask, order, nid, nmask);
+	page = __alloc_pages_nodemask(gfp_mask, order, first_node(pmask),
+				      nmask);
 	if (page)
 		__count_vm_event(HTLB_BUDDY_PGALLOC);
 	else
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 3c48f299d344..9521bb46aa00 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2270,11 +2270,11 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 }
 
 static struct page *alloc_pages_vma_thp(gfp_t gfp, struct mempolicy *pol,
-					int order, int node)
+					int order, nodemask_t *prefmask)
 {
 	nodemask_t *nmask;
 	struct page *page;
-	int hpage_node = node;
+	int hpage_node = first_node(*prefmask);
 
 	/*
 	 * For hugepage allocation and non-interleave policy which allows the
@@ -2286,9 +2286,6 @@ static struct page *alloc_pages_vma_thp(gfp_t gfp, struct mempolicy *pol,
 	 * If the policy is interleave or multiple preferred nodes, or does not
 	 * allow the current node in its nodemask, we allocate the standard way.
 	 */
-	if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL))
-		hpage_node = first_node(pol->v.preferred_nodes);
-
 	nmask = policy_nodemask(gfp, pol);
 
 	/*
@@ -2340,10 +2337,14 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 {
 	struct mempolicy *pol;
 	struct page *page;
-	int preferred_nid;
-	nodemask_t *nmask;
+	nodemask_t *nmask, *pmask, tmp;
 
 	pol = get_vma_policy(vma, addr);
+	pmask = policy_preferred_nodes(gfp, pol);
+	if (!pmask) {
+		tmp = nodemask_of_node(node);
+		pmask = &tmp;
+	}
 
 	if (pol->mode == MPOL_INTERLEAVE) {
 		unsigned nid;
@@ -2353,12 +2354,12 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 		page = alloc_page_interleave(gfp, order, nid);
 	} else if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 			    hugepage)) {
-		page = alloc_pages_vma_thp(gfp, pol, order, node);
+		page = alloc_pages_vma_thp(gfp, pol, order, pmask);
 		mpol_cond_put(pol);
 	} else {
 		nmask = policy_nodemask(gfp, pol);
-		preferred_nid = policy_node(gfp, pol, node);
-		page = __alloc_pages_nodemask(gfp, order, preferred_nid, nmask);
+		page = __alloc_pages_nodemask(gfp, order, first_node(*pmask),
+					      nmask);
 		mpol_cond_put(pol);
 	}
 
@@ -2393,12 +2394,19 @@ struct page *alloc_pages_current(gfp_t gfp, unsigned order)
 	 * No reference counting needed for current->mempolicy
 	 * nor system default_policy
 	 */
-	if (pol->mode == MPOL_INTERLEAVE)
+	if (pol->mode == MPOL_INTERLEAVE) {
 		page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
-	else
-		page = __alloc_pages_nodemask(gfp, order,
-				policy_node(gfp, pol, numa_node_id()),
-				policy_nodemask(gfp, pol));
+	} else {
+		nodemask_t tmp, *pmask;
+
+		pmask = policy_preferred_nodes(gfp, pol);
+		if (!pmask) {
+			tmp = nodemask_of_node(numa_node_id());
+			pmask = &tmp;
+		}
+		page = __alloc_pages_nodemask(gfp, order, first_node(*pmask),
+					      policy_nodemask(gfp, pol));
+	}
 
 	return page;
 }
-- 
2.27.0
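
For illustration only, a minimal userspace sketch of the conversion pattern
the patch applies. The nodemask_t, nodemask_of_node(), first_node(),
numa_mem_id() and allocator below are simplified stand-ins, not the kernel
implementations; the point is that building a single-node mask on the stack
and passing first_node() of it behaves exactly like passing the node id
directly, which is why the patch is a functional no-op until
__alloc_pages_nodemask() itself starts taking the whole mask.

/* Userspace model of the pattern; stand-in types, not kernel code. */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t nodemask_t;		/* stand-in for the kernel bitmap */
#define NUMA_NO_NODE (-1)

static nodemask_t nodemask_of_node(int nid) { return 1ULL << nid; }
static int first_node(nodemask_t mask) { return __builtin_ctzll(mask); }
static int numa_mem_id(void) { return 0; }	/* pretend we run on node 0 */

/* stand-in allocator: for now it still only looks at the preferred node */
static void alloc_pages_nodemask(int preferred_nid)
{
	printf("allocating from preferred node %d\n", preferred_nid);
}

static void alloc_buddy_huge_page(int nid)
{
	nodemask_t pmask;

	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();

	pmask = nodemask_of_node(nid);		/* mask lives on the stack */
	alloc_pages_nodemask(first_node(pmask));	/* same node as before */
}

int main(void)
{
	alloc_buddy_huge_page(NUMA_NO_NODE);	/* falls back to local node 0 */
	alloc_buddy_huge_page(2);		/* allocates from node 2 */
	return 0;
}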