From: Ben Widawsky <ben.widawsky@intel.com>
To: linux-mm
Cc: Ben Widawsky, Andrew Morton, Vlastimil Babka
Subject: [PATCH 12/18] mm/mempolicy: Use __alloc_pages_node for interleaved
Date: Fri, 19 Jun 2020 09:24:19 -0700
Message-Id: <20200619162425.1052382-13-ben.widawsky@intel.com>
In-Reply-To: <20200619162425.1052382-1-ben.widawsky@intel.com>
References: <20200619162425.1052382-1-ben.widawsky@intel.com>

This helps reduce the number of consumers of the interface and gets us in better shape to clean up some of the low-level page allocation routines.
The goal in doing that is to eventually limit the places we'll need to declare nodemask_t variables on the stack (more on that later).

Currently the only distinction between __alloc_pages_node() and __alloc_pages() is that the former does sanity checks on the gfp flags and the nid. In the case of interleave nodes, these checks aren't necessary because the caller has already determined the correct nid and flags with interleave_nodes(). This kills the only real user of __alloc_pages(), which can then be removed later.

Cc: Andrew Morton
Cc: Vlastimil Babka
Signed-off-by: Ben Widawsky
---
 mm/mempolicy.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 3ce2354fed44..eb2520d68a04 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2220,7 +2220,7 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 {
 	struct page *page;

-	page = __alloc_pages(gfp, order, nid);
+	page = __alloc_pages_node(nid, gfp, order);
 	/* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
 	if (!static_branch_likely(&vm_numa_stat_key))
 		return page;
--
2.27.0