Date: Thu, 22 Jul 2021 16:11:22 +0800
From: Feng Tang
To: Mike Kravetz
Cc: linux-mm@kvack.org, Andrew Morton, Michal Hocko, David Rientjes,
	Dave Hansen, Ben Widawsky, linux-kernel@vger.kernel.org,
	linux-api@vger.kernel.org, Andrea Arcangeli, Mel Gorman,
	Randy Dunlap, Vlastimil Babka, Andi Kleen, Dan Williams,
	ying.huang@intel.com
Subject: Re: [PATCH v6 4/6] mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY
Message-ID: <20210722081122.GA2169@shbuild999.sh.intel.com>
References: <1626077374-81682-1-git-send-email-feng.tang@intel.com>
	<1626077374-81682-5-git-send-email-feng.tang@intel.com>
	<7cdf88d8-9eea-5547-ee77-7d46829bf2dd@oracle.com>
In-Reply-To: <7cdf88d8-9eea-5547-ee77-7d46829bf2dd@oracle.com>
Mike,

On Wed, Jul 21, 2021 at 01:49:15PM -0700, Mike Kravetz wrote:
> On 7/12/21 1:09 AM, Feng Tang wrote:
> > From: Ben Widawsky
> >
> > Implement the missing huge page allocation functionality while obeying
> > the preferred node semantics. This is similar to the implementation
> > for general page allocation, as it uses a fallback mechanism to try
> > multiple preferred nodes first, and then all other nodes.
> >
> > [Thanks to 0day bot for catching the missing #ifdef CONFIG_NUMA issue]
> >
> > Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widawsky@intel.com
> > Suggested-by: Michal Hocko
> > Signed-off-by: Ben Widawsky
> > Co-developed-by: Feng Tang
> > Signed-off-by: Feng Tang
> > ---
> >  mm/hugetlb.c   | 25 +++++++++++++++++++++++++
> >  mm/mempolicy.c |  3 ++-
> >  2 files changed, 27 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 924553aa8f78..3e84508c1b8c 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1164,7 +1164,18 @@ static struct page *dequeue_huge_page_vma(struct hstate *h,
> >
> >  	gfp_mask = htlb_alloc_mask(h);
> >  	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
> > +#ifdef CONFIG_NUMA
> > +	if (mpol->mode == MPOL_PREFERRED_MANY) {
> > +		page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
> > +		if (page)
> > +			goto check_reserve;
> > +		/* Fallback to all nodes */
> > +		nodemask = NULL;
> > +	}
> > +#endif
> >  	page = dequeue_huge_page_nodemask(h, gfp_mask, nid, nodemask);
> > +
> > +check_reserve:
> >  	if (page && !avoid_reserve && vma_has_reserves(vma, chg)) {
> >  		SetHPageRestoreReserve(page);
> >  		h->resv_huge_pages--;
> > @@ -2095,6 +2106,20 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
> >  	nodemask_t *nodemask;
> >
> >  	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
> > +#ifdef CONFIG_NUMA
> > +	if (mpol->mode == MPOL_PREFERRED_MANY) {
> > +		gfp_t gfp = (gfp_mask | __GFP_NOWARN) & ~__GFP_DIRECT_RECLAIM;
>
> I believe __GFP_NOWARN will be added later in alloc_buddy_huge_page, so
> no need to add here?

Thanks for the suggestion, will remove it.

> > +
> > +		page = alloc_surplus_huge_page(h, gfp, nid, nodemask);
> > +		if (page) {
> > +			mpol_cond_put(mpol);
> > +			return page;
> > +		}
> > +
> > +		/* Fallback to all nodes */
> > +		nodemask = NULL;
> > +	}
> > +#endif
> >  	page = alloc_surplus_huge_page(h, gfp_mask, nid, nodemask);
> >  	mpol_cond_put(mpol);
> >
> > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > index 9dce67fc9bb6..93f8789758a7 100644
> > --- a/mm/mempolicy.c
> > +++ b/mm/mempolicy.c
> > @@ -2054,7 +2054,8 @@ int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
> >  					huge_page_shift(hstate_vma(vma)));
> >  	} else {
> >  		nid = policy_node(gfp_flags, *mpol, numa_node_id());
> > -		if ((*mpol)->mode == MPOL_BIND)
> > +		if ((*mpol)->mode == MPOL_BIND ||
> > +		    (*mpol)->mode == MPOL_PREFERRED_MANY)
> >  			*nodemask = &(*mpol)->nodes;
> >  	}
> >  	return nid;
> >
>
> Other than the one nit above,
>
> Reviewed-by: Mike Kravetz

Thanks!

Andrew, I have to ask for your help again to fold this into the 4/6 patch, thanks!
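
(Side note for anyone skimming the thread: both hugetlb hunks follow the
same two-pass pattern as the generic page allocator, i.e. try the
MPOL_PREFERRED_MANY nodemask first, and only if that fails retry with a
NULL nodemask meaning "all nodes". The stand-alone sketch below shows just
that control flow; try_alloc(), struct page_stub and the mask value are
made up for illustration and are not the kernel APIs.)

	/*
	 * Not kernel code: a self-contained sketch of the "preferred
	 * nodes first, then all nodes" fallback used in the patch.
	 * try_alloc() and struct page_stub are invented stand-ins.
	 */
	#include <stdio.h>

	struct page_stub { int nid; };

	/* Pretend allocator: fails while restricted to a nodemask. */
	static struct page_stub *try_alloc(const unsigned long *nodemask)
	{
		static struct page_stub fallback_page = { .nid = 2 };

		if (nodemask)			/* preferred nodes exhausted */
			return NULL;
		return &fallback_page;		/* unrestricted pass succeeds */
	}

	int main(void)
	{
		unsigned long preferred_mask = 0x3;	/* nodes 0 and 1 */
		const unsigned long *nodemask = &preferred_mask;
		struct page_stub *page;

		/* First pass: restricted to the preferred nodes. */
		page = try_alloc(nodemask);
		if (!page) {
			/* Fallback to all nodes, as in the patch. */
			nodemask = NULL;
			page = try_alloc(nodemask);
		}

		printf("allocated from node %d\n", page ? page->nid : -1);
		return 0;
	}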
- Feng

---------------------------8<--------------------------------------------

From de1cd29d8da96856a6d754a30a4c7585d87b8348 Mon Sep 17 00:00:00 2001
From: Feng Tang
Date: Thu, 22 Jul 2021 16:00:49 +0800
Subject: [PATCH] mm/hugetlb: remove the unneeded __GFP_NOWARN flag setting

As alloc_buddy_huge_page() will set it anyway.

Suggested-by: Mike Kravetz
Signed-off-by: Feng Tang
---
 mm/hugetlb.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 528947d..a96e283 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2162,9 +2162,9 @@ struct page *alloc_buddy_huge_page_with_mpol(struct hstate *h,
 	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
 #ifdef CONFIG_NUMA
 	if (mpol->mode == MPOL_PREFERRED_MANY) {
-		gfp_t gfp = (gfp_mask | __GFP_NOWARN) & ~__GFP_DIRECT_RECLAIM;
-
-		page = alloc_surplus_huge_page(h, gfp, nid, nodemask, false);
+		page = alloc_surplus_huge_page(h,
+				gfp_mask & ~__GFP_DIRECT_RECLAIM,
+				nid, nodemask, false);
 		if (page) {
 			mpol_cond_put(mpol);
 			return page;
--
2.7.4
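
(Purely as an illustration of the gfp arithmetic in the fixup above: the
call site now only strips __GFP_DIRECT_RECLAIM and no longer ORs in
__GFP_NOWARN, since, as Mike noted, alloc_buddy_huge_page() sets that flag
itself. The flag values below are made-up stand-ins, not the real ones
from include/linux/gfp.h; only the bit operations are the point.)

	#include <stdio.h>

	/* Invented values; the real flags live in include/linux/gfp.h. */
	#define FAKE_GFP_DIRECT_RECLAIM	0x400u
	#define FAKE_GFP_NOWARN		0x200u

	int main(void)
	{
		unsigned int gfp_mask = 0x4c0u;	/* caller-supplied mask */

		/* The call site only clears the direct-reclaim bit... */
		unsigned int gfp = gfp_mask & ~FAKE_GFP_DIRECT_RECLAIM;

		/*
		 * ...because the callee (alloc_buddy_huge_page() in the
		 * real kernel, per the review above) ORs no-warn back in.
		 */
		unsigned int seen_by_allocator = gfp | FAKE_GFP_NOWARN;

		printf("caller mask:   %#x\n", gfp_mask);
		printf("after strip:   %#x\n", gfp);
		printf("in allocator:  %#x\n", seen_by_allocator);
		return 0;
	}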