From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 28 Jul 2021 21:41:37 +0800
From: Feng Tang
To: Michal Hocko
Cc: linux-mm@kvack.org, Andrew Morton, David Rientjes, Dave Hansen,
	Ben Widawsky, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
	Andrea Arcangeli, Mel Gorman, Mike Kravetz, Randy Dunlap,
	Vlastimil Babka, Andi Kleen, Dan Williams, ying.huang@intel.com
Subject: Re: [PATCH v6 5/6] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY
Message-ID: <20210728134137.GA43486@shbuild999.sh.intel.com>
References: <1626077374-81682-1-git-send-email-feng.tang@intel.com>
 <1626077374-81682-6-git-send-email-feng.tang@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: 
User-Agent: Mutt/1.5.24 (2015-08-30)
On Wed, Jul 28, 2021 at 02:47:23PM +0200, Michal Hocko wrote:
> On Mon 12-07-21 16:09:33, Feng Tang wrote:
> > From: Ben Widawsky 
> > 
> > Adds a new mode to the existing mempolicy modes, MPOL_PREFERRED_MANY.
> > 
> > MPOL_PREFERRED_MANY will be adequately documented in the internal
> > admin-guide with this patch. Eventually, the man pages for mbind(2),
> > get_mempolicy(2), set_mempolicy(2) and numactl(8) will also have text
> > about this mode. Those shall contain the canonical reference.
> > 
> > NUMA systems continue to become more prevalent. New technologies like
> > PMEM make finer-grained control over memory access patterns increasingly
> > desirable. MPOL_PREFERRED_MANY allows userspace to specify a set of
> > nodes that will be tried first when performing allocations. If those
> > allocations fail, all remaining nodes will be tried. It's a
> > straightforward API which solves many of the presumptive needs of system
> > administrators wanting to optimize workloads on such machines. The mode
> > will work either per VMA or per thread.
> > 
> > Link: https://lore.kernel.org/r/20200630212517.308045-13-ben.widawsky@intel.com
> > Signed-off-by: Ben Widawsky 
> > Signed-off-by: Feng Tang 
> > ---
> >  Documentation/admin-guide/mm/numa_memory_policy.rst | 16 ++++++++++++----
> >  mm/mempolicy.c                                       |  7 +------
> >  2 files changed, 13 insertions(+), 10 deletions(-)
> > 
> > diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst
> > index 067a90a1499c..cd653561e531 100644
> > --- a/Documentation/admin-guide/mm/numa_memory_policy.rst
> > +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst
> > @@ -245,6 +245,14 @@ MPOL_INTERLEAVED
> >  	address range or file. During system boot up, the temporary
> >  	interleaved system default policy works in this mode.
> > 
> > +MPOL_PREFERRED_MANY
> > +	This mode specifies that the allocation should be attempted from the
> > +	nodemask specified in the policy. If that allocation fails, the kernel
> > +	will search other nodes, in order of increasing distance from the first
> > +	set bit in the nodemask based on information provided by the platform
> > +	firmware. It is similar to MPOL_PREFERRED with the main exception that
> > +	it is an error to have an empty nodemask.
> 
> I believe the target audience of this document is users rather than
> kernel developers, and for them the wording might be rather cryptic. I
> would rephrase it like this:
> 	This mode specifies that the allocation should be preferably
> 	satisfied from the nodemask specified in the policy. If there is
> 	memory pressure on all nodes in the nodemask, the allocation
> 	can fall back to all existing NUMA nodes. This is effectively
> 	MPOL_PREFERRED allowed for a mask rather than a single node.
> 
> With that or something similar, feel free to add
> Acked-by: Michal Hocko 

Thanks! Will revise the text as suggested.

- Feng

> -- 
> Michal Hocko
> SUSE Labs
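
For illustration only (not part of the thread): a minimal userspace sketch of
installing the proposed per-thread policy with set_mempolicy(2). The
MPOL_PREFERRED_MANY value (5) is an assumption based on the enum ordering in
the uapi header and may not exist in installed headers, and the node numbers
0 and 2 are hypothetical; glibc has no set_mempolicy() wrapper, so the raw
syscall is used.

/*
 * Sketch: prefer allocations from nodes 0 and 2 (e.g. a DRAM node and a
 * PMEM node), falling back to the remaining nodes under memory pressure.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5	/* assumed value; check <linux/mempolicy.h> */
#endif

int main(void)
{
	unsigned long nodemask = 0;
	unsigned long maxnode = 8 * sizeof(nodemask);

	/* Hypothetical preferred nodes 0 and 2. */
	nodemask |= 1UL << 0;
	nodemask |= 1UL << 2;

	/*
	 * A kernel without this mode rejects it with EINVAL, as does an
	 * empty nodemask (explicitly an error for MPOL_PREFERRED_MANY).
	 */
	if (syscall(SYS_set_mempolicy, MPOL_PREFERRED_MANY, &nodemask,
		    maxnode) != 0) {
		perror("set_mempolicy(MPOL_PREFERRED_MANY)");
		return 1;
	}

	/* Subsequent allocations by this thread try nodes 0 and 2 first. */
	printf("MPOL_PREFERRED_MANY policy installed\n");
	return 0;
}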