Date: Wed, 28 Jul 2021 14:47:23 +0200
From: Michal Hocko
To: Feng Tang
Cc: linux-mm@kvack.org, Andrew Morton, David Rientjes, Dave Hansen,
    Ben Widawsky, linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
    Andrea Arcangeli, Mel Gorman, Mike Kravetz, Randy Dunlap,
    Vlastimil Babka, Andi Kleen, Dan Williams, ying.huang@intel.com
Subject: Re: [PATCH v6 5/6] mm/mempolicy: Advertise new MPOL_PREFERRED_MANY
In-Reply-To: <1626077374-81682-6-git-send-email-feng.tang@intel.com>
References: <1626077374-81682-1-git-send-email-feng.tang@intel.com>
 <1626077374-81682-6-git-send-email-feng.tang@intel.com>

On Mon 12-07-21 16:09:33, Feng Tang wrote:
> From: Ben Widawsky
>
> Adds a new mode to the existing mempolicy modes, MPOL_PREFERRED_MANY.
>
> MPOL_PREFERRED_MANY will be adequately documented in the internal
> admin-guide with this patch. Eventually, the man pages for mbind(2),
> get_mempolicy(2), set_mempolicy(2) and numactl(8) will also have text
> about this mode. Those shall contain the canonical reference.
>
> NUMA systems continue to become more prevalent. New technologies like
> PMEM make finer grain control over memory access patterns increasingly
> desirable. MPOL_PREFERRED_MANY allows userspace to specify a set of
> nodes that will be tried first when performing allocations. If those
> allocations fail, all remaining nodes will be tried.
> It's a straightforward API which solves many of the presumptive needs
> of system administrators wanting to optimize workloads on such
> machines. The mode will work either per VMA, or per thread.
>
> Link: https://lore.kernel.org/r/20200630212517.308045-13-ben.widawsky@intel.com
> Signed-off-by: Ben Widawsky
> Signed-off-by: Feng Tang
> ---
>  Documentation/admin-guide/mm/numa_memory_policy.rst | 16 ++++++++++++----
>  mm/mempolicy.c                                       |  7 +------
>  2 files changed, 13 insertions(+), 10 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/numa_memory_policy.rst b/Documentation/admin-guide/mm/numa_memory_policy.rst
> index 067a90a1499c..cd653561e531 100644
> --- a/Documentation/admin-guide/mm/numa_memory_policy.rst
> +++ b/Documentation/admin-guide/mm/numa_memory_policy.rst
> @@ -245,6 +245,14 @@ MPOL_INTERLEAVED
>  	address range or file. During system boot up, the temporary
>  	interleaved system default policy works in this mode.
>
> +MPOL_PREFERRED_MANY
> +	This mode specifies that the allocation should be attempted from the
> +	nodemask specified in the policy. If that allocation fails, the kernel
> +	will search other nodes, in order of increasing distance from the first
> +	set bit in the nodemask based on information provided by the platform
> +	firmware. It is similar to MPOL_PREFERRED with the main exception that
> +	is an error to have an empty nodemask.

I believe the target audience of this document is users rather than
kernel developers, and for them the wording might be rather cryptic. I
would rephrase it like this:

	This mode specifies that the allocation should be preferably
	satisfied from the nodemask specified in the policy. If there is
	memory pressure on all nodes in the nodemask, the allocation can
	fall back to all existing NUMA nodes. This is effectively
	MPOL_PREFERRED allowed for a mask rather than a single node.

With that or something similar, feel free to add

Acked-by: Michal Hocko

--
Michal Hocko
SUSE Labs
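
For illustration only, a minimal userspace sketch of how the new mode
could be requested once the series lands. It is not taken from the patch:
the fallback value for MPOL_PREFERRED_MANY is an assumption (take the
real one from the kernel's uapi <linux/mempolicy.h>), the program needs
libnuma (-lnuma) for the set_mempolicy() wrapper, and kernels without
this series reject the mode with EINVAL.

/* prefer-many.c: request MPOL_PREFERRED_MANY for the calling thread. */
#include <numaif.h>   /* set_mempolicy(), MPOL_* (libnuma) */
#include <stdio.h>

#ifndef MPOL_PREFERRED_MANY
#define MPOL_PREFERRED_MANY 5   /* assumed value; check linux/mempolicy.h */
#endif

int main(void)
{
	/* Prefer nodes 0 and 2; under memory pressure on both, allocations
	 * can fall back to the remaining online nodes. */
	unsigned long nodemask = (1UL << 0) | (1UL << 2);

	if (set_mempolicy(MPOL_PREFERRED_MANY, &nodemask,
			  8 * sizeof(nodemask))) {
		perror("set_mempolicy(MPOL_PREFERRED_MANY)");
		return 1;
	}

	/* From here on, anonymous memory faulted in by this thread is
	 * preferably placed on nodes 0 or 2. mbind(2) accepts the same
	 * mode for a per-VMA policy. */
	return 0;
}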