Date: Wed, 24 Jun 2020 09:52:16 +0200
From: Michal Hocko
To: Ben Widawsky
Cc: linux-mm, Andi Kleen, Andrew Morton, Christoph Lameter,
	Dan Williams, Dave Hansen, David Hildenbrand, David Rientjes,
	Jason Gunthorpe, Johannes Weiner, Jonathan Corbet,
	Kuppuswamy Sathyanarayanan, Lee Schermerhorn, Li Xinhai,
	Mel Gorman, Mike Kravetz, Mina Almasry, Tejun Heo,
	Vlastimil Babka, linux-api@vger.kernel.org
Subject: Re: [PATCH 00/18] multiple preferred nodes
Message-ID: <20200624075216.GC1320@dhcp22.suse.cz>
References: <20200619162425.1052382-1-ben.widawsky@intel.com>
	<20200622070957.GB31426@dhcp22.suse.cz>
	<20200623112048.GR31426@dhcp22.suse.cz>
	<20200623161211.qjup5km5eiisy5wy@intel.com>
In-Reply-To: <20200623161211.qjup5km5eiisy5wy@intel.com>

On Tue 23-06-20 09:12:11, Ben Widawsky wrote:
> On 20-06-23 13:20:48, Michal Hocko wrote:
[...]
> > It would also be great to provide a high-level semantic description
> > here. I have very quickly glanced through the patches and they are
> > not really trivial to follow with many incremental steps, so the
> > higher-level intention is easily lost.
> >
> > Do I get it right that the default semantic is essentially
> > 	- allocate a page from the given nodemask (with
> > 	  __GFP_RETRY_MAYFAIL semantic)
> > 	- fall back to an unrestricted allocation with the default
> > 	  NUMA policy on failure
> >
> > Or are there any use cases for modifying how hard to keep the
> > preference over the fallback?
>
> tl;dr is: yes, and no use cases.

OK, then I am wondering why the change has to be so involved. Except
for the syscall plumbing, the only real change to the allocator path
would be something like

static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
{
	/*
	 * Lower zones don't get a nodemask applied for MPOL_BIND or
	 * MPOL_PREFERRED_MANY
	 */
	if (unlikely(policy->mode == MPOL_BIND ||
		     policy->mode == MPOL_PREFERRED_MANY) &&
			apply_policy_zone(policy, gfp_zone(gfp)) &&
			cpuset_nodemask_valid_mems_allowed(&policy->v.nodes))
		return &policy->v.nodes;

	return NULL;
}

and in alloc_pages_current()

	if (pol->mode == MPOL_INTERLEAVE)
		page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
	else {
		gfp_t gfp_attempt = gfp;

		/*
		 * Make sure the first allocation attempt will try hard
		 * but eventually fail without OOM killer or other
		 * disruption before falling back to the full nodemask
		 */
		if (pol->mode == MPOL_PREFERRED_MANY)
			gfp_attempt |= __GFP_RETRY_MAYFAIL;

		page = __alloc_pages_nodemask(gfp_attempt, order,
				policy_node(gfp, pol, numa_node_id()),
				policy_nodemask(gfp, pol));
		if (!page && pol->mode == MPOL_PREFERRED_MANY)
			page = __alloc_pages_nodemask(gfp, order,
					numa_node_id(), NULL);
	}

	return page;

Something similar (well, slightly more hairy) would be needed in
alloc_pages_vma(); a sketch of that follows below.

Or do I miss something that really requires a more involved approach,
like building custom zonelists and other larger changes to the
allocator?
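
For illustration, a minimal sketch of what the analogous change in
alloc_pages_vma() might look like, under the same assumptions as above
(the MPOL_PREFERRED_MANY mode proposed by this series, plus the
existing policy_node()/policy_nodemask()/__alloc_pages_nodemask()
helpers). This is only a sketch of the idea, not the implementation
from the series:

	/*
	 * Sketch: the tail of alloc_pages_vma(), once the interleave
	 * and hugepage special cases have been handled above. pol,
	 * node, gfp and order are the usual locals of that function.
	 */
	nmask = policy_nodemask(gfp, pol);
	preferred_nid = policy_node(gfp, pol, node);
	if (pol->mode == MPOL_PREFERRED_MANY) {
		/*
		 * First pass: try hard within the preferred nodemask
		 * but fail rather than OOM so the fallback below can
		 * still run.
		 */
		page = __alloc_pages_nodemask(gfp | __GFP_RETRY_MAYFAIL,
				order, preferred_nid, nmask);
		/* Second pass: unrestricted, default policy */
		if (!page)
			page = __alloc_pages_nodemask(gfp, order,
					node, NULL);
	} else {
		page = __alloc_pages_nodemask(gfp, order, preferred_nid,
				nmask);
	}
	mpol_cond_put(pol);

The point of __GFP_RETRY_MAYFAIL in the first pass is that the
allocator tries hard within the mask but returns NULL instead of
invoking the OOM killer, so the unrestricted second pass still has a
chance to succeed.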
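
And for completeness, what the semantic looks like from userspace: a
hypothetical example using the existing set_mempolicy(2) syscall with
the new mode. The MPOL_PREFERRED_MANY value below is a placeholder;
the series defines the real one in uapi/linux/mempolicy.h. Build with
-lnuma:

	#include <stdio.h>
	#include <numaif.h>	/* set_mempolicy(), MPOL_* */

	#ifndef MPOL_PREFERRED_MANY
	#define MPOL_PREFERRED_MANY	5	/* placeholder value */
	#endif

	int main(void)
	{
		/* Prefer nodes 0 and 2; fall back to any node on failure */
		unsigned long nodemask = (1UL << 0) | (1UL << 2);

		if (set_mempolicy(MPOL_PREFERRED_MANY, &nodemask,
				  8 * sizeof(nodemask)))
			perror("set_mempolicy");
		return 0;
	}

-- 
Michal Hocko
SUSE Labs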