From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1751078AbdAQVHx (ORCPT );
        Tue, 17 Jan 2017 16:07:53 -0500
Received: from outbound-smtp07.blacknight.com ([46.22.139.12]:49827 "EHLO
        outbound-smtp07.blacknight.com" rhost-flags-OK-OK-OK-OK)
        by vger.kernel.org with ESMTP id S1750863AbdAQVHv (ORCPT );
        Tue, 17 Jan 2017 16:07:51 -0500
Date: Tue, 17 Jan 2017 21:07:49 +0000
From: Mel Gorman
To: Vlastimil Babka
Cc: Jesper Dangaard Brouer,
        Andrew Morton,
        Linux Kernel,
        Linux-MM,
        Hillf Danton,
        Michal Hocko
Subject: Re: [PATCH 1/4] mm, page_alloc: Split buffered_rmqueue
Message-ID: <20170117210749.rzpsavbx5gztsx6o@techsingularity.net>
References: <20170117092954.15413-1-mgorman@techsingularity.net>
        <20170117092954.15413-2-mgorman@techsingularity.net>
        <20170117190732.0fc733ec@redhat.com>
        <2df88f73-a32d-4b71-d4de-3a0ad8831d9a@suse.cz>
        <20170117202008.pcufk5qencdgkgpj@techsingularity.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline
In-Reply-To: <20170117202008.pcufk5qencdgkgpj@techsingularity.net>
User-Agent: Mutt/1.6.2 (2016-07-01)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jan 17, 2017 at 08:20:08PM +0000, Mel Gorman wrote:
> It's late so I'm fairly tired but assuming I can reproduce this in the
> morning, the first thing I'll try is something like this to force a reread
> of mems_allowed;
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ebea51cc0135..3fc2b3a8d301 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3774,13 +3774,6 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
>                  .migratetype = gfpflags_to_migratetype(gfp_mask),
>          };
>  
> -        if (cpusets_enabled()) {
> -                alloc_mask |= __GFP_HARDWALL;
> -                alloc_flags |= ALLOC_CPUSET;
> -                if (!ac.nodemask)
> -                        ac.nodemask = &cpuset_current_mems_allowed;
> -        }
> -
>          gfp_mask &= gfp_allowed_mask;
>  
>          lockdep_trace_alloc(gfp_mask);
> @@ -3802,6 +3795,13 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
>                  alloc_flags |= ALLOC_CMA;
>  
>  retry_cpuset:
> +        if (cpusets_enabled()) {
> +                alloc_mask |= __GFP_HARDWALL;
> +                alloc_flags |= ALLOC_CPUSET;
> +                if (!nodemask)
> +                        ac.nodemask = &cpuset_current_mems_allowed;
> +        }
> +
>          cpuset_mems_cookie = read_mems_allowed_begin();
>  
>          /* Dirty zone balancing only done in the fast path */
> 

I later recalled that we looked at this before and didn't think a reinit
was necessary because the location of cpuset_current_mems_allowed doesn't
change, so I came back and took another look. The location doesn't change,
but after the first attempt we reset ac.nodemask to the given nodemask and
don't recheck current_mems_allowed if the cpuset changed.

The application of memory policies versus cpusets is a mess, so it'll take
time to pick apart to see if this is even remotely in the right direction.

-- 
Mel Gorman
SUSE Labs
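
[As a reading aid for the hunk quoted above: the program below is a minimal
standalone userspace sketch, not kernel code, of the retry_cpuset flow under
discussion. read_mems_allowed_begin/retry, cpusets_enabled and
cpuset_current_mems_allowed are simplified stand-ins for the kernel helpers
of the same names; the node layout, masks and the try_alloc/alloc_model
functions are invented purely for illustration. It only demonstrates the
effect the moved block aims for: re-deriving ac.nodemask under the
retry_cpuset label lets a retry observe an updated cpuset, whereas doing it
once before the loop would leave a retry using whatever mask was in place
after the first attempt.]

/*
 * Toy model (not kernel code) of the retry_cpuset flow in the quoted diff.
 * All names below are simplified stand-ins; "memory" is a per-node bitmask.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct alloc_context {
        const unsigned long *nodemask;  /* nodes this allocation may use */
};

static unsigned long cpuset_current_mems_allowed = 0x1; /* node 0 only */
static unsigned int mems_seq;                           /* bumped on cpuset change */
static unsigned long node_has_memory = 0x2;             /* only node 1 has free pages */

static bool cpusets_enabled(void) { return true; }
static unsigned int read_mems_allowed_begin(void) { return mems_seq; }
static bool read_mems_allowed_retry(unsigned int seq) { return seq != mems_seq; }

/* Simulate a concurrent cpuset update migrating the task to node 1. */
static void cpuset_migrate(unsigned long new_mask)
{
        cpuset_current_mems_allowed = new_mask;
        mems_seq++;
}

/* Return the first node that both has memory and is in the nodemask. */
static int try_alloc(const struct alloc_context *ac)
{
        unsigned long usable = node_has_memory &
                               (ac->nodemask ? *ac->nodemask : ~0UL);
        for (int node = 0; node < 2; node++)
                if (usable & (1UL << node))
                        return node;
        return -1;
}

static int alloc_model(const unsigned long *nodemask)
{
        struct alloc_context ac = { .nodemask = nodemask };
        unsigned int cookie;
        int node;

retry_cpuset:
        /*
         * Re-derive ac.nodemask from the cpuset on every pass, as the quoted
         * hunk does by moving this block under retry_cpuset:.  If this ran
         * only once, a later reset of ac.nodemask to the caller's nodemask
         * would make the retry ignore the updated cpuset.
         */
        if (cpusets_enabled() && !nodemask)
                ac.nodemask = &cpuset_current_mems_allowed;
        cookie = read_mems_allowed_begin();

        node = try_alloc(&ac);
        printf("attempt with mask %#lx -> node %d\n",
               ac.nodemask ? *ac.nodemask : ~0UL, node);

        if (node < 0) {
                /*
                 * Model the slowpath restoring the caller's nodemask while
                 * the cpuset concurrently changes underneath us.
                 */
                ac.nodemask = nodemask;
                cpuset_migrate(0x2);
                if (read_mems_allowed_retry(cookie))
                        goto retry_cpuset;
        }
        return node;
}

int main(void)
{
        /* First attempt sees mask 0x1 and fails; the retry re-reads the
         * cpuset, sees mask 0x2 and succeeds on node 1. */
        return alloc_model(NULL) < 0;
}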