Date: Tue, 2 Sep 2014 10:01:16 -0400
From: Johannes Weiner
To: Vlastimil Babka
Cc: Mel Gorman, Andrew Morton, Linux Kernel, Linux-MM, Linux-FSDevel
Subject: Re: [PATCH 6/6] mm: page_alloc: Reduce cost of the fair zone allocation policy
Message-ID: <20140902140116.GD29501@cmpxchg.org>
References: <1404893588-21371-1-git-send-email-mgorman@suse.de>
 <1404893588-21371-7-git-send-email-mgorman@suse.de>
 <53E4EC53.1050904@suse.cz>
 <20140811121241.GD7970@suse.de>
 <53E8B83D.1070004@suse.cz>
In-Reply-To: <53E8B83D.1070004@suse.cz>

On Mon, Aug 11, 2014 at 02:34:05PM +0200, Vlastimil Babka wrote:
> On 08/11/2014 02:12 PM, Mel Gorman wrote:
> >On Fri, Aug 08, 2014 at 05:27:15PM +0200, Vlastimil Babka wrote:
> >>On 07/09/2014 10:13 AM, Mel Gorman wrote:
> >>>--- a/mm/page_alloc.c
> >>>+++ b/mm/page_alloc.c
> >>>@@ -1604,6 +1604,9 @@ again:
> >>> 	}
> >>>
> >>> 	__mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << order));
> >>
> >>This can underflow zero, right?
> >>
> >
> >Yes, because of per-cpu accounting drift.
>
> I meant mainly because of order > 0.
>
> >>>+	if (zone_page_state(zone, NR_ALLOC_BATCH) == 0 &&
> >>
> >>AFAICS, zone_page_state will correct negative values to zero only for
> >>CONFIG_SMP. Won't this check be broken on !CONFIG_SMP?
> >>
> >
> >On !CONFIG_SMP how can there be per-cpu accounting drift that would make
> >that counter negative?
>
> Well original code used "if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0)"
> elsewhere, that you are replacing with zone_is_fair_depleted check. I
> assumed it's because it can get negative due to order > 0. I might have
> not looked thoroughly enough but it seems to me there's nothing that
> would prevent it, such as skipping a zone because its remaining batch is
> lower than 1 << order.
> So I think the check should be "<= 0" to be safe.

Any updates on this?  The counter can definitely underflow on
!CONFIG_SMP, and then the flag gets out of sync with the actual batch
state.

I'd still prefer just removing this flag again; it's extra complexity
and error prone (case in point), while the upsides are not even
measurable in real life.
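For anyone following along, this is roughly what zone_page_state() looks
like in include/linux/vmstat.h (paraphrased from memory, so treat it as a
sketch rather than the exact upstream code). The clamp to zero is only
compiled in for CONFIG_SMP, so e.g. a batch of 3 pages hit by an order-2
allocation steps straight from 3 to -1 on UP and the "== 0" test never
fires:

static inline unsigned long zone_page_state(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
#ifdef CONFIG_SMP
	/*
	 * On SMP the summed counter can go transiently negative due to
	 * per-cpu drift, so negative values are hidden from callers.
	 */
	if (x < 0)
		x = 0;
#endif
	/*
	 * On !CONFIG_SMP there is no clamp: a real underflow from an
	 * order > 0 allocation comes back as a huge unsigned value and
	 * an "== 0" check on it never triggers.
	 */
	return x;
}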
---
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 318df7051850..0bd77f730b38 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -534,7 +534,6 @@ typedef enum {
 	ZONE_WRITEBACK,			/* reclaim scanning has recently found
 					 * many pages under writeback
 					 */
-	ZONE_FAIR_DEPLETED,		/* fair zone policy batch depleted */
 } zone_flags_t;
 
 static inline void zone_set_flag(struct zone *zone, zone_flags_t flag)
@@ -572,11 +571,6 @@ static inline int zone_is_reclaim_locked(const struct zone *zone)
 	return test_bit(ZONE_RECLAIM_LOCKED, &zone->flags);
 }
 
-static inline int zone_is_fair_depleted(const struct zone *zone)
-{
-	return test_bit(ZONE_FAIR_DEPLETED, &zone->flags);
-}
-
 static inline int zone_is_oom_locked(const struct zone *zone)
 {
 	return test_bit(ZONE_OOM_LOCKED, &zone->flags);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 18cee0d4c8a2..d913809a328f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1612,9 +1612,6 @@ again:
 	}
 
 	__mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << order));
-	if (zone_page_state(zone, NR_ALLOC_BATCH) == 0 &&
-	    !zone_is_fair_depleted(zone))
-		zone_set_flag(zone, ZONE_FAIR_DEPLETED);
 	__count_zone_vm_events(PGALLOC, zone, 1 << order);
 	zone_statistics(preferred_zone, zone, gfp_flags);
@@ -1934,7 +1931,6 @@ static void reset_alloc_batches(struct zone *preferred_zone)
 		mod_zone_page_state(zone, NR_ALLOC_BATCH,
 			high_wmark_pages(zone) - low_wmark_pages(zone) -
 			atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));
-		zone_clear_flag(zone, ZONE_FAIR_DEPLETED);
 	} while (zone++ != preferred_zone);
 }
 
@@ -1985,7 +1981,7 @@ zonelist_scan:
 		if (alloc_flags & ALLOC_FAIR) {
 			if (!zone_local(preferred_zone, zone))
 				break;
-			if (zone_is_fair_depleted(zone)) {
+			if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0) {
 				nr_fair_skipped++;
 				continue;
 			}