From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from outbound-smtp57.blacknight.com (outbound-smtp57.blacknight.com [46.22.136.241])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id A879629CA
	for ; Tue, 30 Nov 2021 11:31:08 +0000 (UTC)
Received: from mail.blacknight.com (pemlinmail06.blacknight.ie [81.17.255.152])
	by outbound-smtp57.blacknight.com (Postfix) with ESMTPS id 50017FAFC2
	for ; Tue, 30 Nov 2021 11:22:47 +0000 (GMT)
Received: (qmail 5292 invoked from network); 30 Nov 2021 11:22:47 -0000
Received: from unknown (HELO techsingularity.net) (mgorman@techsingularity.net@[84.203.17.29])
	by 81.17.254.9 with ESMTPSA (AES256-SHA encrypted, authenticated); 30 Nov 2021 11:22:46 -0000
Date: Tue, 30 Nov 2021 11:22:44 +0000
From: Mel Gorman
To: Mike Galbraith
Cc: Alexey Avramov, Andrew Morton, Michal Hocko, Vlastimil Babka,
	Rik van Riel, Darrick Wong, regressions@lists.linux.dev,
	Linux-fsdevel, Linux-MM, LKML
Subject: Re: [PATCH 1/1] mm: vmscan: Reduce throttling due to a failure to make progress
Message-ID: <20211130112244.GQ3366@techsingularity.net>
References: <20211125151853.8540-1-mgorman@techsingularity.net>
	<20211127011246.7a8ac7b8@mail.inbox.lv>
	<20211129150117.GO3366@techsingularity.net>
Precedence: bulk
X-Mailing-List: regressions@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To:
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue, Nov 30, 2021 at 11:14:32AM +0100, Mike Galbraith wrote:
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index fb9584641ac7..1af12072f40e 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1021,6 +1021,39 @@ static void handle_write_error(struct address_space *mapping,
> >  		unlock_page(page);
> >  }
> >  
> > +bool skip_throttle_noprogress(pg_data_t *pgdat)
> > +{
> > +	int reclaimable = 0, write_pending = 0;
> > +	int i;
> > +
> > +	/*
> > +	 * If kswapd is disabled, reschedule if necessary but do not
> > +	 * throttle as the system is likely near OOM.
> > +	 */
> > +	if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES)
> > +		return true;
> > +
> > +	/*
> > +	 * If there are a lot of dirty/writeback pages then do not
> > +	 * throttle as throttling will occur when the pages cycle
> > +	 * towards the end of the LRU if still under writeback.
> > +	 */
> > +	for (i = 0; i < MAX_NR_ZONES; i++) {
> > +		struct zone *zone = pgdat->node_zones + i;
> > +
> > +		if (!populated_zone(zone))
> > +			continue;
> > +
> > +		reclaimable += zone_reclaimable_pages(zone);
> > +		write_pending += zone_page_state_snapshot(zone,
> > +						  NR_ZONE_WRITE_PENDING);
> > +	}
> > +	if (2 * write_pending <= reclaimable)
> 
> That is always true here...
> 

Always true for you or always true in general? The intent of the check
is "are a majority of reclaimable pages marked WRITE_PENDING?". It's
similar to the check that existed prior to 132b0d21d21f ("mm/page_alloc:
remove the throttling logic from the page allocator").

-- 
Mel Gorman
SUSE Labs