From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mel Gorman
To: Glauber Costa
Cc: linux-mm@kvack.org, Andrew Morton, cgroups@vger.kernel.org,
	kamezawa.hiroyu@jp.fujitsu.com, Johannes Weiner, Michal Hocko,
	hughd@google.com, Greg Thelen, linux-fsdevel@vger.kernel.org,
	Theodore Ts'o, Al Viro
Subject: Re: [PATCH v5 02/31] vmscan: take at least one pass with shrinkers
Date: Thu, 9 May 2013 12:12:27 +0100
Message-ID: <20130509111226.GR11497@suse.de>
In-Reply-To: <1368079608-5611-3-git-send-email-glommer@openvz.org>
References: <1368079608-5611-1-git-send-email-glommer@openvz.org>
 <1368079608-5611-3-git-send-email-glommer@openvz.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline

On Thu, May 09, 2013 at 10:06:19AM +0400, Glauber Costa wrote:
> In very low free kernel memory situations, it may be the case that we
> have fewer objects to free than our initial batch size. If this is the
> case, it is better to shrink those and open space for the new workload
> than to keep them and fail the new allocations. For the purpose of
> defining what "very low memory" means, we will purposefully exclude
> kswapd runs.
>
> More specifically, this happens because we encode this in a loop with
> the condition "while (total_scan >= batch_size)", so if we are in such
> a case we will not even enter the loop.
>
> This patch turns it into a do {} while () loop, which guarantees that
> we scan at least once, while keeping the behaviour exactly the same
> for the cases in which total_scan > batch_size.
>
> [ v5: differentiate no-scan case, don't do this for kswapd ]
>
> Signed-off-by: Glauber Costa
> Reviewed-by: Dave Chinner
> Reviewed-by: Carlos Maiolino
> CC: "Theodore Ts'o"
> CC: Al Viro
> ---
>  mm/vmscan.c | 24 +++++++++++++++++++++---
>  1 file changed, 21 insertions(+), 3 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index fa6a853..49691da 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -281,12 +281,30 @@ unsigned long shrink_slab(struct shrink_control *shrink,
> 					nr_pages_scanned, lru_pages,
> 					max_pass, delta, total_scan);
>
> -	while (total_scan >= batch_size) {
> +	do {
> 		int nr_before;
>
> +		/*
> +		 * When we are kswapd, there is no need for us to go
> +		 * desperate and try to reclaim any number of objects
> +		 * regardless of batch size. Direct reclaim, OTOH, may
> +		 * benefit from freeing objects in any quantities. If
> +		 * the workload is actually stressing those objects,
> +		 * this may be the difference between succeeding or
> +		 * failing an allocation.
> +		 */
> +		if ((total_scan < batch_size) && current_is_kswapd())
> +			break;
> +		/*
> +		 * Differentiate between "few objects" and "no objects"
> +		 * as returned by the count step.
> +		 */
> +		if (!total_scan)
> +			break;
> +

To reduce the risk of slab reclaiming the world in the reasonable cases
I outlined in my reply to the leader mail, I would go further than this
and either limit it to memcg once shrinkers are memcg aware, or only do
the full scan for direct reclaim at priority == 0. What do you think?
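
Purely for illustration, something like the untested sketch below is the
sort of thing I have in mind for the second option. It assumes the reclaim
priority is plumbed into struct shrink_control as a hypothetical ->priority
field, which does not exist today:

	/*
	 * Sketch only: shrink->priority is a made-up field standing in
	 * for the reclaim priority; it is not in the current tree.
	 */
	do {
		int nr_before;

		/*
		 * Only allow the scan to drop below batch_size when
		 * direct reclaim has already escalated to priority 0.
		 * kswapd and lower-priority direct reclaim keep the
		 * current behaviour and bail out instead.
		 */
		if (total_scan < batch_size &&
		    (current_is_kswapd() || shrink->priority != 0))
			break;

		/* Nothing at all to scan, so stop looping. */
		if (!total_scan)
			break;

		/* ... existing do_shrinker_shrink() scanning ... */
		total_scan -= batch_size;
	} while (total_scan >= batch_size);

That would keep kswapd and ordinary direct reclaim behaving as they do
now and only let a last-ditch priority 0 pass empty the small caches.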
-- 
Mel Gorman
SUSE Labs