Date: Thu, 23 Jun 2016 13:53:12 +0100
From: Mel Gorman
To: Michal Hocko
Cc: Andrew Morton, Linux-MM, Rik van Riel, Vlastimil Babka, Johannes Weiner, LKML
Subject: Re: [PATCH 15/27] mm, page_alloc: Consider dirtyable memory in terms of nodes
Message-ID: <20160623125312.GW1868@techsingularity.net>
In-Reply-To: <20160622142756.GH9208@dhcp22.suse.cz>

On Wed, Jun 22, 2016 at 04:27:57PM +0200, Michal Hocko wrote:
> > which can use it (e.g. vmalloc). I understand how this is both an
> > inherent problem of 32b with a larger high:low ratio and why it is hard
> > to at least pretend we can cope with it with a node-based approach, but
> > we should at least document it.
> >
> > A workaround would be to enable highmem_dirtyable_memory, which can lead
> > to premature OOM killer for some workloads AFAIR.
> [...]
> > >  static unsigned long highmem_dirtyable_memory(unsigned long total)
> > >  {
> > >  #ifdef CONFIG_HIGHMEM
> > > -	int node;
> > >  	unsigned long x = 0;
> > > -	int i;
> > > -
> > > -	for_each_node_state(node, N_HIGH_MEMORY) {
> > > -		for (i = 0; i < MAX_NR_ZONES; i++) {
> > > -			struct zone *z = &NODE_DATA(node)->node_zones[i];
> > >
> > > -			if (is_highmem(z))
> > > -				x += zone_dirtyable_memory(z);
> > > -		}
> > > -	}
>
> Hmm, I have just noticed that we have NR_ZONE_LRU_ANON resp.
> NR_ZONE_LRU_FILE so we can estimate the amount of highmem contribution
> to the global counters by the following or similar:
>
> 	for_each_node_state(node, N_HIGH_MEMORY) {
> 		for (i = 0; i < MAX_NR_ZONES; i++) {
> 			struct zone *z = &NODE_DATA(node)->node_zones[i];
>
> 			if (!is_highmem(z))
> 				continue;
>
> 			x += zone_page_state(z, NR_FREE_PAGES) +
> 			     zone_page_state(z, NR_ZONE_LRU_FILE) -
> 			     high_wmark_pages(z);
> 		}
> 	}
>
> The high wmark reduction would be to emulate the reserve. What do you
> think?

Agreed, with minor modifications. Went with this

	for_each_node_state(node, N_HIGH_MEMORY) {
		for (i = ZONE_NORMAL + 1; i < MAX_NR_ZONES; i++) {
			struct zone *z;

			if (!is_highmem_idx(i))
				continue;

			z = &NODE_DATA(node)->node_zones[i];
			x += zone_page_state(z, NR_FREE_PAGES) +
			     zone_page_state(z, NR_ZONE_LRU_FILE) -
			     high_wmark_pages(z);
		}
	}

-- 
Mel Gorman
SUSE Labs