From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 28 Jun 2016 17:12:17 +0900
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Vlastimil Babka
Cc: Andrew Morton, Rik van Riel, Johannes Weiner,
	mgorman@techsingularity.net, Laura Abbott, Minchan Kim,
	Marek Szyprowski, Michal Nazarewicz, "Aneesh Kumar K.V",
	Rui Teng, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 1/6] mm/page_alloc: recalculate some of zone threshold when on/offline memory
Message-ID: <20160628081217.GA19731@js1304-P5Q-DELUXE>
References: <1464243748-16367-1-git-send-email-iamjoonsoo.kim@lge.com>
	<1464243748-16367-2-git-send-email-iamjoonsoo.kim@lge.com>
	<921a37c6-b1e8-576f-095b-48e153bfd1d6@suse.cz>
In-Reply-To: <921a37c6-b1e8-576f-095b-48e153bfd1d6@suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
List-ID: <linux-kernel.vger.kernel.org>

On Fri, Jun 24, 2016 at 03:20:43PM +0200, Vlastimil Babka wrote:
> On 05/26/2016 08:22 AM, js1304@gmail.com wrote:
> >From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> >
> >Some of the zone thresholds depend on the number of managed pages in
> >the zone. When memory goes on/offline, that number changes and we
> >need to adjust the thresholds accordingly.
> >
> >This patch adds the recalculation at the appropriate places and
> >cleans up the related functions for better maintenance.
> 
> Can you be more specific about the user visible effect? Presumably
> it's not affecting just ZONE_CMA?

Yes, it's also affecting memory hotplug.

> I assume it's fixing the thresholds where only part of a node is
> onlined or offlined? Or are they currently wrong even when a whole
> node is onlined/offlined?

When memory hotplug happens, managed_pages changes and we need to
recalculate everything that is derived from managed_pages.
min_slab_pages and min_unmapped_pages were missed, so this patch
recalculates them as well.
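To make the dependency concrete, here is a small standalone userspace
sketch (illustrative only, not the kernel code; the zone size and the
ratio values are assumptions based on the documented sysctl defaults)
of what the helpers compute and why a managed_pages change must be
followed by a recalculation:

#include <stdio.h>

/* Illustrative stand-ins for the kernel's zone fields and sysctls. */
struct zone {
	unsigned long managed_pages;
	unsigned long min_unmapped_pages;
	unsigned long min_slab_pages;
};

static int sysctl_min_unmapped_ratio = 1;	/* percent; assumed default */
static int sysctl_min_slab_ratio = 5;		/* percent; assumed default */

/* Mirrors the patch: each threshold is a percentage of managed_pages. */
static void setup_min_unmapped_ratio(struct zone *zone)
{
	zone->min_unmapped_pages = (zone->managed_pages *
					sysctl_min_unmapped_ratio) / 100;
}

static void setup_min_slab_ratio(struct zone *zone)
{
	zone->min_slab_pages = (zone->managed_pages *
					sysctl_min_slab_ratio) / 100;
}

int main(void)
{
	/* Hypothetical zone: 4GB worth of 4KB pages. */
	struct zone z = { .managed_pages = 1UL << 20 };

	setup_min_unmapped_ratio(&z);
	setup_min_slab_ratio(&z);
	printf("before online: unmapped=%lu slab=%lu\n",
	       z.min_unmapped_pages, z.min_slab_pages);

	/* Memory online adds pages; without re-running the helpers the
	 * thresholds would silently keep the old, smaller values. */
	z.managed_pages += 1UL << 18;		/* +1GB onlined */
	setup_min_unmapped_ratio(&z);
	setup_min_slab_ratio(&z);
	printf("after  online: unmapped=%lu slab=%lu\n",
	       z.min_unmapped_pages, z.min_slab_pages);
	return 0;
}

Since the hotplug path already calls init_per_zone_wmark_min() on
online/offline, looping over the zones there and invoking the two
helpers keeps these thresholds in sync with managed_pages.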
Thanks.

> 
> (Sorry but I can't really orient myself in the maze of memory hotplug :(
> 
> Thanks,
> Vlastimil
> 
> >Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> >---
> > mm/page_alloc.c | 36 +++++++++++++++++++++++++++++-------
> > 1 file changed, 29 insertions(+), 7 deletions(-)
> >
> >diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >index d27e8b9..90e5a82 100644
> >--- a/mm/page_alloc.c
> >+++ b/mm/page_alloc.c
> >@@ -4874,6 +4874,8 @@ int local_memory_node(int node)
> > }
> > #endif
> >
> >+static void setup_min_unmapped_ratio(struct zone *zone);
> >+static void setup_min_slab_ratio(struct zone *zone);
> > #else /* CONFIG_NUMA */
> >
> > static void set_zonelist_order(void)
> >@@ -5988,9 +5990,8 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat)
> > 		zone->managed_pages = is_highmem_idx(j) ?
> > 						realsize : freesize;
> > #ifdef CONFIG_NUMA
> > 		zone->node = nid;
> >-		zone->min_unmapped_pages = (freesize*sysctl_min_unmapped_ratio)
> >-						/ 100;
> >-		zone->min_slab_pages = (freesize * sysctl_min_slab_ratio) / 100;
> >+		setup_min_unmapped_ratio(zone);
> >+		setup_min_slab_ratio(zone);
> > #endif
> > 		zone->name = zone_names[j];
> > 		spin_lock_init(&zone->lock);
> >@@ -6896,6 +6897,7 @@ int __meminit init_per_zone_wmark_min(void)
> > {
> > 	unsigned long lowmem_kbytes;
> > 	int new_min_free_kbytes;
> >+	struct zone *zone;
> >
> > 	lowmem_kbytes = nr_free_buffer_pages() * (PAGE_SIZE >> 10);
> > 	new_min_free_kbytes = int_sqrt(lowmem_kbytes * 16);
> >@@ -6913,6 +6915,14 @@ int __meminit init_per_zone_wmark_min(void)
> > 	setup_per_zone_wmarks();
> > 	refresh_zone_stat_thresholds();
> > 	setup_per_zone_lowmem_reserve();
> >+
> >+	for_each_zone(zone) {
> >+#ifdef CONFIG_NUMA
> >+		setup_min_unmapped_ratio(zone);
> >+		setup_min_slab_ratio(zone);
> >+#endif
> >+	}
> >+
> > 	return 0;
> > }
> > core_initcall(init_per_zone_wmark_min)
> >@@ -6954,6 +6964,12 @@ int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
> > }
> >
> > #ifdef CONFIG_NUMA
> >+static void setup_min_unmapped_ratio(struct zone *zone)
> >+{
> >+	zone->min_unmapped_pages = (zone->managed_pages *
> >+			sysctl_min_unmapped_ratio) / 100;
> >+}
> >+
> > int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
> > 	void __user *buffer, size_t *length, loff_t *ppos)
> > {
> >@@ -6965,11 +6981,17 @@ int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
> > 		return rc;
> >
> > 	for_each_zone(zone)
> >-		zone->min_unmapped_pages = (zone->managed_pages *
> >-				sysctl_min_unmapped_ratio) / 100;
> >+		setup_min_unmapped_ratio(zone);
> >+
> > 	return 0;
> > }
> >
> >+static void setup_min_slab_ratio(struct zone *zone)
> >+{
> >+	zone->min_slab_pages = (zone->managed_pages *
> >+			sysctl_min_slab_ratio) / 100;
> >+}
> >+
> > int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
> > 	void __user *buffer, size_t *length, loff_t *ppos)
> > {
> >@@ -6981,8 +7003,8 @@ int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
> > 		return rc;
> >
> > 	for_each_zone(zone)
> >-		zone->min_slab_pages = (zone->managed_pages *
> >-				sysctl_min_slab_ratio) / 100;
> >+		setup_min_slab_ratio(zone);
> >+
> > 	return 0;
> > }
> > #endif
> >
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: email@kvack.org