From: Vlastimil Babka <vbabka@suse.cz>
To: js1304@gmail.com, Andrew Morton <akpm@linux-foundation.org>
Cc: Rik van Riel <riel@redhat.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	mgorman@techsingularity.net, Laura Abbott <lauraa@codeaurora.org>,
	Minchan Kim <minchan@kernel.org>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Michal Nazarewicz <mina86@mina86.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>,
	Rui Teng <rui.teng@linux.vnet.ibm.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>
Subject: Re: [PATCH v3 1/6] mm/page_alloc: recalculate some of zone threshold when on/offline memory
Date: Fri, 24 Jun 2016 15:20:43 +0200
Message-ID: <921a37c6-b1e8-576f-095b-48e153bfd1d6@suse.cz>
In-Reply-To: <1464243748-16367-2-git-send-email-iamjoonsoo.kim@lge.com>

On 05/26/2016 08:22 AM, js1304@gmail.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> Some of the zone thresholds depend on the number of managed pages in
> the zone. When memory goes on/offline, that number changes and we
> need to adjust the thresholds accordingly.
>
> This patch adds the recalculation at the appropriate places and
> cleans up the related functions for better maintainability.

Can you be more specific about the user-visible effect? Presumably it's 
not limited to ZONE_CMA?
I assume it fixes the thresholds when only part of a node is onlined 
or offlined? Or are they currently wrong even when a whole node is 
onlined/offlined?
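
To check that I'm reading the problem correctly, here is a toy
user-space sketch of the arithmetic. The standalone program, the
min_unmapped() helper and the example numbers are all made up by me;
only the managed_pages * sysctl_min_unmapped_ratio / 100 formula is
taken from the patch:

#include <stdio.h>

/* Illustration only: mirrors the min_unmapped_pages formula from the patch. */
static unsigned long min_unmapped(unsigned long managed_pages,
				  unsigned int ratio_percent)
{
	return managed_pages * ratio_percent / 100;
}

int main(void)
{
	unsigned long boot_managed = 1000000;	/* zone pages at boot (made up) */
	unsigned long online_managed = 2000000;	/* after onlining more memory */
	unsigned int ratio = 1;			/* sysctl_min_unmapped_ratio */

	/* Without the patch, the value computed at boot is kept... */
	unsigned long stale = min_unmapped(boot_managed, ratio);

	/* ...while the current managed_pages would call for this. */
	unsigned long fresh = min_unmapped(online_managed, ratio);

	printf("kept threshold: %lu pages, expected: %lu pages\n",
	       stale, fresh);
	return 0;
}

If that's the scenario, I guess the user-visible effect would be
zone_reclaim working against thresholds sized for the boot-time zone
rather than the current one?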

(Sorry but I can't really orient myself in the maze of memory hotplug :(

Thanks,
Vlastimil

> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
>  mm/page_alloc.c | 36 +++++++++++++++++++++++++++++-------
>  1 file changed, 29 insertions(+), 7 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d27e8b9..90e5a82 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4874,6 +4874,8 @@ int local_memory_node(int node)
>  }
>  #endif
>
> +static void setup_min_unmapped_ratio(struct zone *zone);
> +static void setup_min_slab_ratio(struct zone *zone);
>  #else	/* CONFIG_NUMA */
>
>  static void set_zonelist_order(void)
> @@ -5988,9 +5990,8 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat)
>  		zone->managed_pages = is_highmem_idx(j) ? realsize : freesize;
>  #ifdef CONFIG_NUMA
>  		zone->node = nid;
> -		zone->min_unmapped_pages = (freesize*sysctl_min_unmapped_ratio)
> -						/ 100;
> -		zone->min_slab_pages = (freesize * sysctl_min_slab_ratio) / 100;
> +		setup_min_unmapped_ratio(zone);
> +		setup_min_slab_ratio(zone);
>  #endif
>  		zone->name = zone_names[j];
>  		spin_lock_init(&zone->lock);
> @@ -6896,6 +6897,7 @@ int __meminit init_per_zone_wmark_min(void)
>  {
>  	unsigned long lowmem_kbytes;
>  	int new_min_free_kbytes;
> +	struct zone *zone;
>
>  	lowmem_kbytes = nr_free_buffer_pages() * (PAGE_SIZE >> 10);
>  	new_min_free_kbytes = int_sqrt(lowmem_kbytes * 16);
> @@ -6913,6 +6915,14 @@ int __meminit init_per_zone_wmark_min(void)
>  	setup_per_zone_wmarks();
>  	refresh_zone_stat_thresholds();
>  	setup_per_zone_lowmem_reserve();
> +
> +	for_each_zone(zone) {
> +#ifdef CONFIG_NUMA
> +		setup_min_unmapped_ratio(zone);
> +		setup_min_slab_ratio(zone);
> +#endif
> +	}
> +
>  	return 0;
>  }
>  core_initcall(init_per_zone_wmark_min)
> @@ -6954,6 +6964,12 @@ int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write,
>  }
>
>  #ifdef CONFIG_NUMA
> +static void setup_min_unmapped_ratio(struct zone *zone)
> +{
> +	zone->min_unmapped_pages = (zone->managed_pages *
> +			sysctl_min_unmapped_ratio) / 100;
> +}
> +
>  int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
>  	void __user *buffer, size_t *length, loff_t *ppos)
>  {
> @@ -6965,11 +6981,17 @@ int sysctl_min_unmapped_ratio_sysctl_handler(struct ctl_table *table, int write,
>  		return rc;
>
>  	for_each_zone(zone)
> -		zone->min_unmapped_pages = (zone->managed_pages *
> -				sysctl_min_unmapped_ratio) / 100;
> +		setup_min_unmapped_ratio(zone);
> +
>  	return 0;
>  }
>
> +static void setup_min_slab_ratio(struct zone *zone)
> +{
> +	zone->min_slab_pages = (zone->managed_pages *
> +			sysctl_min_slab_ratio) / 100;
> +}
> +
>  int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
>  	void __user *buffer, size_t *length, loff_t *ppos)
>  {
> @@ -6981,8 +7003,8 @@ int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write,
>  		return rc;
>
>  	for_each_zone(zone)
> -		zone->min_slab_pages = (zone->managed_pages *
> -				sysctl_min_slab_ratio) / 100;
> +		setup_min_slab_ratio(zone);
> +
>  	return 0;
>  }
>  #endif
>
