From: Vlastimil Babka <vbabka@suse.cz>
To: Michal Hocko <mhocko@kernel.org>, linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@suse.de>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Jerome Glisse <jglisse@redhat.com>,
	Reza Arbab <arbab@linux.vnet.ibm.com>,
	Yasuaki Ishimatsu <yasu.isimatu@gmail.com>,
	qiuxishi@huawei.com, Kani Toshimitsu <toshi.kani@hpe.com>,
	slaoub@gmail.com, Joonsoo Kim <js1304@gmail.com>,
	Andi Kleen <ak@linux.intel.com>,
	David Rientjes <rientjes@google.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Dan Williams <dan.j.williams@gmail.com>,
	Heiko Carstens <heiko.carstens@de.ibm.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>,
	Martin Schwidefsky <schwidefsky@de.ibm.com>
Subject: Re: [PATCH v3 6/9] mm, memory_hotplug: do not associate hotadded memory to zones until online
Date: Thu, 20 Apr 2017 10:25:27 +0200
Message-ID: <49b6c3e2-0e68-b77e-31d6-f589d3b4822e@suse.cz>
In-Reply-To: <20170410162547.GM4618@dhcp22.suse.cz>

On 04/10/2017 06:25 PM, Michal Hocko wrote:
> This contains two minor fixes spotted based on testing by Igor Mammedov.
> ---
> From d829579cc7061255f818f9aeaa3aa2cd82fec75a Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@suse.com>
> Date: Wed, 29 Mar 2017 16:07:00 +0200
> Subject: [PATCH] mm, memory_hotplug: do not associate hotadded memory to zones
>  until online
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
> 
> The current memory hotplug implementation relies on having all the
> struct pages associated with a zone/node during the physical hotplug phase
> (arch_add_memory->__add_pages->__add_section->__add_zone). In the vast
> majority of cases this means that they are added to ZONE_NORMAL. This
> has been so since 9d99aaa31f59 ("[PATCH] x86_64: Support memory hotadd
> without sparsemem") and it wasn't a big deal back then because movable
> onlining didn't exist yet.
> 
> Much later memory hotplug wanted to (ab)use ZONE_MOVABLE for movable
> onlining 511c2aba8f07 ("mm, memory-hotplug: dynamic configure movable
> memory and portion memory") and then things got more complicated. Rather
> than reconsidering the zone association which was no longer needed
> (because the memory hotplug already depended on SPARSEMEM) a convoluted
> semantic of zone shifting has been developed. Only the currently last
> memblock or the one adjacent to the zone_movable can be onlined movable.
> This essentially means that the online type changes as the new memblocks
> are added.
> 
> Let's simulate memory hot online manually
> Normal Movable
> 
> /sys/devices/system/memory/memory32/valid_zones:Normal
> /sys/devices/system/memory/memory33/valid_zones:Normal Movable
> 
> /sys/devices/system/memory/memory32/valid_zones:Normal
> /sys/devices/system/memory/memory33/valid_zones:Normal
> /sys/devices/system/memory/memory34/valid_zones:Normal Movable
> 
> /sys/devices/system/memory/memory32/valid_zones:Normal
> /sys/devices/system/memory/memory33/valid_zones:Normal Movable
> /sys/devices/system/memory/memory34/valid_zones:Movable Normal

Commands seem to be missing above?

> This is an awkward semantic because an udev event is sent as soon as the
> block is onlined and an udev handler might want to online it based on
> some policy (e.g. association with a node) but it will inherently race
> with new blocks showing up.
> 
> This patch changes the physical online phase to not associate pages
> with any zone at all. All the pages are just marked reserved and wait
> for the onlining phase to be associated with the zone as per the online
> request. There are only two requirements:
> 	- existing ZONE_NORMAL and ZONE_MOVABLE cannot overlap
> 	- ZONE_NORMAL precedes ZONE_MOVABLE in physical addresses
> The latter is not an inherent requirement and can be changed in the
> future; it preserves the current behavior and keeps the code slightly
> simpler for now.
> 
> This means that the same physical online steps as above will lead to the
> following state:
> Normal Movable
> 
> /sys/devices/system/memory/memory32/valid_zones:Normal Movable
> /sys/devices/system/memory/memory33/valid_zones:Normal Movable
> 
> /sys/devices/system/memory/memory32/valid_zones:Normal Movable
> /sys/devices/system/memory/memory33/valid_zones:Normal Movable
> /sys/devices/system/memory/memory34/valid_zones:Normal Movable
> 
> /sys/devices/system/memory/memory32/valid_zones:Normal Movable
> /sys/devices/system/memory/memory33/valid_zones:Normal Movable
> /sys/devices/system/memory/memory34/valid_zones:Movable

Ditto.

> Implementation:
> The current move_pfn_range is reimplemented to check the above
> requirements (allow_online_pfn_range) and then updates the respective
> zone (move_pfn_range_to_zone), the pgdat and links all the pages in the
> pfn range with the zone/node. __add_pages is updated to not require the
> zone and only initializes sections in the range. This allowed
> simplification of the arch_add_memory code (s390 could get rid of
> quite some code).
> 
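(IOW the interface change is essentially the following - my reconstruction
from the description, not the quoted hunks:

-int __add_pages(struct zone *zone, unsigned long phys_start_pfn,
-		unsigned long nr_pages);
+int __add_pages(int nid, unsigned long phys_start_pfn,
+		unsigned long nr_pages);

so arch_add_memory() no longer has to look up a zone up front.)
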
> devm_memremap_pages is the only user of arch_add_memory which relies
> on the zone association because it only hooks into the memory hotplug
> half way. It uses it to associate the new memory with ZONE_DEVICE
> but doesn't allow it to be {on,off}lined via sysfs. This means that this
> particular code path has to call move_pfn_range_to_zone explicitly.
> 
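
So IIUC kernel/memremap.c now does the ZONE_DEVICE association by hand right
after arch_add_memory(), roughly like this (a sketch based on the description
above; the hunk is not quoted here, so the exact arguments are my guess):

	error = arch_add_memory(nid, align_start, align_size, true);
	if (!error)
		/* associate the freshly added range with ZONE_DEVICE */
		move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
				align_start >> PAGE_SHIFT,
				align_size >> PAGE_SHIFT);
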
> The original zone shifting code is kept in place and will be removed in
> the follow up patch for an easier review.
> 
> Changes since v1
> - we have to associate the page with the node early (in __add_section),
>   because pfn_to_node depends on struct page containing this
>   information - based on testing by Reza Arbab
> - resize_{zone,pgdat}_range has to check whether they are populated -
>   Reza Arbab
> - fix devm_memremap_pages to use pfn rather than physical address -
>   Jérôme Glisse
> - move_pfn_range has to check for intersection with zone_movable rather
>   than to rely on allow_online_pfn_range(MMOP_ONLINE_MOVABLE) for
>   MMOP_ONLINE_KEEP
> 
> Changes since v2
> - fix show_valid_zones nr_pages calculation
> - allow_online_pfn_range has to check managed pages rather than present
> 
> Cc: Dan Williams <dan.j.williams@gmail.com>
> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
> Cc: linux-arch@vger.kernel.org
> Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # For s390 bits
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  arch/ia64/mm/init.c            |   9 +-
>  arch/powerpc/mm/mem.c          |  10 +-
>  arch/s390/mm/init.c            |  30 +-----
>  arch/sh/mm/init.c              |   8 +-
>  arch/x86/mm/init_32.c          |   5 +-
>  arch/x86/mm/init_64.c          |   9 +-
>  drivers/base/memory.c          |  52 ++++++-----
>  include/linux/memory_hotplug.h |  13 +--
>  include/linux/mmzone.h         |  14 +++
>  kernel/memremap.c              |   4 +
>  mm/memory_hotplug.c            | 201 +++++++++++++++++++++++++----------------
>  mm/sparse.c                    |   3 +-
>  12 files changed, 186 insertions(+), 172 deletions(-)

...

> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -533,6 +533,20 @@ static inline bool zone_is_empty(struct zone *zone)
>  }
>  
>  /*
> + * Return true if [start_pfn, start_pfn + nr_pages) range has a non-mpty


							       non-empty

> + * intersection with the given zone
> + */
> +static inline bool zone_intersects(struct zone *zone,
> +		unsigned long start_pfn, unsigned long nr_pages)
> +{

I'm looking at your current mmotm tree branch, which looks like this:

+ * Return true if [start_pfn, start_pfn + nr_pages) range has a non-mpty
+ * intersection with the given zone
+ */
+static inline bool zone_intersects(struct zone *zone,
+               unsigned long start_pfn, unsigned long nr_pages)
+{
+       if (zone_is_empty(zone))
+               return false;
+       if (zone->zone_start_pfn <= start_pfn && start_pfn < zone_end_pfn(zone))
+               return true;
+       if (start_pfn + nr_pages > zone->zone_start_pfn)
+               return true;

A false positive is possible here, when start_pfn >= zone_end_pfn(zone)?
(See the sketch below the pasted code.)

+       return false;
+}
+
+/*
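
Something like the following would avoid that, I think (just a sketch, not
tested):

	if (zone_is_empty(zone))
		return false;
	/* no intersection if the range is entirely above or below the zone */
	if (start_pfn >= zone_end_pfn(zone) ||
	    start_pfn + nr_pages <= zone->zone_start_pfn)
		return false;
	return true;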

...

> @@ -1029,39 +1018,114 @@ static void node_states_set_node(int node, struct memory_notify *arg)
>  	node_set_state(node, N_MEMORY);
>  }
>  
> -bool zone_can_shift(unsigned long pfn, unsigned long nr_pages,
> -		   enum zone_type target, int *zone_shift)
> +bool allow_online_pfn_range(int nid, unsigned long pfn, unsigned long nr_pages, int online_type)
>  {
> -	struct zone *zone = page_zone(pfn_to_page(pfn));
> -	enum zone_type idx = zone_idx(zone);
> -	int i;
> +	struct pglist_data *pgdat = NODE_DATA(nid);
> +	struct zone *movable_zone = &pgdat->node_zones[ZONE_MOVABLE];
> +	struct zone *normal_zone =  &pgdat->node_zones[ZONE_NORMAL];
>  
> -	*zone_shift = 0;
> +	/*
> +	 * TODO there shouldn't be any inherent reason to have ZONE_NORMAL
> +	 * physically before ZONE_MOVABLE. All we need is they do not
> +	 * overlap. Historically we didn't allow ZONE_NORMAL after ZONE_MOVABLE
> +	 * though so let's stick with it for simplicity for now.
> +	 * TODO make sure we do not overlap with ZONE_DEVICE

Is this last TODO a blocker, unlike the others?
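
Also, to check that I read the two requirements correctly, I assume the rest
of the function boils down to something along these lines (my sketch, since
the hunk is cut off here; not necessarily the actual code):

	if (online_type == MMOP_ONLINE_KERNEL) {
		if (zone_is_empty(movable_zone))
			return true;
		/* normal onlining must stay below the movable zone */
		return movable_zone->zone_start_pfn >= pfn + nr_pages;
	} else if (online_type == MMOP_ONLINE_MOVABLE) {
		/* movable onlining must not go below the end of the normal zone */
		return zone_end_pfn(normal_zone) <= pfn;
	}

	/* MMOP_ONLINE_KEEP inherits the current zone association, always allow it */
	return online_type == MMOP_ONLINE_KEEP;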

...

> @@ -1074,29 +1138,16 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_typ
>  	int nid;
>  	int ret;
>  	struct memory_notify arg;
> -	int zone_shift = 0;
>  
> -	/*
> -	 * This doesn't need a lock to do pfn_to_page().
> -	 * The section can't be removed here because of the
> -	 * memory_block->state_mutex.
> -	 */
> -	zone = page_zone(pfn_to_page(pfn));
> -
> -	if ((zone_idx(zone) > ZONE_NORMAL ||
> -	    online_type == MMOP_ONLINE_MOVABLE) &&
> -	    !can_online_high_movable(pfn_to_nid(pfn)))
> +	nid = pfn_to_nid(pfn);
> +	if (!allow_online_pfn_range(nid, pfn, nr_pages, online_type))
>  		return -EINVAL;
>  
> -	if (online_type == MMOP_ONLINE_KERNEL) {
> -		if (!zone_can_shift(pfn, nr_pages, ZONE_NORMAL, &zone_shift))
> -			return -EINVAL;
> -	} else if (online_type == MMOP_ONLINE_MOVABLE) {
> -		if (!zone_can_shift(pfn, nr_pages, ZONE_MOVABLE, &zone_shift))
> -			return -EINVAL;
> -	}
> +	if (online_type == MMOP_ONLINE_MOVABLE && !can_online_high_movable(nid))
> +		return -EINVAL;
>  
> -	zone = move_pfn_range(zone_shift, pfn, pfn + nr_pages);
> +	/* associate pfn range with the zone */
> +	zone = move_pfn_range(online_type, nid, pfn, nr_pages);
>  	if (!zone)
>  		return -EINVAL;

Nit: This !zone currently cannot happen.
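
(AFAICS move_pfn_range() now always picks either the kernel or the movable
zone and returns it, roughly along these lines - my paraphrase of the new
code, not the quoted hunk:

	struct pglist_data *pgdat = NODE_DATA(nid);
	struct zone *zone = &pgdat->node_zones[ZONE_NORMAL];

	/* ONLINE_MOVABLE, or ONLINE_KEEP when the range already touches movable */
	if (online_type == MMOP_ONLINE_MOVABLE ||
	    (online_type == MMOP_ONLINE_KEEP &&
	     zone_intersects(&pgdat->node_zones[ZONE_MOVABLE], pfn, nr_pages)))
		zone = &pgdat->node_zones[ZONE_MOVABLE];

	move_pfn_range_to_zone(zone, pfn, nr_pages);
	return zone;

so returning NULL indeed doesn't seem possible.)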

>  
> @@ -1104,8 +1155,6 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_typ
>  	arg.nr_pages = nr_pages;
>  	node_states_check_changes_online(nr_pages, zone, &arg);
>  
> -	nid = zone_to_nid(zone);
> -
>  	ret = memory_notify(MEM_GOING_ONLINE, &arg);
>  	ret = notifier_to_errno(ret);
>  	if (ret)
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 6903c8fc3085..d75407882598 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -686,10 +686,9 @@ static void free_map_bootmem(struct page *memmap)
>   * set.  If this is <=0, then that means that the passed-in
>   * map was not consumed and must be freed.
>   */
> -int __meminit sparse_add_one_section(struct zone *zone, unsigned long start_pfn)
> +int __meminit sparse_add_one_section(struct pglist_data *pgdat, unsigned long start_pfn)
>  {
>  	unsigned long section_nr = pfn_to_section_nr(start_pfn);
> -	struct pglist_data *pgdat = zone->zone_pgdat;
>  	struct mem_section *ms;
>  	struct page *memmap;
>  	unsigned long *usemap;
> 
