* [PATCH v2] mm/memory_hotplug: Fix try_offline_node()
From: David Hildenbrand @ 2019-11-01 22:11 UTC
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Tang Chen, Greg Kroah-Hartman,
	Rafael J. Wysocki, Andrew Morton, Keith Busch, Jiri Olsa,
	Peter Zijlstra (Intel),
	Jani Nikula, Nayna Jain, Michal Hocko, Oscar Salvador,
	Stephen Rothwell, Dan Williams, Pavel Tatashin

try_offline_node() is pretty much broken right now:
- The node span is updated when onlining memory, not when adding it. We
  ignore memory that was never onlined. Bad.
- We touch memmaps that might contain garbage. pfn_to_nid(pfn) can easily
  trigger a kernel panic. Bad for memory that is offline, but also bad
  for subsection hotadd with ZONE_DEVICE, where the memmap of the first
  PFN of a section might contain garbage.
- Sections belonging to mixed nodes are not properly considered.

As memory blocks might belong to multiple nodes, we would have to walk all
pageblocks (or at least subsections) within present sections. However,
we don't have a way to identify whether a memmap that is not online was
initialized (relevant for ZONE_DEVICE). This makes things more complicated.

Luckily, we can piggyback on the node span and the nid stored in
memory blocks. Currently, the node span is grown when calling
move_pfn_range_to_zone() - e.g., when onlining memory, and shrunk when
removing memory, before calling try_offline_node(). Sysfs links are
created via link_mem_sections(), e.g., during boot or when adding memory.

If the node still spans memory or if any memory block belongs to the
nid, we don't set the node offline. As memory blocks that span multiple
nodes cannot get offlined, the nid stored in memory blocks is reliable
enough (for such online memory blocks, the node still spans the memory).

Note: We will soon stop shrinking the ZONE_DEVICE zone and the node span
when removing ZONE_DEVICE memory to fix similar issues (access of garbage
memmaps) - until we have a reliable way to identify whether these memmaps
were properly initialized. This implies that, later, once a node has had
ZONE_DEVICE memory, we won't be able to set that node offline -
which should be acceptable.

Since commit f1dd2cd13c4b ("mm, memory_hotplug: do not associate hotadded
memory to zones until online"), memory that is added is not associated
with a zone/node (its memmap is not initialized). The introducing
commit 60a5a19e7419 ("memory-hotplug: remove sysfs file of node") already
missed that we could have multiple nodes for a section and that the
zone/node span is updated when onlining pages, not when adding them.

I tested this by hotplugging two DIMMs to a memory-less and cpu-less NUMA
node. The node is properly onlined when adding the DIMMs. When removing
the DIMMs, the node is properly offlined.

Fixes: 60a5a19e7419 ("memory-hotplug: remove sysfs file of node")
Fixes: f1dd2cd13c4b ("mm, memory_hotplug: do not associate hotadded memory to zones until online") # visible after d0dc12e86b319
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Nayna Jain <nayna@linux.ibm.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---

v1 -> v2:
- Drop sysfs handling, simplify, and add a comment
- Make sure to include last section fully

We stop shrinking the ZONE_DEVICE zone after the following patch:
 [PATCH v6 04/10] mm/memory_hotplug: Don't access uninitialized memmaps
 in shrink_zone_span()
This implies the above note: ZONE_DEVICE memory on a node will block that
node from getting offlined until we have sorted out how to properly shrink
the ZONE_DEVICE zone.

This patch is especially important for:
 [PATCH v6 05/10] mm/memory_hotplug: Shrink zones when offlining
 memory
The BUG fixed by this patch becomes easier to observe when memory
is offlined (in contrast to when memory was never onlined
before).

As both patches are stable fixes and have been in next/master for a long
time, we should probably pull this patch in front of both and also
backport it, at least to
 Cc: stable@vger.kernel.org # v4.13+
I have not checked yet whether there are real blockers for that; I don't
expect any.

---
 mm/memory_hotplug.c | 45 +++++++++++++++++++++++++++++----------------
 1 file changed, 29 insertions(+), 16 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0140c20837b6..b5f696491577 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1634,6 +1634,18 @@ static int check_cpu_on_node(pg_data_t *pgdat)
 	return 0;
 }
 
+static int check_no_memblock_for_node_cb(struct memory_block *mem, void *arg)
+{
+	int nid = *(int *)arg;
+
+	/*
+	 * If a memory block belongs to multiple nodes, the stored nid is not
+	 * reliable. However, such blocks are always online (e.g., cannot get
+	 * offlined) and, therefore, are still spanned by the node.
+	 */
+	return mem->nid == nid ? -EEXIST : 0;
+}
+
 /**
  * try_offline_node
  * @nid: the node ID
@@ -1645,26 +1657,27 @@ static int check_cpu_on_node(pg_data_t *pgdat)
  */
 void try_offline_node(int nid)
 {
+	const unsigned long end_section_nr = __highest_present_section_nr + 1;
 	pg_data_t *pgdat = NODE_DATA(nid);
-	unsigned long start_pfn = pgdat->node_start_pfn;
-	unsigned long end_pfn = start_pfn + pgdat->node_spanned_pages;
-	unsigned long pfn;
-
-	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		unsigned long section_nr = pfn_to_section_nr(pfn);
-
-		if (!present_section_nr(section_nr))
-			continue;
+	int rc;
 
-		if (pfn_to_nid(pfn) != nid)
-			continue;
+	/*
+	 * If the node still spans pages (especially ZONE_DEVICE), don't
+	 * offline it. A node spans memory after move_pfn_range_to_zone(),
+	 * e.g., after the memory block was onlined.
+	 */
+	if (pgdat->node_spanned_pages)
+		return;
 
-		/*
-		 * some memory sections of this node are not removed, and we
-		 * can't offline node now.
-		 */
+	/*
+	 * Especially offline memory blocks might not be spanned by the
+	 * node. They will get spanned by the node once they get onlined.
+	 * However, they link to the node in sysfs and can get onlined later.
+	 */
+	rc = walk_memory_blocks(0, PFN_PHYS(section_nr_to_pfn(end_section_nr)),
+				&nid, check_no_memblock_for_node_cb);
+	if (rc)
 		return;
-	}
 
 	if (check_cpu_on_node(pgdat))
 		return;
-- 
2.21.0



* Re: [PATCH v2] mm/memory_hotplug: Fix try_offline_node()
From: David Hildenbrand @ 2019-11-02 11:23 UTC
  To: linux-kernel
  Cc: linux-mm, Tang Chen, Greg Kroah-Hartman, Rafael J. Wysocki,
	Andrew Morton, Keith Busch, Jiri Olsa, Peter Zijlstra (Intel),
	Jani Nikula, Nayna Jain, Michal Hocko, Oscar Salvador,
	Stephen Rothwell, Dan Williams, Pavel Tatashin

On 01.11.19 23:11, David Hildenbrand wrote:
> [...]
> +	/*
> +	 * Especially offline memory blocks might not be spanned by the
> +	 * node. They will get spanned by the node once they get onlined.
> +	 * However, they link to the node in sysfs and can get onlined later.
> +	 */
> +	rc = walk_memory_blocks(0, PFN_PHYS(section_nr_to_pfn(end_section_nr)),
> +				&nid, check_no_memblock_for_node_cb);

walk_memory_blocks() might be fairly inefficient for this use case (as it
uses subsys_find_device_by_id() for every possible memory block, which is
a list scan).

I guess I will introduce a walk_each_memory_block() that uses
bus_for_each_dev() under the hood, as sketched below.
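
Roughly something like the following - just a sketch, the final name,
signature, and placement are not decided yet; memory_subsys and
to_memory_block() are the existing internals of drivers/base/memory.c:

struct walk_each_memory_block_arg {
	walk_memory_blocks_func_t func;
	void *arg;
};

static int walk_each_memory_block_cb(struct device *dev, void *data)
{
	struct walk_each_memory_block_arg *warg = data;
	struct memory_block *mem = to_memory_block(dev);

	/* Forward to the caller-supplied per-memory-block callback. */
	return warg->func(mem, warg->arg);
}

/*
 * Visit each registered memory block device exactly once by iterating
 * the memory subsystem bus, instead of doing a device lookup for every
 * possible memory block id like walk_memory_blocks() does.
 */
int walk_each_memory_block(void *arg, walk_memory_blocks_func_t func)
{
	struct walk_each_memory_block_arg warg = {
		.func = func,
		.arg = arg,
	};

	return bus_for_each_dev(&memory_subsys, NULL, &warg,
				walk_each_memory_block_cb);
}

try_offline_node() could then simply do

	rc = walk_each_memory_block(&nid, check_no_memblock_for_node_cb);

instead of computing the end section and going via walk_memory_blocks().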

Sorry for the noise :)


-- 

Thanks,

David / dhildenb


