linux-mm.kvack.org archive mirror
* [PATCH v2 0/3] mm/shuffle: fix and cleanups
@ 2020-06-19 12:59 David Hildenbrand
  2020-06-19 12:59 ` [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps David Hildenbrand
                   ` (2 more replies)
  0 siblings, 3 replies; 30+ messages in thread
From: David Hildenbrand @ 2020-06-19 12:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Alexander Duyck, Andrew Morton,
	Dan Williams, Huang Ying, Johannes Weiner, Mel Gorman,
	Michal Hocko, Minchan Kim, Wei Yang

Patch #1 is a fix for overlapping zones and offline sections. Patch #2
documents why we have to shuffle when onlining memory via memory hotplug.
Patch #3 removes dynamic reconfiguration, which is currently dead code.

v1 -> v2:
- Replace "mm/memory_hotplug: don't shuffle complete zone when onlining
  memory" by "mm/memory_hotplug: document why shuffle_zone() is relevant"
- "mm/shuffle: remove dynamic reconfiguration"
-- Add details why autodetection is not implemented

David Hildenbrand (3):
  mm/shuffle: don't move pages between zones and don't read garbage
    memmaps
  mm/memory_hotplug: document why shuffle_zone() is relevant
  mm/shuffle: remove dynamic reconfiguration

 mm/memory_hotplug.c |  8 ++++++++
 mm/shuffle.c        | 46 +++++++++++----------------------------------
 mm/shuffle.h        | 17 -----------------
 3 files changed, 19 insertions(+), 52 deletions(-)

-- 
2.26.2




* [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-19 12:59 [PATCH v2 0/3] mm/shuffle: fix and cleanups David Hildenbrand
@ 2020-06-19 12:59 ` David Hildenbrand
  2020-06-20  1:37   ` Williams, Dan J
  2020-06-22  8:26   ` Wei Yang
  2020-06-19 12:59 ` [PATCH v2 2/3] mm/memory_hotplug: document why shuffle_zone() is relevant David Hildenbrand
  2020-06-19 12:59 ` [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration David Hildenbrand
  2 siblings, 2 replies; 30+ messages in thread
From: David Hildenbrand @ 2020-06-19 12:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Michal Hocko, stable, Andrew Morton,
	Johannes Weiner, Minchan Kim, Huang Ying, Wei Yang, Mel Gorman

Especially with memory hotplug, we can have offline sections (with a
garbage memmap) and overlapping zones. We have to make sure to only
touch initialized memmaps (online sections managed by the buddy) and that
the zone matches, to not move pages between zones.

To test if this can actually happen, I added a simple
	BUG_ON(page_zone(page_i) != page_zone(page_j));
right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
onlining the first memory block "online_movable" and the second memory
block "online_kernel", it will trigger the BUG, as both zones (NORMAL
and MOVABLE) overlap.
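
For context, a reconstructed sketch of where that debug check sat (it was
not part of the patch itself; page_i/page_j match the names used in
__shuffle_zone() below):

        page_i = shuffle_valid_page(i, order);
        ...
        page_j = shuffle_valid_page(j, order);
        ...
        /* debug only: fired with the hotplug setup described above */
        BUG_ON(page_zone(page_i) != page_zone(page_j));
        /* ...the existing code then swaps page_i/page_j on the freelist... */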

This might result in all kinds of weird situations (e.g., double
allocations, list corruptions, unmovable allocations ending up in the
movable zone).

Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: stable@vger.kernel.org # v5.2+
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/shuffle.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/shuffle.c b/mm/shuffle.c
index 44406d9977c77..dd13ab851b3ee 100644
--- a/mm/shuffle.c
+++ b/mm/shuffle.c
@@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
  * For two pages to be swapped in the shuffle, they must be free (on a
  * 'free_area' lru), have the same order, and have the same migratetype.
  */
-static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
+static struct page * __meminit shuffle_valid_page(struct zone *zone,
+						  unsigned long pfn, int order)
 {
-	struct page *page;
+	struct page *page = pfn_to_online_page(pfn);
 
 	/*
 	 * Given we're dealing with randomly selected pfns in a zone we
 	 * need to ask questions like...
 	 */
 
-	/* ...is the pfn even in the memmap? */
-	if (!pfn_valid_within(pfn))
+	/* ... is the page managed by the buddy? */
+	if (!page)
 		return NULL;
 
-	/* ...is the pfn in a present section or a hole? */
-	if (!pfn_in_present_section(pfn))
+	/* ... is the page assigned to the same zone? */
+	if (page_zone(page) != zone)
 		return NULL;
 
 	/* ...is the page free and currently on a free_area list? */
-	page = pfn_to_page(pfn);
 	if (!PageBuddy(page))
 		return NULL;
 
@@ -123,7 +123,7 @@ void __meminit __shuffle_zone(struct zone *z)
 		 * page_j randomly selected in the span @zone_start_pfn to
 		 * @spanned_pages.
 		 */
-		page_i = shuffle_valid_page(i, order);
+		page_i = shuffle_valid_page(z, i, order);
 		if (!page_i)
 			continue;
 
@@ -137,7 +137,7 @@ void __meminit __shuffle_zone(struct zone *z)
 			j = z->zone_start_pfn +
 				ALIGN_DOWN(get_random_long() % z->spanned_pages,
 						order_pages);
-			page_j = shuffle_valid_page(j, order);
+			page_j = shuffle_valid_page(z, j, order);
 			if (page_j && page_j != page_i)
 				break;
 		}
-- 
2.26.2




* [PATCH v2 2/3] mm/memory_hotplug: document why shuffle_zone() is relevant
  2020-06-19 12:59 [PATCH v2 0/3] mm/shuffle: fix and cleanups David Hildenbrand
  2020-06-19 12:59 ` [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps David Hildenbrand
@ 2020-06-19 12:59 ` David Hildenbrand
  2020-06-20  1:41   ` Dan Williams
  2020-06-22 15:32   ` Michal Hocko
  2020-06-19 12:59 ` [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration David Hildenbrand
  2 siblings, 2 replies; 30+ messages in thread
From: David Hildenbrand @ 2020-06-19 12:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Alexander Duyck,
	Dan Williams, Michal Hocko

It's not completely obvious why we have to shuffle the complete zone, as
some sort of shuffling is already performed when onlining pages via
__free_one_page(), placing MAX_ORDER-1 pages either to the head or the tail
of the freelist. Let's document why we have to shuffle the complete zone
when exposing larger, contiguous physical memory areas to the buddy.
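
For illustration, roughly what that freeing-path shuffling looks like, using
the helpers from mm/shuffle.h; the list manipulation is paraphrased, not the
exact buddy code:

        /* in __free_one_page(), once the page reached its final order */
        if (is_shuffle_order(order) && shuffle_pick_tail())
                list_add_tail(&page->lru, &area->free_list[migratetype]);
        else
                list_add(&page->lru, &area->free_list[migratetype]);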

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory_hotplug.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9b34e03e730a4..a0d81d404823d 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -822,6 +822,14 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 	zone->zone_pgdat->node_present_pages += onlined_pages;
 	pgdat_resize_unlock(zone->zone_pgdat, &flags);
 
+	/*
+	 * When exposing larger, physically contiguous memory areas to the
+	 * buddy, shuffling in the buddy (when freeing onlined pages, putting
+	 * them either to the head or the tail of the freelist) is only helpful
+	 * for mainining the shuffle, but not for creating the initial shuffle.
+	 * Shuffle the whole zone to make sure the just onlined pages are
+	 * properly distributed across the whole freelist.
+	 */
 	shuffle_zone(zone);
 
 	node_states_set_node(nid, &arg);
-- 
2.26.2




* [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration
  2020-06-19 12:59 [PATCH v2 0/3] mm/shuffle: fix and cleanups David Hildenbrand
  2020-06-19 12:59 ` [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps David Hildenbrand
  2020-06-19 12:59 ` [PATCH v2 2/3] mm/memory_hotplug: document why shuffle_zone() is relevant David Hildenbrand
@ 2020-06-19 12:59 ` David Hildenbrand
  2020-06-20  1:49   ` Dan Williams
                     ` (2 more replies)
  2 siblings, 3 replies; 30+ messages in thread
From: David Hildenbrand @ 2020-06-19 12:59 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Johannes Weiner,
	Michal Hocko, Minchan Kim, Huang Ying, Wei Yang, Mel Gorman,
	Dan Williams

Commit e900a918b098 ("mm: shuffle initial free memory to improve
memory-side-cache utilization") promised "autodetection of a
memory-side-cache (to be added in a follow-on patch)" over a year ago.

The original series included patches [1]; however, they were dropped
during review [2] to be followed up later.

Due to lack of platforms that publish an HMAT, autodetection is currently
not implemented. However, manual activation is actively used [3]. Let's
simplify for now and re-add when really (ever?) needed.

[1] https://lkml.kernel.org/r/154510700291.1941238.817190985966612531.stgit@dwillia2-desk3.amr.corp.intel.com
[2] https://lkml.kernel.org/r/154690326478.676627.103843791978176914.stgit@dwillia2-desk3.amr.corp.intel.com
[3] https://lkml.kernel.org/r/CAPcyv4irwGUU2x+c6b4L=KbB1dnasNKaaZd6oSpYjL9kfsnROQ@mail.gmail.com

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/shuffle.c | 28 ++--------------------------
 mm/shuffle.h | 17 -----------------
 2 files changed, 2 insertions(+), 43 deletions(-)

diff --git a/mm/shuffle.c b/mm/shuffle.c
index dd13ab851b3ee..9b5cd4b004b0f 100644
--- a/mm/shuffle.c
+++ b/mm/shuffle.c
@@ -10,33 +10,11 @@
 #include "shuffle.h"
 
 DEFINE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
-static unsigned long shuffle_state __ro_after_init;
-
-/*
- * Depending on the architecture, module parameter parsing may run
- * before, or after the cache detection. SHUFFLE_FORCE_DISABLE prevents,
- * or reverts the enabling of the shuffle implementation. SHUFFLE_ENABLE
- * attempts to turn on the implementation, but aborts if it finds
- * SHUFFLE_FORCE_DISABLE already set.
- */
-__meminit void page_alloc_shuffle(enum mm_shuffle_ctl ctl)
-{
-	if (ctl == SHUFFLE_FORCE_DISABLE)
-		set_bit(SHUFFLE_FORCE_DISABLE, &shuffle_state);
-
-	if (test_bit(SHUFFLE_FORCE_DISABLE, &shuffle_state)) {
-		if (test_and_clear_bit(SHUFFLE_ENABLE, &shuffle_state))
-			static_branch_disable(&page_alloc_shuffle_key);
-	} else if (ctl == SHUFFLE_ENABLE
-			&& !test_and_set_bit(SHUFFLE_ENABLE, &shuffle_state))
-		static_branch_enable(&page_alloc_shuffle_key);
-}
 
 static bool shuffle_param;
 static int shuffle_show(char *buffer, const struct kernel_param *kp)
 {
-	return sprintf(buffer, "%c\n", test_bit(SHUFFLE_ENABLE, &shuffle_state)
-			? 'Y' : 'N');
+	return sprintf(buffer, "%c\n", shuffle_param ? 'Y' : 'N');
 }
 
 static __meminit int shuffle_store(const char *val,
@@ -47,9 +25,7 @@ static __meminit int shuffle_store(const char *val,
 	if (rc < 0)
 		return rc;
 	if (shuffle_param)
-		page_alloc_shuffle(SHUFFLE_ENABLE);
-	else
-		page_alloc_shuffle(SHUFFLE_FORCE_DISABLE);
+		static_branch_enable(&page_alloc_shuffle_key);
 	return 0;
 }
 module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
diff --git a/mm/shuffle.h b/mm/shuffle.h
index 4d79f03b6658f..71b784f0b7c3e 100644
--- a/mm/shuffle.h
+++ b/mm/shuffle.h
@@ -4,23 +4,10 @@
 #define _MM_SHUFFLE_H
 #include <linux/jump_label.h>
 
-/*
- * SHUFFLE_ENABLE is called from the command line enabling path, or by
- * platform-firmware enabling that indicates the presence of a
- * direct-mapped memory-side-cache. SHUFFLE_FORCE_DISABLE is called from
- * the command line path and overrides any previous or future
- * SHUFFLE_ENABLE.
- */
-enum mm_shuffle_ctl {
-	SHUFFLE_ENABLE,
-	SHUFFLE_FORCE_DISABLE,
-};
-
 #define SHUFFLE_ORDER (MAX_ORDER-1)
 
 #ifdef CONFIG_SHUFFLE_PAGE_ALLOCATOR
 DECLARE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
-extern void page_alloc_shuffle(enum mm_shuffle_ctl ctl);
 extern void __shuffle_free_memory(pg_data_t *pgdat);
 extern bool shuffle_pick_tail(void);
 static inline void shuffle_free_memory(pg_data_t *pgdat)
@@ -58,10 +45,6 @@ static inline void shuffle_zone(struct zone *z)
 {
 }
 
-static inline void page_alloc_shuffle(enum mm_shuffle_ctl ctl)
-{
-}
-
 static inline bool is_shuffle_order(int order)
 {
 	return false;
-- 
2.26.2




* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-19 12:59 ` [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps David Hildenbrand
@ 2020-06-20  1:37   ` Williams, Dan J
  2020-06-22  8:26   ` Wei Yang
  1 sibling, 0 replies; 30+ messages in thread
From: Williams, Dan J @ 2020-06-20  1:37 UTC (permalink / raw)
  To: linux-kernel, david
  Cc: akpm, Huang, Ying, linux-mm, richard.weiyang, hannes, minchan,
	mhocko, mgorman, stable

On Fri, 2020-06-19 at 14:59 +0200, David Hildenbrand wrote:
> Especially with memory hotplug, we can have offline sections (with a
> garbage memmap) and overlapping zones. We have to make sure to only
> touch initialized memmaps (online sections managed by the buddy) and
> that
> the zone matches, to not move pages between zones.
> 
> To test if this can actually happen, I added a simple
> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM
> and
> onlining the first memory block "online_movable" and the second
> memory
> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
> and MOVABLE) overlap.
> 
> This might result in all kinds of weird situations (e.g., double
> allocations, list corruptions, unmovable allocations ending up in the
> movable zone).
> 
> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve
> memory-side-cache utilization")
> Acked-by: Michal Hocko <mhocko@suse.com>
> Cc: stable@vger.kernel.org # v5.2+
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Huang Ying <ying.huang@intel.com>
> Cc: Wei Yang <richard.weiyang@gmail.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Looks good to me.

Acked-by: Dan Williams <dan.j.williams@intel.com>




* Re: [PATCH v2 2/3] mm/memory_hotplug: document why shuffle_zone() is relevant
  2020-06-19 12:59 ` [PATCH v2 2/3] mm/memory_hotplug: document why shuffle_zone() is relevant David Hildenbrand
@ 2020-06-20  1:41   ` Dan Williams
  2020-06-22  7:27     ` David Hildenbrand
  2020-06-22 15:32   ` Michal Hocko
  1 sibling, 1 reply; 30+ messages in thread
From: Dan Williams @ 2020-06-20  1:41 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Linux Kernel Mailing List, Linux MM, Andrew Morton,
	Alexander Duyck, Michal Hocko

On Fri, Jun 19, 2020 at 6:00 AM David Hildenbrand <david@redhat.com> wrote:
>
> It's not completely obvious why we have to shuffle the complete zone, as
> some sort of shuffling is already performed when onlining pages via
> __free_one_page(), placing MAX_ORDER-1 pages either to the head or the tail
> of the freelist. Let's document why we have to shuffle the complete zone
> when exposing larger, contiguous physical memory areas to the buddy.
>

How about?

Fixes: e900a918b098 ("mm: shuffle initial free memory to improve
memory-side-cache utilization")

...just like Patch1 since that original commit was missing the proper
commentary in the code?


> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Michal Hocko <mhocko@suse.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/memory_hotplug.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 9b34e03e730a4..a0d81d404823d 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -822,6 +822,14 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>         zone->zone_pgdat->node_present_pages += onlined_pages;
>         pgdat_resize_unlock(zone->zone_pgdat, &flags);
>
> +       /*
> +        * When exposing larger, physically contiguous memory areas to the
> +        * buddy, shuffling in the buddy (when freeing onlined pages, putting
> +        * them either to the head or the tail of the freelist) is only helpful
> +        * for mainining the shuffle, but not for creating the initial shuffle.

s/mainining/maintaining/

> +        * Shuffle the whole zone to make sure the just onlined pages are
> +        * properly distributed across the whole freelist.
> +        */
>         shuffle_zone(zone);
>
>         node_states_set_node(nid, &arg);

Other than the above minor fixups you can add:

Acked-by: Dan Williams <dan.j.williams@intel.com>



* Re: [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration
  2020-06-19 12:59 ` [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration David Hildenbrand
@ 2020-06-20  1:49   ` Dan Williams
  2020-06-22  7:33     ` David Hildenbrand
  2020-06-22 15:37   ` Michal Hocko
  2020-06-23  1:22   ` Wei Yang
  2 siblings, 1 reply; 30+ messages in thread
From: Dan Williams @ 2020-06-20  1:49 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Linux Kernel Mailing List, Linux MM, Andrew Morton,
	Johannes Weiner, Michal Hocko, Minchan Kim, Huang Ying, Wei Yang,
	Mel Gorman

On Fri, Jun 19, 2020 at 5:59 AM David Hildenbrand <david@redhat.com> wrote:
>
> Commit e900a918b098 ("mm: shuffle initial free memory to improve
> memory-side-cache utilization") promised "autodetection of a
> memory-side-cache (to be added in a follow-on patch)" over a year ago.
>
> The original series included patches [1]; however, they were dropped
> during review [2] to be followed up later.
>
> Due to lack of platforms that publish an HMAT, autodetection is currently
> not implemented. However, manual activation is actively used [3]. Let's
> simplify for now and re-add when really (ever?) needed.
>
> [1] https://lkml.kernel.org/r/154510700291.1941238.817190985966612531.stgit@dwillia2-desk3.amr.corp.intel.com
> [2] https://lkml.kernel.org/r/154690326478.676627.103843791978176914.stgit@dwillia2-desk3.amr.corp.intel.com
> [3] https://lkml.kernel.org/r/CAPcyv4irwGUU2x+c6b4L=KbB1dnasNKaaZd6oSpYjL9kfsnROQ@mail.gmail.com
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Huang Ying <ying.huang@intel.com>
> Cc: Wei Yang <richard.weiyang@gmail.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/shuffle.c | 28 ++--------------------------
>  mm/shuffle.h | 17 -----------------
>  2 files changed, 2 insertions(+), 43 deletions(-)
>
> diff --git a/mm/shuffle.c b/mm/shuffle.c
> index dd13ab851b3ee..9b5cd4b004b0f 100644
> --- a/mm/shuffle.c
> +++ b/mm/shuffle.c
> @@ -10,33 +10,11 @@
>  #include "shuffle.h"
>
>  DEFINE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
> -static unsigned long shuffle_state __ro_after_init;
> -
> -/*
> - * Depending on the architecture, module parameter parsing may run
> - * before, or after the cache detection. SHUFFLE_FORCE_DISABLE prevents,
> - * or reverts the enabling of the shuffle implementation. SHUFFLE_ENABLE
> - * attempts to turn on the implementation, but aborts if it finds
> - * SHUFFLE_FORCE_DISABLE already set.
> - */
> -__meminit void page_alloc_shuffle(enum mm_shuffle_ctl ctl)
> -{
> -       if (ctl == SHUFFLE_FORCE_DISABLE)
> -               set_bit(SHUFFLE_FORCE_DISABLE, &shuffle_state);
> -
> -       if (test_bit(SHUFFLE_FORCE_DISABLE, &shuffle_state)) {
> -               if (test_and_clear_bit(SHUFFLE_ENABLE, &shuffle_state))
> -                       static_branch_disable(&page_alloc_shuffle_key);
> -       } else if (ctl == SHUFFLE_ENABLE
> -                       && !test_and_set_bit(SHUFFLE_ENABLE, &shuffle_state))
> -               static_branch_enable(&page_alloc_shuffle_key);
> -}
>
>  static bool shuffle_param;
>  static int shuffle_show(char *buffer, const struct kernel_param *kp)
>  {
> -       return sprintf(buffer, "%c\n", test_bit(SHUFFLE_ENABLE, &shuffle_state)
> -                       ? 'Y' : 'N');
> +       return sprintf(buffer, "%c\n", shuffle_param ? 'Y' : 'N');
>  }
>
>  static __meminit int shuffle_store(const char *val,
> @@ -47,9 +25,7 @@ static __meminit int shuffle_store(const char *val,
>         if (rc < 0)
>                 return rc;
>         if (shuffle_param)
> -               page_alloc_shuffle(SHUFFLE_ENABLE);
> -       else
> -               page_alloc_shuffle(SHUFFLE_FORCE_DISABLE);
> +               static_branch_enable(&page_alloc_shuffle_key);
>         return 0;
>  }

Let's do proper input validation here and require 1 / 'true' to enable
shuffling and not also allow 0 to be an 'enable' value.

Other than that, this looks like the right move to me until end users or
distros start asking for the kernel to do this by default; I'm not aware
of any such requests to date. People seem fine setting the boot option.
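
For reference, the boot option in question is the one documented in
Documentation/admin-guide/kernel-parameters.txt:

    page_alloc.shuffle=1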

After the above fixups you can add:

Acked-by: Dan Williams <dan.j.williams@intel.com>



* Re: [PATCH v2 2/3] mm/memory_hotplug: document why shuffle_zone() is relevant
  2020-06-20  1:41   ` Dan Williams
@ 2020-06-22  7:27     ` David Hildenbrand
  2020-06-23 21:15       ` Dan Williams
  0 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2020-06-22  7:27 UTC (permalink / raw)
  To: Dan Williams
  Cc: Linux Kernel Mailing List, Linux MM, Andrew Morton,
	Alexander Duyck, Michal Hocko

On 20.06.20 03:41, Dan Williams wrote:
> On Fri, Jun 19, 2020 at 6:00 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> It's not completely obvious why we have to shuffle the complete zone, as
>> some sort of shuffling is already performed when onlining pages via
>> __free_one_page(), placing MAX_ORDER-1 pages either to the head or the tail
>> of the freelist. Let's document why we have to shuffle the complete zone
>> when exposing larger, contiguous physical memory areas to the buddy.
>>
> 
> How about?
> 
> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve
> memory-side-cache utilization")
> 
> ...just like Patch1 since that original commit was missing the proper
> commentary in the code?

Hmm, mixed feelings. I (working for a distributor :) ) prefer fixes tags
for actual BUGs, as described in

Documentation/process/submitting-patches.rst: "If your patch fixes a bug
in a specific commit, e.g. you found an issue using ``git bisect``,
please use the 'Fixes:' tag with the first 12 characters" ...

So unless there are strong feelings, I'll not add a fixes tag (although
I agree that it should have been contained in the original commit).

>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>> Cc: Dan Williams <dan.j.williams@intel.com>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>  mm/memory_hotplug.c | 8 ++++++++
>>  1 file changed, 8 insertions(+)
>>
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index 9b34e03e730a4..a0d81d404823d 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -822,6 +822,14 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>>         zone->zone_pgdat->node_present_pages += onlined_pages;
>>         pgdat_resize_unlock(zone->zone_pgdat, &flags);
>>
>> +       /*
>> +        * When exposing larger, physically contiguous memory areas to the
>> +        * buddy, shuffling in the buddy (when freeing onlined pages, putting
>> +        * them either to the head or the tail of the freelist) is only helpful
>> +        * for mainining the shuffle, but not for creating the initial shuffle.
> 
> s/mainining/maintaining/

Huh, what went wrong there :) Thanks!

-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration
  2020-06-20  1:49   ` Dan Williams
@ 2020-06-22  7:33     ` David Hildenbrand
  2020-06-22  8:37       ` Wei Yang
  2020-06-23 22:18       ` Dan Williams
  0 siblings, 2 replies; 30+ messages in thread
From: David Hildenbrand @ 2020-06-22  7:33 UTC (permalink / raw)
  To: Dan Williams
  Cc: Linux Kernel Mailing List, Linux MM, Andrew Morton,
	Johannes Weiner, Michal Hocko, Minchan Kim, Huang Ying, Wei Yang,
	Mel Gorman

On 20.06.20 03:49, Dan Williams wrote:
> On Fri, Jun 19, 2020 at 5:59 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> Commit e900a918b098 ("mm: shuffle initial free memory to improve
>> memory-side-cache utilization") promised "autodetection of a
>> memory-side-cache (to be added in a follow-on patch)" over a year ago.
>>
>> The original series included patches [1]; however, they were dropped
>> during review [2] to be followed up later.
>>
>> Due to lack of platforms that publish an HMAT, autodetection is currently
>> not implemented. However, manual activation is actively used [3]. Let's
>> simplify for now and re-add when really (ever?) needed.
>>
>> [1] https://lkml.kernel.org/r/154510700291.1941238.817190985966612531.stgit@dwillia2-desk3.amr.corp.intel.com
>> [2] https://lkml.kernel.org/r/154690326478.676627.103843791978176914.stgit@dwillia2-desk3.amr.corp.intel.com
>> [3] https://lkml.kernel.org/r/CAPcyv4irwGUU2x+c6b4L=KbB1dnasNKaaZd6oSpYjL9kfsnROQ@mail.gmail.com
>>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Minchan Kim <minchan@kernel.org>
>> Cc: Huang Ying <ying.huang@intel.com>
>> Cc: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Mel Gorman <mgorman@techsingularity.net>
>> Cc: Dan Williams <dan.j.williams@intel.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>  mm/shuffle.c | 28 ++--------------------------
>>  mm/shuffle.h | 17 -----------------
>>  2 files changed, 2 insertions(+), 43 deletions(-)
>>
>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>> index dd13ab851b3ee..9b5cd4b004b0f 100644
>> --- a/mm/shuffle.c
>> +++ b/mm/shuffle.c
>> @@ -10,33 +10,11 @@
>>  #include "shuffle.h"
>>
>>  DEFINE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
>> -static unsigned long shuffle_state __ro_after_init;
>> -
>> -/*
>> - * Depending on the architecture, module parameter parsing may run
>> - * before, or after the cache detection. SHUFFLE_FORCE_DISABLE prevents,
>> - * or reverts the enabling of the shuffle implementation. SHUFFLE_ENABLE
>> - * attempts to turn on the implementation, but aborts if it finds
>> - * SHUFFLE_FORCE_DISABLE already set.
>> - */
>> -__meminit void page_alloc_shuffle(enum mm_shuffle_ctl ctl)
>> -{
>> -       if (ctl == SHUFFLE_FORCE_DISABLE)
>> -               set_bit(SHUFFLE_FORCE_DISABLE, &shuffle_state);
>> -
>> -       if (test_bit(SHUFFLE_FORCE_DISABLE, &shuffle_state)) {
>> -               if (test_and_clear_bit(SHUFFLE_ENABLE, &shuffle_state))
>> -                       static_branch_disable(&page_alloc_shuffle_key);
>> -       } else if (ctl == SHUFFLE_ENABLE
>> -                       && !test_and_set_bit(SHUFFLE_ENABLE, &shuffle_state))
>> -               static_branch_enable(&page_alloc_shuffle_key);
>> -}
>>
>>  static bool shuffle_param;
>>  static int shuffle_show(char *buffer, const struct kernel_param *kp)
>>  {
>> -       return sprintf(buffer, "%c\n", test_bit(SHUFFLE_ENABLE, &shuffle_state)
>> -                       ? 'Y' : 'N');
>> +       return sprintf(buffer, "%c\n", shuffle_param ? 'Y' : 'N');
>>  }
>>
>>  static __meminit int shuffle_store(const char *val,
>> @@ -47,9 +25,7 @@ static __meminit int shuffle_store(const char *val,
>>         if (rc < 0)
>>                 return rc;
>>         if (shuffle_param)
>> -               page_alloc_shuffle(SHUFFLE_ENABLE);
>> -       else
>> -               page_alloc_shuffle(SHUFFLE_FORCE_DISABLE);
>> +               static_branch_enable(&page_alloc_shuffle_key);
>>         return 0;
>>  }
> 
> Let's do proper input validation here and require 1 / 'true' to enable
> shuffling and not also allow 0 to be an 'enable' value.

I don't think that's currently done?

param_set_bool(val, kp) will only default val==NULL to 'true'. Passing 0
will properly be handled by strtobool(). Or am I missing something?
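
For reference, the parsing path as I read kernel/params.c at the time (a
sketch from memory, worth double-checking):

        int param_set_bool(const char *val, const struct kernel_param *kp)
        {
                /* a bare "shuffle" with no '=' is treated as "shuffle=1" */
                if (!val)
                        val = "1";
                /* strtobool()/kstrtobool(): 'y'/'Y'/'1' -> true, 'n'/'N'/'0'
                 * -> false (plus on/off variants); anything else gives
                 * -EINVAL, which shuffle_store() then propagates */
                return strtobool(val, kp->arg);
        }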

Thanks!

-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-19 12:59 ` [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps David Hildenbrand
  2020-06-20  1:37   ` Williams, Dan J
@ 2020-06-22  8:26   ` Wei Yang
  2020-06-22  8:43     ` David Hildenbrand
  1 sibling, 1 reply; 30+ messages in thread
From: Wei Yang @ 2020-06-22  8:26 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, Michal Hocko, stable, Andrew Morton,
	Johannes Weiner, Minchan Kim, Huang Ying, Wei Yang, Mel Gorman

On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>Especially with memory hotplug, we can have offline sections (with a
>garbage memmap) and overlapping zones. We have to make sure to only
>touch initialized memmaps (online sections managed by the buddy) and that
>the zone matches, to not move pages between zones.
>
>To test if this can actually happen, I added a simple
>	BUG_ON(page_zone(page_i) != page_zone(page_j));
>right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>onlining the first memory block "online_movable" and the second memory
>block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>and MOVABLE) overlap.
>
>This might result in all kinds of weird situations (e.g., double
>allocations, list corruptions, unmovable allocations ending up in the
>movable zone).
>
>Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>Acked-by: Michal Hocko <mhocko@suse.com>
>Cc: stable@vger.kernel.org # v5.2+
>Cc: Andrew Morton <akpm@linux-foundation.org>
>Cc: Johannes Weiner <hannes@cmpxchg.org>
>Cc: Michal Hocko <mhocko@suse.com>
>Cc: Minchan Kim <minchan@kernel.org>
>Cc: Huang Ying <ying.huang@intel.com>
>Cc: Wei Yang <richard.weiyang@gmail.com>
>Cc: Mel Gorman <mgorman@techsingularity.net>
>Signed-off-by: David Hildenbrand <david@redhat.com>
>---
> mm/shuffle.c | 18 +++++++++---------
> 1 file changed, 9 insertions(+), 9 deletions(-)
>
>diff --git a/mm/shuffle.c b/mm/shuffle.c
>index 44406d9977c77..dd13ab851b3ee 100644
>--- a/mm/shuffle.c
>+++ b/mm/shuffle.c
>@@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>  * For two pages to be swapped in the shuffle, they must be free (on a
>  * 'free_area' lru), have the same order, and have the same migratetype.
>  */
>-static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>+static struct page * __meminit shuffle_valid_page(struct zone *zone,
>+						  unsigned long pfn, int order)
> {
>-	struct page *page;
>+	struct page *page = pfn_to_online_page(pfn);

Hi, David and Dan,

One thing I want to confirm here is that we won't have a partially online
section, right? We can add a sub-section to the system, but it won't be
managed by the buddy.

With this confirmed:

Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>

> 
> 	/*
> 	 * Given we're dealing with randomly selected pfns in a zone we
> 	 * need to ask questions like...
> 	 */
> 
>-	/* ...is the pfn even in the memmap? */
>-	if (!pfn_valid_within(pfn))
>+	/* ... is the page managed by the buddy? */
>+	if (!page)
> 		return NULL;
> 
>-	/* ...is the pfn in a present section or a hole? */
>-	if (!pfn_in_present_section(pfn))
>+	/* ... is the page assigned to the same zone? */
>+	if (page_zone(page) != zone)
> 		return NULL;
> 
> 	/* ...is the page free and currently on a free_area list? */
>-	page = pfn_to_page(pfn);
> 	if (!PageBuddy(page))
> 		return NULL;
> 
>@@ -123,7 +123,7 @@ void __meminit __shuffle_zone(struct zone *z)
> 		 * page_j randomly selected in the span @zone_start_pfn to
> 		 * @spanned_pages.
> 		 */
>-		page_i = shuffle_valid_page(i, order);
>+		page_i = shuffle_valid_page(z, i, order);
> 		if (!page_i)
> 			continue;
> 
>@@ -137,7 +137,7 @@ void __meminit __shuffle_zone(struct zone *z)
> 			j = z->zone_start_pfn +
> 				ALIGN_DOWN(get_random_long() % z->spanned_pages,
> 						order_pages);
>-			page_j = shuffle_valid_page(j, order);
>+			page_j = shuffle_valid_page(z, j, order);
> 			if (page_j && page_j != page_i)
> 				break;
> 		}
>-- 
>2.26.2

-- 
Wei Yang
Help you, Help me



* Re: [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration
  2020-06-22  7:33     ` David Hildenbrand
@ 2020-06-22  8:37       ` Wei Yang
  2020-06-23 22:18       ` Dan Williams
  1 sibling, 0 replies; 30+ messages in thread
From: Wei Yang @ 2020-06-22  8:37 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Dan Williams, Linux Kernel Mailing List, Linux MM, Andrew Morton,
	Johannes Weiner, Michal Hocko, Minchan Kim, Huang Ying, Wei Yang,
	Mel Gorman

On Mon, Jun 22, 2020 at 09:33:28AM +0200, David Hildenbrand wrote:
>On 20.06.20 03:49, Dan Williams wrote:
>> On Fri, Jun 19, 2020 at 5:59 AM David Hildenbrand <david@redhat.com> wrote:
>>>
>>> Commit e900a918b098 ("mm: shuffle initial free memory to improve
>>> memory-side-cache utilization") promised "autodetection of a
>>> memory-side-cache (to be added in a follow-on patch)" over a year ago.
>>>
>>> The original series included patches [1]; however, they were dropped
>>> during review [2] to be followed up later.
>>>
>>> Due to lack of platforms that publish an HMAT, autodetection is currently
>>> not implemented. However, manual activation is actively used [3]. Let's
>>> simplify for now and re-add when really (ever?) needed.
>>>
>>> [1] https://lkml.kernel.org/r/154510700291.1941238.817190985966612531.stgit@dwillia2-desk3.amr.corp.intel.com
>>> [2] https://lkml.kernel.org/r/154690326478.676627.103843791978176914.stgit@dwillia2-desk3.amr.corp.intel.com
>>> [3] https://lkml.kernel.org/r/CAPcyv4irwGUU2x+c6b4L=KbB1dnasNKaaZd6oSpYjL9kfsnROQ@mail.gmail.com
>>>
>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>> Cc: Michal Hocko <mhocko@suse.com>
>>> Cc: Minchan Kim <minchan@kernel.org>
>>> Cc: Huang Ying <ying.huang@intel.com>
>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>> Cc: Dan Williams <dan.j.williams@intel.com>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>>  mm/shuffle.c | 28 ++--------------------------
>>>  mm/shuffle.h | 17 -----------------
>>>  2 files changed, 2 insertions(+), 43 deletions(-)
>>>
>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>> index dd13ab851b3ee..9b5cd4b004b0f 100644
>>> --- a/mm/shuffle.c
>>> +++ b/mm/shuffle.c
>>> @@ -10,33 +10,11 @@
>>>  #include "shuffle.h"
>>>
>>>  DEFINE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
>>> -static unsigned long shuffle_state __ro_after_init;
>>> -
>>> -/*
>>> - * Depending on the architecture, module parameter parsing may run
>>> - * before, or after the cache detection. SHUFFLE_FORCE_DISABLE prevents,
>>> - * or reverts the enabling of the shuffle implementation. SHUFFLE_ENABLE
>>> - * attempts to turn on the implementation, but aborts if it finds
>>> - * SHUFFLE_FORCE_DISABLE already set.
>>> - */
>>> -__meminit void page_alloc_shuffle(enum mm_shuffle_ctl ctl)
>>> -{
>>> -       if (ctl == SHUFFLE_FORCE_DISABLE)
>>> -               set_bit(SHUFFLE_FORCE_DISABLE, &shuffle_state);
>>> -
>>> -       if (test_bit(SHUFFLE_FORCE_DISABLE, &shuffle_state)) {
>>> -               if (test_and_clear_bit(SHUFFLE_ENABLE, &shuffle_state))
>>> -                       static_branch_disable(&page_alloc_shuffle_key);
>>> -       } else if (ctl == SHUFFLE_ENABLE
>>> -                       && !test_and_set_bit(SHUFFLE_ENABLE, &shuffle_state))
>>> -               static_branch_enable(&page_alloc_shuffle_key);
>>> -}
>>>
>>>  static bool shuffle_param;
>>>  static int shuffle_show(char *buffer, const struct kernel_param *kp)
>>>  {
>>> -       return sprintf(buffer, "%c\n", test_bit(SHUFFLE_ENABLE, &shuffle_state)
>>> -                       ? 'Y' : 'N');
>>> +       return sprintf(buffer, "%c\n", shuffle_param ? 'Y' : 'N');
>>>  }
>>>
>>>  static __meminit int shuffle_store(const char *val,
>>> @@ -47,9 +25,7 @@ static __meminit int shuffle_store(const char *val,
>>>         if (rc < 0)
>>>                 return rc;
>>>         if (shuffle_param)
>>> -               page_alloc_shuffle(SHUFFLE_ENABLE);
>>> -       else
>>> -               page_alloc_shuffle(SHUFFLE_FORCE_DISABLE);
>>> +               static_branch_enable(&page_alloc_shuffle_key);
>>>         return 0;
>>>  }
>> 
>> Let's do proper input validation here and require 1 / 'true' to enable
>> shuffling and not also allow 0 to be an 'enable' value.
>
>I don't think that's currently done?
>
>param_set_bool(val, kp) will only default val==NULL to 'true'. Passing 0
>will properly be handled by strtobool(). Or am I missing something?
>

Agree with this statement.

>Thanks!
>
>-- 
>Thanks,
>
>David / dhildenb

-- 
Wei Yang
Help you, Help me



* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-22  8:26   ` Wei Yang
@ 2020-06-22  8:43     ` David Hildenbrand
  2020-06-22  9:22       ` Wei Yang
  0 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2020-06-22  8:43 UTC (permalink / raw)
  To: Wei Yang
  Cc: linux-kernel, linux-mm, Michal Hocko, stable, Andrew Morton,
	Johannes Weiner, Minchan Kim, Huang Ying, Wei Yang, Mel Gorman

On 22.06.20 10:26, Wei Yang wrote:
> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>> Especially with memory hotplug, we can have offline sections (with a
>> garbage memmap) and overlapping zones. We have to make sure to only
>> touch initialized memmaps (online sections managed by the buddy) and that
>> the zone matches, to not move pages between zones.
>>
>> To test if this can actually happen, I added a simple
>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>> onlining the first memory block "online_movable" and the second memory
>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>> and MOVABLE) overlap.
>>
>> This might result in all kinds of weird situations (e.g., double
>> allocations, list corruptions, unmovable allocations ending up in the
>> movable zone).
>>
>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>> Acked-by: Michal Hocko <mhocko@suse.com>
>> Cc: stable@vger.kernel.org # v5.2+
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Minchan Kim <minchan@kernel.org>
>> Cc: Huang Ying <ying.huang@intel.com>
>> Cc: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Mel Gorman <mgorman@techsingularity.net>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>> mm/shuffle.c | 18 +++++++++---------
>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>
>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>> index 44406d9977c77..dd13ab851b3ee 100644
>> --- a/mm/shuffle.c
>> +++ b/mm/shuffle.c
>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>  */
>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>> +						  unsigned long pfn, int order)
>> {
>> -	struct page *page;
>> +	struct page *page = pfn_to_online_page(pfn);
> 
> Hi, David and Dan,
> 
> One thing I want to confirm here is that we won't have a partially online
> section, right? We can add a sub-section to the system, but it won't be
> managed by the buddy.

Hi,

there is still a BUG with sub-section hot-add (devmem), which broke
pfn_to_online_page() in corner cases (especially, see the description in
include/linux/mmzone.h). We can have a boot-memory section partially
populated and marked online. Then, we can hot-add devmem, marking the
remaining pfns valid - and as the section is marked online, also as online.

This is, however, a different problem to solve and affects most other
pfn walkers as well. The "if (page_zone(page) != zone)" checks guards us
from most harm, as the devmem zone won't match.
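
Concretely, that is the guard from patch #1: a devmem memmap is initialized
for ZONE_DEVICE, which can never be the zone currently being shuffled:

        struct page *page = pfn_to_online_page(pfn);

        /* may return a devmem memmap in the corner case above */
        if (!page)
                return NULL;
        /* ...but a devmem page's zone (ZONE_DEVICE) won't match */
        if (page_zone(page) != zone)
                return NULL;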

Thanks!

-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-22  8:43     ` David Hildenbrand
@ 2020-06-22  9:22       ` Wei Yang
  2020-06-22  9:51         ` David Hildenbrand
  2020-06-22 14:11         ` David Hildenbrand
  0 siblings, 2 replies; 30+ messages in thread
From: Wei Yang @ 2020-06-22  9:22 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Wei Yang, linux-kernel, linux-mm, Michal Hocko, stable,
	Andrew Morton, Johannes Weiner, Minchan Kim, Huang Ying,
	Wei Yang, Mel Gorman

On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>On 22.06.20 10:26, Wei Yang wrote:
>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>> Especially with memory hotplug, we can have offline sections (with a
>>> garbage memmap) and overlapping zones. We have to make sure to only
>>> touch initialized memmaps (online sections managed by the buddy) and that
>>> the zone matches, to not move pages between zones.
>>>
>>> To test if this can actually happen, I added a simple
>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>> onlining the first memory block "online_movable" and the second memory
>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>> and MOVABLE) overlap.
>>>
>>> This might result in all kinds of weird situations (e.g., double
>>> allocations, list corruptions, unmovable allocations ending up in the
>>> movable zone).
>>>
>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>> Cc: stable@vger.kernel.org # v5.2+
>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>> Cc: Michal Hocko <mhocko@suse.com>
>>> Cc: Minchan Kim <minchan@kernel.org>
>>> Cc: Huang Ying <ying.huang@intel.com>
>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>> mm/shuffle.c | 18 +++++++++---------
>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>> index 44406d9977c77..dd13ab851b3ee 100644
>>> --- a/mm/shuffle.c
>>> +++ b/mm/shuffle.c
>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>>  */
>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>> +						  unsigned long pfn, int order)
>>> {
>>> -	struct page *page;
>>> +	struct page *page = pfn_to_online_page(pfn);
>> 
>> Hi, David and Dan,
>> 
>> One thing I want to confirm here is that we won't have a partially online
>> section, right? We can add a sub-section to the system, but it won't be
>> managed by the buddy.
>
>Hi,
>
>there is still a BUG with sub-section hot-add (devmem), which broke
>pfn_to_online_page() in corner cases (especially, see the description in
>include/linux/mmzone.h). We can have a boot-memory section partially
>populated and marked online. Then, we can hot-add devmem, marking the
>remaining pfns valid - and as the section is marked online, also as online.

Oh, yes, I see this description.

This means we could have a section marked as online even though one of its
sub-sections was never added.

The good news is that even if the sub-section is not added, its memmap is
still populated for an early section. So the page returned from
pfn_to_online_page() is a valid one.

But what would happen if the sub-section is removed after being added? Would
section_deactivate() release the memmap backing this "struct page"?

>
>This is, however, a different problem to solve and affects most other
>pfn walkers as well. The "if (page_zone(page) != zone)" checks guards us
>from most harm, as the devmem zone won't match.
>

Yes, a different problem, just jump into my mind. Hope this won't affect this
patch.

>Thanks!
>
>-- 
>Thanks,
>
>David / dhildenb

-- 
Wei Yang
Help you, Help me



* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-22  9:22       ` Wei Yang
@ 2020-06-22  9:51         ` David Hildenbrand
  2020-06-22 13:10           ` Wei Yang
  2020-06-22 14:11         ` David Hildenbrand
  1 sibling, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2020-06-22  9:51 UTC (permalink / raw)
  To: Wei Yang
  Cc: linux-kernel, linux-mm, Michal Hocko, stable, Andrew Morton,
	Johannes Weiner, Minchan Kim, Huang Ying, Wei Yang, Mel Gorman

On 22.06.20 11:22, Wei Yang wrote:
> On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>> On 22.06.20 10:26, Wei Yang wrote:
>>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>>> Especially with memory hotplug, we can have offline sections (with a
>>>> garbage memmap) and overlapping zones. We have to make sure to only
>>>> touch initialized memmaps (online sections managed by the buddy) and that
>>>> the zone matches, to not move pages between zones.
>>>>
>>>> To test if this can actually happen, I added a simple
>>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>>> onlining the first memory block "online_movable" and the second memory
>>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>>> and MOVABLE) overlap.
>>>>
>>>> This might result in all kinds of weird situations (e.g., double
>>>> allocations, list corruptions, unmovable allocations ending up in the
>>>> movable zone).
>>>>
>>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>>> Cc: stable@vger.kernel.org # v5.2+
>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>>> Cc: Michal Hocko <mhocko@suse.com>
>>>> Cc: Minchan Kim <minchan@kernel.org>
>>>> Cc: Huang Ying <ying.huang@intel.com>
>>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>> ---
>>>> mm/shuffle.c | 18 +++++++++---------
>>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>>> index 44406d9977c77..dd13ab851b3ee 100644
>>>> --- a/mm/shuffle.c
>>>> +++ b/mm/shuffle.c
>>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>>>  */
>>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>>> +						  unsigned long pfn, int order)
>>>> {
>>>> -	struct page *page;
>>>> +	struct page *page = pfn_to_online_page(pfn);
>>>
>>> Hi, David and Dan,
>>>
>>> One thing I want to confirm here is that we won't have a partially online
>>> section, right? We can add a sub-section to the system, but it won't be
>>> managed by the buddy.
>>
>> Hi,
>>
>> there is still a BUG with sub-section hot-add (devmem), which broke
>> pfn_to_online_page() in corner cases (especially, see the description in
>> include/linux/mmzone.h). We can have a boot-memory section partially
>> populated and marked online. Then, we can hot-add devmem, marking the
>> remaining pfns valid - and as the section is marked online, also as online.
> 
> Oh, yes, I see this description.
> 
> This means we could have a section marked as online even though one of its
> sub-sections was never added.
> 
> The good news is that even if the sub-section is not added, its memmap is
> still populated for an early section. So the page returned from
> pfn_to_online_page() is a valid one.
> 
> But what would happen if the sub-section is removed after being added? Would
> section_deactivate() release the memmap backing this "struct page"?

If devmem is removed, the memmap will be freed and the sub-sections are
marked as non-present. So this works as expected.

-- 
Thanks,

David / dhildenb




* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-22  9:51         ` David Hildenbrand
@ 2020-06-22 13:10           ` Wei Yang
  2020-06-22 14:06             ` David Hildenbrand
  0 siblings, 1 reply; 30+ messages in thread
From: Wei Yang @ 2020-06-22 13:10 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Wei Yang, linux-kernel, linux-mm, Michal Hocko, stable,
	Andrew Morton, Johannes Weiner, Minchan Kim, Huang Ying,
	Wei Yang, Mel Gorman

On Mon, Jun 22, 2020 at 11:51:34AM +0200, David Hildenbrand wrote:
>On 22.06.20 11:22, Wei Yang wrote:
>> On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>>> On 22.06.20 10:26, Wei Yang wrote:
>>>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>>>> Especially with memory hotplug, we can have offline sections (with a
>>>>> garbage memmap) and overlapping zones. We have to make sure to only
>>>>> touch initialized memmaps (online sections managed by the buddy) and that
>>>>> the zone matches, to not move pages between zones.
>>>>>
>>>>> To test if this can actually happen, I added a simple
>>>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>>>> onlining the first memory block "online_movable" and the second memory
>>>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>>>> and MOVABLE) overlap.
>>>>>
>>>>> This might result in all kinds of weird situations (e.g., double
>>>>> allocations, list corruptions, unmovable allocations ending up in the
>>>>> movable zone).
>>>>>
>>>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>>>> Cc: stable@vger.kernel.org # v5.2+
>>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>>>> Cc: Michal Hocko <mhocko@suse.com>
>>>>> Cc: Minchan Kim <minchan@kernel.org>
>>>>> Cc: Huang Ying <ying.huang@intel.com>
>>>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>>> ---
>>>>> mm/shuffle.c | 18 +++++++++---------
>>>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>>>
>>>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>>>> index 44406d9977c77..dd13ab851b3ee 100644
>>>>> --- a/mm/shuffle.c
>>>>> +++ b/mm/shuffle.c
>>>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>>>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>>>>  */
>>>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>>>> +						  unsigned long pfn, int order)
>>>>> {
>>>>> -	struct page *page;
>>>>> +	struct page *page = pfn_to_online_page(pfn);
>>>>
>>>> Hi, David and Dan,
>>>>
>>>> One thing I want to confirm here is that we won't have a partially online
>>>> section, right? We can add a sub-section to the system, but it won't be
>>>> managed by the buddy.
>>>
>>> Hi,
>>>
>>> there is still a BUG with sub-section hot-add (devmem), which broke
>>> pfn_to_online_page() in corner cases (especially, see the description in
>>> include/linux/mmzone.h). We can have a boot-memory section partially
>>> populated and marked online. Then, we can hot-add devmem, marking the
>>> remaining pfns valid - and as the section is marked online, also as online.
>> 
>> Oh, yes, I see this description.
>> 
>> This means we could have a section marked as online even though one of its
>> sub-sections was never added.
>> 
>> The good news is that even if the sub-section is not added, its memmap is
>> still populated for an early section. So the page returned from
>> pfn_to_online_page() is a valid one.
>> 
>> But what would happen if the sub-section is removed after being added? Would
>> section_deactivate() release the memmap backing this "struct page"?
>
>If devmem is removed, the memmap will be freed and the sub-sections are
>marked as non-present. So this works as expected.
>

Sorry, I may have missed your point. If my understanding is correct, the
above behavior happens in section_deactivate().

Let me draw my understanding of section_deactivate():

    section_deactivate(pfn, nr_pages)
        clear_subsection_map(pfn, nr_pages)
        depopulate_section_memmap(pfn, nr_pages)

Since we just remove a sub-section, I skipped some unrelated code. These two
functions would:

  * clear the sub-section's bits in ms->usage->subsection_map
  * free the memmap for the sub-section

But since the section is not empty, ms->section_mem_map is not set to NULL.

Per my understanding, the section's present state is recorded in
ms->section_mem_map via SECTION_MARKED_PRESENT. It looks like we don't clear
it when just removing a sub-section.
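
To make my reading concrete, here is the call tree from above, annotated;
the last comment is the assumption I would like to confirm:

    section_deactivate(pfn, nr_pages)
        clear_subsection_map(pfn, nr_pages)
            /* clears this sub-section's bits in ms->usage->subsection_map */
        depopulate_section_memmap(pfn, nr_pages)
            /* frees the memmap backing just this sub-section */
    /* assumption: SECTION_MARKED_PRESENT in ms->section_mem_map stays set
     * as long as any other sub-section remains */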

Am I missing something?

>-- 
>Thanks,
>
>David / dhildenb

-- 
Wei Yang
Help you, Help me



* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-22 13:10           ` Wei Yang
@ 2020-06-22 14:06             ` David Hildenbrand
  2020-06-22 21:55               ` Wei Yang
  0 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2020-06-22 14:06 UTC (permalink / raw)
  To: Wei Yang
  Cc: linux-kernel, linux-mm, Michal Hocko, stable, Andrew Morton,
	Johannes Weiner, Minchan Kim, Huang Ying, Wei Yang, Mel Gorman

On 22.06.20 15:10, Wei Yang wrote:
> On Mon, Jun 22, 2020 at 11:51:34AM +0200, David Hildenbrand wrote:
>> On 22.06.20 11:22, Wei Yang wrote:
>>> On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>>>> On 22.06.20 10:26, Wei Yang wrote:
>>>>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>>>>> Especially with memory hotplug, we can have offline sections (with a
>>>>>> garbage memmap) and overlapping zones. We have to make sure to only
>>>>>> touch initialized memmaps (online sections managed by the buddy) and that
>>>>>> the zone matches, to not move pages between zones.
>>>>>>
>>>>>> To test if this can actually happen, I added a simple
>>>>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>>>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>>>>> onlining the first memory block "online_movable" and the second memory
>>>>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>>>>> and MOVABLE) overlap.
>>>>>>
>>>>>> This might result in all kinds of weird situations (e.g., double
>>>>>> allocations, list corruptions, unmovable allocations ending up in the
>>>>>> movable zone).
>>>>>>
>>>>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>>>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>>>>> Cc: stable@vger.kernel.org # v5.2+
>>>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>>>>> Cc: Michal Hocko <mhocko@suse.com>
>>>>>> Cc: Minchan Kim <minchan@kernel.org>
>>>>>> Cc: Huang Ying <ying.huang@intel.com>
>>>>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>>>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>>>> ---
>>>>>> mm/shuffle.c | 18 +++++++++---------
>>>>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>>>>
>>>>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>>>>> index 44406d9977c77..dd13ab851b3ee 100644
>>>>>> --- a/mm/shuffle.c
>>>>>> +++ b/mm/shuffle.c
>>>>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>>>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>>>>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>>>>>  */
>>>>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>>>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>>>>> +						  unsigned long pfn, int order)
>>>>>> {
>>>>>> -	struct page *page;
>>>>>> +	struct page *page = pfn_to_online_page(pfn);
>>>>>
>>>>> Hi, David and Dan,
>>>>>
>>>>> One thing I want to confirm here is that we won't have a partially online
>>>>> section, right? We can add a sub-section to the system, but it won't be
>>>>> managed by the buddy.
>>>>
>>>> Hi,
>>>>
>>>> there is still a BUG with sub-section hot-add (devmem), which broke
>>>> pfn_to_online_page() in corner cases (especially, see the description in
>>>> include/linux/mmzone.h). We can have a boot-memory section partially
>>>> populated and marked online. Then, we can hot-add devmem, marking the
>>>> remaining pfns valid - and as the section is marked online, also as online.
>>>
>>> Oh, yes, I see this description.
>>>
>>> This means we could have a section marked as online even though one of its
>>> sub-sections was never added.
>>>
>>> The good news is that even if the sub-section is not added, its memmap is
>>> still populated for an early section, so the page returned from
>>> pfn_to_online_page() is a valid one.
>>>
>>> But what would happen if the sub-section is removed after being added? Would
>>> section_deactivate() release the memmap backing these "struct page"s?
>>
>> If devmem is removed, the memmap will be freed and the sub-sections are
>> marked as non-present. So this works as expected.
>>
> 
> Sorry, I may not have caught your point. If my understanding is correct, the
> above behavior happens in section_deactivate().
> 
> Let me draw my understanding of section_deactivate():
> 
>     section_deactivate(pfn, nr_pages)
>         clear_subsection_map(pfn, nr_pages)
>         depopulate_section_memmap(pfn, nr_pages)
> 
> Since we are just removing a sub-section, I skipped some unrelated code. These
> two functions would:
> 
>   * clear the bitmap in ms->usage->subsection_map
>   * free the memmap for the sub-section
> 
> But since the section is not empty, ms->section_mem_map is not set to NULL.

Let me clarify: sub-section hot-remove works differently when overlapping
with (online) boot memory within a section.

Early sections (IOW, boot memory) are never partially removed. See
mm/sparse.c:section_deactivate(). We only free an early memmap when the
section is completely empty. Also see how
include/linux/mmzone.h:pfn_valid() handles early sections.

So when we have a partially present section with boot memory, we
a) marked the whole section present and online (there is only a single
   bit)
b) allocated the memmap for the whole section
c) only exposed the relevant pages to the buddy. The memmap of the non-
   present parts of the section was initialized and is reserved.

pfn_valid() will return true for all non-present pfns, because there is
a memmap. pfn_to_online_page() will return a page for all pfns, because
we only have a single bit for the whole section. This has been the case
before sub-section hotplug and is still the case. It simply looks like
just another memory hole for which we have a memmap.

Now, with devmem it is possible to suddenly change these sub-section
holes (memmaps) to become ZONE_DEVICE memory. pfn_to_online_page() would
have to detect that and return NULL instead. Possible fixes were already
discussed (e.g., sub-section online map instead of a single bit).

Again, the zone check saves us from the worst, just as in the case of
all other pfn walkers that use (as documented) pfn_to_online_page(). It
still needs a fix as discussed, but it seems to work reasonably well like
that for now.
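
To illustrate, this is the pattern the fix applies in shuffle_valid_page()
(a minimal sketch; walk_valid_page() is a hypothetical name for the pattern):

    static struct page *walk_valid_page(struct zone *zone, unsigned long pfn)
    {
            struct page *page = pfn_to_online_page(pfn);

            /* Offline section (garbage memmap) or no memmap: don't touch. */
            if (!page)
                    return NULL;

            /* Zones may overlap: the page might belong to another zone. */
            if (page_zone(page) != zone)
                    return NULL;

            return page;
    }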

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-22  9:22       ` Wei Yang
  2020-06-22  9:51         ` David Hildenbrand
@ 2020-06-22 14:11         ` David Hildenbrand
  1 sibling, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2020-06-22 14:11 UTC (permalink / raw)
  To: Wei Yang
  Cc: linux-kernel, linux-mm, Michal Hocko, stable, Andrew Morton,
	Johannes Weiner, Minchan Kim, Huang Ying, Wei Yang, Mel Gorman

On 22.06.20 11:22, Wei Yang wrote:
> On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>> On 22.06.20 10:26, Wei Yang wrote:
>>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>>> Especially with memory hotplug, we can have offline sections (with a
>>>> garbage memmap) and overlapping zones. We have to make sure to only
>>>> touch initialized memmaps (online sections managed by the buddy) and that
>>>> the zone matches, to not move pages between zones.
>>>>
>>>> To test if this can actually happen, I added a simple
>>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>>> onlining the first memory block "online_movable" and the second memory
>>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>>> and MOVABLE) overlap.
>>>>
>>>> This might result in all kinds of weird situations (e.g., double
>>>> allocations, list corruptions, unmovable allocations ending up in the
>>>> movable zone).
>>>>
>>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>>> Cc: stable@vger.kernel.org # v5.2+
>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>>> Cc: Michal Hocko <mhocko@suse.com>
>>>> Cc: Minchan Kim <minchan@kernel.org>
>>>> Cc: Huang Ying <ying.huang@intel.com>
>>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>> ---
>>>> mm/shuffle.c | 18 +++++++++---------
>>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>>> index 44406d9977c77..dd13ab851b3ee 100644
>>>> --- a/mm/shuffle.c
>>>> +++ b/mm/shuffle.c
>>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>>>  */
>>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>>> +						  unsigned long pfn, int order)
>>>> {
>>>> -	struct page *page;
>>>> +	struct page *page = pfn_to_online_page(pfn);
>>>
>>> Hi, David and Dan,
>>>
>>> One thing I want to confirm here is that we won't have a partially online
>>> section, right? We can add a sub-section to the system, but it won't be
>>> managed by the buddy.
>>
>> Hi,
>>
>> there is still a BUG with sub-section hot-add (devmem), which broke
>> pfn_to_online_page() in corner cases (especially, see the description in
>> include/linux/mmzone.h). We can have a boot-memory section partially
>> populated and marked online. Then, we can hot-add devmem, marking the
>> remaining pfns valid - and as the section is marked online, also as online.
> 
> Oh, yes, I see this description.
> 
> This means we could have a section marked as online even though one of its
> sub-sections was never added.
> 
> The good news is that even if the sub-section is not added, its memmap is
> still populated for an early section, so the page returned from
> pfn_to_online_page() is a valid one.
> 
> But what would happen if the sub-section is removed after being added? Would
> section_deactivate() release the memmap backing these "struct page"s?

Just to clarify now that I get your point: no, it would not, as it is an
early section and the early section is not completely empty.

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 2/3] mm/memory_hotplug: document why shuffle_zone() is relevant
  2020-06-19 12:59 ` [PATCH v2 2/3] mm/memory_hotplug: document why shuffle_zone() is relevant David Hildenbrand
  2020-06-20  1:41   ` Dan Williams
@ 2020-06-22 15:32   ` Michal Hocko
  1 sibling, 0 replies; 30+ messages in thread
From: Michal Hocko @ 2020-06-22 15:32 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, Andrew Morton, Alexander Duyck, Dan Williams

On Fri 19-06-20 14:59:21, David Hildenbrand wrote:
> It's not completely obvious why we have to shuffle the complete zone, as
> some sort of shuffling is already performed when onlining pages via
> __free_one_page(), placing MAX_ORDER-1 pages either to the head or the tail
> of the freelist. Let's document why we have to shuffle the complete zone
> when exposing larger, contiguous physical memory areas to the buddy.
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Michal Hocko <mhocko@suse.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>

OK, this is an improvement. I would still prefer to have this claim
backed by some numbers, but it seems we are not going to get any, so we
can at least pretend to try as hard as possible, especially since this
is not a hot path.

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/memory_hotplug.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 9b34e03e730a4..a0d81d404823d 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -822,6 +822,14 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>  	zone->zone_pgdat->node_present_pages += onlined_pages;
>  	pgdat_resize_unlock(zone->zone_pgdat, &flags);
>  
> +	/*
> +	 * When exposing larger, physically contiguous memory areas to the
> +	 * buddy, shuffling in the buddy (when freeing onlined pages, putting
> +	 * them either to the head or the tail of the freelist) is only helpful
> +	 * for maintaining the shuffle, but not for creating the initial shuffle.
> +	 * Shuffle the whole zone to make sure the just onlined pages are
> +	 * properly distributed across the whole freelist.
> +	 */
>  	shuffle_zone(zone);
>  
>  	node_states_set_node(nid, &arg);
> -- 
> 2.26.2
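
For context, the in-buddy shuffling the new comment refers to looks roughly
like this in mm/page_alloc.c:__free_one_page() (a sketch from this era; the
exact helper names may differ between versions):

    bool to_tail;

    if (is_shuffle_order(order))
            /* Shuffle-order pages: randomly pick head or tail. */
            to_tail = shuffle_pick_tail();
    else
            to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);

    if (to_tail)
            add_to_free_list_tail(page, zone, order, migratetype);
    else
            add_to_free_list(page, zone, order, migratetype);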

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration
  2020-06-19 12:59 ` [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration David Hildenbrand
  2020-06-20  1:49   ` Dan Williams
@ 2020-06-22 15:37   ` Michal Hocko
  2020-06-23  1:22   ` Wei Yang
  2 siblings, 0 replies; 30+ messages in thread
From: Michal Hocko @ 2020-06-22 15:37 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, Andrew Morton, Johannes Weiner,
	Minchan Kim, Huang Ying, Wei Yang, Mel Gorman, Dan Williams

On Fri 19-06-20 14:59:22, David Hildenbrand wrote:
> Commit e900a918b098 ("mm: shuffle initial free memory to improve
> memory-side-cache utilization") promised "autodetection of a
> memory-side-cache (to be added in a follow-on patch)" over a year ago.
> 
> The original series included patches [1], however, they were dropped
> during review [2] to be followed-up later.
> 
> Due to lack of platforms that publish an HMAT, autodetection is currently
> not implemented. However, manual activation is actively used [3]. Let's
> simplify for now and re-add when really (ever?) needed.
> 
> [1] https://lkml.kernel.org/r/154510700291.1941238.817190985966612531.stgit@dwillia2-desk3.amr.corp.intel.com
> [2] https://lkml.kernel.org/r/154690326478.676627.103843791978176914.stgit@dwillia2-desk3.amr.corp.intel.com
> [3] https://lkml.kernel.org/r/CAPcyv4irwGUU2x+c6b4L=KbB1dnasNKaaZd6oSpYjL9kfsnROQ@mail.gmail.com
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Huang Ying <ying.huang@intel.com>
> Cc: Wei Yang <richard.weiyang@gmail.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/shuffle.c | 28 ++--------------------------
>  mm/shuffle.h | 17 -----------------
>  2 files changed, 2 insertions(+), 43 deletions(-)
> 
> diff --git a/mm/shuffle.c b/mm/shuffle.c
> index dd13ab851b3ee..9b5cd4b004b0f 100644
> --- a/mm/shuffle.c
> +++ b/mm/shuffle.c
> @@ -10,33 +10,11 @@
>  #include "shuffle.h"
>  
>  DEFINE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
> -static unsigned long shuffle_state __ro_after_init;
> -
> -/*
> - * Depending on the architecture, module parameter parsing may run
> - * before, or after the cache detection. SHUFFLE_FORCE_DISABLE prevents,
> - * or reverts the enabling of the shuffle implementation. SHUFFLE_ENABLE
> - * attempts to turn on the implementation, but aborts if it finds
> - * SHUFFLE_FORCE_DISABLE already set.
> - */
> -__meminit void page_alloc_shuffle(enum mm_shuffle_ctl ctl)
> -{
> -	if (ctl == SHUFFLE_FORCE_DISABLE)
> -		set_bit(SHUFFLE_FORCE_DISABLE, &shuffle_state);
> -
> -	if (test_bit(SHUFFLE_FORCE_DISABLE, &shuffle_state)) {
> -		if (test_and_clear_bit(SHUFFLE_ENABLE, &shuffle_state))
> -			static_branch_disable(&page_alloc_shuffle_key);
> -	} else if (ctl == SHUFFLE_ENABLE
> -			&& !test_and_set_bit(SHUFFLE_ENABLE, &shuffle_state))
> -		static_branch_enable(&page_alloc_shuffle_key);
> -}
>  
>  static bool shuffle_param;
>  static int shuffle_show(char *buffer, const struct kernel_param *kp)
>  {
> -	return sprintf(buffer, "%c\n", test_bit(SHUFFLE_ENABLE, &shuffle_state)
> -			? 'Y' : 'N');
> +	return sprintf(buffer, "%c\n", shuffle_param ? 'Y' : 'N');
>  }
>  
>  static __meminit int shuffle_store(const char *val,
> @@ -47,9 +25,7 @@ static __meminit int shuffle_store(const char *val,
>  	if (rc < 0)
>  		return rc;
>  	if (shuffle_param)
> -		page_alloc_shuffle(SHUFFLE_ENABLE);
> -	else
> -		page_alloc_shuffle(SHUFFLE_FORCE_DISABLE);
> +		static_branch_enable(&page_alloc_shuffle_key);
>  	return 0;
>  }
>  module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
> diff --git a/mm/shuffle.h b/mm/shuffle.h
> index 4d79f03b6658f..71b784f0b7c3e 100644
> --- a/mm/shuffle.h
> +++ b/mm/shuffle.h
> @@ -4,23 +4,10 @@
>  #define _MM_SHUFFLE_H
>  #include <linux/jump_label.h>
>  
> -/*
> - * SHUFFLE_ENABLE is called from the command line enabling path, or by
> - * platform-firmware enabling that indicates the presence of a
> - * direct-mapped memory-side-cache. SHUFFLE_FORCE_DISABLE is called from
> - * the command line path and overrides any previous or future
> - * SHUFFLE_ENABLE.
> - */
> -enum mm_shuffle_ctl {
> -	SHUFFLE_ENABLE,
> -	SHUFFLE_FORCE_DISABLE,
> -};
> -
>  #define SHUFFLE_ORDER (MAX_ORDER-1)
>  
>  #ifdef CONFIG_SHUFFLE_PAGE_ALLOCATOR
>  DECLARE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
> -extern void page_alloc_shuffle(enum mm_shuffle_ctl ctl);
>  extern void __shuffle_free_memory(pg_data_t *pgdat);
>  extern bool shuffle_pick_tail(void);
>  static inline void shuffle_free_memory(pg_data_t *pgdat)
> @@ -58,10 +45,6 @@ static inline void shuffle_zone(struct zone *z)
>  {
>  }
>  
> -static inline void page_alloc_shuffle(enum mm_shuffle_ctl ctl)
> -{
> -}
> -
>  static inline bool is_shuffle_order(int order)
>  {
>  	return false;
> -- 
> 2.26.2
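
Note that with dynamic reconfiguration removed, manual activation (as in [3])
keeps working the same way via the kernel command line, e.g.:

    page_alloc.shuffle=1

(see the page_alloc.shuffle entry in
Documentation/admin-guide/kernel-parameters.txt).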

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-22 14:06             ` David Hildenbrand
@ 2020-06-22 21:55               ` Wei Yang
  2020-06-23  7:39                 ` David Hildenbrand
  0 siblings, 1 reply; 30+ messages in thread
From: Wei Yang @ 2020-06-22 21:55 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Wei Yang, linux-kernel, linux-mm, Michal Hocko, stable,
	Andrew Morton, Johannes Weiner, Minchan Kim, Huang Ying,
	Wei Yang, Mel Gorman

On Mon, Jun 22, 2020 at 04:06:15PM +0200, David Hildenbrand wrote:
>On 22.06.20 15:10, Wei Yang wrote:
>> On Mon, Jun 22, 2020 at 11:51:34AM +0200, David Hildenbrand wrote:
>>> On 22.06.20 11:22, Wei Yang wrote:
>>>> On Mon, Jun 22, 2020 at 10:43:11AM +0200, David Hildenbrand wrote:
>>>>> On 22.06.20 10:26, Wei Yang wrote:
>>>>>> On Fri, Jun 19, 2020 at 02:59:20PM +0200, David Hildenbrand wrote:
>>>>>>> Especially with memory hotplug, we can have offline sections (with a
>>>>>>> garbage memmap) and overlapping zones. We have to make sure to only
>>>>>>> touch initialized memmaps (online sections managed by the buddy) and that
>>>>>>> the zone matches, to not move pages between zones.
>>>>>>>
>>>>>>> To test if this can actually happen, I added a simple
>>>>>>> 	BUG_ON(page_zone(page_i) != page_zone(page_j));
>>>>>>> right before the swap. When hotplugging a 256M DIMM to a 4G x86-64 VM and
>>>>>>> onlining the first memory block "online_movable" and the second memory
>>>>>>> block "online_kernel", it will trigger the BUG, as both zones (NORMAL
>>>>>>> and MOVABLE) overlap.
>>>>>>>
>>>>>>> This might result in all kinds of weird situations (e.g., double
>>>>>>> allocations, list corruptions, unmovable allocations ending up in the
>>>>>>> movable zone).
>>>>>>>
>>>>>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve memory-side-cache utilization")
>>>>>>> Acked-by: Michal Hocko <mhocko@suse.com>
>>>>>>> Cc: stable@vger.kernel.org # v5.2+
>>>>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>>>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>>>>>> Cc: Michal Hocko <mhocko@suse.com>
>>>>>>> Cc: Minchan Kim <minchan@kernel.org>
>>>>>>> Cc: Huang Ying <ying.huang@intel.com>
>>>>>>> Cc: Wei Yang <richard.weiyang@gmail.com>
>>>>>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>>>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>>>>> ---
>>>>>>> mm/shuffle.c | 18 +++++++++---------
>>>>>>> 1 file changed, 9 insertions(+), 9 deletions(-)
>>>>>>>
>>>>>>> diff --git a/mm/shuffle.c b/mm/shuffle.c
>>>>>>> index 44406d9977c77..dd13ab851b3ee 100644
>>>>>>> --- a/mm/shuffle.c
>>>>>>> +++ b/mm/shuffle.c
>>>>>>> @@ -58,25 +58,25 @@ module_param_call(shuffle, shuffle_store, shuffle_show, &shuffle_param, 0400);
>>>>>>>  * For two pages to be swapped in the shuffle, they must be free (on a
>>>>>>>  * 'free_area' lru), have the same order, and have the same migratetype.
>>>>>>>  */
>>>>>>> -static struct page * __meminit shuffle_valid_page(unsigned long pfn, int order)
>>>>>>> +static struct page * __meminit shuffle_valid_page(struct zone *zone,
>>>>>>> +						  unsigned long pfn, int order)
>>>>>>> {
>>>>>>> -	struct page *page;
>>>>>>> +	struct page *page = pfn_to_online_page(pfn);
>>>>>>
>>>>>> Hi, David and Dan,
>>>>>>
>>>>>> One thing I want to confirm here is that we won't have a partially online
>>>>>> section, right? We can add a sub-section to the system, but it won't be
>>>>>> managed by the buddy.
>>>>>
>>>>> Hi,
>>>>>
>>>>> there is still a BUG with sub-section hot-add (devmem), which broke
>>>>> pfn_to_online_page() in corner cases (especially, see the description in
>>>>> include/linux/mmzone.h). We can have a boot-memory section partially
>>>>> populated and marked online. Then, we can hot-add devmem, marking the
>>>>> remaining pfns valid - and as the section is marked online, also as online.
>>>>
>>>> Oh, yes, I see this description.
>>>>
>>>> This means we could have a section marked as online even though one of its
>>>> sub-sections was never added.
>>>>
>>>> The good news is that even if the sub-section is not added, its memmap is
>>>> still populated for an early section, so the page returned from
>>>> pfn_to_online_page() is a valid one.
>>>>
>>>> But what would happen if the sub-section is removed after being added? Would
>>>> section_deactivate() release the memmap backing these "struct page"s?
>>>
>>> If devmem is removed, the memmap will be freed and the sub-sections are
>>> marked as non-present. So this works as expected.
>>>
>> 
>> Sorry, I may not have caught your point. If my understanding is correct, the
>> above behavior happens in section_deactivate().
>> 
>> Let me draw my understanding of section_deactivate():
>> 
>>     section_deactivate(pfn, nr_pages)
>>         clear_subsection_map(pfn, nr_pages)
>>         depopulate_section_memmap(pfn, nr_pages)
>> 
>> Since we are just removing a sub-section, I skipped some unrelated code. These
>> two functions would:
>> 
>>   * clear the bitmap in ms->usage->subsection_map
>>   * free the memmap for the sub-section
>> 
>> But since the section is not empty, ms->section_mem_map is not set to NULL.
>
>Let me clarify: sub-section hot-remove works differently when overlapping
>with (online) boot memory within a section.
>
>Early sections (IOW, boot memory) are never partially removed. See

Thanks for your time and patience. 

I looked into the comment of section_deactivate():

 * 1. deactivation of a partial hot-added section (only possible in
 *    the SPARSEMEM_VMEMMAP=y case).
 *      a) section was present at memory init.
 *      b) section was hot-added post memory init.

Case a) seems to do a partial remove for an early section?

>mm/sparse.c:section_deactivate(). We only free an early memmap when the
>section is completely empty. Also see how

Hmm.. I thought this was the behavior for early sections, but it looks like
the current code doesn't work that way:

       if (section_is_early && memmap)
               free_map_bootmem(memmap);
       else
	       depopulate_section_memmap(pfn, nr_pages, altmap);

section_is_early is always "true" for an early section, while memmap is
non-NULL only when the sub-section map is empty.

If my understanding is correct, when we remove a sub-section in an early
section, the code would call depopulate_section_memmap(), which in turn
frees the related memmap. With the memmap removed, the return value from
pfn_to_online_page() is no longer a valid one.

Maybe we want to write the code like this:

       if (section_is_early) {
               if (memmap)
                       free_map_bootmem(memmap);
       } else {
               depopulate_section_memmap(pfn, nr_pages, altmap);
       }

This makes sure we free the memmap for an early section only when the whole
section is removed.

>include/linux/mmzone.h:pfn_valid() handles early sections.
>
>So when we have a partially present section with boot memory, we
>a) marked the whole section present and online (there is only a single
>   bit)
>b) allocated the memmap for the whole section
>c) only exposed the relevant pages to the buddy. The memmap of the non-
>   present parts of the section was initialized and is reserved.
>
>pfn_valid() will return true for all non-present pfns, because there is
>a memmap. pfn_to_online_page() will return a page for all pfns, because
>we only have a single bit for the whole section. This has been the case
>before sub-section hotplug and is still the case. It simply looks like
>just another memory hole for which we have a memmap.
>
>Now, with devmem it is possible to suddenly change these sub-section
>holes (memmaps) to become ZONE_DEVICE memory. pfn_to_online_page() would
>have to detect that and return NULL instead. Possible fixes were already
>discussed (e.g., sub-section online map instead of a single bit).
>
>Again, the zone check saves us from the worst, just as in the case of
>all other pfn walkers that use (as documented) pfn_to_online_page(). It
>still needs a fix as discussed, but it seems to work reasonably well like
>that for now.
>
>-- 
>Thanks,
>
>David / dhildenb

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration
  2020-06-19 12:59 ` [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration David Hildenbrand
  2020-06-20  1:49   ` Dan Williams
  2020-06-22 15:37   ` Michal Hocko
@ 2020-06-23  1:22   ` Wei Yang
  2 siblings, 0 replies; 30+ messages in thread
From: Wei Yang @ 2020-06-23  1:22 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, Andrew Morton, Johannes Weiner,
	Michal Hocko, Minchan Kim, Huang Ying, Wei Yang, Mel Gorman,
	Dan Williams

On Fri, Jun 19, 2020 at 02:59:22PM +0200, David Hildenbrand wrote:
>Commit e900a918b098 ("mm: shuffle initial free memory to improve
>memory-side-cache utilization") promised "autodetection of a
>memory-side-cache (to be added in a follow-on patch)" over a year ago.
>
>The original series included patches [1], however, they were dropped
>during review [2] to be followed-up later.
>
>Due to lack of platforms that publish an HMAT, autodetection is currently
>not implemented. However, manual activation is actively used [3]. Let's
>simplify for now and re-add when really (ever?) needed.
>
>[1] https://lkml.kernel.org/r/154510700291.1941238.817190985966612531.stgit@dwillia2-desk3.amr.corp.intel.com
>[2] https://lkml.kernel.org/r/154690326478.676627.103843791978176914.stgit@dwillia2-desk3.amr.corp.intel.com
>[3] https://lkml.kernel.org/r/CAPcyv4irwGUU2x+c6b4L=KbB1dnasNKaaZd6oSpYjL9kfsnROQ@mail.gmail.com
>
>Cc: Andrew Morton <akpm@linux-foundation.org>
>Cc: Johannes Weiner <hannes@cmpxchg.org>
>Cc: Michal Hocko <mhocko@suse.com>
>Cc: Minchan Kim <minchan@kernel.org>
>Cc: Huang Ying <ying.huang@intel.com>
>Cc: Wei Yang <richard.weiyang@gmail.com>
>Cc: Mel Gorman <mgorman@techsingularity.net>
>Cc: Dan Williams <dan.j.williams@intel.com>
>Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-22 21:55               ` Wei Yang
@ 2020-06-23  7:39                 ` David Hildenbrand
  2020-06-23  7:55                   ` David Hildenbrand
  0 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2020-06-23  7:39 UTC (permalink / raw)
  To: Wei Yang
  Cc: Wei Yang, linux-kernel, linux-mm, Michal Hocko, stable,
	Andrew Morton, Johannes Weiner, Minchan Kim, Huang Ying,
	Mel Gorman, Dan Williams

> Hmm.. I thought this was the behavior for early sections, but it looks like
> the current code doesn't work that way:
> 
>        if (section_is_early && memmap)
>                free_map_bootmem(memmap);
>        else
> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
> 
> section_is_early is always "true" for an early section, while memmap is
> non-NULL only when the sub-section map is empty.
> 
> If my understanding is correct, when we remove a sub-section in an early
> section, the code would call depopulate_section_memmap(), which in turn
> frees the related memmap. With the memmap removed, the return value from
> pfn_to_online_page() is no longer a valid one.

I think you're right, and pfn_valid() would also return true, as it is
an early section. This looks broken.
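
For reference, the SPARSEMEM pfn_valid() in question looks roughly like this
(a sketch, simplified from include/linux/mmzone.h of this era):

    static inline int pfn_valid(unsigned long pfn)
    {
            struct mem_section *ms;

            if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                    return 0;
            ms = __nr_to_section(pfn_to_section_nr(pfn));
            if (!valid_section(ms))
                    return 0;
            /*
             * Early sections report the whole section-sized span as valid,
             * regardless of the sub-section map - which is why freeing part
             * of an early memmap is a problem.
             */
            return early_section(ms) || pfn_section_valid(ms, pfn);
    }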

> 
> Maybe we want to write the code like this:
> 
>        if (section_is_early) {
>                if (memmap)
>                        free_map_bootmem(memmap);
>        } else {
>                depopulate_section_memmap(pfn, nr_pages, altmap);
>        }
> 

I guess that should be the way to go

@Dan, I think what Wei proposes here is correct, right? Or how does it
work in the VMEMMAP case with early sections?

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-23  7:39                 ` David Hildenbrand
@ 2020-06-23  7:55                   ` David Hildenbrand
  2020-06-23  9:30                     ` Wei Yang
  0 siblings, 1 reply; 30+ messages in thread
From: David Hildenbrand @ 2020-06-23  7:55 UTC (permalink / raw)
  To: Wei Yang
  Cc: Wei Yang, linux-kernel, linux-mm, Michal Hocko, stable,
	Andrew Morton, Johannes Weiner, Minchan Kim, Huang Ying,
	Mel Gorman, Dan Williams

On 23.06.20 09:39, David Hildenbrand wrote:
>> Hmm.. I thought this was the behavior for early sections, but it looks like
>> the current code doesn't work that way:
>>
>>        if (section_is_early && memmap)
>>                free_map_bootmem(memmap);
>>        else
>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
>>
>> section_is_early is always "true" for an early section, while memmap is
>> non-NULL only when the sub-section map is empty.
>>
>> If my understanding is correct, when we remove a sub-section in an early
>> section, the code would call depopulate_section_memmap(), which in turn
>> frees the related memmap. With the memmap removed, the return value from
>> pfn_to_online_page() is no longer a valid one.
> 
> I think you're right, and pfn_valid() would also return true, as it is
> an early section. This looks broken.
> 
>>
>> Maybe we want to write the code like this:
>>
>>        if (section_is_early) {
>>                if (memmap)
>>                        free_map_bootmem(memmap);
>>        } else {
>>                depopulate_section_memmap(pfn, nr_pages, altmap);
>>        }
>>
> 
> I guess that should be the way to go
> 
> @Dan, I think what Wei proposes here is correct, right? Or how does it
> work in the VMEMMAP case with early sections?
> 

Especially, if you were to re-hot-add, section_activate() would assume
there is a memmap; it must not be removed.

@Wei, can you send a patch?

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-23  7:55                   ` David Hildenbrand
@ 2020-06-23  9:30                     ` Wei Yang
  2020-07-24  3:08                       ` Andrew Morton
  0 siblings, 1 reply; 30+ messages in thread
From: Wei Yang @ 2020-06-23  9:30 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Wei Yang, Wei Yang, linux-kernel, linux-mm, Michal Hocko, stable,
	Andrew Morton, Johannes Weiner, Minchan Kim, Huang Ying,
	Mel Gorman, Dan Williams

On Tue, Jun 23, 2020 at 09:55:43AM +0200, David Hildenbrand wrote:
>On 23.06.20 09:39, David Hildenbrand wrote:
>>> Hmm.. I thought this was the behavior for early sections, but it looks like
>>> the current code doesn't work that way:
>>>
>>>        if (section_is_early && memmap)
>>>                free_map_bootmem(memmap);
>>>        else
>>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
>>>
>>> section_is_early is always "true" for an early section, while memmap is
>>> non-NULL only when the sub-section map is empty.
>>>
>>> If my understanding is correct, when we remove a sub-section in an early
>>> section, the code would call depopulate_section_memmap(), which in turn
>>> frees the related memmap. With the memmap removed, the return value from
>>> pfn_to_online_page() is no longer a valid one.
>> 
>> I think you're right, and pfn_valid() would also return true, as it is
>> an early section. This looks broken.
>> 
>>>
>>> Maybe we want to write the code like this:
>>>
>>>        if (section_is_early) {
>>>                if (memmap)
>>>                        free_map_bootmem(memmap);
>>>        } else {
>>>                depopulate_section_memmap(pfn, nr_pages, altmap);
>>>        }
>>>
>> 
>> I guess that should be the way to go
>> 
>> @Dan, I think what Wei proposes here is correct, right? Or how does it
>> work in the VMEMMAP case with early sections?
>> 
>
>Especially, if you were to re-hot-add, section_activate() would assume
>there is a memmap; it must not be removed.
>

You are right here. I didn't notice it.

>@Wei, can you send a patch?
>

Sure, let me prepare for it.

>-- 
>Thanks,
>
>David / dhildenb

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 2/3] mm/memory_hotplug: document why shuffle_zone() is relevant
  2020-06-22  7:27     ` David Hildenbrand
@ 2020-06-23 21:15       ` Dan Williams
  2020-06-24  9:31         ` David Hildenbrand
  0 siblings, 1 reply; 30+ messages in thread
From: Dan Williams @ 2020-06-23 21:15 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Linux Kernel Mailing List, Linux MM, Andrew Morton,
	Alexander Duyck, Michal Hocko

On Mon, Jun 22, 2020 at 12:28 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 20.06.20 03:41, Dan Williams wrote:
> > On Fri, Jun 19, 2020 at 6:00 AM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> It's not completely obvious why we have to shuffle the complete zone, as
> >> some sort of shuffling is already performed when onlining pages via
> >> __free_one_page(), placing MAX_ORDER-1 pages either to the head or the tail
> >> of the freelist. Let's document why we have to shuffle the complete zone
> >> when exposing larger, contiguous physical memory areas to the buddy.
> >>
> >
> > How about?
> >
> > Fixes: e900a918b098 ("mm: shuffle initial free memory to improve
> > memory-side-cache utilization")
> >
> > ...just like Patch1 since that original commit was missing the proper
> > commentary in the code?
>
> Hmm, mixed feelings. I (working for a distributor :) ) prefer fixes tags
> for actual BUGs, as described in
>
> Documentation/process/submitting-patches.rst: "If your patch fixes a bug
> in a specific commit, e.g. you found an issue using ``git bisect``,
> please use the 'Fixes:' tag with the first 12 characters" ...
>
> So unless there are strong feelings, I'll not add a fixes tag (although
> I agree, that it should have been contained in the original commit).

It doesn't need to be "Fixes", but how about at least mentioning the
original commit as a breadcrumb so that some future "git blame"
archaeology effort is streamlined.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration
  2020-06-22  7:33     ` David Hildenbrand
  2020-06-22  8:37       ` Wei Yang
@ 2020-06-23 22:18       ` Dan Williams
  1 sibling, 0 replies; 30+ messages in thread
From: Dan Williams @ 2020-06-23 22:18 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Linux Kernel Mailing List, Linux MM, Andrew Morton,
	Johannes Weiner, Michal Hocko, Minchan Kim, Huang Ying, Wei Yang,
	Mel Gorman

On Mon, Jun 22, 2020 at 12:33 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 20.06.20 03:49, Dan Williams wrote:
> > On Fri, Jun 19, 2020 at 5:59 AM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> Commit e900a918b098 ("mm: shuffle initial free memory to improve
> >> memory-side-cache utilization") promised "autodetection of a
> >> memory-side-cache (to be added in a follow-on patch)" over a year ago.
> >>
> >> The original series included patches [1], however, they were dropped
> >> during review [2] to be followed-up later.
> >>
> >> Due to lack of platforms that publish an HMAT, autodetection is currently
> >> not implemented. However, manual activation is actively used [3]. Let's
> >> simplify for now and re-add when really (ever?) needed.
> >>
> >> [1] https://lkml.kernel.org/r/154510700291.1941238.817190985966612531.stgit@dwillia2-desk3.amr.corp.intel.com
> >> [2] https://lkml.kernel.org/r/154690326478.676627.103843791978176914.stgit@dwillia2-desk3.amr.corp.intel.com
> >> [3] https://lkml.kernel.org/r/CAPcyv4irwGUU2x+c6b4L=KbB1dnasNKaaZd6oSpYjL9kfsnROQ@mail.gmail.com
> >>
> >> Cc: Andrew Morton <akpm@linux-foundation.org>
> >> Cc: Johannes Weiner <hannes@cmpxchg.org>
> >> Cc: Michal Hocko <mhocko@suse.com>
> >> Cc: Minchan Kim <minchan@kernel.org>
> >> Cc: Huang Ying <ying.huang@intel.com>
> >> Cc: Wei Yang <richard.weiyang@gmail.com>
> >> Cc: Mel Gorman <mgorman@techsingularity.net>
> >> Cc: Dan Williams <dan.j.williams@intel.com>
> >> Signed-off-by: David Hildenbrand <david@redhat.com>
> >> ---
> >>  mm/shuffle.c | 28 ++--------------------------
> >>  mm/shuffle.h | 17 -----------------
> >>  2 files changed, 2 insertions(+), 43 deletions(-)
> >>
> >> diff --git a/mm/shuffle.c b/mm/shuffle.c
> >> index dd13ab851b3ee..9b5cd4b004b0f 100644
> >> --- a/mm/shuffle.c
> >> +++ b/mm/shuffle.c
> >> @@ -10,33 +10,11 @@
> >>  #include "shuffle.h"
> >>
> >>  DEFINE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
> >> -static unsigned long shuffle_state __ro_after_init;
> >> -
> >> -/*
> >> - * Depending on the architecture, module parameter parsing may run
> >> - * before, or after the cache detection. SHUFFLE_FORCE_DISABLE prevents,
> >> - * or reverts the enabling of the shuffle implementation. SHUFFLE_ENABLE
> >> - * attempts to turn on the implementation, but aborts if it finds
> >> - * SHUFFLE_FORCE_DISABLE already set.
> >> - */
> >> -__meminit void page_alloc_shuffle(enum mm_shuffle_ctl ctl)
> >> -{
> >> -       if (ctl == SHUFFLE_FORCE_DISABLE)
> >> -               set_bit(SHUFFLE_FORCE_DISABLE, &shuffle_state);
> >> -
> >> -       if (test_bit(SHUFFLE_FORCE_DISABLE, &shuffle_state)) {
> >> -               if (test_and_clear_bit(SHUFFLE_ENABLE, &shuffle_state))
> >> -                       static_branch_disable(&page_alloc_shuffle_key);
> >> -       } else if (ctl == SHUFFLE_ENABLE
> >> -                       && !test_and_set_bit(SHUFFLE_ENABLE, &shuffle_state))
> >> -               static_branch_enable(&page_alloc_shuffle_key);
> >> -}
> >>
> >>  static bool shuffle_param;
> >>  static int shuffle_show(char *buffer, const struct kernel_param *kp)
> >>  {
> >> -       return sprintf(buffer, "%c\n", test_bit(SHUFFLE_ENABLE, &shuffle_state)
> >> -                       ? 'Y' : 'N');
> >> +       return sprintf(buffer, "%c\n", shuffle_param ? 'Y' : 'N');
> >>  }
> >>
> >>  static __meminit int shuffle_store(const char *val,
> >> @@ -47,9 +25,7 @@ static __meminit int shuffle_store(const char *val,
> >>         if (rc < 0)
> >>                 return rc;
> >>         if (shuffle_param)
> >> -               page_alloc_shuffle(SHUFFLE_ENABLE);
> >> -       else
> >> -               page_alloc_shuffle(SHUFFLE_FORCE_DISABLE);
> >> +               static_branch_enable(&page_alloc_shuffle_key);
> >>         return 0;
> >>  }
> >
> > Let's do proper input validation here and require 1 / 'true' to enable
> > shuffling and not also allow 0 to be an 'enable' value.
>
> I don't think that's currently done?
>
> param_set_bool(val, kp) will only default val==NULL to 'true'. Passing 0
> will properly be handled by strtobool(). Or am I missing something?
>

No, I misread the patch and thought the conditional was being removed.

All good now.
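
For the record, the accepted inputs (a sketch of the strtobool()/kstrtobool()
semantics; lib/kstrtox.c is authoritative):

    /*
     *   "1", "y", "Y", "on"   -> 0, result = true
     *   "0", "n", "N", "off"  -> 0, result = false
     *   anything else         -> -EINVAL, result untouched
     */
    bool v;
    int rc = strtobool("1", &v);    /* rc == 0, v == true */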


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 2/3] mm/memory_hotplug: document why shuffle_zone() is relevant
  2020-06-23 21:15       ` Dan Williams
@ 2020-06-24  9:31         ` David Hildenbrand
  0 siblings, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2020-06-24  9:31 UTC (permalink / raw)
  To: Dan Williams
  Cc: Linux Kernel Mailing List, Linux MM, Andrew Morton,
	Alexander Duyck, Michal Hocko

On 23.06.20 23:15, Dan Williams wrote:
> On Mon, Jun 22, 2020 at 12:28 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 20.06.20 03:41, Dan Williams wrote:
>>> On Fri, Jun 19, 2020 at 6:00 AM David Hildenbrand <david@redhat.com> wrote:
>>>>
>>>> It's not completely obvious why we have to shuffle the complete zone, as
>>>> some sort of shuffling is already performed when onlining pages via
>>>> __free_one_page(), placing MAX_ORDER-1 pages either to the head or the tail
>>>> of the freelist. Let's document why we have to shuffle the complete zone
>>>> when exposing larger, contiguous physical memory areas to the buddy.
>>>>
>>>
>>> How about?
>>>
>>> Fixes: e900a918b098 ("mm: shuffle initial free memory to improve
>>> memory-side-cache utilization")
>>>
>>> ...just like Patch1 since that original commit was missing the proper
>>> commentary in the code?
>>
>> Hmm, mixed feelings. I (working for a distributor :) ) prefer fixes tags
>> for actual BUGs, as described in
>>
>> Documentation/process/submitting-patches.rst: "If your patch fixes a bug
>> in a specific commit, e.g. you found an issue using ``git bisect``,
>> please use the 'Fixes:' tag with the first 12 characters" ...
>>
>> So unless there are strong feelings, I'll not add a fixes tag (although
>> I agree, that it should have been contained in the original commit).
> 
> It doesn't need to be "Fixes", but how about at least mentioning the
> original commit as a breadcrumb so that some future "git blame"
> archaeology effort is streamlined.
> 

Makes sense, I'll mention it as

It's not completely obvious why we have to shuffle the complete zone (
introduced in commit e900a918b098 ("mm: shuffle initial free memory to
...

thanks!

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-06-23  9:30                     ` Wei Yang
@ 2020-07-24  3:08                       ` Andrew Morton
  2020-07-24  5:45                         ` Wei Yang
  2020-07-24  8:20                         ` David Hildenbrand
  0 siblings, 2 replies; 30+ messages in thread
From: Andrew Morton @ 2020-07-24  3:08 UTC (permalink / raw)
  To: Wei Yang
  Cc: David Hildenbrand, Wei Yang, linux-kernel, linux-mm,
	Michal Hocko, stable, Johannes Weiner, Minchan Kim, Huang Ying,
	Mel Gorman, Dan Williams

On Tue, 23 Jun 2020 17:30:18 +0800 Wei Yang <richard.weiyang@linux.alibaba.com> wrote:

> On Tue, Jun 23, 2020 at 09:55:43AM +0200, David Hildenbrand wrote:
> >On 23.06.20 09:39, David Hildenbrand wrote:
> >>> Hmm.. I thought this was the behavior for early sections, but it looks like
> >>> the current code doesn't work that way:
> >>>
> >>>        if (section_is_early && memmap)
> >>>                free_map_bootmem(memmap);
> >>>        else
> >>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
> >>>
> >>> section_is_early is always "true" for an early section, while memmap is
> >>> non-NULL only when the sub-section map is empty.
> >>>
> >>> If my understanding is correct, when we remove a sub-section in an early
> >>> section, the code would call depopulate_section_memmap(), which in turn
> >>> frees the related memmap. With the memmap removed, the return value from
> >>> pfn_to_online_page() is no longer a valid one.
> >> 
> >> I think you're right, and pfn_valid() would also return true, as it is
> >> an early section. This looks broken.
> >> 
> >>>
> >>> Maybe we want to write the code like this:
> >>>
> >>>        if (section_is_early) {
> >>>                if (memmap)
> >>>                        free_map_bootmem(memmap);
> >>>        } else {
> >>>                depopulate_section_memmap(pfn, nr_pages, altmap);
> >>>        }
> >>>
> >> 
> >> I guess that should be the way to go
> >> 
> >> @Dan, I think what Wei proposes here is correct, right? Or how does it
> >> work in the VMEMMAP case with early sections?
> >> 
> >
> >Especially, if you were to re-hot-add, section_activate() would assume
> >there is a memmap; it must not be removed.
> >
> 
> You are right here. I didn't notice it.
> 
> >@Wei, can you send a patch?
> >
> 
> Sure, let me prepare for it.

Still awaiting this, and the v3 patch was identical to this v2 patch.

It's tagged for -stable, so there's some urgency.  Should we just go
ahead with the decently-tested v2?



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-07-24  3:08                       ` Andrew Morton
@ 2020-07-24  5:45                         ` Wei Yang
  2020-07-24  8:20                         ` David Hildenbrand
  1 sibling, 0 replies; 30+ messages in thread
From: Wei Yang @ 2020-07-24  5:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Wei Yang, David Hildenbrand, Wei Yang, linux-kernel, linux-mm,
	Michal Hocko, stable, Johannes Weiner, Minchan Kim, Huang Ying,
	Mel Gorman, Dan Williams

On Thu, Jul 23, 2020 at 08:08:46PM -0700, Andrew Morton wrote:
>On Tue, 23 Jun 2020 17:30:18 +0800 Wei Yang <richard.weiyang@linux.alibaba.com> wrote:
>
>> On Tue, Jun 23, 2020 at 09:55:43AM +0200, David Hildenbrand wrote:
>> >On 23.06.20 09:39, David Hildenbrand wrote:
>> >>> Hmm.. I thought this was the behavior for early sections, but it looks like
>> >>> the current code doesn't work that way:
>> >>>
>> >>>        if (section_is_early && memmap)
>> >>>                free_map_bootmem(memmap);
>> >>>        else
>> >>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
>> >>>
>> >>> section_is_early is always "true" for an early section, while memmap is
>> >>> non-NULL only when the sub-section map is empty.
>> >>>
>> >>> If my understanding is correct, when we remove a sub-section in an early
>> >>> section, the code would call depopulate_section_memmap(), which in turn
>> >>> frees the related memmap. With the memmap removed, the return value from
>> >>> pfn_to_online_page() is no longer a valid one.
>> >> 
>> >> I think you're right, and pfn_valid() would also return true, as it is
>> >> an early section. This looks broken.
>> >> 
>> >>>
>> >>> Maybe we want to write the code like this:
>> >>>
>> >>>        if (section_is_early) {
>> >>>                if (memmap)
>> >>>                        free_map_bootmem(memmap);
>> >>>        } else {
>> >>>                depopulate_section_memmap(pfn, nr_pages, altmap);
>> >>>        }
>> >>>
>> >> 
>> >> I guess that should be the way to go
>> >> 
>> >> @Dan, I think what Wei proposes here is correct, right? Or how does it
>> >> work in the VMEMMAP case with early sections?
>> >> 
>> >
>> >Especially, if you were to re-hot-add, section_activate() would assume
>> >there is a memmap; it must not be removed.
>> >
>> 
>> You are right here. I didn't notice it.
>> 
>> >@Wei, can you send a patch?
>> >
>> 
>> Sure, let me prepare for it.
>
>Still awaiting this, and the v3 patch was identical to this v2 patch.
>
>It's tagged for -stable, so there's some urgency.  Should we just go
>ahead with the decently-tested v2?

This message is for me, right?

I thought the fix patch was already merged; the patch link may be
https://lkml.org/lkml/2020/6/23/380.

If I missed something, just let me know.



-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps
  2020-07-24  3:08                       ` Andrew Morton
  2020-07-24  5:45                         ` Wei Yang
@ 2020-07-24  8:20                         ` David Hildenbrand
  1 sibling, 0 replies; 30+ messages in thread
From: David Hildenbrand @ 2020-07-24  8:20 UTC (permalink / raw)
  To: Andrew Morton, Wei Yang
  Cc: Wei Yang, linux-kernel, linux-mm, Michal Hocko, stable,
	Johannes Weiner, Minchan Kim, Huang Ying, Mel Gorman,
	Dan Williams

On 24.07.20 05:08, Andrew Morton wrote:
> On Tue, 23 Jun 2020 17:30:18 +0800 Wei Yang <richard.weiyang@linux.alibaba.com> wrote:
> 
>> On Tue, Jun 23, 2020 at 09:55:43AM +0200, David Hildenbrand wrote:
>>> On 23.06.20 09:39, David Hildenbrand wrote:
>>>>> Hmm.. I thought this was the behavior for early sections, but it looks like
>>>>> the current code doesn't work that way:
>>>>>
>>>>>        if (section_is_early && memmap)
>>>>>                free_map_bootmem(memmap);
>>>>>        else
>>>>> 	       depopulate_section_memmap(pfn, nr_pages, altmap);
>>>>>
>>>>> section_is_early is always "true" for an early section, while memmap is
>>>>> non-NULL only when the sub-section map is empty.
>>>>>
>>>>> If my understanding is correct, when we remove a sub-section in an early
>>>>> section, the code would call depopulate_section_memmap(), which in turn
>>>>> frees the related memmap. With the memmap removed, the return value from
>>>>> pfn_to_online_page() is no longer a valid one.
>>>>
>>>> I think you're right, and pfn_valid() would also return true, as it is
>>>> an early section. This looks broken.
>>>>
>>>>>
>>>>> Maybe we want to write the code like this:
>>>>>
>>>>>        if (section_is_early) {
>>>>>                if (memmap)
>>>>>                        free_map_bootmem(memmap);
>>>>>        } else {
>>>>>                depopulate_section_memmap(pfn, nr_pages, altmap);
>>>>>        }
>>>>>
>>>>
>>>> I guess that should be the way to go
>>>>
>>>> @Dan, I think what Wei proposes here is correct, right? Or how does it
>>>> work in the VMEMMAP case with early sections?
>>>>
>>>
>>> Especially, if you were to re-hot-add, section_activate() would assume
>>> there is a memmap; it must not be removed.
>>>
>>
>> You are right here. I didn't notice it.
>>
>>> @Wei, can you send a patch?
>>>
>>
>> Sure, let me prepare for it.
> 
> Still awaiting this, and the v3 patch was identical to this v2 patch.
> 
> It's tagged for -stable, so there's some urgency.  Should we just go
> ahead with the decently-tested v2?

This patch (mm/shuffle: don't move pages between zones and don't read
garbage memmaps) is good enough for upstream. While the issue reported
by Wei was valid (and needs to be fixed), the user in this patch is just
one of many affected users. Nothing special.

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2020-07-24  8:20 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-19 12:59 [PATCH v2 0/3] mm/shuffle: fix and cleanups David Hildenbrand
2020-06-19 12:59 ` [PATCH v2 1/3] mm/shuffle: don't move pages between zones and don't read garbage memmaps David Hildenbrand
2020-06-20  1:37   ` Williams, Dan J
2020-06-22  8:26   ` Wei Yang
2020-06-22  8:43     ` David Hildenbrand
2020-06-22  9:22       ` Wei Yang
2020-06-22  9:51         ` David Hildenbrand
2020-06-22 13:10           ` Wei Yang
2020-06-22 14:06             ` David Hildenbrand
2020-06-22 21:55               ` Wei Yang
2020-06-23  7:39                 ` David Hildenbrand
2020-06-23  7:55                   ` David Hildenbrand
2020-06-23  9:30                     ` Wei Yang
2020-07-24  3:08                       ` Andrew Morton
2020-07-24  5:45                         ` Wei Yang
2020-07-24  8:20                         ` David Hildenbrand
2020-06-22 14:11         ` David Hildenbrand
2020-06-19 12:59 ` [PATCH v2 2/3] mm/memory_hotplug: document why shuffle_zone() is relevant David Hildenbrand
2020-06-20  1:41   ` Dan Williams
2020-06-22  7:27     ` David Hildenbrand
2020-06-23 21:15       ` Dan Williams
2020-06-24  9:31         ` David Hildenbrand
2020-06-22 15:32   ` Michal Hocko
2020-06-19 12:59 ` [PATCH v2 3/3] mm/shuffle: remove dynamic reconfiguration David Hildenbrand
2020-06-20  1:49   ` Dan Williams
2020-06-22  7:33     ` David Hildenbrand
2020-06-22  8:37       ` Wei Yang
2020-06-23 22:18       ` Dan Williams
2020-06-22 15:37   ` Michal Hocko
2020-06-23  1:22   ` Wei Yang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).