linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 0/3] powerpc/mm/hash: Time improvements for memory hot(un)plug
@ 2021-03-12  7:29 Leonardo Bras
  2021-03-12  7:29 ` [PATCH 1/3] powerpc/mm/hash: Avoid resizing-down HPT on first memory hotplug Leonardo Bras
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Leonardo Bras @ 2021-03-12  7:29 UTC (permalink / raw)
  To: Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
	Andrew Morton, Leonardo Bras, Sandipan Das, Aneesh Kumar K.V,
	Logan Gunthorpe, Mike Rapoport, Bharata B Rao, Dan Williams,
	Nicholas Piggin, Nathan Lynch, David Hildenbrand, Laurent Dufour,
	Scott Cheloha, David Gibson
  Cc: linuxppc-dev, linux-kernel

This patchset intends to reduce the time needed to process memory
hotplug/hotunplug in hash guests.

The first patch makes sure guests with a page size larger than 4k don't
need to go through HPT resize-downs after memory hotplug.

The second and third patches make hotplug/hotunplug perform a single
HPT resize per operation, instead of one for each shift change, or one
for each LMB in case of a resize-down error.

Why isn't the same mechanism used for both memory hotplug and
hotunplug? Because they have different requirements:

Memory hotplug usually causes HPT resize-ups, which are fine to perform
at the start of hotplug, but resize-ups should never be disabled, as
other mechanisms may try to increase memory and hit issues with an HPT
that is too small.

Memory hotunplug causes HPT resize-downs, which can be disabled (the
HPT just remains larger for a while), but which need to happen at the
end of a hotunplug operation. To batch them, we must disable
resize-downs and perform a single one at the end, as sketched below.
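
In rough pseudo-code (illustrative only; the actual function names are
introduced in the patches below), the intended shape is:

	/* hotplug: a single resize-up before adding any LMBs */
	memory_batch_expand_prepare(final_mem_size);
	for_each_lmb_to_add(lmb)
		dlpar_add_lmb(lmb);	/* no per-LMB resize-up */

	/* hotunplug: suppress resize-downs, one resize at the end */
	memory_batch_shrink_begin();
	for_each_lmb_to_remove(lmb)
		dlpar_remove_lmb(lmb);	/* resize-down skipped here */
	memory_batch_shrink_end();	/* single final resize-down */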

Tests done with this patchset in the same machine / guest config:
Starting memory: 129GB, DIMM: 256GB
Before patchset: hotplug = 710s, hotunplug = 621s.
After patchset: hotplug  = 21s, hotunplug = 100s.

Any feedback will be appreciated!
I believe the new code may not be placed in the most appropriate files,
so please give some feedback on that as well.

Best regards,

Leonardo Bras (3):
  powerpc/mm/hash: Avoid resizing-down HPT on first memory hotplug
  powerpc/mm/hash: Avoid multiple HPT resize-ups on memory hotplug
  powerpc/mm/hash: Avoid multiple HPT resize-downs on memory hotunplug

 arch/powerpc/include/asm/book3s/64/hash.h     |  4 +
 arch/powerpc/include/asm/sparsemem.h          |  4 +
 arch/powerpc/mm/book3s64/hash_utils.c         | 78 +++++++++++++++----
 arch/powerpc/mm/book3s64/pgtable.c            | 18 +++++
 .../platforms/pseries/hotplug-memory.c        | 22 ++++++
 5 files changed, 111 insertions(+), 15 deletions(-)

-- 
2.29.2


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/3] powerpc/mm/hash: Avoid resizing-down HPT on first memory hotplug
  2021-03-12  7:29 [PATCH 0/3] powerpc/mm/hash: Time improvements for memory hot(un)plug Leonardo Bras
@ 2021-03-12  7:29 ` Leonardo Bras
  2021-03-22  6:49   ` David Gibson
  2021-03-12  7:29 ` [PATCH 2/3] powerpc/mm/hash: Avoid multiple HPT resize-ups on " Leonardo Bras
  2021-03-12  7:29 ` [PATCH 3/3] powerpc/mm/hash: Avoid multiple HPT resize-downs on memory hotunplug Leonardo Bras
  2 siblings, 1 reply; 12+ messages in thread
From: Leonardo Bras @ 2021-03-12  7:29 UTC (permalink / raw)
  To: Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
	Andrew Morton, Leonardo Bras, Sandipan Das, Aneesh Kumar K.V,
	Logan Gunthorpe, Mike Rapoport, Bharata B Rao, Dan Williams,
	Nicholas Piggin, Nathan Lynch, David Hildenbrand, Laurent Dufour,
	Scott Cheloha, David Gibson
  Cc: linuxppc-dev, linux-kernel

Because hypervisors may need to create HPTs without knowing the guest
page size, the smallest supported page size (4k) may be chosen,
resulting in an HPT that is possibly bigger than needed.

On a guest with bigger page sizes, the number of HPT entries may then
be too high, causing the guest to ask for an HPT resize-down on the
first hotplug.

This becomes a problem when the HPT resize-down fails, since the
failing resize is then retried on every LMB added, until the HPT size
is compatible with the guest memory size, causing a major slowdown.

Avoiding HPT resize-downs on hot-add therefore significantly improves
memory hotplug times.

As an example, hotplugging 256GB on a 129GB guest took 710s without
this patch, and 21s with it applied.

Signed-off-by: Leonardo Bras <leobras.c@gmail.com>
---
 arch/powerpc/mm/book3s64/hash_utils.c | 36 ++++++++++++++++-----------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 73b06adb6eeb..cfb3ec164f56 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -794,7 +794,7 @@ static unsigned long __init htab_get_table_size(void)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-static int resize_hpt_for_hotplug(unsigned long new_mem_size)
+static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
 {
 	unsigned target_hpt_shift;
 
@@ -803,19 +803,25 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size)
 
 	target_hpt_shift = htab_shift_for_mem_size(new_mem_size);
 
-	/*
-	 * To avoid lots of HPT resizes if memory size is fluctuating
-	 * across a boundary, we deliberately have some hysterisis
-	 * here: we immediately increase the HPT size if the target
-	 * shift exceeds the current shift, but we won't attempt to
-	 * reduce unless the target shift is at least 2 below the
-	 * current shift
-	 */
-	if (target_hpt_shift > ppc64_pft_size ||
-	    target_hpt_shift < ppc64_pft_size - 1)
-		return mmu_hash_ops.resize_hpt(target_hpt_shift);
+	if (shrinking) {
 
-	return 0;
+		/*
+		 * To avoid lots of HPT resizes if memory size is fluctuating
+		 * across a boundary, we deliberately have some hysteresis
+		 * here: we immediately increase the HPT size if the target
+		 * shift exceeds the current shift, but we won't attempt to
+		 * reduce unless the target shift is at least 2 below the
+		 * current shift
+		 */
+
+		if (target_hpt_shift >= ppc64_pft_size - 1)
+			return 0;
+
+	} else if (target_hpt_shift <= ppc64_pft_size) {
+		return 0;
+	}
+
+	return mmu_hash_ops.resize_hpt(target_hpt_shift);
 }
 
 int hash__create_section_mapping(unsigned long start, unsigned long end,
@@ -828,7 +834,7 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
 		return -1;
 	}
 
-	resize_hpt_for_hotplug(memblock_phys_mem_size());
+	resize_hpt_for_hotplug(memblock_phys_mem_size(), false);
 
 	rc = htab_bolt_mapping(start, end, __pa(start),
 			       pgprot_val(prot), mmu_linear_psize,
@@ -847,7 +853,7 @@ int hash__remove_section_mapping(unsigned long start, unsigned long end)
 	int rc = htab_remove_mapping(start, end, mmu_linear_psize,
 				     mmu_kernel_ssize);
 
-	if (resize_hpt_for_hotplug(memblock_phys_mem_size()) == -ENOSPC)
+	if (resize_hpt_for_hotplug(memblock_phys_mem_size(), true) == -ENOSPC)
 		pr_warn("Hash collision while resizing HPT\n");
 
 	return rc;
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 2/3] powerpc/mm/hash: Avoid multiple HPT resize-ups on memory hotplug
  2021-03-12  7:29 [PATCH 0/3] powerpc/mm/hash: Time improvements for memory hot(un)plug Leonardo Bras
  2021-03-12  7:29 ` [PATCH 1/3] powerpc/mm/hash: Avoid resizing-down HPT on first memory hotplug Leonardo Bras
@ 2021-03-12  7:29 ` Leonardo Bras
  2021-03-22  7:55   ` David Gibson
  2021-03-12  7:29 ` [PATCH 3/3] powerpc/mm/hash: Avoid multiple HPT resize-downs on memory hotunplug Leonardo Bras
  2 siblings, 1 reply; 12+ messages in thread
From: Leonardo Bras @ 2021-03-12  7:29 UTC (permalink / raw)
  To: Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
	Andrew Morton, Leonardo Bras, Sandipan Das, Aneesh Kumar K.V,
	Logan Gunthorpe, Mike Rapoport, Bharata B Rao, Dan Williams,
	Nicholas Piggin, Nathan Lynch, David Hildenbrand, Laurent Dufour,
	Scott Cheloha, David Gibson
  Cc: linuxppc-dev, linux-kernel

Every time a memory hotplug happens and the memory limit crosses a 2^n
value, it may be necessary to perform an HPT resize-up, which can take
some time (over 100ms in my tests).

This usually is not an issue, but it can add up if a lot of memory is
added to a guest with little starting memory:
adding 256GB to a 2GB guest, for example, will require 8 HPT resizes.
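
(Rough arithmetic: 2GB corresponds to a 2^31 target, 2GB + 256GB =
258GB rounds up to 512GB = 2^39, and the HPT target shift grows by one
step per doubling of memory, so the limit crosses 39 - 31 = 8
boundaries, each one triggering a resize.)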

Perform an HPT resize before memory hotplug, updating the HPT to its
final size (considering a successful hotplug), reducing the number of
HPT resizes to at most one per memory hotplug action.

Signed-off-by: Leonardo Bras <leobras.c@gmail.com>
---
 arch/powerpc/include/asm/book3s/64/hash.h       |  2 ++
 arch/powerpc/include/asm/sparsemem.h            |  2 ++
 arch/powerpc/mm/book3s64/hash_utils.c           | 14 ++++++++++++++
 arch/powerpc/mm/book3s64/pgtable.c              |  6 ++++++
 arch/powerpc/platforms/pseries/hotplug-memory.c |  6 ++++++
 5 files changed, 30 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index d959b0195ad9..843b0a178590 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -255,6 +255,8 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
 				 int nid, pgprot_t prot);
 int hash__remove_section_mapping(unsigned long start, unsigned long end);
 
+void hash_memory_batch_expand_prepare(unsigned long newsize);
+
 #endif /* !__ASSEMBLY__ */
 #endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_BOOK3S_64_HASH_H */
diff --git a/arch/powerpc/include/asm/sparsemem.h b/arch/powerpc/include/asm/sparsemem.h
index d072866842e4..16b5f5300c84 100644
--- a/arch/powerpc/include/asm/sparsemem.h
+++ b/arch/powerpc/include/asm/sparsemem.h
@@ -17,6 +17,8 @@ extern int remove_section_mapping(unsigned long start, unsigned long end);
 extern int memory_add_physaddr_to_nid(u64 start);
 #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
 
+void memory_batch_expand_prepare(unsigned long newsize);
+
 #ifdef CONFIG_NUMA
 extern int hot_add_scn_to_nid(unsigned long scn_addr);
 #else
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index cfb3ec164f56..1f6aa0bf27e7 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -858,6 +858,20 @@ int hash__remove_section_mapping(unsigned long start, unsigned long end)
 
 	return rc;
 }
+
+void hash_memory_batch_expand_prepare(unsigned long newsize)
+{
+	/*
+	 * Resizing-up the HPT should never fail, but in some cases the system starts with a higher
+	 * SHIFT than required, and we go through the funny case of resizing the HPT down while
+	 * adding memory.
+	 */
+
+	while (resize_hpt_for_hotplug(newsize, false) == -ENOSPC) {
+		newsize *= 2;
+		pr_warn("Hash collision while resizing HPT\n");
+	}
+}
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
 static void __init hash_init_partition_table(phys_addr_t hash_table,
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 5b3a3bae21aa..f1cd8af0f67f 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -193,6 +193,12 @@ int __meminit remove_section_mapping(unsigned long start, unsigned long end)
 
 	return hash__remove_section_mapping(start, end);
 }
+
+void memory_batch_expand_prepare(unsigned long newsize)
+{
+	if (!radix_enabled())
+		hash_memory_batch_expand_prepare(newsize);
+}
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
 void __init mmu_partition_table_init(void)
diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
index 8377f1f7c78e..353c71249214 100644
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
+++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -671,6 +671,8 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
 	if (lmbs_available < lmbs_to_add)
 		return -EINVAL;
 
+	memory_batch_expand_prepare(memblock_phys_mem_size() + lmbs_to_add * drmem_lmb_size());
+
 	for_each_drmem_lmb(lmb) {
 		if (lmb->flags & DRCONF_MEM_ASSIGNED)
 			continue;
@@ -734,6 +736,8 @@ static int dlpar_memory_add_by_index(u32 drc_index)
 
 	pr_info("Attempting to hot-add LMB, drc index %x\n", drc_index);
 
+	memory_batch_expand_prepare(memblock_phys_mem_size() +
+				     drmem_info->n_lmbs * drmem_lmb_size());
 	lmb_found = 0;
 	for_each_drmem_lmb(lmb) {
 		if (lmb->drc_index == drc_index) {
@@ -788,6 +792,8 @@ static int dlpar_memory_add_by_ic(u32 lmbs_to_add, u32 drc_index)
 	if (lmbs_available < lmbs_to_add)
 		return -EINVAL;
 
+	memory_batch_expand_prepare(memblock_phys_mem_size() + lmbs_to_add * drmem_lmb_size());
+
 	for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
 		if (lmb->flags & DRCONF_MEM_ASSIGNED)
 			continue;
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 3/3] powerpc/mm/hash: Avoid multiple HPT resize-downs on memory hotunplug
  2021-03-12  7:29 [PATCH 0/3] powerpc/mm/hash: Time improvements for memory hot(un)plug Leonardo Bras
  2021-03-12  7:29 ` [PATCH 1/3] powerpc/mm/hash: Avoid resizing-down HPT on first memory hotplug Leonardo Bras
  2021-03-12  7:29 ` [PATCH 2/3] powerpc/mm/hash: Avoid multiple HPT resize-ups on " Leonardo Bras
@ 2021-03-12  7:29 ` Leonardo Bras
  2021-03-22 23:45   ` David Gibson
  2 siblings, 1 reply; 12+ messages in thread
From: Leonardo Bras @ 2021-03-12  7:29 UTC (permalink / raw)
  To: Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
	Andrew Morton, Leonardo Bras, Sandipan Das, Aneesh Kumar K.V,
	Logan Gunthorpe, Mike Rapoport, Bharata B Rao, Dan Williams,
	Nicholas Piggin, Nathan Lynch, David Hildenbrand, Laurent Dufour,
	Scott Cheloha, David Gibson
  Cc: linuxppc-dev, linux-kernel

During memory hotunplug, after each LMB is removed, the HPT may be
resized down if it would map at most 4 times the current amount of
memory (2 shifts, due to the hysteresis).
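
As a worked example of that rule: an HPT sized for 512GB of memory is
only shrunk once the remaining memory maps to a target shift at least
2 below the current one, i.e. once memory drops to 128GB or less.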

This usually is not an issue, but it can take a lot of time if the HPT
resize-down fails. This happens because resize-down failures usually
repeat at each LMB removal, until no conflicting bolted entries remain,
which can take a while.

This can be solved by doing a single HPT resize at the end of memory
hotunplug, after all requested entries are removed.

To make this happen, it's necessary to temporarily disable all HPT
resize-downs before hotunplug, re-enable them after hotunplug ends,
and then resize the HPT down to the current memory size.

As an example, hotunplugging 256GB from a 385GB guest took 621s without
this patch, and 100s with it applied.

Signed-off-by: Leonardo Bras <leobras.c@gmail.com>
---
 arch/powerpc/include/asm/book3s/64/hash.h     |  2 ++
 arch/powerpc/include/asm/sparsemem.h          |  2 ++
 arch/powerpc/mm/book3s64/hash_utils.c         | 28 +++++++++++++++++++
 arch/powerpc/mm/book3s64/pgtable.c            | 12 ++++++++
 .../platforms/pseries/hotplug-memory.c        | 16 +++++++++++
 5 files changed, 60 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 843b0a178590..f92697c107f7 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -256,6 +256,8 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
 int hash__remove_section_mapping(unsigned long start, unsigned long end);
 
 void hash_memory_batch_expand_prepare(unsigned long newsize);
+void hash_memory_batch_shrink_begin(void);
+void hash_memory_batch_shrink_end(void);
 
 #endif /* !__ASSEMBLY__ */
 #endif /* __KERNEL__ */
diff --git a/arch/powerpc/include/asm/sparsemem.h b/arch/powerpc/include/asm/sparsemem.h
index 16b5f5300c84..a7a8a0d070fc 100644
--- a/arch/powerpc/include/asm/sparsemem.h
+++ b/arch/powerpc/include/asm/sparsemem.h
@@ -18,6 +18,8 @@ extern int memory_add_physaddr_to_nid(u64 start);
 #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
 
 void memory_batch_expand_prepare(unsigned long newsize);
+void memory_batch_shrink_begin(void);
+void memory_batch_shrink_end(void);
 
 #ifdef CONFIG_NUMA
 extern int hot_add_scn_to_nid(unsigned long scn_addr);
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 1f6aa0bf27e7..e16f207de8e4 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -794,6 +794,9 @@ static unsigned long __init htab_get_table_size(void)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG
+
+atomic_t hpt_resize_disable = ATOMIC_INIT(0);
+
 static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
 {
 	unsigned target_hpt_shift;
@@ -805,6 +808,10 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
 
 	if (shrinking) {
 
+		/* When batch-removing entries, only resize the HPT at the end. */
+		if (atomic_read_acquire(&hpt_resize_disable))
+			return 0;
+
 		/*
 		 * To avoid lots of HPT resizes if memory size is fluctuating
 		 * across a boundary, we deliberately have some hysteresis
@@ -872,6 +879,27 @@ void hash_memory_batch_expand_prepare(unsigned long newsize)
 		pr_warn("Hash collision while resizing HPT\n");
 	}
 }
+
+void hash_memory_batch_shrink_begin(void)
+{
+	/* Disable HPT resize-down during hot-unplug */
+	atomic_set_release(&hpt_resize_disable, 1);
+}
+
+void hash_memory_batch_shrink_end(void)
+{
+	unsigned long newsize;
+
+	/* Re-enables HPT resize-down after hot-unplug */
+	atomic_set_release(&hpt_resize_disable, 0);
+
+	newsize = memblock_phys_mem_size();
+	/* Resize to smallest SHIFT possible */
+	while (resize_hpt_for_hotplug(newsize, true) == -ENOSPC) {
+		newsize *= 2;
+		pr_warn("Hash collision while resizing HPT\n");
+	}
+}
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
 static void __init hash_init_partition_table(phys_addr_t hash_table,
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index f1cd8af0f67f..e01681e22e00 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -199,6 +199,18 @@ void memory_batch_expand_prepare(unsigned long newsize)
 	if (!radix_enabled())
 		hash_memory_batch_expand_prepare(newsize);
 }
+
+void memory_batch_shrink_begin(void)
+{
+	if (!radix_enabled())
+		hash_memory_batch_shrink_begin();
+}
+
+void memory_batch_shrink_end(void)
+{
+	if (!radix_enabled())
+		hash_memory_batch_shrink_end();
+}
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
 void __init mmu_partition_table_init(void)
diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
index 353c71249214..9182fb5b5c01 100644
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
+++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -425,6 +425,8 @@ static int dlpar_memory_remove_by_count(u32 lmbs_to_remove)
 		return -EINVAL;
 	}
 
+	memory_batch_shrink_begin();
+
 	for_each_drmem_lmb(lmb) {
 		rc = dlpar_remove_lmb(lmb);
 		if (rc)
@@ -470,6 +472,8 @@ static int dlpar_memory_remove_by_count(u32 lmbs_to_remove)
 		rc = 0;
 	}
 
+	memory_batch_shrink_end();
+
 	return rc;
 }
 
@@ -481,6 +485,8 @@ static int dlpar_memory_remove_by_index(u32 drc_index)
 
 	pr_debug("Attempting to hot-remove LMB, drc index %x\n", drc_index);
 
+	memory_batch_shrink_begin();
+
 	lmb_found = 0;
 	for_each_drmem_lmb(lmb) {
 		if (lmb->drc_index == drc_index) {
@@ -502,6 +508,8 @@ static int dlpar_memory_remove_by_index(u32 drc_index)
 	else
 		pr_debug("Memory at %llx was hot-removed\n", lmb->base_addr);
 
+	memory_batch_shrink_end();
+
 	return rc;
 }
 
@@ -532,6 +540,8 @@ static int dlpar_memory_remove_by_ic(u32 lmbs_to_remove, u32 drc_index)
 	if (lmbs_available < lmbs_to_remove)
 		return -EINVAL;
 
+	memory_batch_shrink_begin();
+
 	for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
 		if (!(lmb->flags & DRCONF_MEM_ASSIGNED))
 			continue;
@@ -572,6 +582,8 @@ static int dlpar_memory_remove_by_ic(u32 lmbs_to_remove, u32 drc_index)
 		}
 	}
 
+	memory_batch_shrink_end();
+
 	return rc;
 }
 
@@ -700,6 +712,7 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
 	if (lmbs_added != lmbs_to_add) {
 		pr_err("Memory hot-add failed, removing any added LMBs\n");
 
+		memory_batch_shrink_begin();
 		for_each_drmem_lmb(lmb) {
 			if (!drmem_lmb_reserved(lmb))
 				continue;
@@ -713,6 +726,7 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
 
 			drmem_remove_lmb_reservation(lmb);
 		}
+		memory_batch_shrink_end();
 		rc = -EINVAL;
 	} else {
 		for_each_drmem_lmb(lmb) {
@@ -814,6 +828,7 @@ static int dlpar_memory_add_by_ic(u32 lmbs_to_add, u32 drc_index)
 	if (rc) {
 		pr_err("Memory indexed-count-add failed, removing any added LMBs\n");
 
+		memory_batch_shrink_begin();
 		for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
 			if (!drmem_lmb_reserved(lmb))
 				continue;
@@ -827,6 +842,7 @@ static int dlpar_memory_add_by_ic(u32 lmbs_to_add, u32 drc_index)
 
 			drmem_remove_lmb_reservation(lmb);
 		}
+		memory_batch_shrink_end();
 		rc = -EINVAL;
 	} else {
 		for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
-- 
2.29.2


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] powerpc/mm/hash: Avoid resizing-down HPT on first memory hotplug
  2021-03-12  7:29 ` [PATCH 1/3] powerpc/mm/hash: Avoid resizing-down HPT on first memory hotplug Leonardo Bras
@ 2021-03-22  6:49   ` David Gibson
  2021-04-09  2:16     ` Leonardo Bras
  0 siblings, 1 reply; 12+ messages in thread
From: David Gibson @ 2021-03-22  6:49 UTC (permalink / raw)
  To: Leonardo Bras
  Cc: Nathan Lynch, David Hildenbrand, Scott Cheloha, linux-kernel,
	linuxppc-dev, Nicholas Piggin, Bharata B Rao, Paul Mackerras,
	Sandipan Das, Aneesh Kumar K.V, Andrew Morton, Laurent Dufour,
	Logan Gunthorpe, Dan Williams, Mike Rapoport


On Fri, Mar 12, 2021 at 04:29:39AM -0300, Leonardo Bras wrote:
> Because hypervisors may need to create HPTs without knowing the guest
> page size, the smallest supported page size (4k) may be chosen,
> resulting in an HPT that is possibly bigger than needed.
> 
> On a guest with bigger page sizes, the number of HPT entries may then
> be too high, causing the guest to ask for an HPT resize-down on the
> first hotplug.
> 
> This becomes a problem when the HPT resize-down fails, since the
> failing resize is then retried on every LMB added, until the HPT size
> is compatible with the guest memory size, causing a major slowdown.
> 
> Avoiding HPT resize-downs on hot-add therefore significantly improves
> memory hotplug times.
> 
> As an example, hotplugging 256GB on a 129GB guest took 710s without
> this patch, and 21s with it applied.
> 
> Signed-off-by: Leonardo Bras <leobras.c@gmail.com>

I don't love this approach.  Adding the extra flag at this level seems
a bit inelegant, and it means we're passing up an easy opportunity to
reduce our resource footprint on the host.

But... maybe we'll have to do it.  I'd like to see if we can get
things to work well enough with just the "batching" to avoid multiple
resize attempts first.

> ---
>  arch/powerpc/mm/book3s64/hash_utils.c | 36 ++++++++++++++++-----------
>  1 file changed, 21 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
> index 73b06adb6eeb..cfb3ec164f56 100644
> --- a/arch/powerpc/mm/book3s64/hash_utils.c
> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
> @@ -794,7 +794,7 @@ static unsigned long __init htab_get_table_size(void)
>  }
>  
>  #ifdef CONFIG_MEMORY_HOTPLUG
> -static int resize_hpt_for_hotplug(unsigned long new_mem_size)
> +static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
>  {
>  	unsigned target_hpt_shift;
>  
> @@ -803,19 +803,25 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size)
>  
>  	target_hpt_shift = htab_shift_for_mem_size(new_mem_size);
>  
> -	/*
> -	 * To avoid lots of HPT resizes if memory size is fluctuating
> -	 * across a boundary, we deliberately have some hysterisis
> -	 * here: we immediately increase the HPT size if the target
> -	 * shift exceeds the current shift, but we won't attempt to
> -	 * reduce unless the target shift is at least 2 below the
> -	 * current shift
> -	 */
> -	if (target_hpt_shift > ppc64_pft_size ||
> -	    target_hpt_shift < ppc64_pft_size - 1)
> -		return mmu_hash_ops.resize_hpt(target_hpt_shift);
> +	if (shrinking) {
>  
> -	return 0;
> +		/*
> +		 * To avoid lots of HPT resizes if memory size is fluctuating
> +		 * across a boundary, we deliberately have some hysteresis
> +		 * here: we immediately increase the HPT size if the target
> +		 * shift exceeds the current shift, but we won't attempt to
> +		 * reduce unless the target shift is at least 2 below the
> +		 * current shift
> +		 */
> +
> +		if (target_hpt_shift >= ppc64_pft_size - 1)
> +			return 0;
> +
> +	} else if (target_hpt_shift <= ppc64_pft_size) {
> +		return 0;
> +	}
> +
> +	return mmu_hash_ops.resize_hpt(target_hpt_shift);
>  }
>  
>  int hash__create_section_mapping(unsigned long start, unsigned long end,
> @@ -828,7 +834,7 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
>  		return -1;
>  	}
>  
> -	resize_hpt_for_hotplug(memblock_phys_mem_size());
> +	resize_hpt_for_hotplug(memblock_phys_mem_size(), false);
>  
>  	rc = htab_bolt_mapping(start, end, __pa(start),
>  			       pgprot_val(prot), mmu_linear_psize,
> @@ -847,7 +853,7 @@ int hash__remove_section_mapping(unsigned long start, unsigned long end)
>  	int rc = htab_remove_mapping(start, end, mmu_linear_psize,
>  				     mmu_kernel_ssize);
>  
> -	if (resize_hpt_for_hotplug(memblock_phys_mem_size()) == -ENOSPC)
> +	if (resize_hpt_for_hotplug(memblock_phys_mem_size(), true) == -ENOSPC)
>  		pr_warn("Hash collision while resizing HPT\n");
>  
>  	return rc;

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/3] powerpc/mm/hash: Avoid multiple HPT resize-ups on memory hotplug
  2021-03-12  7:29 ` [PATCH 2/3] powerpc/mm/hash: Avoid multiple HPT resize-ups on " Leonardo Bras
@ 2021-03-22  7:55   ` David Gibson
  2021-04-09  2:51     ` Leonardo Bras
  0 siblings, 1 reply; 12+ messages in thread
From: David Gibson @ 2021-03-22  7:55 UTC (permalink / raw)
  To: Leonardo Bras
  Cc: Nathan Lynch, David Hildenbrand, Scott Cheloha, linux-kernel,
	linuxppc-dev, Nicholas Piggin, Bharata B Rao, Paul Mackerras,
	Sandipan Das, Aneesh Kumar K.V, Andrew Morton, Laurent Dufour,
	Logan Gunthorpe, Dan Williams, Mike Rapoport


On Fri, Mar 12, 2021 at 04:29:40AM -0300, Leonardo Bras wrote:
> Every time a memory hotplug happens and the memory limit crosses a 2^n
> value, it may be necessary to perform an HPT resize-up, which can take
> some time (over 100ms in my tests).
> 
> This usually is not an issue, but it can add up if a lot of memory is
> added to a guest with little starting memory:
> adding 256GB to a 2GB guest, for example, will require 8 HPT resizes.
> 
> Perform an HPT resize before memory hotplug, updating the HPT to its
> final size (considering a successful hotplug), reducing the number of
> HPT resizes to at most one per memory hotplug action.
> 
> Signed-off-by: Leonardo Bras <leobras.c@gmail.com>
> ---
>  arch/powerpc/include/asm/book3s/64/hash.h       |  2 ++
>  arch/powerpc/include/asm/sparsemem.h            |  2 ++
>  arch/powerpc/mm/book3s64/hash_utils.c           | 14 ++++++++++++++
>  arch/powerpc/mm/book3s64/pgtable.c              |  6 ++++++
>  arch/powerpc/platforms/pseries/hotplug-memory.c |  6 ++++++
>  5 files changed, 30 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
> index d959b0195ad9..843b0a178590 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash.h
> @@ -255,6 +255,8 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
>  				 int nid, pgprot_t prot);
>  int hash__remove_section_mapping(unsigned long start, unsigned long end);
>  
> +void hash_memory_batch_expand_prepare(unsigned long newsize);
> +
>  #endif /* !__ASSEMBLY__ */
>  #endif /* __KERNEL__ */
>  #endif /* _ASM_POWERPC_BOOK3S_64_HASH_H */
> diff --git a/arch/powerpc/include/asm/sparsemem.h b/arch/powerpc/include/asm/sparsemem.h
> index d072866842e4..16b5f5300c84 100644
> --- a/arch/powerpc/include/asm/sparsemem.h
> +++ b/arch/powerpc/include/asm/sparsemem.h
> @@ -17,6 +17,8 @@ extern int remove_section_mapping(unsigned long start, unsigned long end);
>  extern int memory_add_physaddr_to_nid(u64 start);
>  #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
>  
> +void memory_batch_expand_prepare(unsigned long newsize);
> +
>  #ifdef CONFIG_NUMA
>  extern int hot_add_scn_to_nid(unsigned long scn_addr);
>  #else
> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
> index cfb3ec164f56..1f6aa0bf27e7 100644
> --- a/arch/powerpc/mm/book3s64/hash_utils.c
> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
> @@ -858,6 +858,20 @@ int hash__remove_section_mapping(unsigned long start, unsigned long end)
>  
>  	return rc;
>  }
> +
> +void hash_memory_batch_expand_prepare(unsigned long newsize)
> +{
> +	/*
> > +	 * Resizing-up the HPT should never fail, but in some cases the system starts with a higher
> > +	 * SHIFT than required, and we go through the funny case of resizing the HPT down while
> > +	 * adding memory.
> +	 */
> +
> +	while (resize_hpt_for_hotplug(newsize, false) == -ENOSPC) {
> +		newsize *= 2;
> +		pr_warn("Hash collision while resizing HPT\n");

This unbounded increase in newsize makes me nervous - we should be
bounded by the current size of the HPT at least.  In practice we
should be fine, since the resize should always succeed by the time we
reach our current HPT size, but that's far from obvious from this
point in the code.

And... you're doubling newsize which is a value which might not be a
power of 2.  I'm wondering if there's an edge case where this could
actually cause us to skip the current size and erroneously resize to
one bigger than we have currently.

> +	}
> +}
>  #endif /* CONFIG_MEMORY_HOTPLUG */
>  
>  static void __init hash_init_partition_table(phys_addr_t hash_table,
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 5b3a3bae21aa..f1cd8af0f67f 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -193,6 +193,12 @@ int __meminit remove_section_mapping(unsigned long start, unsigned long end)
>  
>  	return hash__remove_section_mapping(start, end);
>  }
> +
> +void memory_batch_expand_prepare(unsigned long newsize)

This wrapper doesn't seem useful.

> +{
> +	if (!radix_enabled())
> +		hash_memory_batch_expand_prepare(newsize);
> +}
>  #endif /* CONFIG_MEMORY_HOTPLUG */
>  
>  void __init mmu_partition_table_init(void)
> diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
> index 8377f1f7c78e..353c71249214 100644
> --- a/arch/powerpc/platforms/pseries/hotplug-memory.c
> +++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
> @@ -671,6 +671,8 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
>  	if (lmbs_available < lmbs_to_add)
>  		return -EINVAL;
>  
> +	memory_batch_expand_prepare(memblock_phys_mem_size() + lmbs_to_add * drmem_lmb_size());
> +
>  	for_each_drmem_lmb(lmb) {
>  		if (lmb->flags & DRCONF_MEM_ASSIGNED)
>  			continue;
> @@ -734,6 +736,8 @@ static int dlpar_memory_add_by_index(u32 drc_index)
>  
>  	pr_info("Attempting to hot-add LMB, drc index %x\n", drc_index);
>  
> +	memory_batch_expand_prepare(memblock_phys_mem_size() +
> +				     drmem_info->n_lmbs * drmem_lmb_size());

This doesn't look right.  memory_add_by_index() is adding a *single*
LMB, I think using drmem_info->n_lmbs here means you're counting this
as adding again as much memory as you already have hotplugged.

>  	lmb_found = 0;
>  	for_each_drmem_lmb(lmb) {
>  		if (lmb->drc_index == drc_index) {
> @@ -788,6 +792,8 @@ static int dlpar_memory_add_by_ic(u32 lmbs_to_add, u32 drc_index)
>  	if (lmbs_available < lmbs_to_add)
>  		return -EINVAL;
>  
> +	memory_batch_expand_prepare(memblock_phys_mem_size() + lmbs_to_add * drmem_lmb_size());
> +
>  	for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
>  		if (lmb->flags & DRCONF_MEM_ASSIGNED)
>  			continue;

I don't see memory_batch_expand_prepare() suppressing any existing HPT
resizes.  Won't this just resize to the right size for the full add,
then resize several times again as we perform the add?  Or.. I guess
that will be suppressed by patch 1/3.  That's seems kinda fragile,
though.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 3/3] powerpc/mm/hash: Avoid multiple HPT resize-downs on memory hotunplug
  2021-03-12  7:29 ` [PATCH 3/3] powerpc/mm/hash: Avoid multiple HPT resize-downs on memory hotunplug Leonardo Bras
@ 2021-03-22 23:45   ` David Gibson
  2021-04-09  3:31     ` Leonardo Bras
  0 siblings, 1 reply; 12+ messages in thread
From: David Gibson @ 2021-03-22 23:45 UTC (permalink / raw)
  To: Leonardo Bras
  Cc: Nathan Lynch, David Hildenbrand, Scott Cheloha, linux-kernel,
	linuxppc-dev, Nicholas Piggin, Bharata B Rao, Paul Mackerras,
	Sandipan Das, Aneesh Kumar K.V, Andrew Morton, Laurent Dufour,
	Logan Gunthorpe, Dan Williams, Mike Rapoport


On Fri, Mar 12, 2021 at 04:29:41AM -0300, Leonardo Bras wrote:
> During memory hotunplug, after each LMB is removed, the HPT may be
> resized down if it would map at most 4 times the current amount of
> memory (2 shifts, due to the hysteresis).
> 
> This usually is not an issue, but it can take a lot of time if the HPT
> resize-down fails. This happens because resize-down failures usually
> repeat at each LMB removal, until no conflicting bolted entries remain,
> which can take a while.
> 
> This can be solved by doing a single HPT resize at the end of memory
> hotunplug, after all requested entries are removed.
> 
> To make this happen, it's necessary to temporarily disable all HPT
> resize-downs before hotunplug, re-enable them after hotunplug ends,
> and then resize the HPT down to the current memory size.
> 
> As an example, hotunplugging 256GB from a 385GB guest took 621s without
> this patch, and 100s with it applied.
> 
> Signed-off-by: Leonardo Bras <leobras.c@gmail.com>
> ---
>  arch/powerpc/include/asm/book3s/64/hash.h     |  2 ++
>  arch/powerpc/include/asm/sparsemem.h          |  2 ++
>  arch/powerpc/mm/book3s64/hash_utils.c         | 28 +++++++++++++++++++
>  arch/powerpc/mm/book3s64/pgtable.c            | 12 ++++++++
>  .../platforms/pseries/hotplug-memory.c        | 16 +++++++++++
>  5 files changed, 60 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
> index 843b0a178590..f92697c107f7 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash.h
> @@ -256,6 +256,8 @@ int hash__create_section_mapping(unsigned long start, unsigned long end,
>  int hash__remove_section_mapping(unsigned long start, unsigned long end);
>  
>  void hash_memory_batch_expand_prepare(unsigned long newsize);
> +void hash_memory_batch_shrink_begin(void);
> +void hash_memory_batch_shrink_end(void);
>  
>  #endif /* !__ASSEMBLY__ */
>  #endif /* __KERNEL__ */
> diff --git a/arch/powerpc/include/asm/sparsemem.h b/arch/powerpc/include/asm/sparsemem.h
> index 16b5f5300c84..a7a8a0d070fc 100644
> --- a/arch/powerpc/include/asm/sparsemem.h
> +++ b/arch/powerpc/include/asm/sparsemem.h
> @@ -18,6 +18,8 @@ extern int memory_add_physaddr_to_nid(u64 start);
>  #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
>  
>  void memory_batch_expand_prepare(unsigned long newsize);
> +void memory_batch_shrink_begin(void);
> +void memory_batch_shrink_end(void);
>  
>  #ifdef CONFIG_NUMA
>  extern int hot_add_scn_to_nid(unsigned long scn_addr);
> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
> index 1f6aa0bf27e7..e16f207de8e4 100644
> --- a/arch/powerpc/mm/book3s64/hash_utils.c
> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
> @@ -794,6 +794,9 @@ static unsigned long __init htab_get_table_size(void)
>  }
>  
>  #ifdef CONFIG_MEMORY_HOTPLUG
> +
> +atomic_t hpt_resize_disable = ATOMIC_INIT(0);
> +
>  static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
>  {
>  	unsigned target_hpt_shift;
> @@ -805,6 +808,10 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
>  
>  	if (shrinking) {
>  
> > +		/* When batch-removing entries, only resize the HPT at the end. */
> +		if (atomic_read_acquire(&hpt_resize_disable))
> +			return 0;
> +

I'm not quite convinced by this locking.  Couldn't hpt_resize_disable
be set after this point, but while you're still inside
resize_hpt_for_hotplug()?  Probably better to use an explicit mutex
(and mutex_trylock()) to make the critical sections clearer.
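
Something like this is what I have in mind (rough sketch only, with
made-up placement):

	static DEFINE_MUTEX(hpt_resize_mutex);

	void hash_memory_batch_shrink_begin(void)
	{
		/* hold off resize-downs for the whole batch */
		mutex_lock(&hpt_resize_mutex);
	}

	void hash_memory_batch_shrink_end(void)
	{
		mutex_unlock(&hpt_resize_mutex);
		/* ... then do the single final resize-down ... */
	}

	/* in resize_hpt_for_hotplug(), shrinking case: */
	if (!mutex_trylock(&hpt_resize_mutex))
		return 0;	/* batch in progress, skip */
	/* ... perform the resize ... */
	mutex_unlock(&hpt_resize_mutex);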

Except... do we even need the fancy mechanics to suppress the resizes
in one place to do them elswhere.  Couldn't we just replace the
existing resize calls with the batched ones?

>  		/*
>  		 * To avoid lots of HPT resizes if memory size is fluctuating
>  		 * across a boundary, we deliberately have some hysterisis
> @@ -872,6 +879,27 @@ void hash_memory_batch_expand_prepare(unsigned long newsize)
>  		pr_warn("Hash collision while resizing HPT\n");
>  	}
>  }
> +
> +void hash_memory_batch_shrink_begin(void)
> +{
> +	/* Disable HPT resize-down during hot-unplug */
> +	atomic_set_release(&hpt_resize_disable, 1);
> +}
> +
> +void hash_memory_batch_shrink_end(void)
> +{
> +	unsigned long newsize;
> +
> +	/* Re-enables HPT resize-down after hot-unplug */
> +	atomic_set_release(&hpt_resize_disable, 0);
> +
> +	newsize = memblock_phys_mem_size();
> +	/* Resize to smallest SHIFT possible */
> +	while (resize_hpt_for_hotplug(newsize, true) == -ENOSPC) {
> +		newsize *= 2;

As noted earlier, doing this without an explicit cap on the new hpt
size (of the existing size) this makes me nervous.  Less so, but doing
the calculations on memory size, rather than explictly on HPT size /
HPT order also seems kinda clunky.

> +		pr_warn("Hash collision while resizing HPT\n");
> +	}
> +}
>  #endif /* CONFIG_MEMORY_HOTPLUG */
>  
>  static void __init hash_init_partition_table(phys_addr_t hash_table,
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index f1cd8af0f67f..e01681e22e00 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -199,6 +199,18 @@ void memory_batch_expand_prepare(unsigned long newsize)
>  	if (!radix_enabled())
>  		hash_memory_batch_expand_prepare(newsize);
>  }
> +
> +void memory_batch_shrink_begin(void)
> +{
> +	if (!radix_enabled())
> +		hash_memory_batch_shrink_begin();
> +}
> +
> +void memory_batch_shrink_end(void)
> +{
> +	if (!radix_enabled())
> +		hash_memory_batch_shrink_end();
> +}

Again, these wrappers don't seem particularly useful to me.

>  #endif /* CONFIG_MEMORY_HOTPLUG */
>  
>  void __init mmu_partition_table_init(void)
> diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
> index 353c71249214..9182fb5b5c01 100644
> --- a/arch/powerpc/platforms/pseries/hotplug-memory.c
> +++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
> @@ -425,6 +425,8 @@ static int dlpar_memory_remove_by_count(u32 lmbs_to_remove)
>  		return -EINVAL;
>  	}
>  
> +	memory_batch_shrink_begin();
> +
>  	for_each_drmem_lmb(lmb) {
>  		rc = dlpar_remove_lmb(lmb);
>  		if (rc)
> @@ -470,6 +472,8 @@ static int dlpar_memory_remove_by_count(u32 lmbs_to_remove)
>  		rc = 0;
>  	}
>  
> +	memory_batch_shrink_end();
> +
>  	return rc;
>  }
>  
> @@ -481,6 +485,8 @@ static int dlpar_memory_remove_by_index(u32 drc_index)
>  
>  	pr_debug("Attempting to hot-remove LMB, drc index %x\n", drc_index);
>  
> +	memory_batch_shrink_begin();
> +
>  	lmb_found = 0;
>  	for_each_drmem_lmb(lmb) {
>  		if (lmb->drc_index == drc_index) {
> @@ -502,6 +508,8 @@ static int dlpar_memory_remove_by_index(u32 drc_index)
>  	else
>  		pr_debug("Memory at %llx was hot-removed\n", lmb->base_addr);
>  
> +	memory_batch_shrink_end();

remove_by_index only removes a single LMB, so there's no real point to
batching here.

>  	return rc;
>  }
>  
> @@ -532,6 +540,8 @@ static int dlpar_memory_remove_by_ic(u32 lmbs_to_remove, u32 drc_index)
>  	if (lmbs_available < lmbs_to_remove)
>  		return -EINVAL;
>  
> +	memory_batch_shrink_begin();
> +
>  	for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
>  		if (!(lmb->flags & DRCONF_MEM_ASSIGNED))
>  			continue;
> @@ -572,6 +582,8 @@ static int dlpar_memory_remove_by_ic(u32 lmbs_to_remove, u32 drc_index)
>  		}
>  	}
>  
> +	memory_batch_shrink_end();
> +
>  	return rc;
>  }
>  
> @@ -700,6 +712,7 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
>  	if (lmbs_added != lmbs_to_add) {
>  		pr_err("Memory hot-add failed, removing any added LMBs\n");
>  
> +		memory_batch_shrink_begin();


The effect of these on the memory grow path is far from clear.

>  		for_each_drmem_lmb(lmb) {
>  			if (!drmem_lmb_reserved(lmb))
>  				continue;
> @@ -713,6 +726,7 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
>  
>  			drmem_remove_lmb_reservation(lmb);
>  		}
> +		memory_batch_shrink_end();
>  		rc = -EINVAL;
>  	} else {
>  		for_each_drmem_lmb(lmb) {
> @@ -814,6 +828,7 @@ static int dlpar_memory_add_by_ic(u32 lmbs_to_add, u32 drc_index)
>  	if (rc) {
>  		pr_err("Memory indexed-count-add failed, removing any added LMBs\n");
>  
> +		memory_batch_shrink_begin();
>  		for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
>  			if (!drmem_lmb_reserved(lmb))
>  				continue;
> @@ -827,6 +842,7 @@ static int dlpar_memory_add_by_ic(u32 lmbs_to_add, u32 drc_index)
>  
>  			drmem_remove_lmb_reservation(lmb);
>  		}
> +		memory_batch_shrink_end();
>  		rc = -EINVAL;
>  	} else {
>  		for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] powerpc/mm/hash: Avoid resizing-down HPT on first memory hotplug
  2021-03-22  6:49   ` David Gibson
@ 2021-04-09  2:16     ` Leonardo Bras
  0 siblings, 0 replies; 12+ messages in thread
From: Leonardo Bras @ 2021-04-09  2:16 UTC (permalink / raw)
  To: David Gibson
  Cc: Nathan Lynch, David Hildenbrand, Scott Cheloha, linux-kernel,
	linuxppc-dev, Nicholas Piggin, Bharata B Rao, Paul Mackerras,
	Sandipan Das, Aneesh Kumar K.V, Andrew Morton, Laurent Dufour,
	Logan Gunthorpe, Dan Williams, Mike Rapoport

Hello David, thanks for your feedback.

On Mon, 2021-03-22 at 17:49 +1100, David Gibson wrote:
> I don't love this approach.  Adding the extra flag at this level seems
> a bit inelegant, and it means we're passing up an easy opportunity to
> reduce our resource footprint on the host.

I understand, but trying to reduce the resource footprint on the host,
and mostly failing, is what causes hot-add and hot-remove to take so
long.

> But... maybe we'll have to do it.  I'd like to see if we can get
> things to work well enough with just the "batching" to avoid multiple
> resize attempts first.

This batching is something I have thought a lot about.
The problem is that there are a lot of generic interfaces between memory
hotplug and actually resizing the HPT. I tried a simpler approach in
patches 2 & 3, so I wouldn't have to touch much there.

Best regards,
Leonardo Bras





^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/3] powerpc/mm/hash: Avoid multiple HPT resize-ups on memory hotplug
  2021-03-22  7:55   ` David Gibson
@ 2021-04-09  2:51     ` Leonardo Bras
  2021-04-19  5:34       ` David Gibson
  0 siblings, 1 reply; 12+ messages in thread
From: Leonardo Bras @ 2021-04-09  2:51 UTC (permalink / raw)
  To: David Gibson
  Cc: Nathan Lynch, David Hildenbrand, Scott Cheloha, linux-kernel,
	linuxppc-dev, Nicholas Piggin, Bharata B Rao, Paul Mackerras,
	Sandipan Das, Aneesh Kumar K.V, Andrew Morton, Laurent Dufour,
	Logan Gunthorpe, Dan Williams, Mike Rapoport

Hello David, thanks for the feedback!

On Mon, 2021-03-22 at 18:55 +1100, David Gibson wrote:
> > +void hash_memory_batch_expand_prepare(unsigned long newsize)
> > +{
> > +	/*
> > +	 * Resizing-up the HPT should never fail, but in some cases the system starts with a higher
> > +	 * SHIFT than required, and we go through the funny case of resizing the HPT down while
> > +	 * adding memory.
> > +	 */
> > +
> > +	while (resize_hpt_for_hotplug(newsize, false) == -ENOSPC) {
> > +		newsize *= 2;
> > +		pr_warn("Hash collision while resizing HPT\n");
> 
> This unbounded increase in newsize makes me nervous - we should be
> bounded by the current size of the HPT at least.  In practice we
> should be fine, since the resize should always succeed by the time we
> reach our current HPT size, but that's far from obvious from this
> point in the code.

Sure, I will add bounds in v2.

> 
> And... you're doubling newsize which is a value which might not be a
> power of 2.  I'm wondering if there's an edge case where this could
> actually cause us to skip the current size and erroneously resize to
> one bigger than we have currently.

I also thought that at first, but it seems quite reliable.
Before using this value, htab_shift_for_mem_size() will always round it
to the next power of 2.
Ex.
Any value between 0b0101 and 0b1000 will be rounded to 0b1000 for shift
calculation. If we multiply it by 2 (same as << 1), we have that
anything between 0b01010 and 0b10000 will be rounded to 0b10000. 

This works just fine as long as we are multiplying. 
Division may have the behavior you expect, as 0b0101 >> 1 would become
0b010 and skip a shift.
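
To double-check, a standalone illustration (hypothetical userspace
snippet, not part of the patch):

	#include <stdio.h>

	/* round size up to the next power-of-2 shift */
	static unsigned shift_for(unsigned long size)
	{
		unsigned s = 0;

		while ((1UL << s) < size)
			s++;
		return s;
	}

	int main(void)
	{
		unsigned long size;

		/* doubling always advances the rounded-up shift by
		 * exactly one, so no size is ever skipped */
		for (size = 1; size < (1UL << 20); size++)
			if (shift_for(size * 2) != shift_for(size) + 1)
				printf("skipped at %lu\n", size);
		return 0;	/* prints nothing: no skips */
	}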
	
> > +void memory_batch_expand_prepare(unsigned long newsize)
> 
> This wrapper doesn't seem useful.

Yeah, it does little, but I can't just jump into hash_* functions
directly from hotplug-memory.c without even knowing if we're using hash
pagetables. (In case the suggestion was to test for disable_radix
inside hash_memory_batch_*.)

> 
> > +{
> > +	if (!radix_enabled())
> > +		hash_memory_batch_expand_prepare(newsize);
> > +}
> >  #endif /* CONFIG_MEMORY_HOTPLUG */
> >  
> > 
> > +	memory_batch_expand_prepare(memblock_phys_mem_size() +
> > +				     drmem_info->n_lmbs * drmem_lmb_size());
> 
> This doesn't look right.  memory_add_by_index() is adding a *single*
> LMB, I think using drmem_info->n_lmbs here means you're counting this
> as adding again as much memory as you already have hotplugged.

Yeah, my mistake. This makes sense.
I will change it to something like 
memblock_phys_mem_size() + drmem_lmb_size()

> > 
> > +	memory_batch_expand_prepare(memblock_phys_mem_size() + lmbs_to_add * drmem_lmb_size());
> > +
> >  	for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
> >  		if (lmb->flags & DRCONF_MEM_ASSIGNED)
> >  			continue;
> 
> I don't see memory_batch_expand_prepare() suppressing any existing HPT
> resizes.  Won't this just resize to the right size for the full add,
> then resize several times again as we perform the add?  Or.. I guess
> that will be suppressed by patch 1/3. 

Correct.

>  That's seems kinda fragile, though.

What do you mean by fragile here?
What would you suggest doing differently?

Best regards,
Leonardo Bras


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 3/3] powerpc/mm/hash: Avoid multiple HPT resize-downs on memory hotunplug
  2021-03-22 23:45   ` David Gibson
@ 2021-04-09  3:31     ` Leonardo Bras
  2021-04-19  5:37       ` David Gibson
  0 siblings, 1 reply; 12+ messages in thread
From: Leonardo Bras @ 2021-04-09  3:31 UTC (permalink / raw)
  To: David Gibson
  Cc: Nathan Lynch, David Hildenbrand, Scott Cheloha, linux-kernel,
	linuxppc-dev, Nicholas Piggin, Bharata B Rao, Paul Mackerras,
	Sandipan Das, Aneesh Kumar K.V, Andrew Morton, Laurent Dufour,
	Logan Gunthorpe, Dan Williams, Mike Rapoport

Hello David, thanks for commenting.

On Tue, 2021-03-23 at 10:45 +1100, David Gibson wrote:
> > @@ -805,6 +808,10 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
> >  	if (shrinking) {
> > 
> > +		/* When batch-removing entries, only resize the HPT at the end. */
> > +		if (atomic_read_acquire(&hpt_resize_disable))
> > +			return 0;
> > +
> 
> I'm not quite convinced by this locking.  Couldn't hpt_resize_disable
> be set after this point, but while you're still inside
> resize_hpt_for_hotplug()?  Probably better to use an explicit mutex
> (and mutex_trylock()) to make the critical sections clearer.

Sure, I can do that for v2.

> Except... do we even need the fancy mechanics to suppress the resizes
> in one place to do them elswhere.  Couldn't we just replace the
> existing resize calls with the batched ones?

What do you think of having batched resize-downs of the HPT?
Other than the current approach, I could only think of a way that would
touch a lot of generic code, and/or duplicate some functions, as
dlpar_add_lmb() does a lot of other stuff.

> > +void hash_memory_batch_shrink_end(void)
> > +{
> > +	unsigned long newsize;
> > +
> > +	/* Re-enables HPT resize-down after hot-unplug */
> > +	atomic_set_release(&hpt_resize_disable, 0);
> > +
> > +	newsize = memblock_phys_mem_size();
> > +	/* Resize to smallest SHIFT possible */
> > +	while (resize_hpt_for_hotplug(newsize, true) == -ENOSPC) {
> > +		newsize *= 2;
> 
> As noted earlier, doing this without an explicit cap on the new hpt
> size (of the existing size) this makes me nervous. 
> 

I can add a stop in v2.

>  Less so, but doing
> the calculations on memory size, rather than explictly on HPT size /
> HPT order also seems kinda clunky.

Agree, but at this point, it would seem kind of a waste to find the
shift from newsize, then calculate (1 << shift) for each retry of
resize_hpt_for_hotplug(), only to make the point that we are retrying
the order value.
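
For comparison, a shift-based retry would look something like this
sketch (illustrative only, as it bypasses the hysteresis checks in
resize_hpt_for_hotplug()):

	unsigned shift = htab_shift_for_mem_size(newsize);

	while (mmu_hash_ops.resize_hpt(shift) == -ENOSPC)
		shift++;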

But sure, if you think it looks better, I can change that. 

> > +void memory_batch_shrink_begin(void)
> > +{
> > +	if (!radix_enabled())
> > +		hash_memory_batch_shrink_begin();
> > +}
> > +
> > +void memory_batch_shrink_end(void)
> > +{
> > +	if (!radix_enabled())
> > +		hash_memory_batch_shrink_end();
> > +}
> 
> Again, these wrappers don't seem particularly useful to me.

The options would be adding 'if (!radix_enabled())' to the
hotplug-memory.c functions or to the hash_* functions, both of which
look kind of wrong.

> > +	memory_batch_shrink_end();
> 
> remove_by_index only removes a single LMB, so there's no real point to
> batching here.

Sure, will be fixed for v2.

> > @@ -700,6 +712,7 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
> >  	if (lmbs_added != lmbs_to_add) {
> >  		pr_err("Memory hot-add failed, removing any added LMBs\n");
> > 
> > +		memory_batch_shrink_begin();
> 
> 
> The effect of these on the memory grow path is far from clear.
> 

On hotplug, the HPT is resized up before adding LMBs.
On hotunplug, the HPT is resized down after removing LMBs.
And each one has its own mechanism to batch HPT resizes...

I can't see exactly how using it on the hotplug failure path can be any
different from using it on hotunplug.

Can you please help me understand this?

Best regards,
Leonardo Bras


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/3] powerpc/mm/hash: Avoid multiple HPT resize-ups on memory hotplug
  2021-04-09  2:51     ` Leonardo Bras
@ 2021-04-19  5:34       ` David Gibson
  0 siblings, 0 replies; 12+ messages in thread
From: David Gibson @ 2021-04-19  5:34 UTC (permalink / raw)
  To: Leonardo Bras
  Cc: Nathan Lynch, David Hildenbrand, Scott Cheloha, linux-kernel,
	linuxppc-dev, Nicholas Piggin, Bharata B Rao, Paul Mackerras,
	Sandipan Das, Aneesh Kumar K.V, Andrew Morton, Laurent Dufour,
	Logan Gunthorpe, Dan Williams, Mike Rapoport


On Thu, Apr 08, 2021 at 11:51:36PM -0300, Leonardo Bras wrote:
> Hello David, thanks for the feedback!
> 
> On Mon, 2021-03-22 at 18:55 +1100, David Gibson wrote:
> > > +void hash_memory_batch_expand_prepare(unsigned long newsize)
> > > +{
> > > +	/*
> > > +	 * Resizing-up the HPT should never fail, but in some cases the system starts with a higher
> > > +	 * SHIFT than required, and we go through the funny case of resizing the HPT down while
> > > +	 * adding memory.
> > > +	 */
> > > +
> > > +	while (resize_hpt_for_hotplug(newsize, false) == -ENOSPC) {
> > > +		newsize *= 2;
> > > +		pr_warn("Hash collision while resizing HPT\n");
> > 
> > This unbounded increase in newsize makes me nervous - we should be
> > bounded by the current size of the HPT at least.  In practice we
> > should be fine, since the resize should always succeed by the time we
> > reach our current HPT size, but that's far from obvious from this
> > point in the code.
> 
> Sure, I will add bounds in v2.
> 
> > 
> > And... you're doubling newsize which is a value which might not be a
> > power of 2.  I'm wondering if there's an edge case where this could
> > actually cause us to skip the current size and erroneously resize to
> > one bigger than we have currently.
> 
> I also thought that at first, but it seems quite reliable.
> Before using this value, htab_shift_for_mem_size() will always round it
> to the next power of 2.
> Ex.
> Any value between 0b0101 and 0b1000 will be rounded to 0b1000 for shift
> calculation. If we multiply it by 2 (same as << 1), we have that
> anything between 0b01010 and 0b10000 will be rounded to 0b10000. 

Ah, good point.

> This works just fine as long as we are multiplying. 
> Division may have the behavior you expect, as 0b0101 >> 1 would become
> 0b010 and skip a shift.
> 	
> > > +void memory_batch_expand_prepare(unsigned long newsize)
> > 
> > This wrapper doesn't seem useful.
> 
> Yeah, it does little, but I can't just jump into hash_* functions
> directly from hotplug-memory.c without even knowing if we're using hash
> pagetables. (In case the suggestion was to test for disable_radix
> inside hash_memory_batch_*.)
> 
> > 
> > > +{
> > > +	if (!radix_enabled())
> > > +		hash_memory_batch_expand_prepare(newsize);
> > > +}
> > >  #endif /* CONFIG_MEMORY_HOTPLUG */
> > >  
> > > 
> > > +	memory_batch_expand_prepare(memblock_phys_mem_size() +
> > > +				     drmem_info->n_lmbs * drmem_lmb_size());
> > 
> > This doesn't look right.  memory_add_by_index() is adding a *single*
> > LMB, I think using drmem_info->n_lmbs here means you're counting this
> > as adding again as much memory as you already have hotplugged.
> 
> Yeah, my mistake. This makes sense.
> I will change it to something like 
> memblock_phys_mem_size() + drmem_lmb_size()
> 
> > > 
> > > +	memory_batch_expand_prepare(memblock_phys_mem_size() + lmbs_to_add * drmem_lmb_size());
> > > +
> > >  	for_each_drmem_lmb_in_range(lmb, start_lmb, end_lmb) {
> > >  		if (lmb->flags & DRCONF_MEM_ASSIGNED)
> > >  			continue;
> > 
> > I don't see memory_batch_expand_prepare() suppressing any existing HPT
> > resizes.  Won't this just resize to the right size for the full add,
> > then resize several times again as we perform the add?  Or.. I guess
> > that will be suppressed by patch 1/3. 
> 
> Correct.
> 
> >  That's seems kinda fragile, though.
> 
> What do you mean by fragile here?
> What would you suggest doing differently?
> 
> Best regards,
> Leonardo Bras
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 3/3] powerpc/mm/hash: Avoid multiple HPT resize-downs on memory hotunplug
  2021-04-09  3:31     ` Leonardo Bras
@ 2021-04-19  5:37       ` David Gibson
  0 siblings, 0 replies; 12+ messages in thread
From: David Gibson @ 2021-04-19  5:37 UTC (permalink / raw)
  To: Leonardo Bras
  Cc: Nathan Lynch, David Hildenbrand, Scott Cheloha, linux-kernel,
	linuxppc-dev, Nicholas Piggin, Bharata B Rao, Paul Mackerras,
	Sandipan Das, Aneesh Kumar K.V, Andrew Morton, Laurent Dufour,
	Logan Gunthorpe, Dan Williams, Mike Rapoport


On Fri, Apr 09, 2021 at 12:31:03AM -0300, Leonardo Bras wrote:
> Hello David, thanks for commenting.
> 
> On Tue, 2021-03-23 at 10:45 +1100, David Gibson wrote:
> > > @@ -805,6 +808,10 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
> > >  	if (shrinking) {
> > > 
> > > +		/* When batch-removing entries, only resize the HPT at the end. */
> > > +		if (atomic_read_acquire(&hpt_resize_disable))
> > > +			return 0;
> > > +
> > 
> > I'm not quite convinced by this locking.  Couldn't hpt_resize_disable
> > be set after this point, but while you're still inside
> > resize_hpt_for_hotplug()?  Probably better to use an explicit mutex
> > (and mutex_trylock()) to make the critical sections clearer.
> 
> Sure, I can do that for v2.
> 
> > Except... do we even need the fancy mechanics to suppress the resizes
> > in one place to do them elswhere.  Couldn't we just replace the
> > existing resize calls with the batched ones?
> 
> What do you think of having batched resize-downs of the HPT?

I think it's a good idea.  We still have to have the loop to resize
bigger if we can't fit everything into the smallest target size, but
that still only makes the worst case as bad as the always-case is
currently.

> Other than the current approach, I could only think of a way that would
> touch a lot of generic code, and/or duplicate some functions, as
> dlpar_add_lmb() does a lot of other stuff.
> 
> > > +void hash_memory_batch_shrink_end(void)
> > > +{
> > > +	unsigned long newsize;
> > > +
> > > +	/* Re-enables HPT resize-down after hot-unplug */
> > > +	atomic_set_release(&hpt_resize_disable, 0);
> > > +
> > > +	newsize = memblock_phys_mem_size();
> > > +	/* Resize to smallest SHIFT possible */
> > > +	while (resize_hpt_for_hotplug(newsize, true) == -ENOSPC) {
> > > +		newsize *= 2;
> > 
> > As noted earlier, doing this without an explicit cap on the new hpt
> > size (of the existing size) this makes me nervous. 
> > 
> 
> I can add a stop in v2.
> 
> >  Less so, but doing
> > the calculations on memory size, rather than explictly on HPT size /
> > HPT order also seems kinda clunky.
> 
> Agree, but at this point, it would seem kind of a waste to find the
> shift from newsize, then calculate (1 << shift) for each retry of
> resize_hpt_for_hotplug(), only to make the point that we are retrying
> the order value.

Yeah, I see your point.

> 
> But sure, if you think it looks better, I can change that. 
> 
> > > +void memory_batch_shrink_begin(void)
> > > +{
> > > +	if (!radix_enabled())
> > > +		hash_memory_batch_shrink_begin();
> > > +}
> > > +
> > > +void memory_batch_shrink_end(void)
> > > +{
> > > +	if (!radix_enabled())
> > > +		hash_memory_batch_shrink_end();
> > > +}
> > 
> > Again, these wrappers don't seem particularly useful to me.
> 
> The options would be adding 'if (!radix_enabled())' to the
> hotplug-memory.c functions or to the hash_* functions, both of which
> look kind of wrong.

I think the 'if (!radix_enabled())' in hotplug-memory.c isn't too bad;
in fact it's possibly helpful as a hint that this is HPT-only logic.

> 
> > > +	memory_batch_shrink_end();
> > 
> > remove_by_index only removes a single LMB, so there's no real point to
> > batching here.
> 
> Sure, will be fixed for v2.
> 
> > > @@ -700,6 +712,7 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
> > >  	if (lmbs_added != lmbs_to_add) {
> > >  		pr_err("Memory hot-add failed, removing any added LMBs\n");
> > > 
> > > +		memory_batch_shrink_begin();
> > 
> > 
> > The effect of these on the memory grow path is far from clear.
> > 
> 
> On hotplug, the HPT is resized up before adding LMBs.
> On hotunplug, the HPT is resized down after removing LMBs.
> And each one has its own mechanism to batch HPT resizes...
> 
> I can't see exactly how using it on the hotplug failure path can be any
> different from using it on hotunplug.
> 
> Can you please help me understand this?
> 
> Best regards,
> Leonardo Bras
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2021-04-19  5:38 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-12  7:29 [PATCH 0/3] powerpc/mm/hash: Time improvements for memory hot(un)plug Leonardo Bras
2021-03-12  7:29 ` [PATCH 1/3] powerpc/mm/hash: Avoid resizing-down HPT on first memory hotplug Leonardo Bras
2021-03-22  6:49   ` David Gibson
2021-04-09  2:16     ` Leonardo Bras
2021-03-12  7:29 ` [PATCH 2/3] powerpc/mm/hash: Avoid multiple HPT resize-ups on " Leonardo Bras
2021-03-22  7:55   ` David Gibson
2021-04-09  2:51     ` Leonardo Bras
2021-04-19  5:34       ` David Gibson
2021-03-12  7:29 ` [PATCH 3/3] powerpc/mm/hash: Avoid multiple HPT resize-downs on memory hotunplug Leonardo Bras
2021-03-22 23:45   ` David Gibson
2021-04-09  3:31     ` Leonardo Bras
2021-04-19  5:37       ` David Gibson
