linux-kernel.vger.kernel.org archive mirror
* [PATCH v5 0/4] Cleanup and fixups for vmemmap handling
@ 2021-03-09 17:41 Oscar Salvador
  2021-03-09 17:41 ` [PATCH v5 1/4] x86/vmemmap: Drop handling of 4K unaligned vmemmap range Oscar Salvador
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Oscar Salvador @ 2021-03-09 17:41 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel,
	Oscar Salvador

Hi,

This series contains cleanups to remove dead code that handles
unaligned cases for 4K and 1GB pages (patch#1 and patch#2) when
removing the vmemmap range, and a fix (patch#3) to handle the case
when two vmemmap ranges intersect the same PMD.

More details can be found in the respective changelogs.

 v4 -> v5:
 - Rebase on top of 5.12-rc2
 - Addressed feedback from Dave
 - Split previous patch#3 into core-changes (current patch#3) and
   the optimization (current patch#4)
 - Better document what unused_pmd_start is and what it optimizes
 - Added Acked-by for patch#1

 v3 -> v4:
 - Rebase on top of 5.12-rc1 as Andrew suggested
 - Added last Reviewed-by for the last patch

 v2 -> v3:
 - Make sure we do not clear the PUD entry in case
   we are not removing the whole range.
 - Add Reviewed-by

 v1 -> v2:
 - Remove dead code in remove_pud_table as well
 - Addressed feedback from David
 - Place the vmemmap functions that take care of unaligned PMDs
   within CONFIG_SPARSEMEM_VMEMMAP


Oscar Salvador (4):
  x86/vmemmap: Drop handling of 4K unaligned vmemmap range
  x86/vmemmap: Drop handling of 1GB vmemmap ranges
  x86/vmemmap: Handle unpopulated sub-pmd ranges
  x86/vmemmap: Optimize for consecutive sections in partial populated
    PMDs

 arch/x86/mm/init_64.c | 198 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 124 insertions(+), 74 deletions(-)

-- 
2.16.3


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v5 1/4] x86/vmemmap: Drop handling of 4K unaligned vmemmap range
  2021-03-09 17:41 [PATCH v5 0/4] Cleanup and fixups for vmemmap handling Oscar Salvador
@ 2021-03-09 17:41 ` Oscar Salvador
  2021-03-09 17:41 ` [PATCH v5 2/4] x86/vmemmap: Drop handling of 1GB vmemmap ranges Oscar Salvador
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 14+ messages in thread
From: Oscar Salvador @ 2021-03-09 17:41 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel,
	Oscar Salvador

remove_pte_table() is prepared to handle the case where either the
start or the end of the range is not PAGE_SIZE aligned.
This cannot actually happen:

__populate_section_memmap enforces the range to be PMD aligned,
so as long as the size of struct page remains a multiple of 8,
the vmemmap range will be aligned to PAGE_SIZE.

Drop the dead code and place a VM_BUG_ON in vmemmap_{populate,free}
to catch nasty cases.
Note that the VM_BUG_ON is placed there because vmemmap_{populate,free}
is the gate of all page-table removing and freeing logic.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
Suggested-by: David Hildenbrand <david@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
---
 arch/x86/mm/init_64.c | 48 +++++++++++++-----------------------------------
 1 file changed, 13 insertions(+), 35 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..b0e1d215c83e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -962,7 +962,6 @@ remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
 {
 	unsigned long next, pages = 0;
 	pte_t *pte;
-	void *page_addr;
 	phys_addr_t phys_addr;
 
 	pte = pte_start + pte_index(addr);
@@ -983,42 +982,15 @@ remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
 		if (phys_addr < (phys_addr_t)0x40000000)
 			return;
 
-		if (PAGE_ALIGNED(addr) && PAGE_ALIGNED(next)) {
-			/*
-			 * Do not free direct mapping pages since they were
-			 * freed when offlining, or simplely not in use.
-			 */
-			if (!direct)
-				free_pagetable(pte_page(*pte), 0);
-
-			spin_lock(&init_mm.page_table_lock);
-			pte_clear(&init_mm, addr, pte);
-			spin_unlock(&init_mm.page_table_lock);
+		if (!direct)
+			free_pagetable(pte_page(*pte), 0);
 
-			/* For non-direct mapping, pages means nothing. */
-			pages++;
-		} else {
-			/*
-			 * If we are here, we are freeing vmemmap pages since
-			 * direct mapped memory ranges to be freed are aligned.
-			 *
-			 * If we are not removing the whole page, it means
-			 * other page structs in this page are being used and
-			 * we canot remove them. So fill the unused page_structs
-			 * with 0xFD, and remove the page when it is wholly
-			 * filled with 0xFD.
-			 */
-			memset((void *)addr, PAGE_INUSE, next - addr);
-
-			page_addr = page_address(pte_page(*pte));
-			if (!memchr_inv(page_addr, PAGE_INUSE, PAGE_SIZE)) {
-				free_pagetable(pte_page(*pte), 0);
+		spin_lock(&init_mm.page_table_lock);
+		pte_clear(&init_mm, addr, pte);
+		spin_unlock(&init_mm.page_table_lock);
 
-				spin_lock(&init_mm.page_table_lock);
-				pte_clear(&init_mm, addr, pte);
-				spin_unlock(&init_mm.page_table_lock);
-			}
-		}
+		/* For non-direct mapping, pages means nothing. */
+		pages++;
 	}
 
 	/* Call free_pte_table() in remove_pmd_table(). */
@@ -1197,6 +1169,9 @@ remove_pagetable(unsigned long start, unsigned long end, bool direct,
 void __ref vmemmap_free(unsigned long start, unsigned long end,
 		struct vmem_altmap *altmap)
 {
+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+
 	remove_pagetable(start, end, false, altmap);
 }
 
@@ -1556,6 +1531,9 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+
 	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
-- 
2.16.3


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v5 2/4] x86/vmemmap: Drop handling of 1GB vmemmap ranges
  2021-03-09 17:41 [PATCH v5 0/4] Cleanup and fixups for vmemmap handling Oscar Salvador
  2021-03-09 17:41 ` [PATCH v5 1/4] x86/vmemmap: Drop handling of 4K unaligned vmemmap range Oscar Salvador
@ 2021-03-09 17:41 ` Oscar Salvador
  2021-03-09 18:34   ` Dave Hansen
  2021-03-09 17:41 ` [PATCH v5 3/4] x86/vmemmap: Handle unpopulated sub-pmd ranges Oscar Salvador
  2021-03-09 17:41 ` [PATCH v5 4/4] x86/vmemmap: Optimize for consecutive sections in partial populated PMDs Oscar Salvador
  3 siblings, 1 reply; 14+ messages in thread
From: Oscar Salvador @ 2021-03-09 17:41 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel,
	Oscar Salvador

We never get to allocate 1GB pages when mapping the vmemmap range.
Drop the dead code both for the aligned and unaligned cases and leave
only the direct map handling.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
Suggested-by: David Hildenbrand <david@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/mm/init_64.c | 35 +++++++----------------------------
 1 file changed, 7 insertions(+), 28 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b0e1d215c83e..9ecb3c488ac8 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1062,7 +1062,6 @@ remove_pud_table(pud_t *pud_start, unsigned long addr, unsigned long end,
 	unsigned long next, pages = 0;
 	pmd_t *pmd_base;
 	pud_t *pud;
-	void *page_addr;
 
 	pud = pud_start + pud_index(addr);
 	for (; addr < end; addr = next, pud++) {
@@ -1071,33 +1070,13 @@ remove_pud_table(pud_t *pud_start, unsigned long addr, unsigned long end,
 		if (!pud_present(*pud))
 			continue;
 
-		if (pud_large(*pud)) {
-			if (IS_ALIGNED(addr, PUD_SIZE) &&
-			    IS_ALIGNED(next, PUD_SIZE)) {
-				if (!direct)
-					free_pagetable(pud_page(*pud),
-						       get_order(PUD_SIZE));
-
-				spin_lock(&init_mm.page_table_lock);
-				pud_clear(pud);
-				spin_unlock(&init_mm.page_table_lock);
-				pages++;
-			} else {
-				/* If here, we are freeing vmemmap pages. */
-				memset((void *)addr, PAGE_INUSE, next - addr);
-
-				page_addr = page_address(pud_page(*pud));
-				if (!memchr_inv(page_addr, PAGE_INUSE,
-						PUD_SIZE)) {
-					free_pagetable(pud_page(*pud),
-						       get_order(PUD_SIZE));
-
-					spin_lock(&init_mm.page_table_lock);
-					pud_clear(pud);
-					spin_unlock(&init_mm.page_table_lock);
-				}
-			}
-
+		if (pud_large(*pud) &&
+		    IS_ALIGNED(addr, PUD_SIZE) &&
+		    IS_ALIGNED(next, PUD_SIZE)) {
+			spin_lock(&init_mm.page_table_lock);
+			pud_clear(pud);
+			spin_unlock(&init_mm.page_table_lock);
+			pages++;
 			continue;
 		}
 
-- 
2.16.3


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v5 3/4] x86/vmemmap: Handle unpopulated sub-pmd ranges
  2021-03-09 17:41 [PATCH v5 0/4] Cleanup and fixups for vmemmap handling Oscar Salvador
  2021-03-09 17:41 ` [PATCH v5 1/4] x86/vmemmap: Drop handling of 4K unaligned vmemmap range Oscar Salvador
  2021-03-09 17:41 ` [PATCH v5 2/4] x86/vmemmap: Drop handling of 1GB vmemmap ranges Oscar Salvador
@ 2021-03-09 17:41 ` Oscar Salvador
  2021-03-09 17:52   ` David Hildenbrand
  2021-03-09 18:39   ` Dave Hansen
  2021-03-09 17:41 ` [PATCH v5 4/4] x86/vmemmap: Optimize for consecutive sections in partial populated PMDs Oscar Salvador
  3 siblings, 2 replies; 14+ messages in thread
From: Oscar Salvador @ 2021-03-09 17:41 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel,
	Oscar Salvador

When sizeof(struct page) is not a power of 2, sections do not span
a PMD anymore and so when populating them some parts of the PMD will
remain unused.
Because of this, PMDs will be left behind when depopulating sections
since remove_pmd_table() thinks that those unused parts are still in
use.

Fix this by marking the unused parts with PAGE_UNUSED, so memchr_inv()
will do the right thing and will let us free the PMD when the last user
of it is gone.

This patch is based on a similar patch by David Hildenbrand:

https://lore.kernel.org/linux-mm/20200722094558.9828-9-david@redhat.com/

Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/mm/init_64.c | 63 ++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 52 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 9ecb3c488ac8..3bb3988c7681 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -871,7 +871,50 @@ int arch_add_memory(int nid, u64 start, u64 size,
 	return add_pages(nid, start_pfn, nr_pages, params);
 }
 
-#define PAGE_INUSE 0xFD
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+#define PAGE_UNUSED 0xFD
+
+/* Returns true if the PMD is completely unused and thus it can be freed */
+static bool __meminit vmemmap_pmd_is_unused(unsigned long addr, unsigned long end)
+{
+	unsigned long start = ALIGN_DOWN(addr, PMD_SIZE);
+
+	memset((void *)addr, PAGE_UNUSED, end - addr);
+
+	return !memchr_inv((void *)start, PAGE_UNUSED, PMD_SIZE);
+}
+
+static void __meminit vmemmap_use_sub_pmd(unsigned long start)
+{
+	/*
+	 * As we expect to add in the same granularity as we remove, it's
+	 * sufficient to mark only some piece used to block the memmap page from
+	 * getting removed when removing some other adjacent memmap (just in
+	 * case the first memmap never gets initialized e.g., because the memory
+	 * block never gets onlined).
+	 */
+	memset((void *)start, 0, sizeof(struct page));
+}
+
+static void __meminit vmemmap_use_new_sub_pmd(unsigned long start, unsigned long end)
+{
+	/*
+	 * Could be our memmap page is filled with PAGE_UNUSED already from a
+	 * previous remove. Make sure to reset it.
+	 */
+	vmemmap_use_sub_pmd(start);
+
+	/*
+	 * Mark with PAGE_UNUSED the unused parts of the new memmap range
+	 */
+	if (!IS_ALIGNED(start, PMD_SIZE))
+	        memset((void *)start, PAGE_UNUSED,
+	               start - ALIGN_DOWN(start, PMD_SIZE));
+	if (!IS_ALIGNED(end, PMD_SIZE))
+		memset((void *)end, PAGE_UNUSED,
+		       ALIGN(end, PMD_SIZE) - end);
+}
+#endif
 
 static void __meminit free_pagetable(struct page *page, int order)
 {
@@ -1006,7 +1049,6 @@ remove_pmd_table(pmd_t *pmd_start, unsigned long addr, unsigned long end,
 	unsigned long next, pages = 0;
 	pte_t *pte_base;
 	pmd_t *pmd;
-	void *page_addr;
 
 	pmd = pmd_start + pmd_index(addr);
 	for (; addr < end; addr = next, pmd++) {
@@ -1026,20 +1068,13 @@ remove_pmd_table(pmd_t *pmd_start, unsigned long addr, unsigned long end,
 				pmd_clear(pmd);
 				spin_unlock(&init_mm.page_table_lock);
 				pages++;
-			} else {
-				/* If here, we are freeing vmemmap pages. */
-				memset((void *)addr, PAGE_INUSE, next - addr);
-
-				page_addr = page_address(pmd_page(*pmd));
-				if (!memchr_inv(page_addr, PAGE_INUSE,
-						PMD_SIZE)) {
+			} else if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) &&
+				   vmemmap_pmd_is_unused(addr, next)) {
 					free_hugepage_table(pmd_page(*pmd),
 							    altmap);
-
 					spin_lock(&init_mm.page_table_lock);
 					pmd_clear(pmd);
 					spin_unlock(&init_mm.page_table_lock);
-				}
 			}
 
 			continue;
@@ -1492,11 +1527,17 @@ static int __meminit vmemmap_populate_hugepages(unsigned long start,
 
 				addr_end = addr + PMD_SIZE;
 				p_end = p + PMD_SIZE;
+
+				if (!IS_ALIGNED(addr, PMD_SIZE) ||
+				    !IS_ALIGNED(next, PMD_SIZE))
+					vmemmap_use_new_sub_pmd(addr, next);
+
 				continue;
 			} else if (altmap)
 				return -ENOMEM; /* no fallback */
 		} else if (pmd_large(*pmd)) {
 			vmemmap_verify((pte_t *)pmd, node, addr, next);
+			vmemmap_use_sub_pmd(addr);
 			continue;
 		}
 		if (vmemmap_populate_basepages(addr, next, node, NULL))
-- 
2.16.3


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v5 4/4] x86/vmemmap: Optimize for consecutive sections in partial populated PMDs
  2021-03-09 17:41 [PATCH v5 0/4] Cleanup and fixups for vmemmap handling Oscar Salvador
                   ` (2 preceding siblings ...)
  2021-03-09 17:41 ` [PATCH v5 3/4] x86/vmemmap: Handle unpopulated sub-pmd ranges Oscar Salvador
@ 2021-03-09 17:41 ` Oscar Salvador
  2021-03-09 18:50   ` Dave Hansen
  3 siblings, 1 reply; 14+ messages in thread
From: Oscar Salvador @ 2021-03-09 17:41 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel,
	Oscar Salvador

We can optimize in the case we are adding consecutive sections, so no
memset(PAGE_UNUSED) is needed.
In that case, let us keep track where the unused range of the previous
memory range begins, so we can compare it with start of the range to be
added.
If they are equal, we know sections are added consecutively.

For that purpose, let us introduce 'unused_pmd_start', which always holds
the beginning of the unused memory range.

In the case a section does not contiguously follow the previous one, we
know we can memset [unused_pmd_start, PMD_BOUNDARY) with PAGE_UNUSED.

This patch is based on a similar patch by David Hildenbrand:

https://lore.kernel.org/linux-mm/20200722094558.9828-10-david@redhat.com/

Signed-off-by: Oscar Salvador <osalvador@suse.de>
---
 arch/x86/mm/init_64.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 57 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 3bb3988c7681..9251f841ffb5 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -874,17 +874,40 @@ int arch_add_memory(int nid, u64 start, u64 size,
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 #define PAGE_UNUSED 0xFD
 
+/*
+ * The unused vmemmap range, which was not yet memset(PAGE_UNUSED), ranges
+ * from unused_pmd_start to next PMD_SIZE boundary.
+ */
+static unsigned long unused_pmd_start __meminitdata;
+
+static void __meminit vmemmap_flush_unused_pmd(void)
+{
+	if (!unused_pmd_start)
+		return;
+	/*
+	 * Clears (unused_pmd_start, PMD_END]
+	 */
+	memset((void *)unused_pmd_start, PAGE_UNUSED,
+	       ALIGN(unused_pmd_start, PMD_SIZE) - unused_pmd_start);
+	unused_pmd_start = 0;
+}
+
 /* Returns true if the PMD is completely unused and thus it can be freed */
 static bool __meminit vmemmap_pmd_is_unused(unsigned long addr, unsigned long end)
 {
 	unsigned long start = ALIGN_DOWN(addr, PMD_SIZE);
 
+	/*
+	 * Flush the unused range cache to ensure that memchr_inv() will work
+	 * for the whole range.
+	 */
+	vmemmap_flush_unused_pmd();
 	memset((void *)addr, PAGE_UNUSED, end - addr);
 
 	return !memchr_inv((void *)start, PAGE_UNUSED, PMD_SIZE);
 }
 
-static void __meminit vmemmap_use_sub_pmd(unsigned long start)
+static void __meminit __vmemmap_use_sub_pmd(unsigned long start)
 {
 	/*
 	 * As we expect to add in the same granularity as we remove, it's
@@ -896,13 +919,37 @@ static void __meminit vmemmap_use_sub_pmd(unsigned long start)
 	memset((void *)start, 0, sizeof(struct page));
 }
 
+static void __meminit vmemmap_use_sub_pmd(unsigned long start, unsigned long end)
+{
+	/*
+	 * We only optimize if the new used range directly follows the
+	 * previously unused range (esp., when populating consecutive sections).
+	 */
+	if (unused_pmd_start == start) {
+		if (likely(IS_ALIGNED(end, PMD_SIZE)))
+			unused_pmd_start = 0;
+		else
+			unused_pmd_start = end;
+		return;
+	}
+
+	/*
+	 * If the range does not contiguously follow the previous one, make sure
+	 * to mark the unused range of the previous one so it can be removed.
+	 */
+	vmemmap_flush_unused_pmd();
+	__vmemmap_use_sub_pmd(start);
+}
+
 static void __meminit vmemmap_use_new_sub_pmd(unsigned long start, unsigned long end)
 {
+	vmemmap_flush_unused_pmd();
+
 	/*
 	 * Could be our memmap page is filled with PAGE_UNUSED already from a
 	 * previous remove. Make sure to reset it.
 	 */
-	vmemmap_use_sub_pmd(start);
+	__vmemmap_use_sub_pmd(start);
 
 	/*
 	 * Mark with PAGE_UNUSED the unused parts of the new memmap range
@@ -910,9 +957,14 @@ static void __meminit vmemmap_use_new_sub_pmd(unsigned long start, unsigned long
 	if (!IS_ALIGNED(start, PMD_SIZE))
 	        memset((void *)start, PAGE_UNUSED,
 	               start - ALIGN_DOWN(start, PMD_SIZE));
+
+	/*
+	 * We want to avoid memset(PAGE_UNUSED) when populating the vmemmap of
+	 * consecutive sections. Remember for the last added PMD where the
+	 * unused range begins.
+	 */
 	if (!IS_ALIGNED(end, PMD_SIZE))
-		memset((void *)end, PAGE_UNUSED,
-		       ALIGN(end, PMD_SIZE) - end);
+		unused_pmd_start = end;
 }
 #endif
 
@@ -1537,7 +1589,7 @@ static int __meminit vmemmap_populate_hugepages(unsigned long start,
 				return -ENOMEM; /* no fallback */
 		} else if (pmd_large(*pmd)) {
 			vmemmap_verify((pte_t *)pmd, node, addr, next);
-			vmemmap_use_sub_pmd(addr);
+			vmemmap_use_sub_pmd(addr, next);
 			continue;
 		}
 		if (vmemmap_populate_basepages(addr, next, node, NULL))
-- 
2.16.3


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 3/4] x86/vmemmap: Handle unpopulated sub-pmd ranges
  2021-03-09 17:41 ` [PATCH v5 3/4] x86/vmemmap: Handle unpopulated sub-pmd ranges Oscar Salvador
@ 2021-03-09 17:52   ` David Hildenbrand
  2021-03-10 17:49     ` Oscar Salvador
  2021-03-09 18:39   ` Dave Hansen
  1 sibling, 1 reply; 14+ messages in thread
From: David Hildenbrand @ 2021-03-09 17:52 UTC (permalink / raw)
  To: Oscar Salvador, Andrew Morton
  Cc: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, x86, H . Peter Anvin, Michal Hocko,
	linux-mm, linux-kernel

On 09.03.21 18:41, Oscar Salvador wrote:
> When sizeof(struct page) is not a power of 2, sections do not span
> a PMD anymore and so when populating them some parts of the PMD will
> remain unused.
> Because of this, PMDs will be left behind when depopulating sections
> since remove_pmd_table() thinks that those unused parts are still in
> use.
> 
> Fix this by marking the unused parts with PAGE_UNUSED, so memchr_inv()
> will do the right thing and will let us free the PMD when the last user
> of it is gone.
> 
> This patch is based on a similar patch by David Hildenbrand:
> 
> https://lore.kernel.org/linux-mm/20200722094558.9828-9-david@redhat.com/
> 
> Signed-off-by: Oscar Salvador <osalvador@suse.de>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> ---
>   arch/x86/mm/init_64.c | 63 ++++++++++++++++++++++++++++++++++++++++++---------
>   1 file changed, 52 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 9ecb3c488ac8..3bb3988c7681 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -871,7 +871,50 @@ int arch_add_memory(int nid, u64 start, u64 size,
>   	return add_pages(nid, start_pfn, nr_pages, params);
>   }
>   
> -#define PAGE_INUSE 0xFD
> +#ifdef CONFIG_SPARSEMEM_VMEMMAP
> +#define PAGE_UNUSED 0xFD
> +
> +/* Returns true if the PMD is completely unused and thus it can be freed */
> +static bool __meminit vmemmap_pmd_is_unused(unsigned long addr, unsigned long end)
> +{

I don't think the new name is any better. It implies that all it does is 
a check - yet it actually clears the given range. (I prefer the old 
name, but well, I came up with that, so what do I know :D )


-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 2/4] x86/vmemmap: Drop handling of 1GB vmemmap ranges
  2021-03-09 17:41 ` [PATCH v5 2/4] x86/vmemmap: Drop handling of 1GB vmemmap ranges Oscar Salvador
@ 2021-03-09 18:34   ` Dave Hansen
  2021-03-09 21:27     ` Oscar Salvador
  0 siblings, 1 reply; 14+ messages in thread
From: Dave Hansen @ 2021-03-09 18:34 UTC (permalink / raw)
  To: Oscar Salvador, Andrew Morton
  Cc: David Hildenbrand, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel

On 3/9/21 9:41 AM, Oscar Salvador wrote:
> We never get to allocate 1GB pages when mapping the vmemmap range.
> Drop the dead code both for the aligned and unaligned cases and leave
> only the direct map handling.

I was hoping to see some more meat in this changelog, possibly some of
what David Hildenbrand said in the v4 thread about this patch.
Basically, we don't have code to allocate 1G mappings because it isn't
clear that it would be worth the complexity, and it might also waste memory.

I'm fine with the code, but I would appreciate a beefed-up changelog:

Acked-by: Dave Hansen <dave.hansen@linux.intel.com>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 3/4] x86/vmemmap: Handle unpopulated sub-pmd ranges
  2021-03-09 17:41 ` [PATCH v5 3/4] x86/vmemmap: Handle unpopulated sub-pmd ranges Oscar Salvador
  2021-03-09 17:52   ` David Hildenbrand
@ 2021-03-09 18:39   ` Dave Hansen
  1 sibling, 0 replies; 14+ messages in thread
From: Dave Hansen @ 2021-03-09 18:39 UTC (permalink / raw)
  To: Oscar Salvador, Andrew Morton
  Cc: David Hildenbrand, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel

On 3/9/21 9:41 AM, Oscar Salvador wrote:
> When sizeof(struct page) is not a power of 2, sections do not span
> a PMD anymore and so when populating them some parts of the PMD will
> remain unused.
> Because of this, PMDs will be left behind when depopulating sections
> since remove_pmd_table() thinks that those unused parts are still in
> use.
> 
> Fix this by marking the unused parts with PAGE_UNUSED, so memchr_inv()
> will do the right thing and will let us free the PMD when the last user
> of it is gone.
> 
> This patch is based on a similar patch by David Hildenbrand:
> 
> https://lore.kernel.org/linux-mm/20200722094558.9828-9-david@redhat.com/

Looks good now.  It's much easier to read without the optimization.

Acked-by: Dave Hansen <dave.hansen@linux.intel.com>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 4/4] x86/vmemmap: Optimize for consecutive sections in partial populated PMDs
  2021-03-09 17:41 ` [PATCH v5 4/4] x86/vmemmap: Optimize for consecutive sections in partial populated PMDs Oscar Salvador
@ 2021-03-09 18:50   ` Dave Hansen
  0 siblings, 0 replies; 14+ messages in thread
From: Dave Hansen @ 2021-03-09 18:50 UTC (permalink / raw)
  To: Oscar Salvador, Andrew Morton
  Cc: David Hildenbrand, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel

On 3/9/21 9:41 AM, Oscar Salvador wrote:
> We can optimize in the case we are adding consecutive sections, so no
> memset(PAGE_UNUSED) is needed.
> In that case, let us keep track where the unused range of the previous
> memory range begins, so we can compare it with start of the range to be
> added.
> If they are equal, we know sections are added consecutively.
> 
> For that purpose, let us introduce 'unused_pmd_start', which always holds
> the beginning of the unused memory range.
> 
> In the case a section does not contiguously follow the previous one, we
> know we can memset [unused_pmd_start, PMD_BOUNDARY) with PAGE_UNUSED.
> 
> This patch is based on a similar patch by David Hildenbrand:
> 
> https://lore.kernel.org/linux-mm/20200722094558.9828-10-david@redhat.com/
> 
> Signed-off-by: Oscar Salvador <osalvador@suse.de>

This is much more clear now.  Thanks!

Acked-by: Dave Hansen <dave.hansen@linux.intel.com>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 2/4] x86/vmemmap: Drop handling of 1GB vmemmap ranges
  2021-03-09 18:34   ` Dave Hansen
@ 2021-03-09 21:27     ` Oscar Salvador
  0 siblings, 0 replies; 14+ messages in thread
From: Oscar Salvador @ 2021-03-09 21:27 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Andrew Morton, David Hildenbrand, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	x86, H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel

On Tue, Mar 09, 2021 at 10:34:51AM -0800, Dave Hansen wrote:
> On 3/9/21 9:41 AM, Oscar Salvador wrote:
> > We never get to allocate 1GB pages when mapping the vmemmap range.
> > Drop the dead code both for the aligned and unaligned cases and leave
> > only the direct map handling.
> 
> I was hoping to see some more meat in this changelog, possibly some of
> what David Hildenbrand said in the v4 thread about this patch.
> Basically, we don't have code to allocate 1G mappings because it isn't
> clear that it would be worth the complexity, and it might also waste memory.
> 
> I'm fine with the code, but I would appreciate a beefed-up changelog:
> 
> Acked-by: Dave Hansen <dave.hansen@linux.intel.com>

Since I had to do another pass to fix up some compilation errors,
I added a bit more explanation in that regard.

Thanks!


-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 3/4] x86/vmemmap: Handle unpopulated sub-pmd ranges
  2021-03-09 17:52   ` David Hildenbrand
@ 2021-03-10 17:49     ` Oscar Salvador
  2021-03-10 17:58       ` David Hildenbrand
  0 siblings, 1 reply; 14+ messages in thread
From: Oscar Salvador @ 2021-03-10 17:49 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel

On Tue, Mar 09, 2021 at 06:52:38PM +0100, David Hildenbrand wrote:
> > -#define PAGE_INUSE 0xFD
> > +#ifdef CONFIG_SPARSEMEM_VMEMMAP
> > +#define PAGE_UNUSED 0xFD
> > +
> > +/* Returns true if the PMD is completely unused and thus it can be freed */
> > +static bool __meminit vmemmap_pmd_is_unused(unsigned long addr, unsigned long end)
> > +{
> 
> I don't think the new name is any better. It implies that all it does is a
> check - yet it actually clears the given range. (I prefer the old name, but
> well, I came up with that, so what do I know :D )

Sorry, I did not mean to offend here.

Something like: vmemmap_is_pmd_unused_after_clearing_it would be a bit better
I guess.
Tbh, both this and the previous one looked fine to me, but I understand where Dave's
confusion was coming from, which is why I decided to rename it.

Maybe a middle-ground would have been to expand the comment above.

-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 3/4] x86/vmemmap: Handle unpopulated sub-pmd ranges
  2021-03-10 17:49     ` Oscar Salvador
@ 2021-03-10 17:58       ` David Hildenbrand
  2021-03-10 21:58         ` Oscar Salvador
  0 siblings, 1 reply; 14+ messages in thread
From: David Hildenbrand @ 2021-03-10 17:58 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Andrew Morton, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel

On 10.03.21 18:49, Oscar Salvador wrote:
> On Tue, Mar 09, 2021 at 06:52:38PM +0100, David Hildenbrand wrote:
>>> -#define PAGE_INUSE 0xFD
>>> +#ifdef CONFIG_SPARSEMEM_VMEMMAP
>>> +#define PAGE_UNUSED 0xFD
>>> +
>>> +/* Returns true if the PMD is completely unused and thus it can be freed */
>>> +static bool __meminit vmemmap_pmd_is_unused(unsigned long addr, unsigned long end)
>>> +{
>>
>> I don't think the new name is any better. It implies that all it does is a
>> check - yet it actually clears the given range. (I prefer the old name, but
>> well, I came up with that, so what do I know :D )
> 
> Sorry, I did not mean to offend here.

Oh, I didn't feel offended - I was rather expressing that my opinion 
might be biased because I came up with these names in the s390x variant ;)

> 
> Something like: vmemmap_is_pmd_unused_after_clearing_it would be a bit better
> I guess.
> Tbh, both this and previous one looked fine to me, but I understand where Dave
> confusion was coming from, that is why I decided to rename it.
> 
> Maybe a middle-ground would have been to expand the comment above.

Thinking again, I guess it might be a good idea to factor out the core 
functions into common code. For the optimization part, it might make 
sense to pass some "state" structure that contains e.g., 
"unused_pmd_start".

Then we don't have diverging implementations of essentially the same thing.

Of course, we can do that on top of this series - unifying both 
implementations.

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 3/4] x86/vmemmap: Handle unpopulated sub-pmd ranges
  2021-03-10 17:58       ` David Hildenbrand
@ 2021-03-10 21:58         ` Oscar Salvador
  2021-03-11 16:38           ` David Hildenbrand
  0 siblings, 1 reply; 14+ messages in thread
From: Oscar Salvador @ 2021-03-10 21:58 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel

On Wed, Mar 10, 2021 at 06:58:26PM +0100, David Hildenbrand wrote:
> Thinking again, I guess it might be a good idea to factor out the core
> functions into common code. For the optimization part, it might make sense
> to pass some "state" structure that contains e.g., "unused_pmd_start".

Yeah, that really sounds like a good thing to do.

> 
> Then we don't have diverging implementations of essentially the same thing.
> 
> Of course, we can do that on top of this series - unifying both
> implementations.

I would rather do it on top of this series, not because I am lazy, but
because I am fairly busy and will not be able to spend much time on it
anytime soon.

Once this series gets merged, I commit to have a look into that.

Thanks!


-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 3/4] x86/vmemmap: Handle unpopulated sub-pmd ranges
  2021-03-10 21:58         ` Oscar Salvador
@ 2021-03-11 16:38           ` David Hildenbrand
  0 siblings, 0 replies; 14+ messages in thread
From: David Hildenbrand @ 2021-03-11 16:38 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Andrew Morton, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
	H . Peter Anvin, Michal Hocko, linux-mm, linux-kernel

On 10.03.21 22:58, Oscar Salvador wrote:
> On Wed, Mar 10, 2021 at 06:58:26PM +0100, David Hildenbrand wrote:
>> Thinking again, I guess it might be a good idea to factor out the core
>> functions into common code. For the optimization part, it might make sense
>> to pass some "state" structure that contains e.g., "unused_pmd_start".
> 
> Yeah, that really sounds like a good thing to do.
> 
>>
>> Then we don't have diverging implementations of essentially the same thing.
>>
>> Of course, we can do that on top of this series - unifying both
>> implementations.
> 
> I would rather do it on top of this series, not because I am lazy, but
> because I am fairly busy and will not be able to spend much time on it
> anytime soon.
> 
> Once this series gets merged, I commit to have a look into that.
> 

Sure, makes sense - thanks!


-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2021-03-11 16:39 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-09 17:41 [PATCH v5 0/4] Cleanup and fixups for vmemmap handling Oscar Salvador
2021-03-09 17:41 ` [PATCH v5 1/4] x86/vmemmap: Drop handling of 4K unaligned vmemmap range Oscar Salvador
2021-03-09 17:41 ` [PATCH v5 2/4] x86/vmemmap: Drop handling of 1GB vmemmap ranges Oscar Salvador
2021-03-09 18:34   ` Dave Hansen
2021-03-09 21:27     ` Oscar Salvador
2021-03-09 17:41 ` [PATCH v5 3/4] x86/vmemmap: Handle unpopulated sub-pmd ranges Oscar Salvador
2021-03-09 17:52   ` David Hildenbrand
2021-03-10 17:49     ` Oscar Salvador
2021-03-10 17:58       ` David Hildenbrand
2021-03-10 21:58         ` Oscar Salvador
2021-03-11 16:38           ` David Hildenbrand
2021-03-09 18:39   ` Dave Hansen
2021-03-09 17:41 ` [PATCH v5 4/4] x86/vmemmap: Optimize for consecutive sections in partial populated PMDs Oscar Salvador
2021-03-09 18:50   ` Dave Hansen
