* [Patch v2 0/6] Refactor split_mem_range with proper helper and loop
@ 2019-12-05  2:13 Wei Yang
  2019-12-05  2:13 ` [Patch v2 1/6] x86/mm: Remove second argument of split_mem_range() Wei Yang
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Wei Yang @ 2019-12-05  2:13 UTC (permalink / raw)
  To: x86, linux-kernel
  Cc: richard.weiyang, dave.hansen, luto, peterz, tglx, Wei Yang

split_mem_range() is used to prepare memory ranges before the kernel page
tables are mapped.

After the first version, Thomas suggested a brilliant way to rewrite the
logic.

Wei split the big patch into pieces and ran some tests.  To verify the
functionality, Wei abstracted the code into userland and ran the following
test cases:
    
        * range fits only 4K
        * range fits only 2M
        * range fits only 1G
        * range fits 4K and 2M
        * range fits 2M and 1G
        * range fits 4K, 2M and 1G
        * range fits 4K, 2M and 1G but w/o 1G size
        * range fits 4K, 2M and 1G with only 4K size
    
    Below is the test result:
    
        ### Split [4K, 16K][0x00001000-0x00004000]:
        [mem 0x00001000-0x00003fff] page size 4K
        ### Split [4M, 64M][0x00400000-0x04000000]:
        [mem 0x00400000-0x03ffffff] page size 2M
        ### Split [0G, 2G][0000000000-0x80000000]:
        [mem 0000000000-0x7fffffff] page size 1G
        ### Split [16K, 4M + 16K][0x00004000-0x00404000]:
        [mem 0x00004000-0x001fffff] page size 4K
        [mem 0x00200000-0x003fffff] page size 2M
        [mem 0x00400000-0x00403fff] page size 4K
        ### Split [4M, 2G + 2M][0x00400000-0x80200000]:
        [mem 0x00400000-0x3fffffff] page size 2M
        [mem 0x40000000-0x7fffffff] page size 1G
        [mem 0x80000000-0x801fffff] page size 2M
        ### Split [4M - 16K, 2G + 2M + 16K][0x003fc000-0x80204000]:
        [mem 0x003fc000-0x003fffff] page size 4K
        [mem 0x00400000-0x3fffffff] page size 2M
        [mem 0x40000000-0x7fffffff] page size 1G
        [mem 0x80000000-0x801fffff] page size 2M
        [mem 0x80200000-0x80203fff] page size 4K
        ### Split w/o 1G size [4M - 16K, 2G + 2M + 16K][0x003fc000-0x80204000]:
        [mem 0x003fc000-0x003fffff] page size 4K
        [mem 0x00400000-0x801fffff] page size 2M
        [mem 0x80200000-0x80203fff] page size 4K
        ### Split w/ only 4K [4M - 16K, 2G + 2M + 16K][0x003fc000-0x80204000]:
        [mem 0x003fc000-0x80203fff] page size 4K

Thomas Gleixner (1):
  x86/mm: Refactor split_mem_range with proper helper and loop

Wei Yang (5):
  x86/mm: Remove second argument of split_mem_range()
  x86/mm: Add attribute __ro_after_init to after_bootmem
  x86/mm: Make page_size_mask unsigned int clearly
  x86/mm: Refine debug print string retrieval function
  x86/mm: Use address directly in split_mem_range()

 arch/x86/mm/init.c | 259 ++++++++++++++++++---------------------------
 1 file changed, 103 insertions(+), 156 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [Patch v2 1/6] x86/mm: Remove second argument of split_mem_range()
  2019-12-05  2:13 [Patch v2 0/6] Refactor split_mem_range with proper helper and loop Wei Yang
@ 2019-12-05  2:13 ` Wei Yang
  2019-12-05  2:13 ` [Patch v2 2/6] x86/mm: Add attribute __ro_after_init to after_bootmem Wei Yang
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Wei Yang @ 2019-12-05  2:13 UTC (permalink / raw)
  To: x86, linux-kernel
  Cc: richard.weiyang, dave.hansen, luto, peterz, tglx, Wei Yang

The second argument 'nr_range' is always zero when split_mem_range() is
called, so drop it and make nr_range a local variable instead.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 arch/x86/mm/init.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index e7bb483557c9..916a3d9b5bfd 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -333,13 +333,13 @@ static const char *page_size_string(struct map_range *mr)
 	return str_4k;
 }
 
-static int __meminit split_mem_range(struct map_range *mr, int nr_range,
+static int __meminit split_mem_range(struct map_range *mr,
 				     unsigned long start,
 				     unsigned long end)
 {
 	unsigned long start_pfn, end_pfn, limit_pfn;
 	unsigned long pfn;
-	int i;
+	int i, nr_range = 0;
 
 	limit_pfn = PFN_DOWN(end);
 
@@ -477,7 +477,7 @@ unsigned long __ref init_memory_mapping(unsigned long start,
 	       start, end - 1);
 
 	memset(mr, 0, sizeof(mr));
-	nr_range = split_mem_range(mr, 0, start, end);
+	nr_range = split_mem_range(mr, start, end);
 
 	for (i = 0; i < nr_range; i++)
 		ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
-- 
2.17.1



* [Patch v2 2/6] x86/mm: Add attribute __ro_after_init to after_bootmem
  2019-12-05  2:13 [Patch v2 0/6] Refactor split_mem_range with proper helper and loop Wei Yang
  2019-12-05  2:13 ` [Patch v2 1/6] x86/mm: Remove second argument of split_mem_range() Wei Yang
@ 2019-12-05  2:13 ` Wei Yang
  2019-12-05  2:14 ` [Patch v2 3/6] x86/mm: Make page_size_mask unsigned int clearly Wei Yang
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Wei Yang @ 2019-12-05  2:13 UTC (permalink / raw)
  To: x86, linux-kernel
  Cc: richard.weiyang, dave.hansen, luto, peterz, tglx, Wei Yang

after_bootmem is only set once, in mem_init() during boot, so it can be
marked __ro_after_init.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 arch/x86/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 916a3d9b5bfd..4fa5fd641865 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -160,7 +160,7 @@ void  __init early_alloc_pgt_buf(void)
 	pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
 }
 
-int after_bootmem;
+int after_bootmem __ro_after_init;
 
 early_param_on_off("gbpages", "nogbpages", direct_gbpages, CONFIG_X86_DIRECT_GBPAGES);
 
-- 
2.17.1



* [Patch v2 3/6] x86/mm: Make page_size_mask unsigned int clearly
  2019-12-05  2:13 [Patch v2 0/6] Refactor split_mem_range with proper helper and loop Wei Yang
  2019-12-05  2:13 ` [Patch v2 1/6] x86/mm: Remove second argument of split_mem_range() Wei Yang
  2019-12-05  2:13 ` [Patch v2 2/6] x86/mm: Add attribute __ro_after_init to after_bootmem Wei Yang
@ 2019-12-05  2:14 ` Wei Yang
  2019-12-05  2:14 ` [Patch v2 4/6] x86/mm: Refine debug print string retrieval function Wei Yang
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Wei Yang @ 2019-12-05  2:14 UTC (permalink / raw)
  To: x86, linux-kernel
  Cc: richard.weiyang, dave.hansen, luto, peterz, tglx, Wei Yang

page_size_mask is defined as unsigned, so it is more proper to use the
unsigned literal 1U when setting and testing its bits.  Also give the
file-scope page_size_mask an explicit 'unsigned int' type and mark it
__ro_after_init, since it is only written during boot.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 arch/x86/mm/init.c | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 4fa5fd641865..0eb5edb63fa2 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -165,12 +165,12 @@ int after_bootmem __ro_after_init;
 early_param_on_off("gbpages", "nogbpages", direct_gbpages, CONFIG_X86_DIRECT_GBPAGES);
 
 struct map_range {
-	unsigned long start;
-	unsigned long end;
-	unsigned page_size_mask;
+	unsigned long	start;
+	unsigned long	end;
+	unsigned int	page_size_mask;
 };
 
-static int page_size_mask;
+static unsigned int page_size_mask __ro_after_init;
 
 static void __init probe_page_size_mask(void)
 {
@@ -180,7 +180,7 @@ static void __init probe_page_size_mask(void)
 	 * large pages into small in interrupt context, etc.
 	 */
 	if (boot_cpu_has(X86_FEATURE_PSE) && !debug_pagealloc_enabled())
-		page_size_mask |= 1 << PG_LEVEL_2M;
+		page_size_mask |= 1U << PG_LEVEL_2M;
 	else
 		direct_gbpages = 0;
 
@@ -204,7 +204,7 @@ static void __init probe_page_size_mask(void)
 	/* Enable 1 GB linear kernel mappings if available: */
 	if (direct_gbpages && boot_cpu_has(X86_FEATURE_GBPAGES)) {
 		printk(KERN_INFO "Using GB pages for direct mapping\n");
-		page_size_mask |= 1 << PG_LEVEL_1G;
+		page_size_mask |= 1U << PG_LEVEL_1G;
 	} else {
 		direct_gbpages = 0;
 	}
@@ -284,8 +284,8 @@ static void __ref adjust_range_page_size_mask(struct map_range *mr,
 	int i;
 
 	for (i = 0; i < nr_range; i++) {
-		if ((page_size_mask & (1<<PG_LEVEL_2M)) &&
-		    !(mr[i].page_size_mask & (1<<PG_LEVEL_2M))) {
+		if ((page_size_mask & (1U<<PG_LEVEL_2M)) &&
+		    !(mr[i].page_size_mask & (1U<<PG_LEVEL_2M))) {
 			unsigned long start = round_down(mr[i].start, PMD_SIZE);
 			unsigned long end = round_up(mr[i].end, PMD_SIZE);
 
@@ -295,15 +295,15 @@ static void __ref adjust_range_page_size_mask(struct map_range *mr,
 #endif
 
 			if (memblock_is_region_memory(start, end - start))
-				mr[i].page_size_mask |= 1<<PG_LEVEL_2M;
+				mr[i].page_size_mask |= 1U<<PG_LEVEL_2M;
 		}
-		if ((page_size_mask & (1<<PG_LEVEL_1G)) &&
-		    !(mr[i].page_size_mask & (1<<PG_LEVEL_1G))) {
+		if ((page_size_mask & (1U<<PG_LEVEL_1G)) &&
+		    !(mr[i].page_size_mask & (1U<<PG_LEVEL_1G))) {
 			unsigned long start = round_down(mr[i].start, PUD_SIZE);
 			unsigned long end = round_up(mr[i].end, PUD_SIZE);
 
 			if (memblock_is_region_memory(start, end - start))
-				mr[i].page_size_mask |= 1<<PG_LEVEL_1G;
+				mr[i].page_size_mask |= 1U<<PG_LEVEL_1G;
 		}
 	}
 }
@@ -315,7 +315,7 @@ static const char *page_size_string(struct map_range *mr)
 	static const char str_4m[] = "4M";
 	static const char str_4k[] = "4k";
 
-	if (mr->page_size_mask & (1<<PG_LEVEL_1G))
+	if (mr->page_size_mask & (1U<<PG_LEVEL_1G))
 		return str_1g;
 	/*
 	 * 32-bit without PAE has a 4M large page size.
@@ -324,10 +324,10 @@ static const char *page_size_string(struct map_range *mr)
 	 */
 	if (IS_ENABLED(CONFIG_X86_32) &&
 	    !IS_ENABLED(CONFIG_X86_PAE) &&
-	    mr->page_size_mask & (1<<PG_LEVEL_2M))
+	    mr->page_size_mask & (1U<<PG_LEVEL_2M))
 		return str_4m;
 
-	if (mr->page_size_mask & (1<<PG_LEVEL_2M))
+	if (mr->page_size_mask & (1U<<PG_LEVEL_2M))
 		return str_2m;
 
 	return str_4k;
@@ -378,7 +378,7 @@ static int __meminit split_mem_range(struct map_range *mr,
 
 	if (start_pfn < end_pfn) {
 		nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
-				page_size_mask & (1<<PG_LEVEL_2M));
+				page_size_mask & (1U<<PG_LEVEL_2M));
 		pfn = end_pfn;
 	}
 
@@ -389,7 +389,7 @@ static int __meminit split_mem_range(struct map_range *mr,
 	if (start_pfn < end_pfn) {
 		nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
 				page_size_mask &
-				 ((1<<PG_LEVEL_2M)|(1<<PG_LEVEL_1G)));
+				 ((1U<<PG_LEVEL_2M)|(1U<<PG_LEVEL_1G)));
 		pfn = end_pfn;
 	}
 
@@ -398,7 +398,7 @@ static int __meminit split_mem_range(struct map_range *mr,
 	end_pfn = round_down(limit_pfn, PFN_DOWN(PMD_SIZE));
 	if (start_pfn < end_pfn) {
 		nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
-				page_size_mask & (1<<PG_LEVEL_2M));
+				page_size_mask & (1U<<PG_LEVEL_2M));
 		pfn = end_pfn;
 	}
 #endif
-- 
2.17.1



* [Patch v2 4/6] x86/mm: Refine debug print string retrieval function
  2019-12-05  2:13 [Patch v2 0/6] Refactor split_mem_range with proper helper and loop Wei Yang
                   ` (2 preceding siblings ...)
  2019-12-05  2:14 ` [Patch v2 3/6] x86/mm: Make page_size_mask unsigned int clearly Wei Yang
@ 2019-12-05  2:14 ` Wei Yang
  2019-12-05  9:13   ` Peter Zijlstra
  2019-12-05  2:14 ` [Patch v2 5/6] x86/mm: Use address directly in split_mem_range() Wei Yang
  2019-12-05  2:14 ` [Patch v2 6/6] x86/mm: Refactor split_mem_range with proper helper and loop Wei Yang
  5 siblings, 1 reply; 14+ messages in thread
From: Wei Yang @ 2019-12-05  2:14 UTC (permalink / raw)
  To: x86, linux-kernel
  Cc: richard.weiyang, dave.hansen, luto, peterz, tglx, Wei Yang

Generally, the mapping page sizes are:

   4K, 2M, 1G

except in the 32-bit non-PAE case, where the mapping page sizes are:

   4K, 4M

Based on the PG_LEVEL_x definitions and mr->page_size_mask, the mapping
page size string can be looked up in a predefined string array.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 arch/x86/mm/init.c | 39 +++++++++++++--------------------------
 1 file changed, 13 insertions(+), 26 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 0eb5edb63fa2..ded58a31c679 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -308,29 +308,20 @@ static void __ref adjust_range_page_size_mask(struct map_range *mr,
 	}
 }
 
-static const char *page_size_string(struct map_range *mr)
+static void __meminit mr_print(struct map_range *mr, unsigned int maxidx)
 {
-	static const char str_1g[] = "1G";
-	static const char str_2m[] = "2M";
-	static const char str_4m[] = "4M";
-	static const char str_4k[] = "4k";
-
-	if (mr->page_size_mask & (1U<<PG_LEVEL_1G))
-		return str_1g;
-	/*
-	 * 32-bit without PAE has a 4M large page size.
-	 * PG_LEVEL_2M is misnamed, but we can at least
-	 * print out the right size in the string.
-	 */
-	if (IS_ENABLED(CONFIG_X86_32) &&
-	    !IS_ENABLED(CONFIG_X86_PAE) &&
-	    mr->page_size_mask & (1U<<PG_LEVEL_2M))
-		return str_4m;
-
-	if (mr->page_size_mask & (1U<<PG_LEVEL_2M))
-		return str_2m;
+#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_PAE)
+	static const char *sz[2] = { "4K", "4M" };
+#else
+	static const char *sz[4] = { "4K", "2M", "1G", "" };
+#endif
+	unsigned int idx, s;
 
-	return str_4k;
+	for (idx = 0; idx < maxidx; idx++, mr++) {
+		s = (mr->page_size_mask >> PG_LEVEL_2M) & (ARRAY_SIZE(sz) - 1);
+		pr_debug(" [mem %#010lx-%#010lx] page size %s\n",
+			 mr->start, mr->end - 1, sz[s]);
+	}
 }
 
 static int __meminit split_mem_range(struct map_range *mr,
@@ -425,11 +416,7 @@ static int __meminit split_mem_range(struct map_range *mr,
 		nr_range--;
 	}
 
-	for (i = 0; i < nr_range; i++)
-		pr_debug(" [mem %#010lx-%#010lx] page %s\n",
-				mr[i].start, mr[i].end - 1,
-				page_size_string(&mr[i]));
-
+	mr_print(mr, nr_range);
 	return nr_range;
 }
 
-- 
2.17.1



* [Patch v2 5/6] x86/mm: Use address directly in split_mem_range()
  2019-12-05  2:13 [Patch v2 0/6] Refactor split_mem_range with proper helper and loop Wei Yang
                   ` (3 preceding siblings ...)
  2019-12-05  2:14 ` [Patch v2 4/6] x86/mm: Refine debug print string retrieval function Wei Yang
@ 2019-12-05  2:14 ` Wei Yang
  2019-12-07  3:36     ` kbuild test robot
  2019-12-05  2:14 ` [Patch v2 6/6] x86/mm: Refactor split_mem_range with proper helper and loop Wei Yang
  5 siblings, 1 reply; 14+ messages in thread
From: Wei Yang @ 2019-12-05  2:14 UTC (permalink / raw)
  To: x86, linux-kernel
  Cc: richard.weiyang, dave.hansen, luto, peterz, tglx, Wei Yang

It is not necessary to convert the addresses to page frame numbers in
order to split the range, only to convert them back to addresses when
storing them into the map_range.  Use the addresses directly instead.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 arch/x86/mm/init.c | 73 +++++++++++++++++++++++-----------------------
 1 file changed, 36 insertions(+), 37 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index ded58a31c679..5fe3f645f02c 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -259,14 +259,14 @@ static void setup_pcid(void)
 #endif
 
 static int __meminit save_mr(struct map_range *mr, int nr_range,
-			     unsigned long start_pfn, unsigned long end_pfn,
+			     unsigned long start, unsigned long end,
 			     unsigned long page_size_mask)
 {
-	if (start_pfn < end_pfn) {
+	if (start < end) {
 		if (nr_range >= NR_RANGE_MR)
 			panic("run out of range for init_memory_mapping\n");
-	mr[nr_range].start = start_pfn<<PAGE_SHIFT;
-	mr[nr_range].end   = end_pfn<<PAGE_SHIFT;
+	mr[nr_range].start = start;
+	mr[nr_range].end   = end;
 		mr[nr_range].page_size_mask = page_size_mask;
 		nr_range++;
 	}
@@ -328,14 +328,13 @@ static int __meminit split_mem_range(struct map_range *mr,
 				     unsigned long start,
 				     unsigned long end)
 {
-	unsigned long start_pfn, end_pfn, limit_pfn;
-	unsigned long pfn;
+	unsigned long addr, limit;
 	int i, nr_range = 0;
 
-	limit_pfn = PFN_DOWN(end);
+	limit = end;
 
 	/* head if not big page alignment ? */
-	pfn = start_pfn = PFN_DOWN(start);
+	addr = start;
 #ifdef CONFIG_X86_32
 	/*
 	 * Don't use a large page for the first 2/4MB of memory
@@ -343,61 +342,61 @@ static int __meminit split_mem_range(struct map_range *mr,
 	 * and overlapping MTRRs into large pages can cause
 	 * slowdowns.
 	 */
-	if (pfn == 0)
-		end_pfn = PFN_DOWN(PMD_SIZE);
+	if (addr == 0)
+		end = PMD_SIZE;
 	else
-		end_pfn = round_up(pfn, PFN_DOWN(PMD_SIZE));
+		end = round_up(addr, PMD_SIZE);
 #else /* CONFIG_X86_64 */
-	end_pfn = round_up(pfn, PFN_DOWN(PMD_SIZE));
+	end = round_up(addr, PMD_SIZE);
 #endif
-	if (end_pfn > limit_pfn)
-		end_pfn = limit_pfn;
-	if (start_pfn < end_pfn) {
-		nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
-		pfn = end_pfn;
+	if (end > limit)
+		end = limit;
+	if (start < end) {
+		nr_range = save_mr(mr, nr_range, start, end, 0);
+		addr = end;
 	}
 
 	/* big page (2M) range */
-	start_pfn = round_up(pfn, PFN_DOWN(PMD_SIZE));
+	start = round_up(addr, PMD_SIZE);
 #ifdef CONFIG_X86_32
-	end_pfn = round_down(limit_pfn, PFN_DOWN(PMD_SIZE));
+	end = round_down(limit, PMD_SIZE);
 #else /* CONFIG_X86_64 */
-	end_pfn = round_up(pfn, PFN_DOWN(PUD_SIZE));
-	if (end_pfn > round_down(limit_pfn, PFN_DOWN(PMD_SIZE)))
-		end_pfn = round_down(limit_pfn, PFN_DOWN(PMD_SIZE));
+	end = round_up(addr, PUD_SIZE);
+	if (end > round_down(limit, PMD_SIZE))
+		end = round_down(limit, PMD_SIZE);
 #endif
 
-	if (start_pfn < end_pfn) {
-		nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
+	if (start < end) {
+		nr_range = save_mr(mr, nr_range, start, end,
 				page_size_mask & (1U<<PG_LEVEL_2M));
-		pfn = end_pfn;
+		addr = end;
 	}
 
 #ifdef CONFIG_X86_64
 	/* big page (1G) range */
-	start_pfn = round_up(pfn, PFN_DOWN(PUD_SIZE));
-	end_pfn = round_down(limit_pfn, PFN_DOWN(PUD_SIZE));
-	if (start_pfn < end_pfn) {
-		nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
+	start = round_up(addr, PUD_SIZE);
+	end = round_down(limit, PUD_SIZE);
+	if (start < end) {
+		nr_range = save_mr(mr, nr_range, start, end,
 				page_size_mask &
 				 ((1U<<PG_LEVEL_2M)|(1U<<PG_LEVEL_1G)));
-		pfn = end_pfn;
+		addr = end;
 	}
 
 	/* tail is not big page (1G) alignment */
-	start_pfn = round_up(pfn, PFN_DOWN(PMD_SIZE));
-	end_pfn = round_down(limit_pfn, PFN_DOWN(PMD_SIZE));
-	if (start_pfn < end_pfn) {
-		nr_range = save_mr(mr, nr_range, start_pfn, end_pfn,
+	start = round_up(addr, PMD_SIZE);
+	end = round_down(limit, PMD_SIZE);
+	if (start < end) {
+		nr_range = save_mr(mr, nr_range, start, end,
 				page_size_mask & (1U<<PG_LEVEL_2M));
-		pfn = end_pfn;
+		addr = end;
 	}
 #endif
 
 	/* tail is not big page (2M) alignment */
-	start_pfn = pfn;
-	end_pfn = limit_pfn;
-	nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0);
+	start = addr;
+	end = limit;
+	nr_range = save_mr(mr, nr_range, start, end, 0);
 
 	if (!after_bootmem)
 		adjust_range_page_size_mask(mr, nr_range);
-- 
2.17.1



* [Patch v2 6/6] x86/mm: Refactor split_mem_range with proper helper and loop
  2019-12-05  2:13 [Patch v2 0/6] Refactor split_mem_range with proper helper and loop Wei Yang
                   ` (4 preceding siblings ...)
  2019-12-05  2:14 ` [Patch v2 5/6] x86/mm: Use address directly in split_mem_range() Wei Yang
@ 2019-12-05  2:14 ` Wei Yang
  5 siblings, 0 replies; 14+ messages in thread
From: Wei Yang @ 2019-12-05  2:14 UTC (permalink / raw)
  To: x86, linux-kernel
  Cc: richard.weiyang, dave.hansen, luto, peterz, tglx, Wei Yang

From: Thomas Gleixner <tglx@linutronix.de>

split_mem_range() splits a memory range into 4K, 2M or 1G regions so it
fits into the page tables.  The current approach is an open-coded
iteration over the address boundaries: whenever the address reaches a
boundary, the range is saved into the map_range structure.

This duplicates the boundary checks in several places; it can be improved
by using a loop with a helper function.

Thomas drafted the patch; Wei applied and tested it.  To verify the
functionality, Wei abstracted the code into userland and ran the
following test cases:

    * range fits only 4K
    * range fits only 2M
    * range fits only 1G
    * range fits 4K and 2M
    * range fits 2M and 1G
    * range fits 4K, 2M and 1G
    * range fits 4K, 2M and 1G but w/o 1G size
    * range fits 4K, 2M and 1G with only 4K size

Below is the test result:

    ### Split [4K, 16K][0x00001000-0x00004000]:
    [mem 0x00001000-0x00003fff] page size 4K
    ### Split [4M, 64M][0x00400000-0x04000000]:
    [mem 0x00400000-0x03ffffff] page size 2M
    ### Split [0G, 2G][0000000000-0x80000000]:
    [mem 0000000000-0x7fffffff] page size 1G
    ### Split [16K, 4M + 16K][0x00004000-0x00404000]:
    [mem 0x00004000-0x001fffff] page size 4K
    [mem 0x00200000-0x003fffff] page size 2M
    [mem 0x00400000-0x00403fff] page size 4K
    ### Split [4M, 2G + 2M][0x00400000-0x80200000]:
    [mem 0x00400000-0x3fffffff] page size 2M
    [mem 0x40000000-0x7fffffff] page size 1G
    [mem 0x80000000-0x801fffff] page size 2M
    ### Split [4M - 16K, 2G + 2M + 16K][0x003fc000-0x80204000]:
    [mem 0x003fc000-0x003fffff] page size 4K
    [mem 0x00400000-0x3fffffff] page size 2M
    [mem 0x40000000-0x7fffffff] page size 1G
    [mem 0x80000000-0x801fffff] page size 2M
    [mem 0x80200000-0x80203fff] page size 4K
    ### Split w/o 1G size [4M - 16K, 2G + 2M + 16K][0x003fc000-0x80204000]:
    [mem 0x003fc000-0x003fffff] page size 4K
    [mem 0x00400000-0x801fffff] page size 2M
    [mem 0x80200000-0x80203fff] page size 4K
    ### Split w/ only 4K [4M - 16K, 2G + 2M + 16K][0x003fc000-0x80204000]:
    [mem 0x003fc000-0x80203fff] page size 4K

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Tested-by: Wei Yang <richardw.yang@linux.intel.com>

---
Thomas's SOB is added by me since he is the original author.  If this is
not proper, please let me know.
---
 arch/x86/mm/init.c | 225 +++++++++++++++++++--------------------------
 1 file changed, 93 insertions(+), 132 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 5fe3f645f02c..14d6d90268f7 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -164,6 +164,17 @@ int after_bootmem __ro_after_init;
 
 early_param_on_off("gbpages", "nogbpages", direct_gbpages, CONFIG_X86_DIRECT_GBPAGES);
 
+#ifdef CONFIG_X86_32
+#define NR_RANGE_MR 3
+#else
+#define NR_RANGE_MR 5
+#endif
+
+struct mapinfo {
+	unsigned int	mask;
+	unsigned int	size;
+};
+
 struct map_range {
 	unsigned long	start;
 	unsigned long	end;
@@ -252,62 +263,6 @@ static void setup_pcid(void)
 	}
 }
 
-#ifdef CONFIG_X86_32
-#define NR_RANGE_MR 3
-#else /* CONFIG_X86_64 */
-#define NR_RANGE_MR 5
-#endif
-
-static int __meminit save_mr(struct map_range *mr, int nr_range,
-			     unsigned long start, unsigned long end,
-			     unsigned long page_size_mask)
-{
-	if (start < end) {
-		if (nr_range >= NR_RANGE_MR)
-			panic("run out of range for init_memory_mapping\n");
-	mr[nr_range].start = start;
-	mr[nr_range].end   = end;
-		mr[nr_range].page_size_mask = page_size_mask;
-		nr_range++;
-	}
-
-	return nr_range;
-}
-
-/*
- * adjust the page_size_mask for small range to go with
- *	big page size instead small one if nearby are ram too.
- */
-static void __ref adjust_range_page_size_mask(struct map_range *mr,
-							 int nr_range)
-{
-	int i;
-
-	for (i = 0; i < nr_range; i++) {
-		if ((page_size_mask & (1U<<PG_LEVEL_2M)) &&
-		    !(mr[i].page_size_mask & (1U<<PG_LEVEL_2M))) {
-			unsigned long start = round_down(mr[i].start, PMD_SIZE);
-			unsigned long end = round_up(mr[i].end, PMD_SIZE);
-
-#ifdef CONFIG_X86_32
-			if ((end >> PAGE_SHIFT) > max_low_pfn)
-				continue;
-#endif
-
-			if (memblock_is_region_memory(start, end - start))
-				mr[i].page_size_mask |= 1U<<PG_LEVEL_2M;
-		}
-		if ((page_size_mask & (1U<<PG_LEVEL_1G)) &&
-		    !(mr[i].page_size_mask & (1U<<PG_LEVEL_1G))) {
-			unsigned long start = round_down(mr[i].start, PUD_SIZE);
-			unsigned long end = round_up(mr[i].end, PUD_SIZE);
-
-			if (memblock_is_region_memory(start, end - start))
-				mr[i].page_size_mask |= 1U<<PG_LEVEL_1G;
-		}
-	}
-}
-
 static void __meminit mr_print(struct map_range *mr, unsigned int maxidx)
 {
 #if defined(CONFIG_X86_32) && !defined(CONFIG_X86_PAE)
@@ -324,99 +279,105 @@ static void __meminit mr_print(struct map_range *mr, unsigned int maxidx)
 	}
 }
 
-static int __meminit split_mem_range(struct map_range *mr,
-				     unsigned long start,
-				     unsigned long end)
+/*
+ * Try to preserve large mappings during bootmem by expanding the current
+ * range to large page mapping of @size and verifying that the result is
+ * within a memory region.
+ */
+static void __meminit mr_expand(struct map_range *mr, unsigned int size)
 {
-	unsigned long addr, limit;
-	int i, nr_range = 0;
+	unsigned long start = round_down(mr->start, size);
+	unsigned long end = round_up(mr->end, size);
 
-	limit = end;
+	if (IS_ENABLED(CONFIG_X86_32) && (end >> PAGE_SHIFT) > max_low_pfn)
+		return;
 
-	/* head if not big page alignment ? */
-	addr = start;
-#ifdef CONFIG_X86_32
-	/*
-	 * Don't use a large page for the first 2/4MB of memory
-	 * because there are often fixed size MTRRs in there
-	 * and overlapping MTRRs into large pages can cause
-	 * slowdowns.
-	 */
-	if (addr == 0)
-		end = PMD_SIZE;
-	else
-		end = round_up(addr, PMD_SIZE);
-#else /* CONFIG_X86_64 */
-	end = round_up(addr, PMD_SIZE);
-#endif
-	if (end > limit)
-		end = limit;
-	if (start < end) {
-		nr_range = save_mr(mr, nr_range, start, end, 0);
-		addr = end;
+	if (memblock_is_region_memory(start, end - start)) {
+		mr->start = start;
+		mr->end = end;
 	}
+}
 
-	/* big page (2M) range */
-	start = round_up(addr, PMD_SIZE);
-#ifdef CONFIG_X86_32
-	end = round_down(limit, PMD_SIZE);
-#else /* CONFIG_X86_64 */
-	end = round_up(addr, PUD_SIZE);
-	if (end > round_down(limit, PMD_SIZE))
-		end = round_down(limit, PMD_SIZE);
-#endif
+static bool __meminit mr_try_map(struct map_range *mr, const struct mapinfo *mi)
+{
+	unsigned long len;
+
+	/* Check whether the map size is supported. PAGE_SIZE always is. */
+	if (mi->mask && !(mr->page_size_mask & mi->mask))
+		return false;
+
+	if (!after_bootmem)
+		mr_expand(mr, mi->size);
 
-	if (start < end) {
-		nr_range = save_mr(mr, nr_range, start, end,
-				page_size_mask & (1U<<PG_LEVEL_2M));
-		addr = end;
+	if (!IS_ALIGNED(mr->start, mi->size)) {
+		/* Limit the range to the next boundary of this size. */
+		mr->end = min_t(unsigned long, mr->end,
+				round_up(mr->start, mi->size));
+		return false;
 	}
 
-#ifdef CONFIG_X86_64
-	/* big page (1G) range */
-	start = round_up(addr, PUD_SIZE);
-	end = round_down(limit, PUD_SIZE);
-	if (start < end) {
-		nr_range = save_mr(mr, nr_range, start, end,
-				page_size_mask &
-				 ((1U<<PG_LEVEL_2M)|(1U<<PG_LEVEL_1G)));
-		addr = end;
+	if (!IS_ALIGNED(mr->end, mi->size)) {
+		/* Try to fit as much as possible */
+		len = round_down(mr->end - mr->start, mi->size);
+		if (!len)
+			return false;
+		mr->end = mr->start + len;
 	}
 
-	/* tail is not big page (1G) alignment */
-	start = round_up(addr, PMD_SIZE);
-	end = round_down(limit, PMD_SIZE);
-	if (start < end) {
-		nr_range = save_mr(mr, nr_range, start, end,
-				page_size_mask & (1U<<PG_LEVEL_2M));
-		addr = end;
+	/* Store the effective page size mask */
+	mr->page_size_mask = mi->mask;
+	return true;
+}
+
+static void __meminit mr_setup(struct map_range *mr, unsigned long start,
+			       unsigned long end)
+{
+	/*
+	 * On 32bit the first 2/4MB are often covered by fixed size MTRRs.
+	 * Overlapping MTRRs on large pages can cause slowdowns. Force 4k
+	 * mappings.
+	 */
+	if (IS_ENABLED(CONFIG_X86_32) && start < PMD_SIZE) {
+		mr->page_size_mask = 0;
+		mr->end = min_t(unsigned long, end, PMD_SIZE);
+	} else {
+		/* Set the possible mapping sizes and allow full range. */
+		mr->page_size_mask = page_size_mask;
+		mr->end = end;
 	}
+	mr->start = start;
+}
+
+static int __meminit split_mem_range(struct map_range *mr, unsigned long start,
+				     unsigned long end)
+{
+	static const struct mapinfo mapinfos[] = {
+#ifdef CONFIG_X86_64
+		{ .mask = 1U << PG_LEVEL_1G, .size = PUD_SIZE },
 #endif
+		{ .mask = 1U << PG_LEVEL_2M, .size = PMD_SIZE },
+		{ .mask = 0, .size = PAGE_SIZE },
+	};
+	const struct mapinfo *mi;
+	struct map_range *curmr;
+	unsigned long addr;
+	int idx;
 
-	/* tail is not big page (2M) alignment */
-	start = addr;
-	end = limit;
-	nr_range = save_mr(mr, nr_range, start, end, 0);
+	for (idx = 0, addr = start, curmr = mr; addr < end; idx++, curmr++) {
+		BUG_ON(idx == NR_RANGE_MR);
 
-	if (!after_bootmem)
-		adjust_range_page_size_mask(mr, nr_range);
+		mr_setup(curmr, addr, end);
 
-	/* try to merge same page size and continuous */
-	for (i = 0; nr_range > 1 && i < nr_range - 1; i++) {
-		unsigned long old_start;
-		if (mr[i].end != mr[i+1].start ||
-		    mr[i].page_size_mask != mr[i+1].page_size_mask)
-			continue;
-		/* move it */
-		old_start = mr[i].start;
-		memmove(&mr[i], &mr[i+1],
-			(nr_range - 1 - i) * sizeof(struct map_range));
-		mr[i--].start = old_start;
-		nr_range--;
+		/* Try map sizes top down. PAGE_SIZE will always succeed. */
+		for (mi = mapinfos; !mr_try_map(curmr, mi); mi++)
+			;
+
+		/* Get the start address for the next range */
+		addr = curmr->end;
 	}
 
-	mr_print(mr, nr_range);
-	return nr_range;
+	mr_print(mr, idx);
+	return idx;
 }
 
 struct range pfn_mapped[E820_MAX_ENTRIES];
-- 
2.17.1



* Re: [Patch v2 4/6] x86/mm: Refine debug print string retrieval function
  2019-12-05  2:14 ` [Patch v2 4/6] x86/mm: Refine debug print string retrieval function Wei Yang
@ 2019-12-05  9:13   ` Peter Zijlstra
  2019-12-06  1:51     ` Wei Yang
  0 siblings, 1 reply; 14+ messages in thread
From: Peter Zijlstra @ 2019-12-05  9:13 UTC (permalink / raw)
  To: Wei Yang
  Cc: x86, linux-kernel, richard.weiyang, dave.hansen, luto, peterz, tglx

On Thu, Dec 05, 2019 at 10:14:01AM +0800, Wei Yang wrote:
> Generally, the mapping page size are:
> 
>    4K, 2M, 1G
> 
> except in case 32-bit without PAE, the mapping page size are:
> 
>    4K, 4M
> 
> Based on PG_LEVEL_X definition and mr->page_size_mask, we can calculate
> the mapping page size from a predefined string array.
> 
> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
> ---
>  arch/x86/mm/init.c | 39 +++++++++++++--------------------------
>  1 file changed, 13 insertions(+), 26 deletions(-)
> 
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index 0eb5edb63fa2..ded58a31c679 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -308,29 +308,20 @@ static void __ref adjust_range_page_size_mask(struct map_range *mr,
>  	}
>  }
>  
> +static void __meminit mr_print(struct map_range *mr, unsigned int maxidx)
>  {
> +#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_PAE)
> +	static const char *sz[2] = { "4K", "4M" };
> +#else
> +	static const char *sz[4] = { "4K", "2M", "1G", "" };
> +#endif
> +	unsigned int idx, s;
>  
> +	for (idx = 0; idx < maxidx; idx++, mr++) {
> +		s = (mr->page_size_mask >> PG_LEVEL_2M) & (ARRAY_SIZE(sz) - 1);

Is it at all possible for !PAE to have 1G here, if you use the sz[4]
definition unconditionally?

> +		pr_debug(" [mem %#010lx-%#010lx] page size %s\n",
> +			 mr->start, mr->end - 1, sz[s]);
> +	}
>  }



* Re: [Patch v2 4/6] x86/mm: Refine debug print string retrieval function
  2019-12-05  9:13   ` Peter Zijlstra
@ 2019-12-06  1:51     ` Wei Yang
  2019-12-06 10:27       ` Peter Zijlstra
  0 siblings, 1 reply; 14+ messages in thread
From: Wei Yang @ 2019-12-06  1:51 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Wei Yang, x86, linux-kernel, richard.weiyang, dave.hansen, luto,
	peterz, tglx

On Thu, Dec 05, 2019 at 10:13:11AM +0100, Peter Zijlstra wrote:
>On Thu, Dec 05, 2019 at 10:14:01AM +0800, Wei Yang wrote:
>> Generally, the mapping page sizes are:
>> 
>>    4K, 2M, 1G
>> 
>> except in the case of 32-bit without PAE, where the mapping page sizes are:
>> 
>>    4K, 4M
>> 
>> Based on the PG_LEVEL_X definitions and mr->page_size_mask, we can look up
>> the mapping page size in a predefined string array.
>> 
>> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
>> ---
>>  arch/x86/mm/init.c | 39 +++++++++++++--------------------------
>>  1 file changed, 13 insertions(+), 26 deletions(-)
>> 
>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>> index 0eb5edb63fa2..ded58a31c679 100644
>> --- a/arch/x86/mm/init.c
>> +++ b/arch/x86/mm/init.c
>> @@ -308,29 +308,20 @@ static void __ref adjust_range_page_size_mask(struct map_range *mr,
>>  	}
>>  }
>>  
>> +static void __meminit mr_print(struct map_range *mr, unsigned int maxidx)
>>  {
>> +#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_PAE)
>> +	static const char *sz[2] = { "4K", "4M" };
>> +#else
>> +	static const char *sz[4] = { "4K", "2M", "1G", "" };
>> +#endif
>> +	unsigned int idx, s;
>>  
>> +	for (idx = 0; idx < maxidx; idx++, mr++) {
>> +		s = (mr->page_size_mask >> PG_LEVEL_2M) & (ARRAY_SIZE(sz) - 1);
>
>Is it at all possible for !PAE to have 1G here, if you use the sz[4]
>definition unconditionally?
>

You mean remove the ifdef and use sz[4] for both conditions?

Then how would we differentiate 4M and 2M?

>> +		pr_debug(" [mem %#010lx-%#010lx] page size %s\n",
>> +			 mr->start, mr->end - 1, sz[s]);
>> +	}
>>  }

-- 
Wei Yang
Help you, Help me


* Re: [Patch v2 4/6] x86/mm: Refine debug print string retrieval function
  2019-12-06  1:51     ` Wei Yang
@ 2019-12-06 10:27       ` Peter Zijlstra
  2019-12-06 15:17         ` Wei Yang
  0 siblings, 1 reply; 14+ messages in thread
From: Peter Zijlstra @ 2019-12-06 10:27 UTC (permalink / raw)
  To: Wei Yang
  Cc: x86, linux-kernel, richard.weiyang, dave.hansen, luto, peterz, tglx

On Fri, Dec 06, 2019 at 09:51:26AM +0800, Wei Yang wrote:

> >> +#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_PAE)
> >> +	static const char *sz[2] = { "4K", "4M" };
> >> +#else
> >> +	static const char *sz[4] = { "4K", "2M", "1G", "" };
> >> +#endif
> >> +	unsigned int idx, s;
> >>  
> >> +	for (idx = 0; idx < maxidx; idx++, mr++) {
> >> +		s = (mr->page_size_mask >> PG_LEVEL_2M) & (ARRAY_SIZE(sz) - 1);
> >
> >Is it at all possible for !PAE to have 1G here, if you use the sz[4]
> >definition unconditionally?
> >
> 
> You mean remove the ifdef and use sz[4] for both conditions?
> 
> Then how would we differentiate 4M and 2M?

Argh, I'm blind.. I failed to spot that. N/m then.


* Re: [Patch v2 4/6] x86/mm: Refine debug print string retrieval function
  2019-12-06 10:27       ` Peter Zijlstra
@ 2019-12-06 15:17         ` Wei Yang
  0 siblings, 0 replies; 14+ messages in thread
From: Wei Yang @ 2019-12-06 15:17 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Wei Yang, x86, linux-kernel, richard.weiyang, dave.hansen, luto,
	peterz, tglx

On Fri, Dec 06, 2019 at 11:27:46AM +0100, Peter Zijlstra wrote:
>On Fri, Dec 06, 2019 at 09:51:26AM +0800, Wei Yang wrote:
>
>> >> +#if defined(CONFIG_X86_32) && !defined(CONFIG_X86_PAE)
>> >> +	static const char *sz[2] = { "4K", "4M" };
>> >> +#else
>> >> +	static const char *sz[4] = { "4K", "2M", "1G", "" };
>> >> +#endif
>> >> +	unsigned int idx, s;
>> >>  
>> >> +	for (idx = 0; idx < maxidx; idx++, mr++) {
>> >> +		s = (mr->page_size_mask >> PG_LEVEL_2M) & (ARRAY_SIZE(sz) - 1);
>> >
>> >Is it at all possible for !PAE to have 1G here, if you use the sz[4]
>> >definition unconditionally?
>> >
>> 
>> You mean remove the ifdef and use sz[4] for both conditions?
>> 
>> Then how would we differentiate 4M and 2M?
>
>Argh, I'm blind.. I failed to spot that. N/m then.

Never mind. I always do the same thing :-(

-- 
Wei Yang
Help you, Help me


* Re: [Patch v2 5/6] x86/mm: Use address directly in split_mem_range()
  2019-12-05  2:14 ` [Patch v2 5/6] x86/mm: Use address directly in split_mem_range() Wei Yang
@ 2019-12-07  3:36     ` kbuild test robot
  0 siblings, 0 replies; 14+ messages in thread
From: kbuild test robot @ 2019-12-07  3:36 UTC (permalink / raw)
  To: Wei Yang
  Cc: kbuild-all, x86, linux-kernel, richard.weiyang, dave.hansen,
	luto, peterz, tglx, Wei Yang

[-- Attachment #1: Type: text/plain, Size: 3318 bytes --]

Hi Wei,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on tip/x86/mm]
[also build test ERROR on tip/auto-latest linus/master v5.4 next-20191206]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Wei-Yang/x86-mm-Remove-second-argument-of-split_mem_range/20191207-061345
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 7f264dab5b60343358e788d4c939c166c22ea4a2
config: i386-tinyconfig (attached as .config)
compiler: gcc-7 (Debian 7.5.0-1) 7.5.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

Note: the linux-review/Wei-Yang/x86-mm-Remove-second-argument-of-split_mem_range/20191207-061345 HEAD 7f535395f79354bfa29cca182dd203525bcb4237 builds fine.
      It only hurts bisectibility.

All errors (new ones prefixed by >>):

   arch/x86/mm/init.c: In function 'save_mr':
>> arch/x86/mm/init.c:265:6: error: 'start_pfn' undeclared (first use in this function); did you mean 'start'?
     if (start_pfn < end_pfn) {
         ^~~~~~~~~
         start
   arch/x86/mm/init.c:265:6: note: each undeclared identifier is reported only once for each function it appears in
>> arch/x86/mm/init.c:265:18: error: 'end_pfn' undeclared (first use in this function); did you mean 'pgd_pfn'?
     if (start_pfn < end_pfn) {
                     ^~~~~~~
                     pgd_pfn

vim +265 arch/x86/mm/init.c

f765090a2617b8 Pekka Enberg 2009-03-05  260  
dc9dd5cc854cde Jan Beulich  2009-03-12  261  static int __meminit save_mr(struct map_range *mr, int nr_range,
51c6d529efdc86 Wei Yang     2019-12-05  262  			     unsigned long start, unsigned long end,
f765090a2617b8 Pekka Enberg 2009-03-05  263  			     unsigned long page_size_mask)
f765090a2617b8 Pekka Enberg 2009-03-05  264  {
f765090a2617b8 Pekka Enberg 2009-03-05 @265  	if (start_pfn < end_pfn) {
f765090a2617b8 Pekka Enberg 2009-03-05  266  		if (nr_range >= NR_RANGE_MR)
f765090a2617b8 Pekka Enberg 2009-03-05  267  			panic("run out of range for init_memory_mapping\n");
51c6d529efdc86 Wei Yang     2019-12-05  268  		mr[nr_range].start = start_pfn;
51c6d529efdc86 Wei Yang     2019-12-05  269  		mr[nr_range].end   = end_pfn;
f765090a2617b8 Pekka Enberg 2009-03-05  270  		mr[nr_range].page_size_mask = page_size_mask;
f765090a2617b8 Pekka Enberg 2009-03-05  271  		nr_range++;
f765090a2617b8 Pekka Enberg 2009-03-05  272  	}
f765090a2617b8 Pekka Enberg 2009-03-05  273  
f765090a2617b8 Pekka Enberg 2009-03-05  274  	return nr_range;
f765090a2617b8 Pekka Enberg 2009-03-05  275  }
f765090a2617b8 Pekka Enberg 2009-03-05  276  

:::::: The code at line 265 was first introduced by commit
:::::: f765090a2617b8d9cb73b71e0aa850c29460d8be x86: move init_memory_mapping() to common mm/init.c

:::::: TO: Pekka Enberg <penberg@cs.helsinki.fi>
:::::: CC: Ingo Molnar <mingo@elte.hu>

---
0-DAY kernel test infrastructure                 Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 7223 bytes --]


* Re: [Patch v2 5/6] x86/mm: Use address directly in split_mem_range()
  2019-12-07  3:36     ` kbuild test robot
  (?)
@ 2019-12-07  7:17     ` Wei Yang
  -1 siblings, 0 replies; 14+ messages in thread
From: Wei Yang @ 2019-12-07  7:17 UTC (permalink / raw)
  To: kbuild test robot
  Cc: Wei Yang, kbuild-all, x86, linux-kernel, richard.weiyang,
	dave.hansen, luto, peterz, tglx

On Sat, Dec 07, 2019 at 11:36:10AM +0800, kbuild test robot wrote:
>Hi Wei,
>
>Thank you for the patch! Yet something to improve:
>
>[auto build test ERROR on tip/x86/mm]
>[also build test ERROR on tip/auto-latest linus/master v5.4 next-20191206]
>[if your patch is applied to the wrong git tree, please drop us a note to help
>improve the system. BTW, we also suggest to use '--base' option to specify the
>base tree in git format-patch, please see https://stackoverflow.com/a/37406982]
>
>url:    https://github.com/0day-ci/linux/commits/Wei-Yang/x86-mm-Remove-second-argument-of-split_mem_range/20191207-061345
>base:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git 7f264dab5b60343358e788d4c939c166c22ea4a2
>config: i386-tinyconfig (attached as .config)
>compiler: gcc-7 (Debian 7.5.0-1) 7.5.0
>reproduce:
>        # save the attached .config to linux build tree
>        make ARCH=i386 
>
>If you fix the issue, kindly add following tag
>Reported-by: kbuild test robot <lkp@intel.com>
>
>Note: the linux-review/Wei-Yang/x86-mm-Remove-second-argument-of-split_mem_range/20191207-061345 HEAD 7f535395f79354bfa29cca182dd203525bcb4237 builds fine.
>      It only hurts bisectibility.
>
>All errors (new ones prefixed by >>):
>
>   arch/x86/mm/init.c: In function 'save_mr':
>>> arch/x86/mm/init.c:265:6: error: 'start_pfn' undeclared (first use in this function); did you mean 'start'?
>     if (start_pfn < end_pfn) {
>         ^~~~~~~~~
>         start
>   arch/x86/mm/init.c:265:6: note: each undeclared identifier is reported only once for each function it appears in
>>> arch/x86/mm/init.c:265:18: error: 'end_pfn' undeclared (first use in this function); did you mean 'pgd_pfn'?
>     if (start_pfn < end_pfn) {
>                     ^~~~~~~
>                     pgd_pfn
>

Oops, I introduced an error when resolving a conflict. Those should be start
and end.

Will correct it in the next version.


-- 
Wei Yang
Help you, Help me


end of thread, other threads:[~2019-12-07  7:17 UTC | newest]

Thread overview: 14+ messages
-- links below jump to the message on this page --
2019-12-05  2:13 [Patch v2 0/6] Refactor split_mem_range with proper helper and loop Wei Yang
2019-12-05  2:13 ` [Patch v2 1/6] x86/mm: Remove second argument of split_mem_range() Wei Yang
2019-12-05  2:13 ` [Patch v2 2/6] x86/mm: Add attribute __ro_after_init to after_bootmem Wei Yang
2019-12-05  2:14 ` [Patch v2 3/6] x86/mm: Make page_size_mask unsigned int clearly Wei Yang
2019-12-05  2:14 ` [Patch v2 4/6] x86/mm: Refine debug print string retrieval function Wei Yang
2019-12-05  9:13   ` Peter Zijlstra
2019-12-06  1:51     ` Wei Yang
2019-12-06 10:27       ` Peter Zijlstra
2019-12-06 15:17         ` Wei Yang
2019-12-05  2:14 ` [Patch v2 5/6] x86/mm: Use address directly in split_mem_range() Wei Yang
2019-12-07  3:36   ` kbuild test robot
2019-12-07  7:17     ` Wei Yang
2019-12-05  2:14 ` [Patch v2 6/6] x86/mm: Refactor split_mem_range with proper helper and loop Wei Yang
