* [Patch v2 0/8] memblock: cleanup
@ 2024-04-25  7:19 Wei Yang
  2024-04-25  7:19 ` [Patch v2 1/8] memblock tests: reserve the 129th memory block at all possible positions Wei Yang
                   ` (7 more replies)
  0 siblings, 8 replies; 19+ messages in thread
From: Wei Yang @ 2024-04-25  7:19 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

Changes in v2:

* remove the two round reduction patch
* add the 129th memory block to memblock.reserved
* add memblock_reserve_many_may_conflict_check()
* add two new patches at the end

Wei Yang (8):
  memblock tests: reserve the 129th memory block at all possible
    positions
  memblock tests: add memblock_reserve_many_may_conflict_check()
  mm/memblock: fix comment for memblock_isolate_range()
  mm/memblock: remove consecutive regions at once
  memblock tests: add memblock_overlaps_region_checks
  mm/memblock: return true directly on finding overlap region
  mm/memblock: use PAGE_ALIGN_DOWN to get pgend in free_memmap
  mm/memblock: default region's nid may be MAX_NUMNODES

 mm/memblock.c                            |  36 +--
 tools/include/linux/mm.h                 |   1 +
 tools/testing/memblock/tests/basic_api.c | 286 ++++++++++++++++++-----
 tools/testing/memblock/tests/common.c    |   4 +-
 tools/testing/memblock/tests/common.h    |   4 +
 5 files changed, 262 insertions(+), 69 deletions(-)

-- 
2.34.1




* [Patch v2 1/8] memblock tests: reserve the 129th memory block at all possible positions
  2024-04-25  7:19 [Patch v2 0/8] memblock: cleanup Wei Yang
@ 2024-04-25  7:19 ` Wei Yang
  2024-04-28  6:35   ` Mike Rapoport
  2024-04-25  7:19 ` [Patch v2 4/8] mm/memblock: remove consecutive regions at once Wei Yang
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 19+ messages in thread
From: Wei Yang @ 2024-04-25  7:19 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

Instead of adding the 129th memory block at the last position only,
let's try all possible positions.
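
In outline, the reworked test does the following (a standalone sketch for
illustration only; memblock_reserve() is a do-nothing stand-in here and the
constants mirror the ones used in the test):

#include <stdio.h>

#define INIT_MEMBLOCK_REGIONS   128
#define MEM_SIZE                0x8000UL        /* SZ_32K */
#define MEMORY_BASE(idx)        (0x20000UL + (MEM_SIZE * 2) * (idx))    /* SZ_128K base */

/* Stand-in for the real memblock_reserve(); it does nothing here. */
static void memblock_reserve(unsigned long base, unsigned long size)
{
        (void)base;
        (void)size;
}

int main(void)
{
        for (int skip = 0; skip < INIT_MEMBLOCK_REGIONS + 1; skip++) {
                /* Fill 128 slots, leaving the slot at position 'skip' empty. */
                for (int i = 0; i < INIT_MEMBLOCK_REGIONS + 1; i++)
                        if (i != skip)
                                memblock_reserve(MEMORY_BASE(i), MEM_SIZE);
                /*
                 * The 129th reservation lands at 'skip', so the resize of
                 * memblock.reserved is exercised with the free slot at
                 * every possible position.
                 */
                memblock_reserve(MEMORY_BASE(skip), MEM_SIZE);
        }
        printf("tried %d positions\n", INIT_MEMBLOCK_REGIONS + 1);
        return 0;
}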

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 tools/testing/memblock/tests/basic_api.c | 121 ++++++++++++-----------
 1 file changed, 65 insertions(+), 56 deletions(-)

diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
index f317fe691fc4..1ae62272867a 100644
--- a/tools/testing/memblock/tests/basic_api.c
+++ b/tools/testing/memblock/tests/basic_api.c
@@ -898,84 +898,93 @@ static int memblock_reserve_near_max_check(void)
  * memblock.memory.max, find a new valid memory as
  * reserved.regions.
  */
+/* Keep the gap so these memory region will not be merged. */
+#define MEMORY_BASE(idx) (SZ_128K + (MEM_SIZE * 2) * (idx))
 static int memblock_reserve_many_check(void)
 {
-	int i;
+	int i, skip;
 	void *orig_region;
 	struct region r = {
 		.base = SZ_16K,
 		.size = SZ_16K,
 	};
-	phys_addr_t memory_base = SZ_128K;
 	phys_addr_t new_reserved_regions_size;
 
 	PREFIX_PUSH();
 
-	reset_memblock_regions();
-	memblock_allow_resize();
+	/* Reserve the 129th memory block for all possible positions*/
+	for (skip = 0; skip < INIT_MEMBLOCK_REGIONS + 1; skip++) {
+		reset_memblock_regions();
+		memblock_allow_resize();
 
-	/* Add a valid memory region used by double_array(). */
-	dummy_physical_memory_init();
-	memblock_add(dummy_physical_memory_base(), MEM_SIZE);
+		/* Add a valid memory region used by double_array(). */
+		dummy_physical_memory_init();
+		memblock_add(dummy_physical_memory_base(), MEM_SIZE);
 
-	for (i = 0; i < INIT_MEMBLOCK_REGIONS; i++) {
-		/* Reserve some fakes memory region to fulfill the memblock. */
-		memblock_reserve(memory_base, MEM_SIZE);
+		for (i = 0; i < INIT_MEMBLOCK_REGIONS + 1; i++) {
+			if (i == skip)
+				continue;
 
-		ASSERT_EQ(memblock.reserved.cnt, i + 1);
-		ASSERT_EQ(memblock.reserved.total_size, (i + 1) * MEM_SIZE);
+			/* Reserve some fakes memory region to fulfill the memblock. */
+			memblock_reserve(MEMORY_BASE(i), MEM_SIZE);
 
-		/* Keep the gap so these memory region will not be merged. */
-		memory_base += MEM_SIZE * 2;
-	}
+			if (i < skip) {
+				ASSERT_EQ(memblock.reserved.cnt, i + 1);
+				ASSERT_EQ(memblock.reserved.total_size, (i + 1) * MEM_SIZE);
+			} else {
+				ASSERT_EQ(memblock.reserved.cnt, i);
+				ASSERT_EQ(memblock.reserved.total_size, i * MEM_SIZE);
+			}
+		}
 
-	orig_region = memblock.reserved.regions;
+		orig_region = memblock.reserved.regions;
 
-	/* This reserve the 129 memory_region, and makes it double array. */
-	memblock_reserve(memory_base, MEM_SIZE);
+		/* This reserve the 129 memory_region, and makes it double array. */
+		memblock_reserve(MEMORY_BASE(skip), MEM_SIZE);
 
-	/*
-	 * This is the memory region size used by the doubled reserved.regions,
-	 * and it has been reserved due to it has been used. The size is used to
-	 * calculate the total_size that the memblock.reserved have now.
-	 */
-	new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
-					sizeof(struct memblock_region));
-	/*
-	 * The double_array() will find a free memory region as the new
-	 * reserved.regions, and the used memory region will be reserved, so
-	 * there will be one more region exist in the reserved memblock. And the
-	 * one more reserved region's size is new_reserved_regions_size.
-	 */
-	ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
-	ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
-						new_reserved_regions_size);
-	ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
+		/*
+		 * This is the memory region size used by the doubled reserved.regions,
+		 * and it has been reserved due to it has been used. The size is used to
+		 * calculate the total_size that the memblock.reserved have now.
+		 */
+		new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
+						sizeof(struct memblock_region));
+		/*
+		 * The double_array() will find a free memory region as the new
+		 * reserved.regions, and the used memory region will be reserved, so
+		 * there will be one more region exist in the reserved memblock. And the
+		 * one more reserved region's size is new_reserved_regions_size.
+		 */
+		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
+		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
+							new_reserved_regions_size);
+		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
 
-	/*
-	 * Now memblock_double_array() works fine. Let's check after the
-	 * double_array(), the memblock_reserve() still works as normal.
-	 */
-	memblock_reserve(r.base, r.size);
-	ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
-	ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
+		/*
+		 * Now memblock_double_array() works fine. Let's check after the
+		 * double_array(), the memblock_reserve() still works as normal.
+		 */
+		memblock_reserve(r.base, r.size);
+		ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
+		ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
 
-	ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
-	ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
-						new_reserved_regions_size +
-						r.size);
-	ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
+		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
+		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
+							new_reserved_regions_size +
+							r.size);
+		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
 
-	dummy_physical_memory_cleanup();
+		dummy_physical_memory_cleanup();
 
-	/*
-	 * The current reserved.regions is occupying a range of memory that
-	 * allocated from dummy_physical_memory_init(). After free the memory,
-	 * we must not use it. So restore the origin memory region to make sure
-	 * the tests can run as normal and not affected by the double array.
-	 */
-	memblock.reserved.regions = orig_region;
-	memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
+		/*
+		 * The current reserved.regions is occupying a range of memory that
+		 * allocated from dummy_physical_memory_init(). After free the memory,
+		 * we must not use it. So restore the origin memory region to make sure
+		 * the tests can run as normal and not affected by the double array.
+		 */
+		memblock.reserved.regions = orig_region;
+		memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
+	}
 
 	test_pass_pop();
 
-- 
2.34.1




* [Patch v2 4/8] mm/memblock: remove consecutive regions at once
  2024-04-25  7:19 [Patch v2 0/8] memblock: cleanup Wei Yang
  2024-04-25  7:19 ` [Patch v2 1/8] memblock tests: reserve the 129th memory block at all possible positions Wei Yang
@ 2024-04-25  7:19 ` Wei Yang
  2024-04-28  6:44   ` Mike Rapoport
  2024-04-25  7:19 ` [Patch v2 5/8] memblock tests: add memblock_overlaps_region_checks Wei Yang
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 19+ messages in thread
From: Wei Yang @ 2024-04-25  7:19 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

In memblock_remove_range(), consecutive regions are removed after
isolation.

Instead of removing each region one by one from the end to the start,
remove the whole consecutive run at once, so the tail of the array is
shifted with a single memmove() instead of once per removed region.
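
As a standalone illustration of the idea (plain C on an int array, not the
kernel code): a run of 'num' entries is dropped with a single memmove() of
the tail, instead of shifting the tail once per removed entry.

#include <stdio.h>
#include <string.h>

int main(void)
{
        int regions[] = { 0, 1, 2, 3, 4, 5, 6, 7 };
        int cnt = 8, r = 2, num = 3;    /* drop indices 2..4 */

        /* One memmove shifts the whole tail past the removed run. */
        memmove(&regions[r], &regions[r + num],
                (cnt - (r + num)) * sizeof(regions[0]));
        cnt -= num;

        for (int i = 0; i < cnt; i++)
                printf("%d ", regions[i]);      /* prints: 0 1 5 6 7 */
        printf("\n");
        return 0;
}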

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/memblock.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 5a363ef283d0..53e3aa2cd9a2 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -343,12 +343,15 @@ static phys_addr_t __init_memblock memblock_find_in_range(phys_addr_t start,
 	return ret;
 }
 
-static void __init_memblock memblock_remove_region(struct memblock_type *type, unsigned long r)
+static void __init_memblock memblock_remove_regions(struct memblock_type *type,
+						    unsigned long r,
+						    unsigned int num)
 {
-	type->total_size -= type->regions[r].size;
-	memmove(&type->regions[r], &type->regions[r + 1],
-		(type->cnt - (r + 1)) * sizeof(type->regions[r]));
-	type->cnt--;
+	for (int i = 0; i < num; i++)
+		type->total_size -= type->regions[r + i].size;
+	memmove(&type->regions[r], &type->regions[r + num],
+		(type->cnt - (r + num)) * sizeof(type->regions[r]));
+	type->cnt -= num;
 
 	/* Special case for empty arrays */
 	if (type->cnt == 0) {
@@ -360,6 +363,11 @@ static void __init_memblock memblock_remove_region(struct memblock_type *type, u
 	}
 }
 
+static void __init_memblock memblock_remove_region(struct memblock_type *type, unsigned long r)
+{
+	memblock_remove_regions(type, r, 1);
+}
+
 #ifndef CONFIG_ARCH_KEEP_MEMBLOCK
 /**
  * memblock_discard - discard memory and reserved arrays if they were allocated
@@ -846,14 +854,14 @@ static int __init_memblock memblock_remove_range(struct memblock_type *type,
 					  phys_addr_t base, phys_addr_t size)
 {
 	int start_rgn, end_rgn;
-	int i, ret;
+	int ret;
 
 	ret = memblock_isolate_range(type, base, size, &start_rgn, &end_rgn);
 	if (ret)
 		return ret;
 
-	for (i = end_rgn - 1; i >= start_rgn; i--)
-		memblock_remove_region(type, i);
+	if (end_rgn - start_rgn)
+		memblock_remove_regions(type, start_rgn, end_rgn - start_rgn);
 	return 0;
 }
 
-- 
2.34.1




* [Patch v2 5/8] memblock tests: add memblock_overlaps_region_checks
  2024-04-25  7:19 [Patch v2 0/8] memblock: cleanup Wei Yang
  2024-04-25  7:19 ` [Patch v2 1/8] memblock tests: reserve the 129th memory block at all possible positions Wei Yang
  2024-04-25  7:19 ` [Patch v2 4/8] mm/memblock: remove consecutive regions at once Wei Yang
@ 2024-04-25  7:19 ` Wei Yang
  2024-04-25  7:19 ` [Patch v2 6/8] mm/memblock: return true directly on finding overlap region Wei Yang
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Wei Yang @ 2024-04-25  7:19 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

Add a test case for memblock_overlaps_region(), covering ranges that are
far away from, adjacent to, partially overlapping, and fully overlapping
the added region.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 tools/testing/memblock/tests/basic_api.c | 48 ++++++++++++++++++++++++
 tools/testing/memblock/tests/common.h    |  3 ++
 2 files changed, 51 insertions(+)

diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
index 748950e02589..b4053ceaf832 100644
--- a/tools/testing/memblock/tests/basic_api.c
+++ b/tools/testing/memblock/tests/basic_api.c
@@ -2261,6 +2261,53 @@ static int memblock_trim_memory_checks(void)
 	return 0;
 }
 
+static int memblock_overlaps_region_check(void)
+{
+	struct region r = {
+		.base = SZ_1G,
+		.size = SZ_4M
+	};
+
+	PREFIX_PUSH();
+
+	reset_memblock_regions();
+	memblock_add(r.base, r.size);
+
+	/* Far Away */
+	ASSERT_FALSE(memblock_overlaps_region(&memblock.memory, SZ_1M, SZ_1M));
+	ASSERT_FALSE(memblock_overlaps_region(&memblock.memory, SZ_2G, SZ_1M));
+
+	/* Neighbor */
+	ASSERT_FALSE(memblock_overlaps_region(&memblock.memory, SZ_1G - SZ_1M, SZ_1M));
+	ASSERT_FALSE(memblock_overlaps_region(&memblock.memory, SZ_1G + SZ_4M, SZ_1M));
+
+	/* Partial Overlap */
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G - SZ_1M, SZ_2M));
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G + SZ_2M, SZ_2M));
+
+	/* Totally Overlap */
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G, SZ_4M));
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G - SZ_2M, SZ_8M));
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G + SZ_1M, SZ_1M));
+
+	test_pass_pop();
+
+	return 0;
+}
+
+static int memblock_overlaps_region_checks(void)
+{
+	prefix_reset();
+	prefix_push("memblock_overlaps_region");
+	test_print("Running memblock_overlaps_region tests...\n");
+
+	memblock_overlaps_region_check();
+
+	prefix_pop();
+
+	return 0;
+}
+
 int memblock_basic_checks(void)
 {
 	memblock_initialization_check();
@@ -2270,6 +2317,7 @@ int memblock_basic_checks(void)
 	memblock_free_checks();
 	memblock_bottom_up_checks();
 	memblock_trim_memory_checks();
+	memblock_overlaps_region_checks();
 
 	return 0;
 }
diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
index 741d57315ba6..4c89c92ef25c 100644
--- a/tools/testing/memblock/tests/common.h
+++ b/tools/testing/memblock/tests/common.h
@@ -40,6 +40,9 @@ enum test_flags {
 	assert((_expected) == (_seen)); \
 } while (0)
 
+#define ASSERT_TRUE(_seen) ASSERT_EQ(true, _seen)
+#define ASSERT_FALSE(_seen) ASSERT_EQ(false, _seen)
+
 /**
  * ASSERT_NE():
  * Check the condition
-- 
2.34.1




* [Patch v2 6/8] mm/memblock: return true directly on finding overlap region
  2024-04-25  7:19 [Patch v2 0/8] memblock: cleanup Wei Yang
                   ` (2 preceding siblings ...)
  2024-04-25  7:19 ` [Patch v2 5/8] memblock tests: add memblock_overlaps_region_checks Wei Yang
@ 2024-04-25  7:19 ` Wei Yang
  2024-04-25  7:19 ` [Patch v2 7/8] mm/memblock: use PAGE_ALIGN_DOWN to get pgend in free_memmap Wei Yang
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Wei Yang @ 2024-04-25  7:19 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

It is not necessary to break out of the loop and then check i against
type->cnt again; return true directly as soon as an overlapping region is
found.
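
As a standalone sketch of the pattern (plain C, not the kernel code; the
overlap test mirrors memblock_addrs_overlap()):

#include <stdbool.h>
#include <stdio.h>

struct range { unsigned long base, size; };

static bool addrs_overlap(unsigned long b1, unsigned long s1,
                          unsigned long b2, unsigned long s2)
{
        return b1 < (b2 + s2) && b2 < (b1 + s1);
}

static bool overlaps_any(const struct range *r, int cnt,
                         unsigned long base, unsigned long size)
{
        for (int i = 0; i < cnt; i++)
                if (addrs_overlap(base, size, r[i].base, r[i].size))
                        return true;    /* no break-and-recheck of 'i < cnt' needed */
        return false;
}

int main(void)
{
        struct range regions[] = { { 0x1000, 0x1000 }, { 0x4000, 0x2000 } };

        printf("%d\n", overlaps_any(regions, 2, 0x4800, 0x100));  /* 1 */
        printf("%d\n", overlaps_any(regions, 2, 0x2000, 0x1000)); /* 0 */
        return 0;
}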

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/memblock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 53e3aa2cd9a2..4125506d8af9 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -194,8 +194,8 @@ bool __init_memblock memblock_overlaps_region(struct memblock_type *type,
 	for (i = 0; i < type->cnt; i++)
 		if (memblock_addrs_overlap(base, size, type->regions[i].base,
 					   type->regions[i].size))
-			break;
-	return i < type->cnt;
+			return true;
+	return false;
 }
 
 /**
-- 
2.34.1




* [Patch v2 7/8] mm/memblock: use PAGE_ALIGN_DOWN to get pgend in free_memmap
  2024-04-25  7:19 [Patch v2 0/8] memblock: cleanup Wei Yang
                   ` (3 preceding siblings ...)
  2024-04-25  7:19 ` [Patch v2 6/8] mm/memblock: return true directly on finding overlap region Wei Yang
@ 2024-04-25  7:19 ` Wei Yang
  2024-04-25  7:19 ` [Patch v2 8/8] mm/memblock: default region's nid may be MAX_NUMNODES Wei Yang
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Wei Yang @ 2024-04-25  7:19 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

Use the PAGE_ALIGN_DOWN() macro to compute pgend instead of open-coding
the mask operation.
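
For reference, a standalone sketch of what the two helpers compute, assuming
4 KiB pages and power-of-two alignment (simplified macros, not the kernel's
exact definitions); PAGE_ALIGN_DOWN() is equivalent to the old "& PAGE_MASK"
expression:

#include <stdio.h>

#define PAGE_SIZE               4096UL
#define PAGE_MASK               (~(PAGE_SIZE - 1))
#define ALIGN(x, a)             (((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a)        ((x) & ~((a) - 1))
#define PAGE_ALIGN(addr)        ALIGN(addr, PAGE_SIZE)
#define PAGE_ALIGN_DOWN(addr)   ALIGN_DOWN(addr, PAGE_SIZE)

int main(void)
{
        unsigned long addr = 0x12345;

        printf("PAGE_ALIGN:       %#lx\n", PAGE_ALIGN(addr));           /* 0x13000 */
        printf("PAGE_ALIGN_DOWN:  %#lx\n", PAGE_ALIGN_DOWN(addr));      /* 0x12000 */
        printf("addr & PAGE_MASK: %#lx\n", addr & PAGE_MASK);           /* 0x12000 */
        return 0;
}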

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/memblock.c            | 2 +-
 tools/include/linux/mm.h | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 4125506d8af9..9b4fa8fea9a2 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2046,7 +2046,7 @@ static void __init free_memmap(unsigned long start_pfn, unsigned long end_pfn)
 	 * downwards.
 	 */
 	pg = PAGE_ALIGN(__pa(start_pg));
-	pgend = __pa(end_pg) & PAGE_MASK;
+	pgend = PAGE_ALIGN_DOWN(__pa(end_pg));
 
 	/*
 	 * If there are free pages between these, free the section of the
diff --git a/tools/include/linux/mm.h b/tools/include/linux/mm.h
index 7d73da098047..caf68f5084b3 100644
--- a/tools/include/linux/mm.h
+++ b/tools/include/linux/mm.h
@@ -15,6 +15,7 @@
 #define ALIGN_DOWN(x, a)		__ALIGN_KERNEL((x) - ((a) - 1), (a))
 
 #define PAGE_ALIGN(addr) ALIGN(addr, PAGE_SIZE)
+#define PAGE_ALIGN_DOWN(addr) ALIGN_DOWN(addr, PAGE_SIZE)
 
 #define __va(x) ((void *)((unsigned long)(x)))
 #define __pa(x) ((unsigned long)(x))
-- 
2.34.1




* [Patch v2 8/8] mm/memblock: default region's nid may be MAX_NUMNODES
  2024-04-25  7:19 [Patch v2 0/8] memblock: cleanup Wei Yang
                   ` (4 preceding siblings ...)
  2024-04-25  7:19 ` [Patch v2 7/8] mm/memblock: use PAGE_ALIGN_DOWN to get pgend in free_memmap Wei Yang
@ 2024-04-25  7:19 ` Wei Yang
       [not found] ` <20240425071929.18004-3-richard.weiyang@gmail.com>
       [not found] ` <20240425071929.18004-4-richard.weiyang@gmail.com>
  7 siblings, 0 replies; 19+ messages in thread
From: Wei Yang @ 2024-04-25  7:19 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

On x86, the call flow looks like this:

numa_init()
    memblock_set_node(..., MAX_NUMNODES)
    numa_register_memblks()
        memblock_validate_numa_coverage()

If there is a hole, the nid of that region stays MAX_NUMNODES, so
memblock_validate_numa_coverage() will fail to report it.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/memblock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 9b4fa8fea9a2..3f142c787491 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -759,7 +759,7 @@ bool __init_memblock memblock_validate_numa_coverage(unsigned long threshold_byt
 
 	/* calculate lose page */
 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
-		if (nid == NUMA_NO_NODE)
+		if (nid == MAX_NUMNODES || nid == NUMA_NO_NODE)
 			nr_pages += end_pfn - start_pfn;
 	}
 
-- 
2.34.1




* Re: [Patch v2 1/8] memblock tests: reserve the 129th memory block at all possible positions
  2024-04-25  7:19 ` [Patch v2 1/8] memblock tests: reserve the 129th memory block at all possible positions Wei Yang
@ 2024-04-28  6:35   ` Mike Rapoport
  2024-04-28 12:22     ` Wei Yang
  0 siblings, 1 reply; 19+ messages in thread
From: Mike Rapoport @ 2024-04-28  6:35 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm

On Thu, Apr 25, 2024 at 07:19:22AM +0000, Wei Yang wrote:
> In stead of add 129th memory block at the last position, let's try all
> possible position.

Why do you insist on changing the existing test rather than adding a new
one?
 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  tools/testing/memblock/tests/basic_api.c | 121 ++++++++++++-----------
>  1 file changed, 65 insertions(+), 56 deletions(-)
 

-- 
Sincerely yours,
Mike.



* Re: [Patch v2 2/8] memblock tests: add memblock_reserve_many_may_conflict_check()
       [not found] ` <20240425071929.18004-3-richard.weiyang@gmail.com>
@ 2024-04-28  6:40   ` Mike Rapoport
  2024-04-28 12:36     ` Wei Yang
  2024-04-30  1:49     ` Wei Yang
  0 siblings, 2 replies; 19+ messages in thread
From: Mike Rapoport @ 2024-04-28  6:40 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm

On Thu, Apr 25, 2024 at 07:19:23AM +0000, Wei Yang wrote:
> This may trigger the case fixed by commit 48c3b583bbdd ("mm/memblock:
> fix overlapping allocation when doubling reserved array").
> 
> This is done by adding the 129th reserve region into memblock.memory. If
> memblock_double_array() use this reserve region as new array, it fails.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  tools/testing/memblock/tests/basic_api.c | 123 +++++++++++++++++++++++
>  tools/testing/memblock/tests/common.c    |   4 +-
>  tools/testing/memblock/tests/common.h    |   1 +
>  3 files changed, 126 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
> index 1ae62272867a..748950e02589 100644
> --- a/tools/testing/memblock/tests/basic_api.c
> +++ b/tools/testing/memblock/tests/basic_api.c
> @@ -991,6 +991,128 @@ static int memblock_reserve_many_check(void)
>  	return 0;
>  }
>  
> +/* Keep the gap so these memory region will not be merged. */

The gap where? What regions should not be merged?
Also please add a comment with the test description

> +#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))
> +static int memblock_reserve_many_may_conflict_check(void)
> +{
> +	int i, skip;
> +	void *orig_region;
> +	struct region r = {
> +		.base = SZ_16K,
> +		.size = SZ_16K,
> +	};
> +	phys_addr_t new_reserved_regions_size;
> +
> +	/*
> +	 *  0        1          129
> +	 *  +---+    +---+      +---+
> +	 *  |32K|    |32K|  ..  |32K|
> +	 *  +---+    +---+      +---+
> +	 *
> +	 * Pre-allocate the range for 129 memory block + one range for double
> +	 * memblock.reserved.regions at idx 0.
> +	 * See commit 48c3b583bbdd ("mm/memblock: fix overlapping allocation
> +	 * when doubling reserved array")

Sorry, but I'm failing to parse it

> +	 */
> +	dummy_physical_memory_init();
> +	phys_addr_t memory_base = dummy_physical_memory_base();
> +	phys_addr_t offset = PAGE_ALIGN(memory_base);
> +
> +	PREFIX_PUSH();
> +
> +	/* Reserve the 129th memory block for all possible positions*/
> +	for (skip = 1; skip <= INIT_MEMBLOCK_REGIONS + 1; skip++) {
> +		reset_memblock_regions();
> +		memblock_allow_resize();
> +
> +		reset_memblock_attributes();
> +		/* Add a valid memory region used by double_array(). */
> +		memblock_add(MEMORY_BASE_OFFSET(0, offset), MEM_SIZE);
> +		/*
> +		 * Add a memory region which will be reserved as 129th memory
> +		 * region. This is not expected to be used by double_array().
> +		 */
> +		memblock_add(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
> +
> +		for (i = 1; i <= INIT_MEMBLOCK_REGIONS + 1; i++) {
> +			if (i == skip)
> +				continue;
> +
> +			/* Reserve some fakes memory region to fulfill the memblock. */
> +			memblock_reserve(MEMORY_BASE_OFFSET(i, offset), MEM_SIZE);
> +
> +			if (i < skip) {
> +				ASSERT_EQ(memblock.reserved.cnt, i);
> +				ASSERT_EQ(memblock.reserved.total_size, i * MEM_SIZE);
> +			} else {
> +				ASSERT_EQ(memblock.reserved.cnt, i - 1);
> +				ASSERT_EQ(memblock.reserved.total_size, (i - 1) * MEM_SIZE);
> +			}
> +		}
> +
> +		orig_region = memblock.reserved.regions;
> +
> +		/* This reserve the 129 memory_region, and makes it double array. */
> +		memblock_reserve(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
> +
> +		/*
> +		 * This is the memory region size used by the doubled reserved.regions,
> +		 * and it has been reserved due to it has been used. The size is used to
> +		 * calculate the total_size that the memblock.reserved have now.
> +		 */
> +		new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
> +						sizeof(struct memblock_region));
> +		/*
> +		 * The double_array() will find a free memory region as the new
> +		 * reserved.regions, and the used memory region will be reserved, so
> +		 * there will be one more region exist in the reserved memblock. And the
> +		 * one more reserved region's size is new_reserved_regions_size.
> +		 */
> +		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
> +		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
> +							new_reserved_regions_size);
> +		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
> +
> +		/*
> +		 * The first reserved region is allocated for double array
> +		 * with the size of new_reserved_regions_size and the base to be
> +		 * MEMORY_BASE_OFFSET(0, offset) + SZ_32K - new_reserved_regions_size
> +		 */
> +		ASSERT_EQ(memblock.reserved.regions[0].base + memblock.reserved.regions[0].size,
> +			  MEMORY_BASE_OFFSET(0, offset) + SZ_32K);
> +		ASSERT_EQ(memblock.reserved.regions[0].size, new_reserved_regions_size);
> +
> +		/*
> +		 * Now memblock_double_array() works fine. Let's check after the
> +		 * double_array(), the memblock_reserve() still works as normal.
> +		 */
> +		memblock_reserve(r.base, r.size);
> +		ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
> +		ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
> +
> +		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
> +		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
> +							new_reserved_regions_size +
> +							r.size);
> +		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
> +
> +		/*
> +		 * The current reserved.regions is occupying a range of memory that
> +		 * allocated from dummy_physical_memory_init(). After free the memory,
> +		 * we must not use it. So restore the origin memory region to make sure
> +		 * the tests can run as normal and not affected by the double array.
> +		 */
> +		memblock.reserved.regions = orig_region;
> +		memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
> +	}
> +
> +	dummy_physical_memory_cleanup();
> +
> +	test_pass_pop();
> +
> +	return 0;
> +}
> +
>  static int memblock_reserve_checks(void)
>  {
>  	prefix_reset();
> @@ -1006,6 +1128,7 @@ static int memblock_reserve_checks(void)
>  	memblock_reserve_between_check();
>  	memblock_reserve_near_max_check();
>  	memblock_reserve_many_check();
> +	memblock_reserve_many_may_conflict_check();
>  
>  	prefix_pop();
>  
> diff --git a/tools/testing/memblock/tests/common.c b/tools/testing/memblock/tests/common.c
> index c2c569f12178..5633ffc5aaa7 100644
> --- a/tools/testing/memblock/tests/common.c
> +++ b/tools/testing/memblock/tests/common.c
> @@ -61,7 +61,7 @@ void reset_memblock_attributes(void)
>  
>  static inline void fill_memblock(void)
>  {
> -	memset(memory_block.base, 1, MEM_SIZE);
> +	memset(memory_block.base, 1, MEM_ALLOC_SIZE);

I believe PHYS_MEM_SIZE is a better name.

>  }
>  
>  void setup_memblock(void)
> @@ -103,7 +103,7 @@ void setup_numa_memblock(const unsigned int node_fracs[])
>  
>  void dummy_physical_memory_init(void)
>  {
> -	memory_block.base = malloc(MEM_SIZE);
> +	memory_block.base = malloc(MEM_ALLOC_SIZE);
>  	assert(memory_block.base);
>  	fill_memblock();
>  }
> diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
> index b5ec59aa62d7..741d57315ba6 100644
> --- a/tools/testing/memblock/tests/common.h
> +++ b/tools/testing/memblock/tests/common.h
> @@ -12,6 +12,7 @@
>  #include <../selftests/kselftest.h>
>  
>  #define MEM_SIZE		SZ_32K
> +#define MEM_ALLOC_SIZE		SZ_16M

Do we really need 16M? Wouldn't one or two suffice?

>  #define NUMA_NODES		8
>  
>  #define INIT_MEMBLOCK_REGIONS			128
> -- 
> 2.34.1
> 

-- 
Sincerely yours,
Mike.



* Re: [Patch v2 3/8] mm/memblock: fix comment for memblock_isolate_range()
       [not found] ` <20240425071929.18004-4-richard.weiyang@gmail.com>
@ 2024-04-28  6:43   ` Mike Rapoport
  2024-04-28 13:07     ` Wei Yang
  0 siblings, 1 reply; 19+ messages in thread
From: Mike Rapoport @ 2024-04-28  6:43 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm

On Thu, Apr 25, 2024 at 07:19:24AM +0000, Wei Yang wrote:
> The isolated range is [*@start_rgn, *@end_rgn - 1], while the comment says
> "the end region inside the range" is *@end_rgn.
> 
> Let's correct it.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  mm/memblock.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 98d25689cf10..5a363ef283d0 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -772,12 +772,12 @@ bool __init_memblock memblock_validate_numa_coverage(unsigned long threshold_byt
>   * @base: base of range to isolate
>   * @size: size of range to isolate
>   * @start_rgn: out parameter for the start of isolated region
> - * @end_rgn: out parameter for the end of isolated region
> + * @end_rgn: out parameter for the (end + 1) of isolated region

end can be inclusive or exclusive, so please keep this line ...

>   *
>   * Walk @type and ensure that regions don't cross the boundaries defined by
>   * [@base, @base + @size).  Crossing regions are split at the boundaries,
>   * which may create at most two more regions.  The index of the first
> - * region inside the range is returned in *@start_rgn and end in *@end_rgn.
> + * region inside the range is returned in *@start_rgn and (end + 1) in *@end_rgn.

... and emphasise here that end is exclusive, e.g

The index of the first region inside the range is returned in *@start_rgn
and the index of the first region after the range is returned in *@end_rgn.

>   *
>   * Return:
>   * 0 on success, -errno on failure.
> -- 
> 2.34.1
> 

-- 
Sincerely yours,
Mike.



* Re: [Patch v2 4/8] mm/memblock: remove consecutive regions at once
  2024-04-25  7:19 ` [Patch v2 4/8] mm/memblock: remove consecutive regions at once Wei Yang
@ 2024-04-28  6:44   ` Mike Rapoport
  2024-04-28 12:37     ` Wei Yang
  0 siblings, 1 reply; 19+ messages in thread
From: Mike Rapoport @ 2024-04-28  6:44 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm

On Thu, Apr 25, 2024 at 07:19:25AM +0000, Wei Yang wrote:
> In memblock_remove_range, we would remove consecutive regions after
> isolation.
> 
> Instead of removing each region from end to start one by one, we could
> remove those consecutive regions at once.

I don't think this is worth the churn.
 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  mm/memblock.c | 24 ++++++++++++++++--------
>  1 file changed, 16 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 5a363ef283d0..53e3aa2cd9a2 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -343,12 +343,15 @@ static phys_addr_t __init_memblock memblock_find_in_range(phys_addr_t start,
>  	return ret;
>  }
>  
> -static void __init_memblock memblock_remove_region(struct memblock_type *type, unsigned long r)
> +static void __init_memblock memblock_remove_regions(struct memblock_type *type,
> +						    unsigned long r,
> +						    unsigned int num)
>  {
> -	type->total_size -= type->regions[r].size;
> -	memmove(&type->regions[r], &type->regions[r + 1],
> -		(type->cnt - (r + 1)) * sizeof(type->regions[r]));
> -	type->cnt--;
> +	for (int i = 0; i < num; i++)
> +		type->total_size -= type->regions[r + i].size;
> +	memmove(&type->regions[r], &type->regions[r + num],
> +		(type->cnt - (r + num)) * sizeof(type->regions[r]));
> +	type->cnt -= num;
>  
>  	/* Special case for empty arrays */
>  	if (type->cnt == 0) {
> @@ -360,6 +363,11 @@ static void __init_memblock memblock_remove_region(struct memblock_type *type, u
>  	}
>  }
>  
> +static void __init_memblock memblock_remove_region(struct memblock_type *type, unsigned long r)
> +{
> +	memblock_remove_regions(type, r, 1);
> +}
> +
>  #ifndef CONFIG_ARCH_KEEP_MEMBLOCK
>  /**
>   * memblock_discard - discard memory and reserved arrays if they were allocated
> @@ -846,14 +854,14 @@ static int __init_memblock memblock_remove_range(struct memblock_type *type,
>  					  phys_addr_t base, phys_addr_t size)
>  {
>  	int start_rgn, end_rgn;
> -	int i, ret;
> +	int ret;
>  
>  	ret = memblock_isolate_range(type, base, size, &start_rgn, &end_rgn);
>  	if (ret)
>  		return ret;
>  
> -	for (i = end_rgn - 1; i >= start_rgn; i--)
> -		memblock_remove_region(type, i);
> +	if (end_rgn - start_rgn)
> +		memblock_remove_regions(type, start_rgn, end_rgn - start_rgn);
>  	return 0;
>  }
>  
> -- 
> 2.34.1
> 

-- 
Sincerely yours,
Mike.



* Re: [Patch v2 1/8] memblock tests: reserve the 129th memory block at all possible positions
  2024-04-28  6:35   ` Mike Rapoport
@ 2024-04-28 12:22     ` Wei Yang
  2024-04-29 14:17       ` Mike Rapoport
  0 siblings, 1 reply; 19+ messages in thread
From: Wei Yang @ 2024-04-28 12:22 UTC (permalink / raw)
  To: Mike Rapoport; +Cc: Wei Yang, akpm, linux-mm

On Sun, Apr 28, 2024 at 09:35:25AM +0300, Mike Rapoport wrote:
>On Thu, Apr 25, 2024 at 07:19:22AM +0000, Wei Yang wrote:
>> In stead of add 129th memory block at the last position, let's try all
>> possible position.
>
>Why do you insist on changing the existing test rather than adding a new
>one?
> 

It sounds like there was some misunderstanding between us.

I was not sure about your idea at first, so I sent a draft to confirm with you.
Then I came up with another version which could trigger the overlap bug.

You mentioned keeping both and had no objection to the first draft, which is
the same as this one, so I thought this was what you expected.

Well, I will add a new one in the next round. Do you have a suggestion for the
function name? memblock_reserve_many_all_position_check?

>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  tools/testing/memblock/tests/basic_api.c | 121 ++++++++++++-----------
>>  1 file changed, 65 insertions(+), 56 deletions(-)
> 
>
>-- 
>Sincerely yours,
>Mike.

-- 
Wei Yang
Help you, Help me



* Re: [Patch v2 2/8] memblock tests: add memblock_reserve_many_may_conflict_check()
  2024-04-28  6:40   ` [Patch v2 2/8] memblock tests: add memblock_reserve_many_may_conflict_check() Mike Rapoport
@ 2024-04-28 12:36     ` Wei Yang
  2024-04-30  1:49     ` Wei Yang
  1 sibling, 0 replies; 19+ messages in thread
From: Wei Yang @ 2024-04-28 12:36 UTC (permalink / raw)
  To: Mike Rapoport; +Cc: Wei Yang, akpm, linux-mm

On Sun, Apr 28, 2024 at 09:40:33AM +0300, Mike Rapoport wrote:
>On Thu, Apr 25, 2024 at 07:19:23AM +0000, Wei Yang wrote:
>> This may trigger the case fixed by commit 48c3b583bbdd ("mm/memblock:
>> fix overlapping allocation when doubling reserved array").
>> 
>> This is done by adding the 129th reserve region into memblock.memory. If
>> memblock_double_array() use this reserve region as new array, it fails.
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  tools/testing/memblock/tests/basic_api.c | 123 +++++++++++++++++++++++
>>  tools/testing/memblock/tests/common.c    |   4 +-
>>  tools/testing/memblock/tests/common.h    |   1 +
>>  3 files changed, 126 insertions(+), 2 deletions(-)
>> 
>> diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
>> index 1ae62272867a..748950e02589 100644
>> --- a/tools/testing/memblock/tests/basic_api.c
>> +++ b/tools/testing/memblock/tests/basic_api.c
>> @@ -991,6 +991,128 @@ static int memblock_reserve_many_check(void)
>>  	return 0;
>>  }
>>  
>> +/* Keep the gap so these memory region will not be merged. */
>
>The gap where? What regions should not be merged?

We reserve ranges [1-129] as shown below. The gap between each range is
32K, to prevent them from being merged during memblock_reserve().

	 0        1          129
	 +---+    +---+      +---+
	 |32K|    |32K|  ..  |32K|
	 +---+    +---+      +---+
             |gap |
             |<-->|

>Also please add a comment with the test description

Sure.

>
>> +#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))
>> +static int memblock_reserve_many_may_conflict_check(void)
>> +{
>> +	int i, skip;
>> +	void *orig_region;
>> +	struct region r = {
>> +		.base = SZ_16K,
>> +		.size = SZ_16K,
>> +	};
>> +	phys_addr_t new_reserved_regions_size;
>> +
>> +	/*
>> +	 *  0        1          129
>> +	 *  +---+    +---+      +---+
>> +	 *  |32K|    |32K|  ..  |32K|
>> +	 *  +---+    +---+      +---+
>> +	 *
>> +	 * Pre-allocate the range for 129 memory block + one range for double
>> +	 * memblock.reserved.regions at idx 0.
>> +	 * See commit 48c3b583bbdd ("mm/memblock: fix overlapping allocation
>> +	 * when doubling reserved array")
>
>Sorry, but I'm failing to parse it
>
>> +	 */
>> +	dummy_physical_memory_init();
>> +	phys_addr_t memory_base = dummy_physical_memory_base();
>> +	phys_addr_t offset = PAGE_ALIGN(memory_base);
>> +
>> +	PREFIX_PUSH();
>> +
>> +	/* Reserve the 129th memory block for all possible positions*/
>> +	for (skip = 1; skip <= INIT_MEMBLOCK_REGIONS + 1; skip++) {
>> +		reset_memblock_regions();
>> +		memblock_allow_resize();
>> +
>> +		reset_memblock_attributes();
>> +		/* Add a valid memory region used by double_array(). */
>> +		memblock_add(MEMORY_BASE_OFFSET(0, offset), MEM_SIZE);
>> +		/*
>> +		 * Add a memory region which will be reserved as 129th memory
>> +		 * region. This is not expected to be used by double_array().
>> +		 */
>> +		memblock_add(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
>> +
>> +		for (i = 1; i <= INIT_MEMBLOCK_REGIONS + 1; i++) {
>> +			if (i == skip)
>> +				continue;
>> +
>> +			/* Reserve some fakes memory region to fulfill the memblock. */
>> +			memblock_reserve(MEMORY_BASE_OFFSET(i, offset), MEM_SIZE);
>> +
>> +			if (i < skip) {
>> +				ASSERT_EQ(memblock.reserved.cnt, i);
>> +				ASSERT_EQ(memblock.reserved.total_size, i * MEM_SIZE);
>> +			} else {
>> +				ASSERT_EQ(memblock.reserved.cnt, i - 1);
>> +				ASSERT_EQ(memblock.reserved.total_size, (i - 1) * MEM_SIZE);
>> +			}
>> +		}
>> +
>> +		orig_region = memblock.reserved.regions;
>> +
>> +		/* This reserve the 129 memory_region, and makes it double array. */
>> +		memblock_reserve(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
>> +
>> +		/*
>> +		 * This is the memory region size used by the doubled reserved.regions,
>> +		 * and it has been reserved due to it has been used. The size is used to
>> +		 * calculate the total_size that the memblock.reserved have now.
>> +		 */
>> +		new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
>> +						sizeof(struct memblock_region));
>> +		/*
>> +		 * The double_array() will find a free memory region as the new
>> +		 * reserved.regions, and the used memory region will be reserved, so
>> +		 * there will be one more region exist in the reserved memblock. And the
>> +		 * one more reserved region's size is new_reserved_regions_size.
>> +		 */
>> +		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
>> +		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
>> +							new_reserved_regions_size);
>> +		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
>> +
>> +		/*
>> +		 * The first reserved region is allocated for double array
>> +		 * with the size of new_reserved_regions_size and the base to be
>> +		 * MEMORY_BASE_OFFSET(0, offset) + SZ_32K - new_reserved_regions_size
>> +		 */
>> +		ASSERT_EQ(memblock.reserved.regions[0].base + memblock.reserved.regions[0].size,
>> +			  MEMORY_BASE_OFFSET(0, offset) + SZ_32K);
>> +		ASSERT_EQ(memblock.reserved.regions[0].size, new_reserved_regions_size);
>> +
>> +		/*
>> +		 * Now memblock_double_array() works fine. Let's check after the
>> +		 * double_array(), the memblock_reserve() still works as normal.
>> +		 */
>> +		memblock_reserve(r.base, r.size);
>> +		ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
>> +		ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
>> +
>> +		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
>> +		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
>> +							new_reserved_regions_size +
>> +							r.size);
>> +		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
>> +
>> +		/*
>> +		 * The current reserved.regions is occupying a range of memory that
>> +		 * allocated from dummy_physical_memory_init(). After free the memory,
>> +		 * we must not use it. So restore the origin memory region to make sure
>> +		 * the tests can run as normal and not affected by the double array.
>> +		 */
>> +		memblock.reserved.regions = orig_region;
>> +		memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
>> +	}
>> +
>> +	dummy_physical_memory_cleanup();
>> +
>> +	test_pass_pop();
>> +
>> +	return 0;
>> +}
>> +
>>  static int memblock_reserve_checks(void)
>>  {
>>  	prefix_reset();
>> @@ -1006,6 +1128,7 @@ static int memblock_reserve_checks(void)
>>  	memblock_reserve_between_check();
>>  	memblock_reserve_near_max_check();
>>  	memblock_reserve_many_check();
>> +	memblock_reserve_many_may_conflict_check();
>>  
>>  	prefix_pop();
>>  
>> diff --git a/tools/testing/memblock/tests/common.c b/tools/testing/memblock/tests/common.c
>> index c2c569f12178..5633ffc5aaa7 100644
>> --- a/tools/testing/memblock/tests/common.c
>> +++ b/tools/testing/memblock/tests/common.c
>> @@ -61,7 +61,7 @@ void reset_memblock_attributes(void)
>>  
>>  static inline void fill_memblock(void)
>>  {
>> -	memset(memory_block.base, 1, MEM_SIZE);
>> +	memset(memory_block.base, 1, MEM_ALLOC_SIZE);
>
>I believe PHYS_MEM_SIZE is a better name.
>

Sounds better.

>>  }
>>  
>>  void setup_memblock(void)
>> @@ -103,7 +103,7 @@ void setup_numa_memblock(const unsigned int node_fracs[])
>>  
>>  void dummy_physical_memory_init(void)
>>  {
>> -	memory_block.base = malloc(MEM_SIZE);
>> +	memory_block.base = malloc(MEM_ALLOC_SIZE);
>>  	assert(memory_block.base);
>>  	fill_memblock();
>>  }
>> diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
>> index b5ec59aa62d7..741d57315ba6 100644
>> --- a/tools/testing/memblock/tests/common.h
>> +++ b/tools/testing/memblock/tests/common.h
>> @@ -12,6 +12,7 @@
>>  #include <../selftests/kselftest.h>
>>  
>>  #define MEM_SIZE		SZ_32K
>> +#define MEM_ALLOC_SIZE		SZ_16M
>
>Do we really need 16M? Wouldn't one or two suffice?
>

	 0        1          129
	 +---+    +---+      +---+
	 |32K|    |32K|  ..  |32K|
	 +---+    +---+      +---+

This is the range we are manipulating, which is roughly

  130 * 64K = 128 * 64K + 128K = 8M + 128K

So I chose the next power of 2, 16M.

Since we insert the 129th range at all possible positions, the doubled array
may be allocated anywhere when the bug is triggered. So we need to allocate
the whole range instead of just range 0.

>>  #define NUMA_NODES		8
>>  
>>  #define INIT_MEMBLOCK_REGIONS			128
>> -- 
>> 2.34.1
>> 
>
>-- 
>Sincerely yours,
>Mike.

-- 
Wei Yang
Help you, Help me



* Re: [Patch v2 4/8] mm/memblock: remove consecutive regions at once
  2024-04-28  6:44   ` Mike Rapoport
@ 2024-04-28 12:37     ` Wei Yang
  0 siblings, 0 replies; 19+ messages in thread
From: Wei Yang @ 2024-04-28 12:37 UTC (permalink / raw)
  To: Mike Rapoport; +Cc: Wei Yang, akpm, linux-mm

On Sun, Apr 28, 2024 at 09:44:30AM +0300, Mike Rapoport wrote:
>On Thu, Apr 25, 2024 at 07:19:25AM +0000, Wei Yang wrote:
>> In memblock_remove_range, we would remove consecutive regions after
>> isolation.
>> 
>> Instead of removing each region from end to start one by one, we could
>> remove those consecutive regions at once.
>
>I don't think this is worth the churn.
> 

Will drop it.



* Re: [Patch v2 3/8] mm/memblock: fix comment for memblock_isolate_range()
  2024-04-28  6:43   ` [Patch v2 3/8] mm/memblock: fix comment for memblock_isolate_range() Mike Rapoport
@ 2024-04-28 13:07     ` Wei Yang
  0 siblings, 0 replies; 19+ messages in thread
From: Wei Yang @ 2024-04-28 13:07 UTC (permalink / raw)
  To: Mike Rapoport; +Cc: Wei Yang, akpm, linux-mm

On Sun, Apr 28, 2024 at 09:43:54AM +0300, Mike Rapoport wrote:
>On Thu, Apr 25, 2024 at 07:19:24AM +0000, Wei Yang wrote:
>> The isolated range is [*@start_rgn, *@end_rgn - 1], while the comment says
>> "the end region inside the range" is *@end_rgn.
>> 
>> Let's correct it.
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  mm/memblock.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>> 
>> diff --git a/mm/memblock.c b/mm/memblock.c
>> index 98d25689cf10..5a363ef283d0 100644
>> --- a/mm/memblock.c
>> +++ b/mm/memblock.c
>> @@ -772,12 +772,12 @@ bool __init_memblock memblock_validate_numa_coverage(unsigned long threshold_byt
>>   * @base: base of range to isolate
>>   * @size: size of range to isolate
>>   * @start_rgn: out parameter for the start of isolated region
>> - * @end_rgn: out parameter for the end of isolated region
>> + * @end_rgn: out parameter for the (end + 1) of isolated region
>
>end can be inclusive or exclusive, please let's keep this line ...
>
>>   *
>>   * Walk @type and ensure that regions don't cross the boundaries defined by
>>   * [@base, @base + @size).  Crossing regions are split at the boundaries,
>>   * which may create at most two more regions.  The index of the first
>> - * region inside the range is returned in *@start_rgn and end in *@end_rgn.
>> + * region inside the range is returned in *@start_rgn and (end + 1) in *@end_rgn.
>
>... and emphasise here that end is exclusive, e.g
>
>The index of the first region inside the range is returned in *@start_rgn
>and the index of the first region after the range is returned in *@end_rgn.
>

Looks better, thanks.

>>   *
>>   * Return:
>>   * 0 on success, -errno on failure.
>> -- 
>> 2.34.1
>> 
>
>-- 
>Sincerely yours,
>Mike.

-- 
Wei Yang
Help you, Help me



* Re: [Patch v2 1/8] memblock tests: reserve the 129th memory block at all possible positions
  2024-04-28 12:22     ` Wei Yang
@ 2024-04-29 14:17       ` Mike Rapoport
  2024-04-30  0:12         ` Wei Yang
  0 siblings, 1 reply; 19+ messages in thread
From: Mike Rapoport @ 2024-04-29 14:17 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm

On Sun, Apr 28, 2024 at 12:22:04PM +0000, Wei Yang wrote:
> On Sun, Apr 28, 2024 at 09:35:25AM +0300, Mike Rapoport wrote:
> >On Thu, Apr 25, 2024 at 07:19:22AM +0000, Wei Yang wrote:
> >> In stead of add 129th memory block at the last position, let's try all
> >> possible position.
> >
> >Why do you insist on changing the existing test rather than adding a new
> >one?
> > 
> 
> Sounds there is some misunderstanding between us.
> 
> I am not sure about your idea at first, so I sent a draft to confirm with you.
> Then I came up with another version which could trigger the overlap bug.
> 
> You mentioned to keep both and not objection to the first draft, which is the
> same as this one, I thought this is what you expect.

Sorry if I wasn't clear. My intention was to keep the existing test and add
a new one rather than update the old test.
 
> Well, I will add a new one next round. Do you have some suggestion on the
> function name? memblock_reserve_many_all_position_check ?

How about memblock_reserve_all_locations_check?
 
> >> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> >> ---
> >>  tools/testing/memblock/tests/basic_api.c | 121 ++++++++++++-----------
> >>  1 file changed, 65 insertions(+), 56 deletions(-)
> > 
> >
> >-- 
> >Sincerely yours,
> >Mike.
> 
> -- 
> Wei Yang
> Help you, Help me

-- 
Sincerely yours,
Mike.



* Re: [Patch v2 1/8] memblock tests: reserve the 129th memory block at all possible positions
  2024-04-29 14:17       ` Mike Rapoport
@ 2024-04-30  0:12         ` Wei Yang
  0 siblings, 0 replies; 19+ messages in thread
From: Wei Yang @ 2024-04-30  0:12 UTC (permalink / raw)
  To: Mike Rapoport; +Cc: Wei Yang, akpm, linux-mm

On Mon, Apr 29, 2024 at 05:17:19PM +0300, Mike Rapoport wrote:
>On Sun, Apr 28, 2024 at 12:22:04PM +0000, Wei Yang wrote:
>> On Sun, Apr 28, 2024 at 09:35:25AM +0300, Mike Rapoport wrote:
>> >On Thu, Apr 25, 2024 at 07:19:22AM +0000, Wei Yang wrote:
>> >> In stead of add 129th memory block at the last position, let's try all
>> >> possible position.
>> >
>> >Why do you insist on changing the existing test rather than adding a new
>> >one?
>> > 
>> 
>> Sounds there is some misunderstanding between us.
>> 
>> I am not sure about your idea at first, so I sent a draft to confirm with you.
>> Then I came up with another version which could trigger the overlap bug.
>> 
>> You mentioned to keep both and not objection to the first draft, which is the
>> same as this one, I thought this is what you expect.
>
>Sorry if I wasn't clear. My intention was to keep the existing test and add
>a new one rather than update the old test.
> 

You are so kind. Nice to work with you. 

>> Well, I will add a new one next round. Do you have some suggestion on the
>> function name? memblock_reserve_many_all_position_check ?
>
>How about memblock_reserve_all_locations_check?
> 

Yep, sounds good :-)

>> >> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> >> ---
>> >>  tools/testing/memblock/tests/basic_api.c | 121 ++++++++++++-----------
>> >>  1 file changed, 65 insertions(+), 56 deletions(-)
>> > 
>> >
>> >-- 
>> >Sincerely yours,
>> >Mike.
>> 
>> -- 
>> Wei Yang
>> Help you, Help me
>
>-- 
>Sincerely yours,
>Mike.

-- 
Wei Yang
Help you, Help me



* Re: [Patch v2 2/8] memblock tests: add memblock_reserve_many_may_conflict_check()
  2024-04-28  6:40   ` [Patch v2 2/8] memblock tests: add memblock_reserve_many_may_conflict_check() Mike Rapoport
  2024-04-28 12:36     ` Wei Yang
@ 2024-04-30  1:49     ` Wei Yang
  2024-05-01  8:44       ` Mike Rapoport
  1 sibling, 1 reply; 19+ messages in thread
From: Wei Yang @ 2024-04-30  1:49 UTC (permalink / raw)
  To: Mike Rapoport; +Cc: Wei Yang, akpm, linux-mm

On Sun, Apr 28, 2024 at 09:40:33AM +0300, Mike Rapoport wrote:
>On Thu, Apr 25, 2024 at 07:19:23AM +0000, Wei Yang wrote:
>> This may trigger the case fixed by commit 48c3b583bbdd ("mm/memblock:
>> fix overlapping allocation when doubling reserved array").
>> 
>> This is done by adding the 129th reserve region into memblock.memory. If
>> memblock_double_array() use this reserve region as new array, it fails.
>> 
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>>  tools/testing/memblock/tests/basic_api.c | 123 +++++++++++++++++++++++
>>  tools/testing/memblock/tests/common.c    |   4 +-
>>  tools/testing/memblock/tests/common.h    |   1 +
>>  3 files changed, 126 insertions(+), 2 deletions(-)
>> 
>> diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
>> index 1ae62272867a..748950e02589 100644
>> --- a/tools/testing/memblock/tests/basic_api.c
>> +++ b/tools/testing/memblock/tests/basic_api.c
>> @@ -991,6 +991,128 @@ static int memblock_reserve_many_check(void)
>>  	return 0;
>>  }
>>  
>> +/* Keep the gap so these memory region will not be merged. */
>
>The gap where? What regions should not be merged?
>Also please add a comment with the test description
>
>> +#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))
>> +static int memblock_reserve_many_may_conflict_check(void)
>> +{
>> +	int i, skip;
>> +	void *orig_region;
>> +	struct region r = {
>> +		.base = SZ_16K,
>> +		.size = SZ_16K,
>> +	};
>> +	phys_addr_t new_reserved_regions_size;
>> +
>> +	/*
>> +	 *  0        1          129
>> +	 *  +---+    +---+      +---+
>> +	 *  |32K|    |32K|  ..  |32K|
>> +	 *  +---+    +---+      +---+
>> +	 *
>> +	 * Pre-allocate the range for 129 memory block + one range for double
>> +	 * memblock.reserved.regions at idx 0.
>> +	 * See commit 48c3b583bbdd ("mm/memblock: fix overlapping allocation
>> +	 * when doubling reserved array")
>
>Sorry, but I'm failing to parse it
>

Sorry, I missed this one. You mean how this case is triggered?

Suppose the current memblock looks like this:

                 0        1    
memblock.memory  +---+    +---+
                 |32K|    |32K|
                 +---+    +---+

                 0                2          128
memblock.reserved+---+            +---+      +---+
                 |32K|            |32K|  ..  |32K|
                 +---+            +---+      +---+
                          ^
                          |
Now we want to reserve range 1 here. Since there are already 128 blocks in
memblock.reserved, this reservation will double the array, and
memblock_double_array() will "find" an available range in memblock.memory.

Since allocation is top down, it will find memblock.memory.regions[1], which
conflicts with the range we want to reserve.

-- 
Wei Yang
Help you, Help me



* Re: [Patch v2 2/8] memblock tests: add memblock_reserve_many_may_conflict_check()
  2024-04-30  1:49     ` Wei Yang
@ 2024-05-01  8:44       ` Mike Rapoport
  0 siblings, 0 replies; 19+ messages in thread
From: Mike Rapoport @ 2024-05-01  8:44 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm

On Tue, Apr 30, 2024 at 01:49:05AM +0000, Wei Yang wrote:
> On Sun, Apr 28, 2024 at 09:40:33AM +0300, Mike Rapoport wrote:
> >On Thu, Apr 25, 2024 at 07:19:23AM +0000, Wei Yang wrote:
> >> This may trigger the case fixed by commit 48c3b583bbdd ("mm/memblock:
> >> fix overlapping allocation when doubling reserved array").
> >> 
> >> This is done by adding the 129th reserve region into memblock.memory. If
> >> memblock_double_array() use this reserve region as new array, it fails.
> >> 
> >> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> >> ---
> >>  tools/testing/memblock/tests/basic_api.c | 123 +++++++++++++++++++++++
> >>  tools/testing/memblock/tests/common.c    |   4 +-
> >>  tools/testing/memblock/tests/common.h    |   1 +
> >>  3 files changed, 126 insertions(+), 2 deletions(-)
> >> 
> >> diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
> >> index 1ae62272867a..748950e02589 100644
> >> --- a/tools/testing/memblock/tests/basic_api.c
> >> +++ b/tools/testing/memblock/tests/basic_api.c
> >> @@ -991,6 +991,128 @@ static int memblock_reserve_many_check(void)
> >>  	return 0;
> >>  }
> >>  
> >> +/* Keep the gap so these memory region will not be merged. */
> >
> >The gap where? What regions should not be merged?
> >Also please add a comment with the test description
> >
> >> +#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))
> >> +static int memblock_reserve_many_may_conflict_check(void)
> >> +{
> >> +	int i, skip;
> >> +	void *orig_region;
> >> +	struct region r = {
> >> +		.base = SZ_16K,
> >> +		.size = SZ_16K,
> >> +	};
> >> +	phys_addr_t new_reserved_regions_size;
> >> +
> >> +	/*
> >> +	 *  0        1          129
> >> +	 *  +---+    +---+      +---+
> >> +	 *  |32K|    |32K|  ..  |32K|
> >> +	 *  +---+    +---+      +---+
> >> +	 *
> >> +	 * Pre-allocate the range for 129 memory block + one range for double
> >> +	 * memblock.reserved.regions at idx 0.
> >> +	 * See commit 48c3b583bbdd ("mm/memblock: fix overlapping allocation
> >> +	 * when doubling reserved array")
> >
> >Sorry, but I'm failing to parse it
> >
> 
> Sorry for missed this one. You mean how this case is triggered?
> 
> Suppose current memblock looks like this:
> 
>                  0        1    
> memblock.memory  +---+    +---+
>                  |32K|    |32K|
>                  +---+    +---+
> 
>                  0                2          128
> memblock.reserved+---+            +---+      +---+
>                  |32K|            |32K|  ..  |32K|
>                  +---+            +---+      +---+
>                           ^
>                           |
> Now we want to reserve range 1 here. Since there are 128 blocks in
> memblock.reserved, it will double array. Now memblock_double_array() will
> "find" available range in memblock.memory.
> 
> Since we set top down, it will find memblock.memory.regions[1]. This conflict
> the range we want to reserve.

Please include something like this explanation in the test description
 
> -- 
> Wei Yang
> Help you, Help me

-- 
Sincerely yours,
Mike.



Thread overview: 19+ messages
2024-04-25  7:19 [Patch v2 0/8] memblock: cleanup Wei Yang
2024-04-25  7:19 ` [Patch v2 1/8] memblock tests: reserve the 129th memory block at all possible positions Wei Yang
2024-04-28  6:35   ` Mike Rapoport
2024-04-28 12:22     ` Wei Yang
2024-04-29 14:17       ` Mike Rapoport
2024-04-30  0:12         ` Wei Yang
2024-04-25  7:19 ` [Patch v2 4/8] mm/memblock: remove consecutive regions at once Wei Yang
2024-04-28  6:44   ` Mike Rapoport
2024-04-28 12:37     ` Wei Yang
2024-04-25  7:19 ` [Patch v2 5/8] memblock tests: add memblock_overlaps_region_checks Wei Yang
2024-04-25  7:19 ` [Patch v2 6/8] mm/memblock: return true directly on finding overlap region Wei Yang
2024-04-25  7:19 ` [Patch v2 7/8] mm/memblock: use PAGE_ALIGN_DOWN to get pgend in free_memmap Wei Yang
2024-04-25  7:19 ` [Patch v2 8/8] mm/memblock: default region's nid may be MAX_NUMNODES Wei Yang
     [not found] ` <20240425071929.18004-3-richard.weiyang@gmail.com>
2024-04-28  6:40   ` [Patch v2 2/8] memblock tests: add memblock_reserve_many_may_conflict_check() Mike Rapoport
2024-04-28 12:36     ` Wei Yang
2024-04-30  1:49     ` Wei Yang
2024-05-01  8:44       ` Mike Rapoport
     [not found] ` <20240425071929.18004-4-richard.weiyang@gmail.com>
2024-04-28  6:43   ` [Patch v2 3/8] mm/memblock: fix comment for memblock_isolate_range() Mike Rapoport
2024-04-28 13:07     ` Wei Yang
