linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 0/4] mm/sparse: Optimize memmap allocation during sparse_init()
@ 2018-05-21 10:15 Baoquan He
  2018-05-21 10:15 ` [PATCH v4 1/4] mm/sparse: Add a static variable nr_present_sections Baoquan He
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: Baoquan He @ 2018-05-21 10:15 UTC (permalink / raw)
  To: linux-kernel, akpm, dave.hansen, pagupta
  Cc: linux-mm, kirill.shutemov, Baoquan He

This is the v4 post. V3 can be found here:
https://lkml.org/lkml/2018/2/27/928

V1 can be found here:
https://www.spinics.net/lists/linux-mm/msg144486.html

In sparse_init(), two temporary pointer arrays, usemap_map and map_map,
are allocated with the size of NR_MEM_SECTIONS. They are used to store
each memory section's usemap and mem map if marked as present. In
5-level paging mode, this costs 512M of memory even though both arrays
are released at the end of sparse_init(). Systems with little memory,
such as a kdump kernel which usually has only about 256M, will fail to
boot because of the allocation failure if CONFIG_X86_5LEVEL=y.

This patchset optimizes the memmap allocation code to only use
usemap_map and map_map with the size of nr_present_sections. This
makes the kdump kernel boot up with the normal crashkernel='' setting
when CONFIG_X86_5LEVEL=y.
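
As a back-of-envelope check (assuming the usual x86_64 values
SECTION_SIZE_BITS = 27 and MAX_PHYSMEM_BITS = 46 for 4-level / 52 for
5-level paging, with 8-byte pointers):

  5-level: NR_MEM_SECTIONS = 1 << (52 - 27) = 2^25 sections
           2^25 * 8 bytes  = 256M per array, 512M for both arrays
  4-level: NR_MEM_SECTIONS = 1 << (46 - 27) = 2^19 sections
           2^19 * 8 bytes  = 4M per array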

Change log:
v3->v4:
  Address Dave's three concerns about patch 0004:

  Rename the variable 'idx_present' to 'nr_consumed_maps', which is
  used to index the memmap and usemap of the present sections.

  Add a check that 'nr_consumed_maps' does not go beyond
  nr_present_sections.

  Add a code comment above the final for_each_present_section_nr()
  loop to explain why 'nr_consumed_maps' needs to be increased in each
  iteration, whether or not 'ms->section_mem_map' needs to be cleared.

v2->v3:
  Make nr_present_sections __initdata and add a code comment, according
  to Andrew's suggestion.

  Rename the local variable 'i' to 'idx_present', which loops over the
  present sections, and improve the code. These were suggested by
  Dave and Pankaj.

  Add a new patch 0003 which adds a new parameter 'data_unit_size' to
  alloc_usemap_and_memmap(), in which we update 'data' to make it point
  at the next position. Its type 'void *' doesn't carry the information
  needed to do that, so the unit size has to be passed in. The code in
  patch 0004 is changed accordingly. This fixes a code bug found when
  testing memory deployed on multiple nodes.

v1->v2:
  Split out the addition of nr_present_sections as a separate patch for
  easier reviewing.

  Rewrite the patch log according to Dave's suggestion.

  Fix a code bug in patch 0002 reported by the test robot.

Baoquan He (4):
  mm/sparse: Add a static variable nr_present_sections
  mm/sparsemem: Defer the ms->section_mem_map clearing
  mm/sparse: Add a new parameter 'data_unit_size' for
    alloc_usemap_and_memmap
  mm/sparse: Optimize memmap allocation during sparse_init()

 mm/sparse-vmemmap.c |  6 ++---
 mm/sparse.c         | 72 +++++++++++++++++++++++++++++++++++++++++------------
 2 files changed, 59 insertions(+), 19 deletions(-)

-- 
2.13.6


* [PATCH v4 1/4] mm/sparse: Add a static variable nr_present_sections
  2018-05-21 10:15 [PATCH v4 0/4] mm/sparse: Optimize memmap allocation during sparse_init() Baoquan He
@ 2018-05-21 10:15 ` Baoquan He
  2018-06-07 22:46   ` Dave Hansen
  2018-05-21 10:15 ` [PATCH v4 2/4] mm/sparsemem: Defer the ms->section_mem_map clearing Baoquan He
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Baoquan He @ 2018-05-21 10:15 UTC (permalink / raw)
  To: linux-kernel, akpm, dave.hansen, pagupta
  Cc: linux-mm, kirill.shutemov, Baoquan He

It's used to record how many memory sections are marked as present
during system boot up, and will be used in the later patch.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/sparse.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/sparse.c b/mm/sparse.c
index 62eef264a7bd..48cf7b7982e2 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -202,6 +202,12 @@ static inline int next_present_section_nr(int section_nr)
 	      (section_nr <= __highest_present_section_nr));	\
 	     section_nr = next_present_section_nr(section_nr))
 
+/*
+ * Record how many memory sections are marked as present
+ * during system bootup.
+ */
+static int __initdata nr_present_sections;
+
 /* Record a memory area against a node. */
 void __init memory_present(int nid, unsigned long start, unsigned long end)
 {
@@ -231,6 +237,7 @@ void __init memory_present(int nid, unsigned long start, unsigned long end)
 			ms->section_mem_map = sparse_encode_early_nid(nid) |
 							SECTION_IS_ONLINE;
 			section_mark_present(ms);
+			nr_present_sections++;
 		}
 	}
 }
-- 
2.13.6


* [PATCH v4 2/4] mm/sparsemem: Defer the ms->section_mem_map clearing
  2018-05-21 10:15 [PATCH v4 0/4] mm/sparse: Optimize memmap allocation during sparse_init() Baoquan He
  2018-05-21 10:15 ` [PATCH v4 1/4] mm/sparse: Add a static variable nr_present_sections Baoquan He
@ 2018-05-21 10:15 ` Baoquan He
  2018-06-07 22:47   ` Dave Hansen
  2018-05-21 10:15 ` [PATCH v4 3/4] mm/sparse: Add a new parameter 'data_unit_size' for alloc_usemap_and_memmap Baoquan He
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Baoquan He @ 2018-05-21 10:15 UTC (permalink / raw)
  To: linux-kernel, akpm, dave.hansen, pagupta
  Cc: linux-mm, kirill.shutemov, Baoquan He

In sparse_init(), if CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y, the
system allocates one contiguous memory chunk for the mem maps on one
node and populates the relevant page tables to map the memory sections
one by one. If populating a certain mem section fails, a warning is
printed and its ->section_mem_map is cleared to cancel its marking as
present. Thus, the number of mem sections marked as present can
decrease during sparse_init() execution.

Here, just defer the ms->section_mem_map clearing of a section whose
page tables failed to populate until the last
for_each_present_section_nr() loop. This is in preparation for
optimizing the mem map allocation later.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/sparse-vmemmap.c |  1 -
 mm/sparse.c         | 12 ++++++++----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index bd0276d5f66b..640e68f8324b 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -303,7 +303,6 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
 		ms = __nr_to_section(pnum);
 		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
 		       __func__);
-		ms->section_mem_map = 0;
 	}
 
 	if (vmemmap_buf_start) {
diff --git a/mm/sparse.c b/mm/sparse.c
index 48cf7b7982e2..3d697292be08 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -453,7 +453,6 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
 		ms = __nr_to_section(pnum);
 		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
 		       __func__);
-		ms->section_mem_map = 0;
 	}
 }
 #endif /* !CONFIG_SPARSEMEM_VMEMMAP */
@@ -481,7 +480,6 @@ static struct page __init *sparse_early_mem_map_alloc(unsigned long pnum)
 
 	pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
 	       __func__);
-	ms->section_mem_map = 0;
 	return NULL;
 }
 #endif
@@ -585,17 +583,23 @@ void __init sparse_init(void)
 #endif
 
 	for_each_present_section_nr(0, pnum) {
+		struct mem_section *ms;
+		ms = __nr_to_section(pnum);
 		usemap = usemap_map[pnum];
-		if (!usemap)
+		if (!usemap) {
+			ms->section_mem_map = 0;
 			continue;
+		}
 
 #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
 		map = map_map[pnum];
 #else
 		map = sparse_early_mem_map_alloc(pnum);
 #endif
-		if (!map)
+		if (!map) {
+			ms->section_mem_map = 0;
 			continue;
+		}
 
 		sparse_init_one_section(__nr_to_section(pnum), pnum, map,
 								usemap);
-- 
2.13.6


* [PATCH v4 3/4] mm/sparse: Add a new parameter 'data_unit_size' for alloc_usemap_and_memmap
  2018-05-21 10:15 [PATCH v4 0/4] mm/sparse: Optimize memmap allocation during sparse_init() Baoquan He
  2018-05-21 10:15 ` [PATCH v4 1/4] mm/sparse: Add a static variable nr_present_sections Baoquan He
  2018-05-21 10:15 ` [PATCH v4 2/4] mm/sparsemem: Defer the ms->section_mem_map clearing Baoquan He
@ 2018-05-21 10:15 ` Baoquan He
  2018-06-07 22:48   ` Dave Hansen
  2018-05-21 10:15 ` [PATCH v4 4/4] mm/sparse: Optimize memmap allocation during sparse_init() Baoquan He
  2018-06-07 22:17 ` [PATCH v4 0/4] " Andrew Morton
  4 siblings, 1 reply; 17+ messages in thread
From: Baoquan He @ 2018-05-21 10:15 UTC (permalink / raw)
  To: linux-kernel, akpm, dave.hansen, pagupta
  Cc: linux-mm, kirill.shutemov, Baoquan He

It's used to pass the size of map data unit into alloc_usemap_and_memmap,
and is preparation for next patch.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/sparse.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index 3d697292be08..4a58f8809542 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -491,10 +491,12 @@ void __weak __meminit vmemmap_populate_print_last(void)
 /**
  *  alloc_usemap_and_memmap - memory alloction for pageblock flags and vmemmap
  *  @map: usemap_map for pageblock flags or mmap_map for vmemmap
+ *  @unit_size: size of map unit
  */
 static void __init alloc_usemap_and_memmap(void (*alloc_func)
 					(void *, unsigned long, unsigned long,
-					unsigned long, int), void *data)
+					unsigned long, int), void *data,
+					int data_unit_size)
 {
 	unsigned long pnum;
 	unsigned long map_count;
@@ -571,7 +573,8 @@ void __init sparse_init(void)
 	if (!usemap_map)
 		panic("can not allocate usemap_map\n");
 	alloc_usemap_and_memmap(sparse_early_usemaps_alloc_node,
-							(void *)usemap_map);
+				(void *)usemap_map,
+				sizeof(usemap_map[0]));
 
 #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
 	size2 = sizeof(struct page *) * NR_MEM_SECTIONS;
@@ -579,7 +582,8 @@ void __init sparse_init(void)
 	if (!map_map)
 		panic("can not allocate map_map\n");
 	alloc_usemap_and_memmap(sparse_early_mem_maps_alloc_node,
-							(void *)map_map);
+				(void *)map_map,
+				sizeof(map_map[0]));
 #endif
 
 	for_each_present_section_nr(0, pnum) {
-- 
2.13.6


* [PATCH v4 4/4] mm/sparse: Optimize memmap allocation during sparse_init()
  2018-05-21 10:15 [PATCH v4 0/4] mm/sparse: Optimize memmap allocation during sparse_init() Baoquan He
                   ` (2 preceding siblings ...)
  2018-05-21 10:15 ` [PATCH v4 3/4] mm/sparse: Add a new parameter 'data_unit_size' for alloc_usemap_and_memmap Baoquan He
@ 2018-05-21 10:15 ` Baoquan He
  2018-06-07 22:46   ` Dave Hansen
  2018-06-07 22:17 ` [PATCH v4 0/4] " Andrew Morton
  4 siblings, 1 reply; 17+ messages in thread
From: Baoquan He @ 2018-05-21 10:15 UTC (permalink / raw)
  To: linux-kernel, akpm, dave.hansen, pagupta
  Cc: linux-mm, kirill.shutemov, Baoquan He

In sparse_init(), two temporary pointer arrays, usemap_map and map_map,
are allocated with the size of NR_MEM_SECTIONS. They are used to store
each memory section's usemap and mem map if marked as present. With
the help of these two arrays, a contiguous memory chunk is allocated
for the usemaps and memmaps of the memory sections on one node. This
avoids excessive memory fragmentation. In the diagram below, '1'
indicates a present memory section and '0' an absent one. The number
'n' can be much smaller than NR_MEM_SECTIONS on most systems.

|1|1|1|1|0|0|0|0|1|1|0|0|...|1|0||1|0|...|1||0|1|...|0|
-------------------------------------------------------
 0 1 2 3         4 5         i   i+1     n-1   n

If populating the page tables to map one section's memmap fails, its
->section_mem_map will finally be cleared to indicate that it's not
present. After use, these two arrays are released at the end of
sparse_init().

In 4-level paging mode, each array costs 4M, which is negligible. In
5-level paging they cost 256M each, 512M altogether. A kdump kernel
usually reserves only very little memory, e.g. 256M. So even though the
arrays are temporary, the allocation is still not acceptable.

In fact, there's no need to allocate them with the size of
NR_MEM_SECTIONS. Since the ->section_mem_map clearing has been deferred
to the end, the number of present memory sections stays the same during
sparse_init(), until we finally clear a memory section's
->section_mem_map because its usemap or memmap was not correctly
handled. Thus, whenever a for_each_present_section_nr() loop is taken
in the middle, the i-th present memory section is always the same one.

Here, only allocate usemap_map and map_map with the size of
'nr_present_sections'. For the i-th present memory section, install its
usemap and memmap into usemap_map[i] and map_map[i] during allocation.
Then, in the last for_each_present_section_nr() loop which clears the
failed memory sections' ->section_mem_map, fetch the usemap and memmap
from the usemap_map[] and map_map[] arrays and set them into
mem_section[] accordingly.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/sparse-vmemmap.c |  5 +++--
 mm/sparse.c         | 43 ++++++++++++++++++++++++++++++++++---------
 2 files changed, 37 insertions(+), 11 deletions(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 640e68f8324b..a98ec2fb6915 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -281,6 +281,7 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
 	unsigned long pnum;
 	unsigned long size = sizeof(struct page) * PAGES_PER_SECTION;
 	void *vmemmap_buf_start;
+	int nr_consumed_maps = 0;
 
 	size = ALIGN(size, PMD_SIZE);
 	vmemmap_buf_start = __earlyonly_bootmem_alloc(nodeid, size * map_count,
@@ -297,8 +298,8 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
 		if (!present_section_nr(pnum))
 			continue;
 
-		map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
-		if (map_map[pnum])
+		map_map[nr_consumed_maps] = sparse_mem_map_populate(pnum, nodeid, NULL);
+		if (map_map[nr_consumed_maps++])
 			continue;
 		ms = __nr_to_section(pnum);
 		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
diff --git a/mm/sparse.c b/mm/sparse.c
index 4a58f8809542..94c3d7bf1b6a 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -388,6 +388,7 @@ static void __init sparse_early_usemaps_alloc_node(void *data,
 	unsigned long pnum;
 	unsigned long **usemap_map = (unsigned long **)data;
 	int size = usemap_size();
+	int nr_consumed_maps = 0;
 
 	usemap = sparse_early_usemaps_alloc_pgdat_section(NODE_DATA(nodeid),
 							  size * usemap_count);
@@ -399,9 +400,10 @@ static void __init sparse_early_usemaps_alloc_node(void *data,
 	for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
 		if (!present_section_nr(pnum))
 			continue;
-		usemap_map[pnum] = usemap;
+		usemap_map[nr_consumed_maps] = usemap;
 		usemap += size;
-		check_usemap_section_nr(nodeid, usemap_map[pnum]);
+		check_usemap_section_nr(nodeid, usemap_map[nr_consumed_maps]);
+		nr_consumed_maps++;
 	}
 }
 
@@ -426,29 +428,33 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
 	void *map;
 	unsigned long pnum;
 	unsigned long size = sizeof(struct page) * PAGES_PER_SECTION;
+	int nr_consumed_maps;
 
 	size = PAGE_ALIGN(size);
 	map = memblock_virt_alloc_try_nid_raw(size * map_count,
 					      PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
 					      BOOTMEM_ALLOC_ACCESSIBLE, nodeid);
 	if (map) {
+		nr_consumed_maps = 0;
 		for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
 			if (!present_section_nr(pnum))
 				continue;
-			map_map[pnum] = map;
+			map_map[nr_consumed_maps] = map;
 			map += size;
+			nr_consumed_maps++;
 		}
 		return;
 	}
 
 	/* fallback */
+	nr_consumed_maps = 0;
 	for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
 		struct mem_section *ms;
 
 		if (!present_section_nr(pnum))
 			continue;
-		map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
-		if (map_map[pnum])
+		map_map[nr_consumed_maps] = sparse_mem_map_populate(pnum, nodeid, NULL);
+		if (map_map[nr_consumed_maps++])
 			continue;
 		ms = __nr_to_section(pnum);
 		pr_err("%s: sparsemem memory map backing failed some memory will not be available\n",
@@ -528,6 +534,7 @@ static void __init alloc_usemap_and_memmap(void (*alloc_func)
 		/* new start, update count etc*/
 		nodeid_begin = nodeid;
 		pnum_begin = pnum;
+		data += map_count * data_unit_size;
 		map_count = 1;
 	}
 	/* ok, last chunk */
@@ -546,6 +553,7 @@ void __init sparse_init(void)
 	unsigned long *usemap;
 	unsigned long **usemap_map;
 	int size;
+	int nr_consumed_maps = 0;
 #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
 	int size2;
 	struct page **map_map;
@@ -568,7 +576,7 @@ void __init sparse_init(void)
 	 * powerpc need to call sparse_init_one_section right after each
 	 * sparse_early_mem_map_alloc, so allocate usemap_map at first.
 	 */
-	size = sizeof(unsigned long *) * NR_MEM_SECTIONS;
+	size = sizeof(unsigned long *) * nr_present_sections;
 	usemap_map = memblock_virt_alloc(size, 0);
 	if (!usemap_map)
 		panic("can not allocate usemap_map\n");
@@ -577,7 +585,7 @@ void __init sparse_init(void)
 				sizeof(usemap_map[0]));
 
 #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
-	size2 = sizeof(struct page *) * NR_MEM_SECTIONS;
+	size2 = sizeof(struct page *) * nr_present_sections;
 	map_map = memblock_virt_alloc(size2, 0);
 	if (!map_map)
 		panic("can not allocate map_map\n");
@@ -586,27 +594,44 @@ void __init sparse_init(void)
 				sizeof(map_map[0]));
 #endif
 
+	/* The numner of present sections stored in nr_present_sections
+	 * are kept the same since mem sections are marked as present in
+	 * memory_present(). In this for loop, we need check which sections
+	 * failed to allocate memmap or usemap, then clear its
+	 * ->section_mem_map accordingly. During this process, we need
+	 * increase 'alloc_usemap_and_memmap' whether its allocation of
+	 * memmap or usemap failed or not, so that after we handle the i-th
+	 * memory section, can get memmap and usemap of (i+1)-th section
+	 * correctly. */
 	for_each_present_section_nr(0, pnum) {
 		struct mem_section *ms;
+
+		if (nr_consumed_maps >= nr_present_sections) {
+			pr_err("nr_consumed_maps goes beyond nr_present_sections\n");
+			break;
+		}
 		ms = __nr_to_section(pnum);
-		usemap = usemap_map[pnum];
+		usemap = usemap_map[nr_consumed_maps];
 		if (!usemap) {
 			ms->section_mem_map = 0;
+			nr_consumed_maps++;
 			continue;
 		}
 
 #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
-		map = map_map[pnum];
+		map = map_map[nr_consumed_maps];
 #else
 		map = sparse_early_mem_map_alloc(pnum);
 #endif
 		if (!map) {
 			ms->section_mem_map = 0;
+			nr_consumed_maps++;
 			continue;
 		}
 
 		sparse_init_one_section(__nr_to_section(pnum), pnum, map,
 								usemap);
+		nr_consumed_maps++;
 	}
 
 	vmemmap_populate_print_last();
-- 
2.13.6


* Re: [PATCH v4 0/4] mm/sparse: Optimize memmap allocation during sparse_init()
  2018-05-21 10:15 [PATCH v4 0/4] mm/sparse: Optimize memmap allocation during sparse_init() Baoquan He
                   ` (3 preceding siblings ...)
  2018-05-21 10:15 ` [PATCH v4 4/4] mm/sparse: Optimize memmap allocation during sparse_init() Baoquan He
@ 2018-06-07 22:17 ` Andrew Morton
  4 siblings, 0 replies; 17+ messages in thread
From: Andrew Morton @ 2018-06-07 22:17 UTC (permalink / raw)
  To: Baoquan He; +Cc: linux-kernel, dave.hansen, pagupta, linux-mm, kirill.shutemov

On Mon, 21 May 2018 18:15:51 +0800 Baoquan He <bhe@redhat.com> wrote:

> This is the v4 post. V3 can be found here:
> https://lkml.org/lkml/2018/2/27/928
> 
> V1 can be found here:
> https://www.spinics.net/lists/linux-mm/msg144486.html
> 
> In sparse_init(), two temporary pointer arrays, usemap_map and map_map,
> are allocated with the size of NR_MEM_SECTIONS. They are used to store
> each memory section's usemap and mem map if marked as present. In
> 5-level paging mode, this costs 512M of memory even though both arrays
> are released at the end of sparse_init(). Systems with little memory,
> such as a kdump kernel which usually has only about 256M, will fail to
> boot because of the allocation failure if CONFIG_X86_5LEVEL=y.
> 
> This patchset optimizes the memmap allocation code to only use
> usemap_map and map_map with the size of nr_present_sections. This
> makes the kdump kernel boot up with the normal crashkernel='' setting
> when CONFIG_X86_5LEVEL=y.

We're still a bit short on review input for this series.  Hi, Dave!


* Re: [PATCH v4 4/4] mm/sparse: Optimize memmap allocation during sparse_init()
  2018-05-21 10:15 ` [PATCH v4 4/4] mm/sparse: Optimize memmap allocation during sparse_init() Baoquan He
@ 2018-06-07 22:46   ` Dave Hansen
  2018-06-08  7:28     ` Baoquan He
  0 siblings, 1 reply; 17+ messages in thread
From: Dave Hansen @ 2018-06-07 22:46 UTC (permalink / raw)
  To: Baoquan He, linux-kernel, akpm, pagupta; +Cc: linux-mm, kirill.shutemov

> @@ -297,8 +298,8 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
>  		if (!present_section_nr(pnum))
>  			continue;
>  
> -		map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
> -		if (map_map[pnum])
> +		map_map[nr_consumed_maps] = sparse_mem_map_populate(pnum, nodeid, NULL);
> +		if (map_map[nr_consumed_maps++])
>  			continue;
...

This looks wonky.

This seems to say that even if we fail to sparse_mem_map_populate() (it
returns NULL), we still consume a map.  Is that right?

>  	/* fallback */
> +	nr_consumed_maps = 0;
>  	for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
>  		struct mem_section *ms;
>  
>  		if (!present_section_nr(pnum))
>  			continue;
> -		map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
> -		if (map_map[pnum])
> +		map_map[nr_consumed_maps] = sparse_mem_map_populate(pnum, nodeid, NULL);
> +		if (map_map[nr_consumed_maps++])
>  			continue;

Same questionable pattern as above...

>  #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
> -	size2 = sizeof(struct page *) * NR_MEM_SECTIONS;
> +	size2 = sizeof(struct page *) * nr_present_sections;
>  	map_map = memblock_virt_alloc(size2, 0);
>  	if (!map_map)
>  		panic("can not allocate map_map\n");
> @@ -586,27 +594,44 @@ void __init sparse_init(void)
>  				sizeof(map_map[0]));
>  #endif
>  
> +	/* The numner of present sections stored in nr_present_sections

"number"?

Also, this is not correct comment CodingStyle.

> +	 * are kept the same since mem sections are marked as present in
> +	 * memory_present().

Are you just trying to say that we are not making sections present here?

>                         In this for loop, we need check which sections
> +	 * failed to allocate memmap or usemap, then clear its
> +	 * ->section_mem_map accordingly. During this process, we need
> +	 * increase 'alloc_usemap_and_memmap' whether its allocation of
> +	 * memmap or usemap failed or not, so that after we handle the i-th
> +	 * memory section, can get memmap and usemap of (i+1)-th section
> +	 * correctly. */

I'm really scratching my head over this comment.  For instance "increase
'alloc_usemap_and_memmap'" doesn't make any sense to me.  How do you
increase a function?

I wonder if you could give that comment another shot.


* Re: [PATCH v4 1/4] mm/sparse: Add a static variable nr_present_sections
  2018-05-21 10:15 ` [PATCH v4 1/4] mm/sparse: Add a static variable nr_present_sections Baoquan He
@ 2018-06-07 22:46   ` Dave Hansen
  0 siblings, 0 replies; 17+ messages in thread
From: Dave Hansen @ 2018-06-07 22:46 UTC (permalink / raw)
  To: Baoquan He, linux-kernel, akpm, pagupta; +Cc: linux-mm, kirill.shutemov

On 05/21/2018 03:15 AM, Baoquan He wrote:
> It's used to record how many memory sections are marked as present
> during system boot up, and will be used in the later patch.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>

I think this is fine:

Acked-By: Dave Hansen <dave.hansen@intel.com>


* Re: [PATCH v4 2/4] mm/sparsemem: Defer the ms->section_mem_map clearing
  2018-05-21 10:15 ` [PATCH v4 2/4] mm/sparsemem: Defer the ms->section_mem_map clearing Baoquan He
@ 2018-06-07 22:47   ` Dave Hansen
  0 siblings, 0 replies; 17+ messages in thread
From: Dave Hansen @ 2018-06-07 22:47 UTC (permalink / raw)
  To: Baoquan He, linux-kernel, akpm, pagupta; +Cc: linux-mm, kirill.shutemov

On 05/21/2018 03:15 AM, Baoquan He wrote:
> In sparse_init(), if CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y, the
> system allocates one contiguous memory chunk for the mem maps on one
> node and populates the relevant page tables to map the memory sections
> one by one. If populating a certain mem section fails, a warning is
> printed and its ->section_mem_map is cleared to cancel its marking as
> present. Thus, the number of mem sections marked as present can
> decrease during sparse_init() execution.
> 
> Here, just defer the ms->section_mem_map clearing of a section whose
> page tables failed to populate until the last
> for_each_present_section_nr() loop. This is in preparation for
> optimizing the mem map allocation later.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>

Acked-By: Dave Hansen <dave.hansen@intel.com>


* Re: [PATCH v4 3/4] mm/sparse: Add a new parameter 'data_unit_size' for alloc_usemap_and_memmap
  2018-05-21 10:15 ` [PATCH v4 3/4] mm/sparse: Add a new parameter 'data_unit_size' for alloc_usemap_and_memmap Baoquan He
@ 2018-06-07 22:48   ` Dave Hansen
  2018-06-08  6:27     ` Baoquan He
  0 siblings, 1 reply; 17+ messages in thread
From: Dave Hansen @ 2018-06-07 22:48 UTC (permalink / raw)
  To: Baoquan He, linux-kernel, akpm, pagupta; +Cc: linux-mm, kirill.shutemov

On 05/21/2018 03:15 AM, Baoquan He wrote:
> It's used to pass the size of map data unit into alloc_usemap_and_memmap,
> and is preparation for next patch.

This is the "what", but not the "why".  Could you add another sentence
or two to explain why we need this?


* Re: [PATCH v4 3/4] mm/sparse: Add a new parameter 'data_unit_size' for alloc_usemap_and_memmap
  2018-06-07 22:48   ` Dave Hansen
@ 2018-06-08  6:27     ` Baoquan He
  2018-06-08 14:20       ` Dave Hansen
  0 siblings, 1 reply; 17+ messages in thread
From: Baoquan He @ 2018-06-08  6:27 UTC (permalink / raw)
  To: Dave Hansen; +Cc: linux-kernel, akpm, pagupta, linux-mm, kirill.shutemov

On 06/07/18 at 03:48pm, Dave Hansen wrote:
> On 05/21/2018 03:15 AM, Baoquan He wrote:
> > It's used to pass the size of map data unit into alloc_usemap_and_memmap,
> > and is preparation for next patch.
> 
> This is the "what", but not the "why".  Could you add another sentence
> or two to explain why we need this?

Thanks for reviewing, Dave.

In alloc_usemap_and_memmap(), it will call
sparse_early_usemaps_alloc_node() or sparse_early_mem_maps_alloc_node()
to allocate the usemap and memmap for each node and install them into
usemap_map[] and map_map[]. Here we need to pass in the number of
present sections on this node so that we can move the pointers of
usemap_map[] and map_map[] to the right position.

What do you think about the above words?

Thanks
Baoquan


* Re: [PATCH v4 4/4] mm/sparse: Optimize memmap allocation during sparse_init()
  2018-06-07 22:46   ` Dave Hansen
@ 2018-06-08  7:28     ` Baoquan He
  2018-06-08  7:41       ` Baoquan He
  0 siblings, 1 reply; 17+ messages in thread
From: Baoquan He @ 2018-06-08  7:28 UTC (permalink / raw)
  To: Dave Hansen; +Cc: linux-kernel, akpm, pagupta, linux-mm, kirill.shutemov

On 06/07/18 at 03:46pm, Dave Hansen wrote:
> > @@ -297,8 +298,8 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
> >  		if (!present_section_nr(pnum))
> >  			continue;
> >  
> > -		map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
> > -		if (map_map[pnum])
> > +		map_map[nr_consumed_maps] = sparse_mem_map_populate(pnum, nodeid, NULL);
> > +		if (map_map[nr_consumed_maps++])
> >  			continue;
> ...
> 
> This looks wonky.
> 
> This seems to say that even if we fail to sparse_mem_map_populate() (it
> returns NULL), we still consume a map.  Is that right?

Yes, the usemap_map[] and map_map[] allocated in sparse_init() are two
temporary pointer arrays. Here, if sparse_mem_map_populate() succeeds,
it returns the starting address of the page structs in this section,
and map_map[i] stores that address for later use. If it fails,
map_map[i] is NULL; we will check this value in sparse_init(), decide
that this section is invalid, and then clear it with
'ms->section_mem_map = 0;'.

This is done on purpose.
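
To make that slot contract concrete, here is a minimal userspace sketch
(the names populate(), backing[] and NR_PRESENT are made up for
illustration, this is not the kernel code): the producer loop records
NULL on failure and still advances nr_consumed_maps, so the consumer
loop later sees exactly one slot per present section, in order.

#include <stdio.h>

#define NR_PRESENT 5

static char backing[NR_PRESENT][16];

/* Pretend the third section fails to get its memmap populated. */
static void *populate(int pnum)
{
	return pnum == 2 ? NULL : backing[pnum];
}

int main(void)
{
	void *map_map[NR_PRESENT];
	int nr_consumed_maps = 0;
	int pnum;

	/* Producer: store NULL on failure, still consume the slot. */
	for (pnum = 0; pnum < NR_PRESENT; pnum++) {
		map_map[nr_consumed_maps] = populate(pnum);
		nr_consumed_maps++;
	}

	/* Consumer: one slot per present section, in the same order. */
	nr_consumed_maps = 0;
	for (pnum = 0; pnum < NR_PRESENT; pnum++) {
		if (!map_map[nr_consumed_maps])
			printf("section %d: would clear ->section_mem_map\n",
			       pnum);
		nr_consumed_maps++;
	}
	return 0;
}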

> 
> >  	/* fallback */
> > +	nr_consumed_maps = 0;
> >  	for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
> >  		struct mem_section *ms;
> >  
> >  		if (!present_section_nr(pnum))
> >  			continue;
> > -		map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
> > -		if (map_map[pnum])
> > +		map_map[nr_consumed_maps] = sparse_mem_map_populate(pnum, nodeid, NULL);
> > +		if (map_map[nr_consumed_maps++])
> >  			continue;
> 
> Same questionable pattern as above...

Ditto

> 
> >  #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
> > -	size2 = sizeof(struct page *) * NR_MEM_SECTIONS;
> > +	size2 = sizeof(struct page *) * nr_present_sections;
> >  	map_map = memblock_virt_alloc(size2, 0);
> >  	if (!map_map)
> >  		panic("can not allocate map_map\n");
> > @@ -586,27 +594,44 @@ void __init sparse_init(void)
> >  				sizeof(map_map[0]));
> >  #endif
> >  
> > +	/* The numner of present sections stored in nr_present_sections
> 
> "number"?

Yes, will change. Thanks.

> 
> Also, this is not correct comment CodingStyle.

Agree, will update.

> 
> > +	 * are kept the same since mem sections are marked as present in
> > +	 * memory_present().
> 
> Are you just trying to say that we are not making sections present here?

Yes, 'present' has different meanings at different stages. For
struct mem_section **mem_section, we allocate an array prepared to
store a pointer to each mem_section in the system.

1) In sparse_memory_present_with_active_regions(), we walk over all
memory regions in memblock and mark those memory sections as 'present'
if they are not holes. Note that we say a section is present because it
exists in memblock.

2) In sparse_init(), we allocate a usemap and memmap for each memory
section; for better memory management, we try to allocate the memory
from that node in one go when handling that node's memory sections.
Here, if any failure happens on a certain memory section, e.g. the
sparse_mem_map_populate() failure case you mentioned, we clear it with
"ms->section_mem_map = 0" to make it not present. Because if we still
think it's present, and continue using it, the mm system will
apparently be corrupted.

> 
> >                         In this for loop, we need check which sections
> > +	 * failed to allocate memmap or usemap, then clear its
> > +	 * ->section_mem_map accordingly. During this process, we need
> > +	 * increase 'alloc_usemap_and_memmap' whether its allocation of
> > +	 * memmap or usemap failed or not, so that after we handle the i-th
> > +	 * memory section, can get memmap and usemap of (i+1)-th section
> > +	 * correctly. */
> 
> I'm really scratching my head over this comment.  For instance "increase
> 'alloc_usemap_and_memmap'" doesn't make any sense to me.  How do you
> increase a function?

My bad, Dave, it should be 'nr_consumed_maps', which is the index of
the present sections marked in stage 1) above. I must have made a
wrong copy&paste.

Let me put it with a concrete example: say one system has 10 memory
sections, 5 on each node. Then its usemap_map[0..9] and map_map[0..9]
need to be indexed with nr_consumed_maps from 0 to 9. Suppose one map
allocation fails, say for the 5th section: in
alloc_usemap_and_memmap() we don't clear its ms->section_mem_map,
meaning it's still present; only its usemap_map[5] or map_map[5] is
NULL, and we continue handling the 6th section. Not until the last
for_each_present_section_nr() loop in sparse_init(), where we iterate
over all 10 memory sections, do we find that the 5th section's map is
not OK; then it has to be taken off from the mm system, otherwise
corruption will happen if the 5th section's memory is accessed.

> 
> I wonder if you could give that comment another shot.


* Re: [PATCH v4 4/4] mm/sparse: Optimize memmap allocation during sparse_init()
  2018-06-08  7:28     ` Baoquan He
@ 2018-06-08  7:41       ` Baoquan He
  0 siblings, 0 replies; 17+ messages in thread
From: Baoquan He @ 2018-06-08  7:41 UTC (permalink / raw)
  To: Dave Hansen; +Cc: linux-kernel, akpm, pagupta, linux-mm, kirill.shutemov

On 06/08/18 at 03:28pm, Baoquan He wrote:
> On 06/07/18 at 03:46pm, Dave Hansen wrote:
> > > @@ -297,8 +298,8 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
> > >  		if (!present_section_nr(pnum))
> > >  			continue;
> > >  
> > > -		map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
> > > -		if (map_map[pnum])
> > > +		map_map[nr_consumed_maps] = sparse_mem_map_populate(pnum, nodeid, NULL);
> > > +		if (map_map[nr_consumed_maps++])
> > >  			continue;
> > ...
> > 
> > This looks wonky.
> > 
> > This seems to say that even if we fail to sparse_mem_map_populate() (it
> > returns NULL), we still consume a map.  Is that right?
> 
> Yes, the usemap_map[] and map_map[] allocated in sparse_init() are two
> temporary pointer arrays. Here, if sparse_mem_map_populate() succeeds,
> it returns the starting address of the page structs in this section,
> and map_map[i] stores that address for later use. If it fails,
> map_map[i] is NULL; we will check this value in sparse_init(), decide
> that this section is invalid, and then clear it with
> 'ms->section_mem_map = 0;'.
> 
> This is done on purpose.
> 
> > 
> > >  	/* fallback */
> > > +	nr_consumed_maps = 0;
> > >  	for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
> > >  		struct mem_section *ms;
> > >  
> > >  		if (!present_section_nr(pnum))
> > >  			continue;
> > > -		map_map[pnum] = sparse_mem_map_populate(pnum, nodeid, NULL);
> > > -		if (map_map[pnum])
> > > +		map_map[nr_consumed_maps] = sparse_mem_map_populate(pnum, nodeid, NULL);
> > > +		if (map_map[nr_consumed_maps++])
> > >  			continue;
> > 
> > Same questionable pattern as above...
> 
> Ditto
> 
> > 
> > >  #ifdef CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
> > > -	size2 = sizeof(struct page *) * NR_MEM_SECTIONS;
> > > +	size2 = sizeof(struct page *) * nr_present_sections;
> > >  	map_map = memblock_virt_alloc(size2, 0);
> > >  	if (!map_map)
> > >  		panic("can not allocate map_map\n");
> > > @@ -586,27 +594,44 @@ void __init sparse_init(void)
> > >  				sizeof(map_map[0]));
> > >  #endif
> > >  
> > > +	/* The numner of present sections stored in nr_present_sections
> > 
> > "number"?
> 
> Yes, will change. Thanks.
> 
> > 
> > Also, this is not correct comment CodingStyle.
> 
> Agree, will update.
> 
> > 
> > > +	 * are kept the same since mem sections are marked as present in
> > > +	 * memory_present().
> > 
> > Are you just trying to say that we are not making sections present here?
> 
> Yes, 'present' has different meanings at different stages. For
> struct mem_section **mem_section, we allocate an array prepared to
> store a pointer to each mem_section in the system.
> 
> 1) In sparse_memory_present_with_active_regions(), we walk over all
> memory regions in memblock and mark those memory sections as 'present'
> if they are not holes. Note that we say a section is present because it
> exists in memblock.
> 
> 2) In sparse_init(), we allocate a usemap and memmap for each memory
> section; for better memory management, we try to allocate the memory
> from that node in one go when handling that node's memory sections.
> Here, if any failure happens on a certain memory section, e.g. the
> sparse_mem_map_populate() failure case you mentioned, we clear it with
> "ms->section_mem_map = 0" to make it not present. Because if we still

Here, I mean we clear it with "ms->section_mem_map = 0" in the last
for_each_present_section_nr() loop in sparse_init(), not during the
alloc_usemap_and_memmap() call. At that stage it's still present,
meaning it owns memory regions in memblock, and its usemap and memmap
have been allocated and installed correctly.

> think it's present, and continue using it, the mm system will
> apparently be corrupted.
> 
> > 
> > >                         In this for loop, we need check which sections
> > > +	 * failed to allocate memmap or usemap, then clear its
> > > +	 * ->section_mem_map accordingly. During this process, we need
> > > +	 * increase 'alloc_usemap_and_memmap' whether its allocation of
> > > +	 * memmap or usemap failed or not, so that after we handle the i-th
> > > +	 * memory section, can get memmap and usemap of (i+1)-th section
> > > +	 * correctly. */
> > 
> > I'm really scratching my head over this comment.  For instance "increase
> > 'alloc_usemap_and_memmap'" doesn't make any sense to me.  How do you
> > increase a function?
> 
> My bad, Dave, it should be 'nr_consumed_maps', which is the index of
> the present sections marked in stage 1) above. I must have made a
> wrong copy&paste.
> 
> Let me put it with a concrete example: say one system has 10 memory
> sections, 5 on each node. Then its usemap_map[0..9] and map_map[0..9]
> need to be indexed with nr_consumed_maps from 0 to 9. Suppose one map
> allocation fails, say for the 5th section: in
> alloc_usemap_and_memmap() we don't clear its ms->section_mem_map,
> meaning it's still present; only its usemap_map[5] or map_map[5] is
> NULL, and we continue handling the 6th section. Not until the last
> for_each_present_section_nr() loop in sparse_init(), where we iterate
> over all 10 memory sections, do we find that the 5th section's map is
> not OK; then it has to be taken off from the mm system, otherwise
> corruption will happen if the 5th section's memory is accessed.
> 
> > 
> > I wonder if you could give that comment another shot.
> 


* Re: [PATCH v4 3/4] mm/sparse: Add a new parameter 'data_unit_size' for alloc_usemap_and_memmap
  2018-06-08  6:27     ` Baoquan He
@ 2018-06-08 14:20       ` Dave Hansen
  2018-06-08 15:17         ` Baoquan He
  0 siblings, 1 reply; 17+ messages in thread
From: Dave Hansen @ 2018-06-08 14:20 UTC (permalink / raw)
  To: Baoquan He; +Cc: linux-kernel, akpm, pagupta, linux-mm, kirill.shutemov

On 06/07/2018 11:27 PM, Baoquan He wrote:
> In alloc_usemap_and_memmap(), it will call
> sparse_early_usemaps_alloc_node() or sparse_early_mem_maps_alloc_node()
> to allocate the usemap and memmap for each node and install them into
> usemap_map[] and map_map[]. Here we need to pass in the number of
> present sections on this node so that we can move the pointers of
> usemap_map[] and map_map[] to the right position.
> 
> What do you think about the above words?

But you're now passing in the size of the data structure.  Why is that
needed all of a sudden?


* Re: [PATCH v4 3/4] mm/sparse: Add a new parameter 'data_unit_size' for alloc_usemap_and_memmap
  2018-06-08 14:20       ` Dave Hansen
@ 2018-06-08 15:17         ` Baoquan He
  2018-06-08 16:13           ` Dave Hansen
  0 siblings, 1 reply; 17+ messages in thread
From: Baoquan He @ 2018-06-08 15:17 UTC (permalink / raw)
  To: Dave Hansen; +Cc: linux-kernel, akpm, pagupta, linux-mm, kirill.shutemov

On 06/08/18 at 07:20am, Dave Hansen wrote:
> On 06/07/2018 11:27 PM, Baoquan He wrote:
> > In alloc_usemap_and_memmap(), it will call
> > sparse_early_usemaps_alloc_node() or sparse_early_mem_maps_alloc_node()
> > to allocate the usemap and memmap for each node and install them into
> > usemap_map[] and map_map[]. Here we need to pass in the number of
> > present sections on this node so that we can move the pointers of
> > usemap_map[] and map_map[] to the right position.
> > 
> > What do you think about the above words?
> 
> But you're now passing in the size of the data structure.  Why is that
> needed all of a sudden?

Oh, it's the size of the data structure. Because
alloc_usemap_and_memmap() is reused for both usemap and memmap
allocation, inside alloc_usemap_and_memmap() we need to move the
passed-in pointer, which points at the starting address of usemap_map
or map_map, forward to a new position which points at the next starting
address for the new node.

You can see we pass in usemap_map, map_map, and the array element
size.

void __init sparse_init(void)
{
	...
        alloc_usemap_and_memmap(sparse_early_usemaps_alloc_node,                                                                                  
                                (void *)usemap_map,
                                sizeof(usemap_map[0]));
	...
        alloc_usemap_and_memmap(sparse_early_mem_maps_alloc_node,
                                (void *)map_map,
                                sizeof(map_map[0]));
	...
}

Then, inside alloc_usemap_and_memmap(), for each node we get how many
present sections are on this node and call the alloc_func() hook. Then
we update the pointer to point at a new position in usemap_map[] or
map_map[].

Here, usemap_map[] holds (unsigned long *) and map_map[] holds
(struct page *) elements. Even though both of them are pointers, I think
it might not be good to assume that in alloc_usemap_and_memmap(),
because alloc_usemap_and_memmap() doesn't know what is stored in its 2nd
parameter, namely (void *data). So add this new parameter to tell it.

Combined with patch 4/4, it might be easier to understand.

static void __init alloc_usemap_and_memmap(void (*alloc_func)
..
{
	...
	for_each_present_section_nr(pnum_begin + 1, pnum) {
                alloc_func(data, pnum_begin, pnum,                                                                                                
                                                map_count, nodeid_begin);
		...
		data += map_count * data_unit_size; 
		...
	}
	...
}
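
The 'data += map_count * data_unit_size' line above does arithmetic on
a void * pointer, which works because the kernel is built with the GNU C
dialect (sizeof(void) is treated as 1). A minimal portable sketch of the
same idea, with the helper name advance_units() made up for
illustration:

#include <stddef.h>

/*
 * Advance an opaque array pointer by 'nr' elements when only the
 * element size is known at the call site. Casting to char * keeps
 * this strictly conforming C; the kernel code just does the
 * equivalent arithmetic directly on the void * 'data'.
 */
static void *advance_units(void *p, unsigned long nr, size_t unit)
{
	return (char *)p + nr * unit;
}

/* e.g.: data = advance_units(data, map_count, data_unit_size); */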


* Re: [PATCH v4 3/4] mm/sparse: Add a new parameter 'data_unit_size' for alloc_usemap_and_memmap
  2018-06-08 15:17         ` Baoquan He
@ 2018-06-08 16:13           ` Dave Hansen
  2018-06-10 23:32             ` Baoquan He
  0 siblings, 1 reply; 17+ messages in thread
From: Dave Hansen @ 2018-06-08 16:13 UTC (permalink / raw)
  To: Baoquan He; +Cc: linux-kernel, akpm, pagupta, linux-mm, kirill.shutemov

On 06/08/2018 08:17 AM, Baoquan He wrote:
> 
> Then, inside alloc_usemap_and_memmap(), for each node we get how many
> present sections are on this node and call the alloc_func() hook. Then
> we update the pointer to point at a new position in usemap_map[] or
> map_map[].

I think this is the key.

alloc_usemap_and_memmap() is passed in a "void *" that it needs to
update as things get consumed.  But, it knows only the quantity of
objects consumed and not the type.  This effectively tells it enough
about the type to let it update the pointer as objects are consumed.

Right?

Can we get that in the changelog?


* Re: [PATCH v4 3/4] mm/sparse: Add a new parameter 'data_unit_size' for alloc_usemap_and_memmap
  2018-06-08 16:13           ` Dave Hansen
@ 2018-06-10 23:32             ` Baoquan He
  0 siblings, 0 replies; 17+ messages in thread
From: Baoquan He @ 2018-06-10 23:32 UTC (permalink / raw)
  To: Dave Hansen; +Cc: linux-kernel, akpm, pagupta, linux-mm, kirill.shutemov

On 06/08/18 at 09:13am, Dave Hansen wrote:
> On 06/08/2018 08:17 AM, Baoquan He wrote:
> > 
> > Then, inside alloc_usemap_and_memmap(), for each node we get how many
> > present sections are on this node and call the alloc_func() hook. Then
> > we update the pointer to point at a new position in usemap_map[] or
> > map_map[].
> 
> I think this is the key.
> 
> alloc_usemap_and_memmap() is passed in a "void *" that it needs to
> update as things get consumed.  But, it knows only the quantity of
> objects consumed and not the type.  This effectively tells it enough
> about the type to let it update the pointer as objects are consumed.
> 
> Right?
> 
> Can we get that in the changelog?

Hmm, I like the above sentences very much, thanks.

Do you mean putting it in the changelog, not in the commit log of patch
3/4, right? I can do this when I repost.

Thanks
Baoquan


Thread overview: 17+ messages
2018-05-21 10:15 [PATCH v4 0/4] mm/sparse: Optimize memmap allocation during sparse_init() Baoquan He
2018-05-21 10:15 ` [PATCH v4 1/4] mm/sparse: Add a static variable nr_present_sections Baoquan He
2018-06-07 22:46   ` Dave Hansen
2018-05-21 10:15 ` [PATCH v4 2/4] mm/sparsemem: Defer the ms->section_mem_map clearing Baoquan He
2018-06-07 22:47   ` Dave Hansen
2018-05-21 10:15 ` [PATCH v4 3/4] mm/sparse: Add a new parameter 'data_unit_size' for alloc_usemap_and_memmap Baoquan He
2018-06-07 22:48   ` Dave Hansen
2018-06-08  6:27     ` Baoquan He
2018-06-08 14:20       ` Dave Hansen
2018-06-08 15:17         ` Baoquan He
2018-06-08 16:13           ` Dave Hansen
2018-06-10 23:32             ` Baoquan He
2018-05-21 10:15 ` [PATCH v4 4/4] mm/sparse: Optimize memmap allocation during sparse_init() Baoquan He
2018-06-07 22:46   ` Dave Hansen
2018-06-08  7:28     ` Baoquan He
2018-06-08  7:41       ` Baoquan He
2018-06-07 22:17 ` [PATCH v4 0/4] " Andrew Morton
