* [PATCH v4 0/5] mm/hotplug: Only use subsection map for VMEMMAP
@ 2020-03-12 12:44 Baoquan He
  2020-03-12 12:44 ` [PATCH v4 1/5] mm/sparse.c: introduce new function fill_subsection_map() Baoquan He
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Baoquan He @ 2020-03-12 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, akpm, mhocko, david, richard.weiyang, dan.j.williams, bhe

Memory sub-section hotplug was added to fix the issue that nvdimm could
be mapped at a non-section-aligned starting address. A subsection map
was added to struct mem_section_usage to implement it.

However, config ZONE_DEVICE depends on SPARSEMEM_VMEMMAP. This means
the subsection map only makes sense when SPARSEMEM_VMEMMAP is enabled.
For classic sparse, the subsection map is meaningless and confusing.

As for classic sparse not supporting subsection hotplug, Dan said it is
mostly because the effort and maintenance burden outweighs the benefit.
Besides, all current 64-bit architectures enable
SPARSEMEM_VMEMMAP_ENABLE by default.
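
For reference, a minimal sketch of the end state this series moves
toward, where the subsection map is only declared when SPARSEMEM_VMEMMAP
is configured (see patch 3/5 for the actual diff):

  struct mem_section_usage {
  #ifdef CONFIG_SPARSEMEM_VMEMMAP
  	/* Only VMEMMAP-based sparse supports sub-section hotplug. */
  	DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
  #endif
  	/* See declaration of similar field in struct zone */
  	unsigned long pageblock_flags[0];
  };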

This patchset is rebased on the old patch 1 of v3 and an appended fix
which doesn't initialize the local variable 'empty'.

The old patch 7 of v3 has been taken out to be posted separately, since
it's a cleanup patch not related to this subsection map handling.

Changelog
v3->v4:
  No big changes; mainly addressed concerns from David.

v2->v3:
  David spotted a code bug in the old patch 1: the old local variable
  subsection_map becomes invalid once ms->usage is reset. Add a local
  variable 'empty' to cache whether subsection_map is empty.

  Remove the kernel-doc comments for the newly added functions
  fill_subsection_map() and clear_subsection_map(). Michal and David
  suggested this.

  Add a new static function is_subsection_map_empty() to check whether
  the handled subsection map is empty, rather than returning that state
  from clear_subsection_map(). David suggested this.

  Add documentation about only VMEMMAP supporting sub-section hotplug,
  and about check_pfn_span() gating the alignment and size. Michal
  helped rephrase the wording.

v1->v2:
  Move the hot-remove fixing patch to the front so that people can
  backport it more easily. Suggested by David.

  Split the old patch which invalidates the sub-section map in the
  !VMEMMAP case into two patches, patch 4/7 and patch 6/7. This makes
  patch reviewing easier. Suggested by David.

  Take Wei Yang's fix out to be posted separately, since it has already
  been reviewed and acked. Suggested by Andrew.

  Fix a code comment mistake in the current patch 2/7. Spotted by Wei
  Yang during review.

Baoquan He (5):
  mm/sparse.c: introduce new function fill_subsection_map()
  mm/sparse.c: introduce a new function clear_subsection_map()
  mm/sparse.c: only use subsection map in VMEMMAP case
  mm/sparse.c: add note about only VMEMMAP supporting sub-section
    hotplug
  mm/sparse.c: move subsection_map related functions together

 include/linux/mmzone.h |   2 +
 mm/sparse.c            | 136 ++++++++++++++++++++++++++++-------------
 2 files changed, 95 insertions(+), 43 deletions(-)

-- 
2.17.2




* [PATCH v4 1/5] mm/sparse.c: introduce new function fill_subsection_map()
  2020-03-12 12:44 [PATCH v4 0/5] mm/hotplug: Only use subsection map for VMEMMAP Baoquan He
@ 2020-03-12 12:44 ` Baoquan He
  2020-03-12 13:30   ` Pankaj Gupta
  2020-03-12 12:44 ` [PATCH v4 2/5] mm/sparse.c: introduce a new function clear_subsection_map() Baoquan He
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Baoquan He @ 2020-03-12 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, akpm, mhocko, david, richard.weiyang, dan.j.williams, bhe

Factor out the code that fills the subsection map from section_activate()
into a new function, fill_subsection_map(). This makes section_activate()
cleaner and easier to follow.

Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 mm/sparse.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index cf28505e82c5..5919bc5b1547 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -792,24 +792,15 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
 		ms->section_mem_map = (unsigned long)NULL;
 }
 
-static struct page * __meminit section_activate(int nid, unsigned long pfn,
-		unsigned long nr_pages, struct vmem_altmap *altmap)
+static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
 {
-	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
 	struct mem_section *ms = __pfn_to_section(pfn);
-	struct mem_section_usage *usage = NULL;
+	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
 	unsigned long *subsection_map;
-	struct page *memmap;
 	int rc = 0;
 
 	subsection_mask_set(map, pfn, nr_pages);
 
-	if (!ms->usage) {
-		usage = kzalloc(mem_section_usage_size(), GFP_KERNEL);
-		if (!usage)
-			return ERR_PTR(-ENOMEM);
-		ms->usage = usage;
-	}
 	subsection_map = &ms->usage->subsection_map[0];
 
 	if (bitmap_empty(map, SUBSECTIONS_PER_SECTION))
@@ -820,6 +811,25 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn,
 		bitmap_or(subsection_map, map, subsection_map,
 				SUBSECTIONS_PER_SECTION);
 
+	return rc;
+}
+
+static struct page * __meminit section_activate(int nid, unsigned long pfn,
+		unsigned long nr_pages, struct vmem_altmap *altmap)
+{
+	struct mem_section *ms = __pfn_to_section(pfn);
+	struct mem_section_usage *usage = NULL;
+	struct page *memmap;
+	int rc = 0;
+
+	if (!ms->usage) {
+		usage = kzalloc(mem_section_usage_size(), GFP_KERNEL);
+		if (!usage)
+			return ERR_PTR(-ENOMEM);
+		ms->usage = usage;
+	}
+
+	rc = fill_subsection_map(pfn, nr_pages);
 	if (rc) {
 		if (usage)
 			ms->usage = NULL;
-- 
2.17.2




* [PATCH v4 2/5] mm/sparse.c: introduce a new function clear_subsection_map()
  2020-03-12 12:44 [PATCH v4 0/5] mm/hotplug: Only use subsection map for VMEMMAP Baoquan He
  2020-03-12 12:44 ` [PATCH v4 1/5] mm/sparse.c: introduce new function fill_subsection_map() Baoquan He
@ 2020-03-12 12:44 ` Baoquan He
  2020-03-12 13:26   ` Pankaj Gupta
  2020-03-12 12:44 ` [PATCH v4 3/5] mm/sparse.c: only use subsection map in VMEMMAP case Baoquan He
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Baoquan He @ 2020-03-12 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, akpm, mhocko, david, richard.weiyang, dan.j.williams, bhe

Factor out the code which clears the subsection map of a memory region
from section_deactivate() into clear_subsection_map().

Also add a helper function, is_subsection_map_empty(), to check whether
the current subsection map is empty.

Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 mm/sparse.c | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index 5919bc5b1547..0be4d4ed96de 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -726,15 +726,11 @@ static void free_map_bootmem(struct page *memmap)
 }
 #endif /* CONFIG_SPARSEMEM_VMEMMAP */
 
-static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
-		struct vmem_altmap *altmap)
+static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
 {
 	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
 	DECLARE_BITMAP(tmp, SUBSECTIONS_PER_SECTION) = { 0 };
 	struct mem_section *ms = __pfn_to_section(pfn);
-	bool section_is_early = early_section(ms);
-	struct page *memmap = NULL;
-	bool empty;
 	unsigned long *subsection_map = ms->usage
 		? &ms->usage->subsection_map[0] : NULL;
 
@@ -745,8 +741,28 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
 	if (WARN(!subsection_map || !bitmap_equal(tmp, map, SUBSECTIONS_PER_SECTION),
 				"section already deactivated (%#lx + %ld)\n",
 				pfn, nr_pages))
-		return;
+		return -EINVAL;
 
+	bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);
+	return 0;
+}
+
+static bool is_subsection_map_empty(struct mem_section *ms)
+{
+	return bitmap_empty(&ms->usage->subsection_map[0],
+			    SUBSECTIONS_PER_SECTION);
+}
+
+static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
+		struct vmem_altmap *altmap)
+{
+	struct mem_section *ms = __pfn_to_section(pfn);
+	bool section_is_early = early_section(ms);
+	struct page *memmap = NULL;
+	bool empty;
+
+	if (clear_subsection_map(pfn, nr_pages))
+		return;
 	/*
 	 * There are 3 cases to handle across two configurations
 	 * (SPARSEMEM_VMEMMAP={y,n}):
@@ -764,8 +780,7 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
 	 *
 	 * For 2/ and 3/ the SPARSEMEM_VMEMMAP={y,n} cases are unified
 	 */
-	bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);
-	empty = bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION);
+	empty = is_subsection_map_empty(ms);
 	if (empty) {
 		unsigned long section_nr = pfn_to_section_nr(pfn);
 
-- 
2.17.2




* [PATCH v4 3/5] mm/sparse.c: only use subsection map in VMEMMAP case
  2020-03-12 12:44 [PATCH v4 0/5] mm/hotplug: Only use subsection map for VMEMMAP Baoquan He
  2020-03-12 12:44 ` [PATCH v4 1/5] mm/sparse.c: introduce new function fill_subsection_map() Baoquan He
  2020-03-12 12:44 ` [PATCH v4 2/5] mm/sparse.c: introduce a new function clear_subsection_map() Baoquan He
@ 2020-03-12 12:44 ` Baoquan He
  2020-03-12 12:44 ` [PATCH v4 4/5] mm/sparse.c: add note about only VMEMMAP supporting sub-section hotplug Baoquan He
  2020-03-12 12:44 ` [PATCH v4 5/5] mm/sparse.c: move subsection_map related functions together Baoquan He
  4 siblings, 0 replies; 9+ messages in thread
From: Baoquan He @ 2020-03-12 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, akpm, mhocko, david, richard.weiyang, dan.j.williams, bhe

Currently, to support adding subsection-aligned memory regions for pmem,
a subsection map is used to track which subsections are present.

However, config ZONE_DEVICE depends on SPARSEMEM_VMEMMAP. This means the
subsection map only makes sense when SPARSEMEM_VMEMMAP is enabled. For
classic sparse it is meaningless; even worse, it may confuse people
reading code related to classic sparse.

As for classic sparse not supporting subsection hotplug, Dan said it is
mostly because the effort and maintenance burden outweighs the benefit.
Besides, all current 64-bit architectures enable
SPARSEMEM_VMEMMAP_ENABLE by default.

For the above reasons, there is no need to provide the subsection map
and the related handling for classic sparse. Let's remove them.

Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mmzone.h |  2 ++
 mm/sparse.c            | 25 +++++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 42b77d3b68e8..f3f264826423 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1143,7 +1143,9 @@ static inline unsigned long section_nr_to_pfn(unsigned long sec)
 #define SUBSECTION_ALIGN_DOWN(pfn) ((pfn) & PAGE_SUBSECTION_MASK)
 
 struct mem_section_usage {
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
 	DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
+#endif
 	/* See declaration of similar field in struct zone */
 	unsigned long pageblock_flags[0];
 };
diff --git a/mm/sparse.c b/mm/sparse.c
index 0be4d4ed96de..117fe4554c38 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -209,6 +209,7 @@ static inline unsigned long first_present_section_nr(void)
 	return next_present_section_nr(-1);
 }
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
 static void subsection_mask_set(unsigned long *map, unsigned long pfn,
 		unsigned long nr_pages)
 {
@@ -243,6 +244,11 @@ void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
 		nr_pages -= pfns;
 	}
 }
+#else
+void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
+{
+}
+#endif
 
 /* Record a memory area against a node. */
 void __init memory_present(int nid, unsigned long start, unsigned long end)
@@ -726,6 +732,7 @@ static void free_map_bootmem(struct page *memmap)
 }
 #endif /* CONFIG_SPARSEMEM_VMEMMAP */
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
 static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
 {
 	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
@@ -752,6 +759,17 @@ static bool is_subsection_map_empty(struct mem_section *ms)
 	return bitmap_empty(&ms->usage->subsection_map[0],
 			    SUBSECTIONS_PER_SECTION);
 }
+#else
+static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+	return 0;
+}
+
+static bool is_subsection_map_empty(struct mem_section *ms)
+{
+	return true;
+}
+#endif
 
 static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
 		struct vmem_altmap *altmap)
@@ -807,6 +825,7 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
 		ms->section_mem_map = (unsigned long)NULL;
 }
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
 static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
 {
 	struct mem_section *ms = __pfn_to_section(pfn);
@@ -828,6 +847,12 @@ static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
 
 	return rc;
 }
+#else
+static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+	return 0;
+}
+#endif
 
 static struct page * __meminit section_activate(int nid, unsigned long pfn,
 		unsigned long nr_pages, struct vmem_altmap *altmap)
-- 
2.17.2




* [PATCH v4 4/5] mm/sparse.c: add note about only VMEMMAP supporting sub-section hotplug
  2020-03-12 12:44 [PATCH v4 0/5] mm/hotplug: Only use subsection map for VMEMMAP Baoquan He
                   ` (2 preceding siblings ...)
  2020-03-12 12:44 ` [PATCH v4 3/5] mm/sparse.c: only use subsection map in VMEMMAP case Baoquan He
@ 2020-03-12 12:44 ` Baoquan He
  2020-03-12 13:45   ` David Hildenbrand
  2020-03-12 12:44 ` [PATCH v4 5/5] mm/sparse.c: move subsection_map related functions together Baoquan He
  4 siblings, 1 reply; 9+ messages in thread
From: Baoquan He @ 2020-03-12 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, akpm, mhocko, david, richard.weiyang, dan.j.williams, bhe

Also document that check_pfn_span() gates the proper alignment and size
of a hot-added memory region.
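
For context, a paraphrased sketch of the gating done by check_pfn_span()
in mm/memory_hotplug.c (existing code, not touched by this series; the
implementation in the tree is authoritative):

  static bool check_pfn_span(unsigned long pfn, unsigned long nr_pages,
  			     const char *reason)
  {
  	/*
  	 * Only SPARSEMEM_VMEMMAP supports sub-section granularity;
  	 * classic sparse requires whole-section alignment and size.
  	 */
  	unsigned long min_align;

  	if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
  		min_align = PAGES_PER_SUBSECTION;
  	else
  		min_align = PAGES_PER_SECTION;

  	if (!IS_ALIGNED(pfn, min_align) || !IS_ALIGNED(nr_pages, min_align)) {
  		WARN(1, "Misaligned __%s_pages start: %#lx end: %#lx\n",
  		     reason, pfn, pfn + nr_pages - 1);
  		return false;
  	}
  	return true;
  }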

In addition, move the code comments from inside section_deactivate() to
above it. The comments apply to the whole function, and moving them
makes the code cleaner.

Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 mm/sparse.c | 38 +++++++++++++++++++++-----------------
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index 117fe4554c38..f02a524e17d1 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -771,6 +771,22 @@ static bool is_subsection_map_empty(struct mem_section *ms)
 }
 #endif
 
+/*
+ * To deactivate a memory region, there are 3 cases to handle across
+ * two configurations (SPARSEMEM_VMEMMAP={y,n}):
+ *
+ * 1. deactivation of a partial hot-added section (only possible in
+ *    the SPARSEMEM_VMEMMAP=y case).
+ *      a) section was present at memory init.
+ *      b) section was hot-added post memory init.
+ * 2. deactivation of a complete hot-added section.
+ * 3. deactivation of a complete section from memory init.
+ *
+ * For 1, when subsection_map does not empty we will not be freeing the
+ * usage map, but still need to free the vmemmap range.
+ *
+ * For 2 and 3, the SPARSEMEM_VMEMMAP={y,n} cases are unified
+ */
 static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
 		struct vmem_altmap *altmap)
 {
@@ -781,23 +797,7 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
 
 	if (clear_subsection_map(pfn, nr_pages))
 		return;
-	/*
-	 * There are 3 cases to handle across two configurations
-	 * (SPARSEMEM_VMEMMAP={y,n}):
-	 *
-	 * 1/ deactivation of a partial hot-added section (only possible
-	 * in the SPARSEMEM_VMEMMAP=y case).
-	 *    a/ section was present at memory init
-	 *    b/ section was hot-added post memory init
-	 * 2/ deactivation of a complete hot-added section
-	 * 3/ deactivation of a complete section from memory init
-	 *
-	 * For 1/, when subsection_map does not empty we will not be
-	 * freeing the usage map, but still need to free the vmemmap
-	 * range.
-	 *
-	 * For 2/ and 3/ the SPARSEMEM_VMEMMAP={y,n} cases are unified
-	 */
+
 	empty = is_subsection_map_empty(ms);
 	if (empty) {
 		unsigned long section_nr = pfn_to_section_nr(pfn);
@@ -905,6 +905,10 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn,
  *
  * This is only intended for hotplug.
  *
+ * Note that only VMEMMAP supports sub-section aligned hotplug,
+ * the proper alignment and size are gated by check_pfn_span().
+ *
+ *
  * Return:
  * * 0		- On success.
  * * -EEXIST	- Section has been present.
-- 
2.17.2




* [PATCH v4 5/5] mm/sparse.c: move subsection_map related functions together
  2020-03-12 12:44 [PATCH v4 0/5] mm/hotplug: Only use subsection map for VMEMMAP Baoquan He
                   ` (3 preceding siblings ...)
  2020-03-12 12:44 ` [PATCH v4 4/5] mm/sparse.c: add note about only VMEMMAP supporting sub-section hotplug Baoquan He
@ 2020-03-12 12:44 ` Baoquan He
  4 siblings, 0 replies; 9+ messages in thread
From: Baoquan He @ 2020-03-12 12:44 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, akpm, mhocko, david, richard.weiyang, dan.j.williams, bhe

No functional change.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/sparse.c | 132 +++++++++++++++++++++++++---------------------------
 1 file changed, 64 insertions(+), 68 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index f02a524e17d1..bf6c00a28045 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -244,10 +244,74 @@ void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
 		nr_pages -= pfns;
 	}
 }
+
+static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
+	DECLARE_BITMAP(tmp, SUBSECTIONS_PER_SECTION) = { 0 };
+	struct mem_section *ms = __pfn_to_section(pfn);
+	unsigned long *subsection_map = ms->usage
+		? &ms->usage->subsection_map[0] : NULL;
+
+	subsection_mask_set(map, pfn, nr_pages);
+	if (subsection_map)
+		bitmap_and(tmp, map, subsection_map, SUBSECTIONS_PER_SECTION);
+
+	if (WARN(!subsection_map || !bitmap_equal(tmp, map, SUBSECTIONS_PER_SECTION),
+				"section already deactivated (%#lx + %ld)\n",
+				pfn, nr_pages))
+		return -EINVAL;
+
+	bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);
+	return 0;
+}
+
+static bool is_subsection_map_empty(struct mem_section *ms)
+{
+	return bitmap_empty(&ms->usage->subsection_map[0],
+			    SUBSECTIONS_PER_SECTION);
+}
+
+static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+	struct mem_section *ms = __pfn_to_section(pfn);
+	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
+	unsigned long *subsection_map;
+	int rc = 0;
+
+	subsection_mask_set(map, pfn, nr_pages);
+
+	subsection_map = &ms->usage->subsection_map[0];
+
+	if (bitmap_empty(map, SUBSECTIONS_PER_SECTION))
+		rc = -EINVAL;
+	else if (bitmap_intersects(map, subsection_map, SUBSECTIONS_PER_SECTION))
+		rc = -EEXIST;
+	else
+		bitmap_or(subsection_map, map, subsection_map,
+				SUBSECTIONS_PER_SECTION);
+
+	return rc;
+}
 #else
 void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
 {
 }
+
+static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+	return 0;
+}
+
+static bool is_subsection_map_empty(struct mem_section *ms)
+{
+	return true;
+}
+
+static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+	return 0;
+}
 #endif
 
 /* Record a memory area against a node. */
@@ -732,45 +796,6 @@ static void free_map_bootmem(struct page *memmap)
 }
 #endif /* CONFIG_SPARSEMEM_VMEMMAP */
 
-#ifdef CONFIG_SPARSEMEM_VMEMMAP
-static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
-{
-	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
-	DECLARE_BITMAP(tmp, SUBSECTIONS_PER_SECTION) = { 0 };
-	struct mem_section *ms = __pfn_to_section(pfn);
-	unsigned long *subsection_map = ms->usage
-		? &ms->usage->subsection_map[0] : NULL;
-
-	subsection_mask_set(map, pfn, nr_pages);
-	if (subsection_map)
-		bitmap_and(tmp, map, subsection_map, SUBSECTIONS_PER_SECTION);
-
-	if (WARN(!subsection_map || !bitmap_equal(tmp, map, SUBSECTIONS_PER_SECTION),
-				"section already deactivated (%#lx + %ld)\n",
-				pfn, nr_pages))
-		return -EINVAL;
-
-	bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);
-	return 0;
-}
-
-static bool is_subsection_map_empty(struct mem_section *ms)
-{
-	return bitmap_empty(&ms->usage->subsection_map[0],
-			    SUBSECTIONS_PER_SECTION);
-}
-#else
-static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
-{
-	return 0;
-}
-
-static bool is_subsection_map_empty(struct mem_section *ms)
-{
-	return true;
-}
-#endif
-
 /*
  * To deactivate a memory region, there are 3 cases to handle across
  * two configurations (SPARSEMEM_VMEMMAP={y,n}):
@@ -825,35 +850,6 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
 		ms->section_mem_map = (unsigned long)NULL;
 }
 
-#ifdef CONFIG_SPARSEMEM_VMEMMAP
-static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
-{
-	struct mem_section *ms = __pfn_to_section(pfn);
-	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
-	unsigned long *subsection_map;
-	int rc = 0;
-
-	subsection_mask_set(map, pfn, nr_pages);
-
-	subsection_map = &ms->usage->subsection_map[0];
-
-	if (bitmap_empty(map, SUBSECTIONS_PER_SECTION))
-		rc = -EINVAL;
-	else if (bitmap_intersects(map, subsection_map, SUBSECTIONS_PER_SECTION))
-		rc = -EEXIST;
-	else
-		bitmap_or(subsection_map, map, subsection_map,
-				SUBSECTIONS_PER_SECTION);
-
-	return rc;
-}
-#else
-static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
-{
-	return 0;
-}
-#endif
-
 static struct page * __meminit section_activate(int nid, unsigned long pfn,
 		unsigned long nr_pages, struct vmem_altmap *altmap)
 {
-- 
2.17.2




* Re: [PATCH v4 2/5] mm/sparse.c: introduce a new function clear_subsection_map()
  2020-03-12 12:44 ` [PATCH v4 2/5] mm/sparse.c: introduce a new function clear_subsection_map() Baoquan He
@ 2020-03-12 13:26   ` Pankaj Gupta
  0 siblings, 0 replies; 9+ messages in thread
From: Pankaj Gupta @ 2020-03-12 13:26 UTC (permalink / raw)
  To: Baoquan He
  Cc: linux-kernel, linux-mm, Andrew Morton, mhocko, David Hildenbrand,
	richard.weiyang, dan.j.williams

>
> Factor out the code which clears the subsection map of a memory region
> from section_deactivate() into clear_subsection_map().
>
> Also add a helper function, is_subsection_map_empty(), to check whether
> the current subsection map is empty.
>
> Signed-off-by: Baoquan He <bhe@redhat.com>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/sparse.c | 31 +++++++++++++++++++++++--------
>  1 file changed, 23 insertions(+), 8 deletions(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 5919bc5b1547..0be4d4ed96de 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -726,15 +726,11 @@ static void free_map_bootmem(struct page *memmap)
>  }
>  #endif /* CONFIG_SPARSEMEM_VMEMMAP */
>
> -static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
> -               struct vmem_altmap *altmap)
> +static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
>  {
>         DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
>         DECLARE_BITMAP(tmp, SUBSECTIONS_PER_SECTION) = { 0 };
>         struct mem_section *ms = __pfn_to_section(pfn);
> -       bool section_is_early = early_section(ms);
> -       struct page *memmap = NULL;
> -       bool empty;
>         unsigned long *subsection_map = ms->usage
>                 ? &ms->usage->subsection_map[0] : NULL;
>
> @@ -745,8 +741,28 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
>         if (WARN(!subsection_map || !bitmap_equal(tmp, map, SUBSECTIONS_PER_SECTION),
>                                 "section already deactivated (%#lx + %ld)\n",
>                                 pfn, nr_pages))
> -               return;
> +               return -EINVAL;
>
> +       bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);
> +       return 0;
> +}
> +
> +static bool is_subsection_map_empty(struct mem_section *ms)
> +{
> +       return bitmap_empty(&ms->usage->subsection_map[0],
> +                           SUBSECTIONS_PER_SECTION);
> +}
> +
> +static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
> +               struct vmem_altmap *altmap)
> +{
> +       struct mem_section *ms = __pfn_to_section(pfn);
> +       bool section_is_early = early_section(ms);
> +       struct page *memmap = NULL;
> +       bool empty;
> +
> +       if (clear_subsection_map(pfn, nr_pages))
> +               return;
>         /*
>          * There are 3 cases to handle across two configurations
>          * (SPARSEMEM_VMEMMAP={y,n}):
> @@ -764,8 +780,7 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
>          *
>          * For 2/ and 3/ the SPARSEMEM_VMEMMAP={y,n} cases are unified
>          */
> -       bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);
> -       empty = bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION);
> +       empty = is_subsection_map_empty(ms);
>         if (empty) {
>                 unsigned long section_nr = pfn_to_section_nr(pfn);
>
> --

Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>

> 2.17.2
>
>



* Re: [PATCH v4 1/5] mm/sparse.c: introduce new function fill_subsection_map()
  2020-03-12 12:44 ` [PATCH v4 1/5] mm/sparse.c: introduce new function fill_subsection_map() Baoquan He
@ 2020-03-12 13:30   ` Pankaj Gupta
  0 siblings, 0 replies; 9+ messages in thread
From: Pankaj Gupta @ 2020-03-12 13:30 UTC (permalink / raw)
  To: Baoquan He
  Cc: linux-kernel, linux-mm, Andrew Morton, mhocko, David Hildenbrand,
	richard.weiyang, dan.j.williams

>
> Factor out the code that fills the subsection map from section_activate()
> into a new function, fill_subsection_map(). This makes section_activate()
> cleaner and easier to follow.
>
> Signed-off-by: Baoquan He <bhe@redhat.com>
> Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/sparse.c | 32 +++++++++++++++++++++-----------
>  1 file changed, 21 insertions(+), 11 deletions(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index cf28505e82c5..5919bc5b1547 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -792,24 +792,15 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
>                 ms->section_mem_map = (unsigned long)NULL;
>  }
>
> -static struct page * __meminit section_activate(int nid, unsigned long pfn,
> -               unsigned long nr_pages, struct vmem_altmap *altmap)
> +static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
>  {
> -       DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
>         struct mem_section *ms = __pfn_to_section(pfn);
> -       struct mem_section_usage *usage = NULL;
> +       DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
>         unsigned long *subsection_map;
> -       struct page *memmap;
>         int rc = 0;
>
>         subsection_mask_set(map, pfn, nr_pages);
>
> -       if (!ms->usage) {
> -               usage = kzalloc(mem_section_usage_size(), GFP_KERNEL);
> -               if (!usage)
> -                       return ERR_PTR(-ENOMEM);
> -               ms->usage = usage;
> -       }
>         subsection_map = &ms->usage->subsection_map[0];
>
>         if (bitmap_empty(map, SUBSECTIONS_PER_SECTION))
> @@ -820,6 +811,25 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn,
>                 bitmap_or(subsection_map, map, subsection_map,
>                                 SUBSECTIONS_PER_SECTION);
>
> +       return rc;
> +}
> +
> +static struct page * __meminit section_activate(int nid, unsigned long pfn,
> +               unsigned long nr_pages, struct vmem_altmap *altmap)
> +{
> +       struct mem_section *ms = __pfn_to_section(pfn);
> +       struct mem_section_usage *usage = NULL;
> +       struct page *memmap;
> +       int rc = 0;
> +
> +       if (!ms->usage) {
> +               usage = kzalloc(mem_section_usage_size(), GFP_KERNEL);
> +               if (!usage)
> +                       return ERR_PTR(-ENOMEM);
> +               ms->usage = usage;
> +       }
> +
> +       rc = fill_subsection_map(pfn, nr_pages);
>         if (rc) {
>                 if (usage)
>                         ms->usage = NULL;
> --

Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>

> 2.17.2
>
>



* Re: [PATCH v4 4/5] mm/sparse.c: add note about only VMEMMAP supporting sub-section hotplug
  2020-03-12 12:44 ` [PATCH v4 4/5] mm/sparse.c: add note about only VMEMMAP supporting sub-section hotplug Baoquan He
@ 2020-03-12 13:45   ` David Hildenbrand
  0 siblings, 0 replies; 9+ messages in thread
From: David Hildenbrand @ 2020-03-12 13:45 UTC (permalink / raw)
  To: Baoquan He, linux-kernel
  Cc: linux-mm, akpm, mhocko, richard.weiyang, dan.j.williams

On 12.03.20 13:44, Baoquan He wrote:
> Also document that check_pfn_span() gates the proper alignment and
> size of a hot-added memory region.
> 
> In addition, move the code comments from inside section_deactivate()
> to above it. The comments apply to the whole function, and moving
> them makes the code cleaner.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
> Acked-by: Michal Hocko <mhocko@suse.com>
> ---
>  mm/sparse.c | 38 +++++++++++++++++++++-----------------
>  1 file changed, 21 insertions(+), 17 deletions(-)
> 
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 117fe4554c38..f02a524e17d1 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -771,6 +771,22 @@ static bool is_subsection_map_empty(struct mem_section *ms)
>  }
>  #endif
>  
> +/*
> + * To deactivate a memory region, there are 3 cases to handle across
> + * two configurations (SPARSEMEM_VMEMMAP={y,n}):
> + *
> + * 1. deactivation of a partial hot-added section (only possible in
> + *    the SPARSEMEM_VMEMMAP=y case).
> + *      a) section was present at memory init.
> + *      b) section was hot-added post memory init.
> + * 2. deactivation of a complete hot-added section.
> + * 3. deactivation of a complete section from memory init.
> + *
> + * For 1, when subsection_map does not empty we will not be freeing the
> + * usage map, but still need to free the vmemmap range.
> + *
> + * For 2 and 3, the SPARSEMEM_VMEMMAP={y,n} cases are unified
> + */
>  static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
>  		struct vmem_altmap *altmap)
>  {
> @@ -781,23 +797,7 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
>  
>  	if (clear_subsection_map(pfn, nr_pages))
>  		return;
> -	/*
> -	 * There are 3 cases to handle across two configurations
> -	 * (SPARSEMEM_VMEMMAP={y,n}):
> -	 *
> -	 * 1/ deactivation of a partial hot-added section (only possible
> -	 * in the SPARSEMEM_VMEMMAP=y case).
> -	 *    a/ section was present at memory init
> -	 *    b/ section was hot-added post memory init
> -	 * 2/ deactivation of a complete hot-added section
> -	 * 3/ deactivation of a complete section from memory init
> -	 *
> -	 * For 1/, when subsection_map does not empty we will not be
> -	 * freeing the usage map, but still need to free the vmemmap
> -	 * range.
> -	 *
> -	 * For 2/ and 3/ the SPARSEMEM_VMEMMAP={y,n} cases are unified
> -	 */
> +
>  	empty = is_subsection_map_empty(ms);
>  	if (empty) {
>  		unsigned long section_nr = pfn_to_section_nr(pfn);
> @@ -905,6 +905,10 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn,
>   *
>   * This is only intended for hotplug.
>   *
> + * Note that only VMEMMAP supports sub-section aligned hotplug,
> + * the proper alignment and size are gated by check_pfn_span().
> + *
> + *
>   * Return:
>   * * 0		- On success.
>   * * -EEXIST	- Section has been present.
> 

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



