linux-mm.kvack.org archive mirror
* [PATCH v3 0/6] mm: Fix pfn_to_online_page() with respect to ZONE_DEVICE
@ 2021-01-13  7:35 Dan Williams
  2021-01-13  7:35 ` [PATCH v3 1/6] mm: Move pfn_to_online_page() out of line Dan Williams
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: Dan Williams @ 2021-01-13  7:35 UTC (permalink / raw)
  To: linux-mm
  Cc: David Hildenbrand, Dave Jiang, Ira Weiny, stable,
	Naoya Horiguchi, Qian Cai, Michal Hocko, Oscar Salvador,
	Vishal Verma, Andrew Morton, linux-nvdimm, linux-kernel

Changes since v2 [1]:
- Collect some reviewed-by's from David and Oscar

- Rework subsection validity to include pfn_valid() gated by
  CONFIG_HAVE_ARCH_PFN_VALID (David, Oscar)

- Introduce pgmap_pfn_valid() to validate metadata vs data in a pgmap (David)

- Keep put_ref_page() rather than open-coding it: the extra
  "if (ref_page) put_page(ref_page)" still feels more cluttered than
  adding a tiny helper. (Oscar)

[1]: http://lore.kernel.org/r/161044407603.1482714.16630477578392768273.stgit@dwillia2-desk3.amr.corp.intel.com

---

Michal reminds that the discussion about how to ensure pfn-walkers do
not get confused by ZONE_DEVICE pages was never resolved. A pfn-walker
that uses pfn_to_online_page() may inadvertently treat a pfn as online
and under page-allocator control when it is in fact offline and managed
by a ZONE_DEVICE mapping (details in Patch 3: "mm: Teach
pfn_to_online_page() about ZONE_DEVICE section collisions").
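
For reference, a minimal sketch of the pfn-walker pattern at stake
(illustrative only, not code from this series; walk_pfn_range() is a
hypothetical helper):

#include <linux/memory_hotplug.h>
#include <linux/mm.h>

/* Hypothetical walker: only trusts page state for online pfns */
static void walk_pfn_range(unsigned long start_pfn, unsigned long nr_pages)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < start_pfn + nr_pages; pfn++) {
		struct page *page = pfn_to_online_page(pfn);

		if (!page)
			continue;	/* offline, ZONE_DEVICE, or invalid */
		/*
		 * Only now is it safe to inspect page->flags,
		 * page_zone(page), etc.; an imprecise
		 * pfn_to_online_page() hands back pages whose state was
		 * never initialized by the page allocator.
		 */
	}
}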

The two proposals under consideration are to teach pfn_to_online_page()
to be precise in the presence of mixed-zone sections, or to teach the
memory-add code to drop the System RAM associated with ZONE_DEVICE
collisions. In order not to regress memory capacity by a few tens to
hundreds of MiB, the approach taken in this set is to add precision to
pfn_to_online_page().

In the course of validating pfn_to_online_page() a few other fixes fell
out:

1/ soft_offline_page() fails to drop the reference taken in the
   madvise(..., MADV_SOFT_OFFLINE) case (a reproducer sketch follows
   this list).

2/ The libnvdimm sysfs attribute visibility code was failing to publish
   the resource base for memmap=ss!nn defined namespaces. This is needed
   for the regression test for soft_offline_page().

3/ memory_failure() uses get_dev_pagemap() to look up ZONE_DEVICE
   pages; however, that mapping may span both data pages and raw
   metadata pfns. Introduce pgmap_pfn_valid() to delineate the two
   types and fail the handling of raw metadata pfns.
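
Relating to item 1/ above, a minimal reproducer sketch (illustrative
only; the real coverage lives in the new ndctl regression test, and the
"/dev/dax0.0" path, 2MiB size, and need for root are assumptions about
the test environment):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_SOFT_OFFLINE
#define MADV_SOFT_OFFLINE 101	/* from asm-generic/mman-common.h */
#endif

int main(void)
{
	size_t len = 2UL << 20;			/* one 2MiB dax mapping */
	int fd = open("/dev/dax0.0", O_RDWR);	/* assumed device path */
	char *addr;

	if (fd < 0)
		return 1;
	addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED)
		return 1;
	addr[0] = 1;	/* fault the page in so madvise() can gup it */
	/*
	 * The pfn is pfn_valid() but !pfn_to_online_page(), so
	 * soft_offline_page() fails with -EIO; before the fix the
	 * reference taken by madvise() leaked and later hung
	 * dax-device shutdown.
	 */
	if (madvise(addr, getpagesize(), MADV_SOFT_OFFLINE))
		perror("madvise(MADV_SOFT_OFFLINE)");
	munmap(addr, len);
	close(fd);
	return 0;
}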

---

Dan Williams (6):
      mm: Move pfn_to_online_page() out of line
      mm: Teach pfn_to_online_page() to consider subsection validity
      mm: Teach pfn_to_online_page() about ZONE_DEVICE section collisions
      mm: Fix page reference leak in soft_offline_page()
      mm: Fix memory_failure() handling of dax-namespace metadata
      libnvdimm/namespace: Fix visibility of namespace resource attribute


 drivers/nvdimm/namespace_devs.c |   10 +++---
 include/linux/memory_hotplug.h  |   17 +--------
 include/linux/memremap.h        |    6 +++
 include/linux/mmzone.h          |   22 ++++++++----
 mm/memory-failure.c             |   26 ++++++++++++--
 mm/memory_hotplug.c             |   70 +++++++++++++++++++++++++++++++++++++++
 mm/memremap.c                   |   15 ++++++++
 7 files changed, 134 insertions(+), 32 deletions(-)


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v3 1/6] mm: Move pfn_to_online_page() out of line
  2021-01-13  7:35 [PATCH v3 0/6] mm: Fix pfn_to_online_page() with respect to ZONE_DEVICE Dan Williams
@ 2021-01-13  7:35 ` Dan Williams
  2021-01-13  7:35 ` [PATCH v3 2/6] mm: Teach pfn_to_online_page() to consider subsection validity Dan Williams
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Dan Williams @ 2021-01-13  7:35 UTC (permalink / raw)
  To: linux-mm
  Cc: Michal Hocko, David Hildenbrand, Oscar Salvador, vishal.l.verma,
	linux-nvdimm, linux-kernel

pfn_to_online_page() is already too large to be a macro or an inline
function. In anticipation of further logic changes / growth, move it out
of line.

No functional change, just code movement.

Reported-by: Michal Hocko <mhocko@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 include/linux/memory_hotplug.h |   17 +----------------
 mm/memory_hotplug.c            |   16 ++++++++++++++++
 2 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 15acce5ab106..3d99de0db2dd 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -16,22 +16,7 @@ struct resource;
 struct vmem_altmap;
 
 #ifdef CONFIG_MEMORY_HOTPLUG
-/*
- * Return page for the valid pfn only if the page is online. All pfn
- * walkers which rely on the fully initialized page->flags and others
- * should use this rather than pfn_valid && pfn_to_page
- */
-#define pfn_to_online_page(pfn)					   \
-({								   \
-	struct page *___page = NULL;				   \
-	unsigned long ___pfn = pfn;				   \
-	unsigned long ___nr = pfn_to_section_nr(___pfn);	   \
-								   \
-	if (___nr < NR_MEM_SECTIONS && online_section_nr(___nr) && \
-	    pfn_valid_within(___pfn))				   \
-		___page = pfn_to_page(___pfn);			   \
-	___page;						   \
-})
+struct page *pfn_to_online_page(unsigned long pfn);
 
 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f9d57b9be8c7..55a69d4396e7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -300,6 +300,22 @@ static int check_hotplug_memory_addressable(unsigned long pfn,
 	return 0;
 }
 
+/*
+ * Return page for the valid pfn only if the page is online. All pfn
+ * walkers which rely on the fully initialized page->flags and others
+ * should use this rather than pfn_valid && pfn_to_page
+ */
+struct page *pfn_to_online_page(unsigned long pfn)
+{
+	unsigned long nr = pfn_to_section_nr(pfn);
+
+	if (nr < NR_MEM_SECTIONS && online_section_nr(nr) &&
+	    pfn_valid_within(pfn))
+		return pfn_to_page(pfn);
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(pfn_to_online_page);
+
 /*
  * Reasonably generic function for adding memory.  It is
  * expected that archs that support memory hotplug will



^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v3 2/6] mm: Teach pfn_to_online_page() to consider subsection validity
  2021-01-13  7:35 [PATCH v3 0/6] mm: Fix pfn_to_online_page() with respect to ZONE_DEVICE Dan Williams
  2021-01-13  7:35 ` [PATCH v3 1/6] mm: Move pfn_to_online_page() out of line Dan Williams
@ 2021-01-13  7:35 ` Dan Williams
  2021-01-13  8:29   ` David Hildenbrand
  2021-01-13  8:30   ` Oscar Salvador
  2021-01-13  7:35 ` [PATCH v3 3/6] mm: Teach pfn_to_online_page() about ZONE_DEVICE section collisions Dan Williams
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 12+ messages in thread
From: Dan Williams @ 2021-01-13  7:35 UTC (permalink / raw)
  To: linux-mm
  Cc: Qian Cai, Michal Hocko, Oscar Salvador, David Hildenbrand,
	vishal.l.verma, linux-nvdimm, linux-kernel

pfn_section_valid() determines pfn validity on subsection granularity
where pfn_valid() may be limited to coarse section granularity.
Explicitly validate subsections after pfn_valid() succeeds.

Fixes: b13bc35193d9 ("mm/hotplug: invalid PFNs from pfn_to_online_page()")
Cc: Qian Cai <cai@lca.pw>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Reported-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 mm/memory_hotplug.c |   24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 55a69d4396e7..9f37f8a68da4 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -308,11 +308,27 @@ static int check_hotplug_memory_addressable(unsigned long pfn,
 struct page *pfn_to_online_page(unsigned long pfn)
 {
 	unsigned long nr = pfn_to_section_nr(pfn);
+	struct mem_section *ms;
+
+	if (nr >= NR_MEM_SECTIONS)
+		return NULL;
+
+	ms = __nr_to_section(nr);
+	if (!online_section(ms))
+		return NULL;
+
+	/*
+	 * Save some code text when online_section() +
+	 * pfn_section_valid() are sufficient.
+	 */
+	if (IS_ENABLED(CONFIG_HAVE_ARCH_PFN_VALID))
+		if (!pfn_valid(pfn))
+			return NULL;
+
+	if (!pfn_section_valid(ms, pfn))
+		return NULL;
 
-	if (nr < NR_MEM_SECTIONS && online_section_nr(nr) &&
-	    pfn_valid_within(pfn))
-		return pfn_to_page(pfn);
-	return NULL;
+	return pfn_to_page(pfn);
 }
 EXPORT_SYMBOL_GPL(pfn_to_online_page);
 



^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v3 3/6] mm: Teach pfn_to_online_page() about ZONE_DEVICE section collisions
  2021-01-13  7:35 [PATCH v3 0/6] mm: Fix pfn_to_online_page() with respect to ZONE_DEVICE Dan Williams
  2021-01-13  7:35 ` [PATCH v3 1/6] mm: Move pfn_to_online_page() out of line Dan Williams
  2021-01-13  7:35 ` [PATCH v3 2/6] mm: Teach pfn_to_online_page() to consider subsection validity Dan Williams
@ 2021-01-13  7:35 ` Dan Williams
  2021-01-13  7:35 ` [PATCH v3 4/6] mm: Fix page reference leak in soft_offline_page() Dan Williams
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Dan Williams @ 2021-01-13  7:35 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Michal Hocko, David Hildenbrand, Oscar Salvador,
	vishal.l.verma, linux-nvdimm, linux-kernel

While pfn_to_online_page() is able to determine pfn_valid() at
subsection granularity, it is not able to reliably determine whether a
given pfn is also online when the section mixes ZONE_{NORMAL,MOVABLE}
with ZONE_DEVICE. This means that pfn_to_online_page() may return
invalid @page objects. For example, with a memory map like:

100000000-1fbffffff : System RAM
  142000000-143002e16 : Kernel code
  143200000-143713fff : Kernel rodata
  143800000-143b15b7f : Kernel data
  144227000-144ffffff : Kernel bss
1fc000000-2fbffffff : Persistent Memory (legacy)
  1fc000000-2fbffffff : namespace0.0

This command:

echo 0x1fc000000 > /sys/devices/system/memory/soft_offline_page

...succeeds when it should fail. When it succeeds, it touches an
uninitialized page and may crash or cause other damage (see
dissolve_free_huge_page()).

While the memory map above is contrived via the memmap=ss!nn kernel
command line option, the collision happens in practice on shipping
platforms. The memory controller resources that decode spans of
physical address space are a limited resource. One technique
platform-firmware uses to conserve those resources is to share a
decoder across 2 devices to keep the address range contiguous.
Unfortunately the unit of operation of a decoder is 64MiB while the
Linux section size is 128MiB. This results in situations where, without
subsection hotplug, memory mappings with different lifetimes collide
into one object that can only express one lifetime. In the map above,
for example, the System RAM ending at 0x1fbffffff and the Persistent
Memory starting at 0x1fc000000 share the 128MiB section spanning
[0x1f8000000, 0x200000000).

Update move_pfn_range_to_zone() to flag (SECTION_TAINT_ZONE_DEVICE) a
section that mixes ZONE_DEVICE pfns with other online pfns. With
SECTION_TAINT_ZONE_DEVICE set, pfn_to_online_page() can fall back to a
slow-path check for ZONE_DEVICE pfns in an online section. In the fast
path, online_section() returns false for a fully ZONE_DEVICE section.

Because the collision case is rare, and for simplicity, the
SECTION_TAINT_ZONE_DEVICE flag is never cleared once set.

Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
Cc: Andrew Morton <akpm@linux-foundation.org>
Reported-by: Michal Hocko <mhocko@suse.com>
Reported-by: David Hildenbrand <david@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 include/linux/mmzone.h |   22 +++++++++++++++-------
 mm/memory_hotplug.c    |   38 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+), 7 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b593316bff3d..0b5c44f730b4 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1273,13 +1273,14 @@ extern size_t mem_section_usage_size(void);
  *      which results in PFN_SECTION_SHIFT equal 6.
  * To sum it up, at least 6 bits are available.
  */
-#define	SECTION_MARKED_PRESENT	(1UL<<0)
-#define SECTION_HAS_MEM_MAP	(1UL<<1)
-#define SECTION_IS_ONLINE	(1UL<<2)
-#define SECTION_IS_EARLY	(1UL<<3)
-#define SECTION_MAP_LAST_BIT	(1UL<<4)
-#define SECTION_MAP_MASK	(~(SECTION_MAP_LAST_BIT-1))
-#define SECTION_NID_SHIFT	3
+#define SECTION_MARKED_PRESENT		(1UL<<0)
+#define SECTION_HAS_MEM_MAP		(1UL<<1)
+#define SECTION_IS_ONLINE		(1UL<<2)
+#define SECTION_IS_EARLY		(1UL<<3)
+#define SECTION_TAINT_ZONE_DEVICE	(1UL<<4)
+#define SECTION_MAP_LAST_BIT		(1UL<<5)
+#define SECTION_MAP_MASK		(~(SECTION_MAP_LAST_BIT-1))
+#define SECTION_NID_SHIFT		3
 
 static inline struct page *__section_mem_map_addr(struct mem_section *section)
 {
@@ -1318,6 +1319,13 @@ static inline int online_section(struct mem_section *section)
 	return (section && (section->section_mem_map & SECTION_IS_ONLINE));
 }
 
+static inline int online_device_section(struct mem_section *section)
+{
+	unsigned long flags = SECTION_IS_ONLINE | SECTION_TAINT_ZONE_DEVICE;
+
+	return section && ((section->section_mem_map & flags) == flags);
+}
+
 static inline int online_section_nr(unsigned long nr)
 {
 	return online_section(__nr_to_section(nr));
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9f37f8a68da4..889d58523fa1 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -308,6 +308,7 @@ static int check_hotplug_memory_addressable(unsigned long pfn,
 struct page *pfn_to_online_page(unsigned long pfn)
 {
 	unsigned long nr = pfn_to_section_nr(pfn);
+	struct dev_pagemap *pgmap;
 	struct mem_section *ms;
 
 	if (nr >= NR_MEM_SECTIONS)
@@ -328,6 +329,22 @@ struct page *pfn_to_online_page(unsigned long pfn)
 	if (!pfn_section_valid(ms, pfn))
 		return NULL;
 
+	if (!online_device_section(ms))
+		return pfn_to_page(pfn);
+
+	/*
+	 * Slowpath: when ZONE_DEVICE collides with
+	 * ZONE_{NORMAL,MOVABLE} within the same section some pfns in
+	 * the section may be 'offline' but 'valid'. Only
+	 * get_dev_pagemap() can determine sub-section online status.
+	 */
+	pgmap = get_dev_pagemap(pfn, NULL);
+	put_dev_pagemap(pgmap);
+
+	/* The presence of a pgmap indicates ZONE_DEVICE offline pfn */
+	if (pgmap)
+		return NULL;
+
 	return pfn_to_page(pfn);
 }
 EXPORT_SYMBOL_GPL(pfn_to_online_page);
@@ -710,6 +727,14 @@ static void __meminit resize_pgdat_range(struct pglist_data *pgdat, unsigned lon
 	pgdat->node_spanned_pages = max(start_pfn + nr_pages, old_end_pfn) - pgdat->node_start_pfn;
 
 }
+
+static void section_taint_zone_device(unsigned long pfn)
+{
+	struct mem_section *ms = __pfn_to_section(pfn);
+
+	ms->section_mem_map |= SECTION_TAINT_ZONE_DEVICE;
+}
+
 /*
  * Associate the pfn range with the given zone, initializing the memmaps
  * and resizing the pgdat/zone data to span the added pages. After this
@@ -739,6 +764,19 @@ void __ref move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
 	resize_pgdat_range(pgdat, start_pfn, nr_pages);
 	pgdat_resize_unlock(pgdat, &flags);
 
+	/*
+	 * Subsection population requires care in pfn_to_online_page().
+	 * Set the taint to enable the slow path detection of
+	 * ZONE_DEVICE pages in an otherwise  ZONE_{NORMAL,MOVABLE}
+	 * section.
+	 */
+	if (zone_idx(zone) == ZONE_DEVICE) {
+		if (!IS_ALIGNED(start_pfn, PAGES_PER_SECTION))
+			section_taint_zone_device(start_pfn);
+		if (!IS_ALIGNED(start_pfn + nr_pages, PAGES_PER_SECTION))
+			section_taint_zone_device(start_pfn + nr_pages);
+	}
+
 	/*
 	 * TODO now we have a visible range of pages which are not associated
 	 * with their zone properly. Not nice but set_pfnblock_flags_mask



^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v3 4/6] mm: Fix page reference leak in soft_offline_page()
  2021-01-13  7:35 [PATCH v3 0/6] mm: Fix pfn_to_online_page() with respect to ZONE_DEVICE Dan Williams
                   ` (2 preceding siblings ...)
  2021-01-13  7:35 ` [PATCH v3 3/6] mm: Teach pfn_to_online_page() about ZONE_DEVICE section collisions Dan Williams
@ 2021-01-13  7:35 ` Dan Williams
  2021-01-13  7:35 ` [PATCH v3 5/6] mm: Fix memory_failure() handling of dax-namespace metadata Dan Williams
  2021-01-13  7:35 ` [PATCH v3 6/6] libnvdimm/namespace: Fix visibility of namespace resource attribute Dan Williams
  5 siblings, 0 replies; 12+ messages in thread
From: Dan Williams @ 2021-01-13  7:35 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Naoya Horiguchi, Michal Hocko, David Hildenbrand,
	Oscar Salvador, stable, vishal.l.verma, linux-nvdimm,
	linux-kernel

The conversion to move pfn_to_online_page() internal to
soft_offline_page() missed that the get_user_pages() reference taken by
the madvise() path needs to be dropped when pfn_to_online_page() fails.
Note that the direct sysfs path to soft_offline_page() does not perform
a get_user_pages() lookup.

When soft_offline_page() is handed a pfn_valid() &&
!pfn_to_online_page() pfn, the kernel hangs at dax-device shutdown due
to a leaked reference.

Fixes: feec24a6139d ("mm, soft-offline: convert parameter to pfn")
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
Cc: Michal Hocko <mhocko@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 mm/memory-failure.c |   20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 5a38e9eade94..78b173c7190c 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1885,6 +1885,12 @@ static int soft_offline_free_page(struct page *page)
 	return rc;
 }
 
+static void put_ref_page(struct page *page)
+{
+	if (page)
+		put_page(page);
+}
+
 /**
  * soft_offline_page - Soft offline a page.
  * @pfn: pfn to soft-offline
@@ -1910,20 +1916,26 @@ static int soft_offline_free_page(struct page *page)
 int soft_offline_page(unsigned long pfn, int flags)
 {
 	int ret;
-	struct page *page;
 	bool try_again = true;
+	struct page *page, *ref_page = NULL;
+
+	WARN_ON_ONCE(!pfn_valid(pfn) && (flags & MF_COUNT_INCREASED));
 
 	if (!pfn_valid(pfn))
 		return -ENXIO;
+	if (flags & MF_COUNT_INCREASED)
+		ref_page = pfn_to_page(pfn);
+
 	/* Only online pages can be soft-offlined (esp., not ZONE_DEVICE). */
 	page = pfn_to_online_page(pfn);
-	if (!page)
+	if (!page) {
+		put_ref_page(ref_page);
 		return -EIO;
+	}
 
 	if (PageHWPoison(page)) {
 		pr_info("%s: %#lx page already poisoned\n", __func__, pfn);
-		if (flags & MF_COUNT_INCREASED)
-			put_page(page);
+		put_ref_page(ref_page);
 		return 0;
 	}
 



^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v3 5/6] mm: Fix memory_failure() handling of dax-namespace metadata
  2021-01-13  7:35 [PATCH v3 0/6] mm: Fix pfn_to_online_page() with respect to ZONE_DEVICE Dan Williams
                   ` (3 preceding siblings ...)
  2021-01-13  7:35 ` [PATCH v3 4/6] mm: Fix page reference leak in soft_offline_page() Dan Williams
@ 2021-01-13  7:35 ` Dan Williams
  2021-01-13  8:31   ` David Hildenbrand
  2021-01-13  7:35 ` [PATCH v3 6/6] libnvdimm/namespace: Fix visibility of namespace resource attribute Dan Williams
  5 siblings, 1 reply; 12+ messages in thread
From: Dan Williams @ 2021-01-13  7:35 UTC (permalink / raw)
  To: linux-mm
  Cc: Naoya Horiguchi, Andrew Morton, David Hildenbrand,
	vishal.l.verma, linux-nvdimm, linux-kernel

Given that 'struct dev_pagemap' spans both data pages and metadata
pages, be careful to consult the altmap, if present, to delineate
metadata. In fact the pfn_first() helper already identifies the first
valid data pfn, so export that helper for other code paths via
pgmap_pfn_valid().

Other usages of get_dev_pagemap() are not a concern because they
operate on known data pfns that were looked up by get_user_pages().
I.e. metadata pfns are never user mapped.
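
For reference, a rough sketch of the layout being distinguished
(assuming a single-range pgmap whose altmap places the page metadata at
the front of the namespace, as in the memmap=ss!nn case):

 range->start                                             range->end
 |<-- altmap: struct page metadata -->|<------ data pages -------->|
 PHYS_PFN(start) ...    pfn_first()-1 | pfn_first() ... PHYS_PFN(end)
 pgmap_pfn_valid() == false           | pgmap_pfn_valid() == true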

Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Reported-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 include/linux/memremap.h |    6 ++++++
 mm/memory-failure.c      |    6 ++++++
 mm/memremap.c            |   15 +++++++++++++++
 3 files changed, 27 insertions(+)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 79c49e7f5c30..f5b464daeeca 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -137,6 +137,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
 void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap);
 struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 		struct dev_pagemap *pgmap);
+bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
 
 unsigned long vmem_altmap_offset(struct vmem_altmap *altmap);
 void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns);
@@ -165,6 +166,11 @@ static inline struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 	return NULL;
 }
 
+static inline bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn)
+{
+	return false;
+}
+
 static inline unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
 {
 	return 0;
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 78b173c7190c..541569cb4a99 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1308,6 +1308,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
 		 */
 		put_page(page);
 
+	/* device metadata space is not recoverable */
+	if (!pgmap_pfn_valid(pgmap, pfn)) {
+		rc = -ENXIO;
+		goto out;
+	}
+
 	/*
 	 * Prevent the inode from being freed while we are interrogating
 	 * the address_space, typically this would be handled by
diff --git a/mm/memremap.c b/mm/memremap.c
index 16b2fb482da1..2455bac89506 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -80,6 +80,21 @@ static unsigned long pfn_first(struct dev_pagemap *pgmap, int range_id)
 	return pfn + vmem_altmap_offset(pgmap_altmap(pgmap));
 }
 
+bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn)
+{
+	int i;
+
+	for (i = 0; i < pgmap->nr_range; i++) {
+		struct range *range = &pgmap->ranges[i];
+
+		if (pfn >= PHYS_PFN(range->start) &&
+		    pfn <= PHYS_PFN(range->end))
+			return pfn >= pfn_first(pgmap, i);
+	}
+
+	return false;
+}
+
 static unsigned long pfn_end(struct dev_pagemap *pgmap, int range_id)
 {
 	const struct range *range = &pgmap->ranges[range_id];



^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v3 6/6] libnvdimm/namespace: Fix visibility of namespace resource attribute
  2021-01-13  7:35 [PATCH v3 0/6] mm: Fix pfn_to_online_page() with respect to ZONE_DEVICE Dan Williams
                   ` (4 preceding siblings ...)
  2021-01-13  7:35 ` [PATCH v3 5/6] mm: Fix memory_failure() handling of dax-namespace metadata Dan Williams
@ 2021-01-13  7:35 ` Dan Williams
  2021-01-13  8:35   ` Greg KH
  5 siblings, 1 reply; 12+ messages in thread
From: Dan Williams @ 2021-01-13  7:35 UTC (permalink / raw)
  To: linux-mm
  Cc: Vishal Verma, Dave Jiang, Ira Weiny, stable, linux-nvdimm, linux-kernel

Legacy pmem namespaces lost support for the "resource" attribute when
the code was cleaned up to put the permission visibility in the
declaration. Restore this by listing 'resource' in the default
attributes.

A new ndctl regression test for pfn_to_online_page() corner cases builds
on this fix.

Fixes: bfd2e9140656 ("libnvdimm: Simplify root read-only definition for the 'resource' attribute")
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/nvdimm/namespace_devs.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
index 6da67f4d641a..2403b71b601e 100644
--- a/drivers/nvdimm/namespace_devs.c
+++ b/drivers/nvdimm/namespace_devs.c
@@ -1635,11 +1635,11 @@ static umode_t namespace_visible(struct kobject *kobj,
 		return a->mode;
 	}
 
-	if (a == &dev_attr_nstype.attr || a == &dev_attr_size.attr
-			|| a == &dev_attr_holder.attr
-			|| a == &dev_attr_holder_class.attr
-			|| a == &dev_attr_force_raw.attr
-			|| a == &dev_attr_mode.attr)
+	/* base is_namespace_io() attributes */
+	if (a == &dev_attr_nstype.attr || a == &dev_attr_size.attr ||
+	    a == &dev_attr_holder.attr || a == &dev_attr_holder_class.attr ||
+	    a == &dev_attr_force_raw.attr || a == &dev_attr_mode.attr ||
+	    a == &dev_attr_resource.attr)
 		return a->mode;
 
 	return 0;



^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 2/6] mm: Teach pfn_to_online_page() to consider subsection validity
  2021-01-13  7:35 ` [PATCH v3 2/6] mm: Teach pfn_to_online_page() to consider subsection validity Dan Williams
@ 2021-01-13  8:29   ` David Hildenbrand
  2021-01-13 21:52     ` Dan Williams
  2021-01-13  8:30   ` Oscar Salvador
  1 sibling, 1 reply; 12+ messages in thread
From: David Hildenbrand @ 2021-01-13  8:29 UTC (permalink / raw)
  To: Dan Williams, linux-mm
  Cc: Qian Cai, Michal Hocko, Oscar Salvador, vishal.l.verma,
	linux-nvdimm, linux-kernel

On 13.01.21 08:35, Dan Williams wrote:
> pfn_section_valid() determines pfn validity on subsection granularity
> where pfn_valid() may be limited to coarse section granularity.
> Explicitly validate subsections after pfn_valid() succeeds.
> 
> Fixes: b13bc35193d9 ("mm/hotplug: invalid PFNs from pfn_to_online_page()")
> Cc: Qian Cai <cai@lca.pw>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Reported-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  mm/memory_hotplug.c |   24 ++++++++++++++++++++----
>  1 file changed, 20 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 55a69d4396e7..9f37f8a68da4 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -308,11 +308,27 @@ static int check_hotplug_memory_addressable(unsigned long pfn,
>  struct page *pfn_to_online_page(unsigned long pfn)
>  {
>  	unsigned long nr = pfn_to_section_nr(pfn);
> +	struct mem_section *ms;
> +
> +	if (nr >= NR_MEM_SECTIONS)
> +		return NULL;
> +
> +	ms = __nr_to_section(nr);
> +	if (!online_section(ms))
> +		return NULL;
> +
> +	/*
> +	 * Save some code text when online_section() +
> +	 * pfn_section_valid() are sufficient.
> +	 */
> +	if (IS_ENABLED(CONFIG_HAVE_ARCH_PFN_VALID))
> +		if (!pfn_valid(pfn))
> +			return NULL;

Nit:

if (IS_ENABLED(CONFIG_HAVE_ARCH_PFN_VALID) &&
    !pfn_valid(pfn))

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 2/6] mm: Teach pfn_to_online_page() to consider subsection validity
  2021-01-13  7:35 ` [PATCH v3 2/6] mm: Teach pfn_to_online_page() to consider subsection validity Dan Williams
  2021-01-13  8:29   ` David Hildenbrand
@ 2021-01-13  8:30   ` Oscar Salvador
  1 sibling, 0 replies; 12+ messages in thread
From: Oscar Salvador @ 2021-01-13  8:30 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-mm, Qian Cai, Michal Hocko, David Hildenbrand,
	vishal.l.verma, linux-nvdimm, linux-kernel

On Tue, Jan 12, 2021 at 11:35:27PM -0800, Dan Williams wrote:
> pfn_section_valid() determines pfn validity on subsection granularity
> where pfn_valid() may be limited to coarse section granularity.
> Explicitly validate subsections after pfn_valid() succeeds.
> 
> Fixes: b13bc35193d9 ("mm/hotplug: invalid PFNs from pfn_to_online_page()")
> Cc: Qian Cai <cai@lca.pw>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Reported-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>

Reviewed-by: Oscar Salvador <osalvador@suse.de>

> ---
>  mm/memory_hotplug.c |   24 ++++++++++++++++++++----
>  1 file changed, 20 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 55a69d4396e7..9f37f8a68da4 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -308,11 +308,27 @@ static int check_hotplug_memory_addressable(unsigned long pfn,
>  struct page *pfn_to_online_page(unsigned long pfn)
>  {
>  	unsigned long nr = pfn_to_section_nr(pfn);
> +	struct mem_section *ms;
> +
> +	if (nr >= NR_MEM_SECTIONS)
> +		return NULL;
> +
> +	ms = __nr_to_section(nr);
> +	if (!online_section(ms))
> +		return NULL;
> +
> +	/*
> +	 * Save some code text when online_section() +
> +	 * pfn_section_valid() are sufficient.
> +	 */
> +	if (IS_ENABLED(CONFIG_HAVE_ARCH_PFN_VALID))
> +		if (!pfn_valid(pfn))
> +			return NULL;
> +
> +	if (!pfn_section_valid(ms, pfn))
> +		return NULL;
>  
> -	if (nr < NR_MEM_SECTIONS && online_section_nr(nr) &&
> -	    pfn_valid_within(pfn))
> -		return pfn_to_page(pfn);
> -	return NULL;
> +	return pfn_to_page(pfn);
>  }
>  EXPORT_SYMBOL_GPL(pfn_to_online_page);
>  
> 

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 5/6] mm: Fix memory_failure() handling of dax-namespace metadata
  2021-01-13  7:35 ` [PATCH v3 5/6] mm: Fix memory_failure() handling of dax-namespace metadata Dan Williams
@ 2021-01-13  8:31   ` David Hildenbrand
  0 siblings, 0 replies; 12+ messages in thread
From: David Hildenbrand @ 2021-01-13  8:31 UTC (permalink / raw)
  To: Dan Williams, linux-mm
  Cc: Naoya Horiguchi, Andrew Morton, vishal.l.verma, linux-nvdimm,
	linux-kernel

On 13.01.21 08:35, Dan Williams wrote:
> Given 'struct dev_pagemap' spans both data pages and metadata pages be
> careful to consult the altmap if present to delineate metadata. In fact
> the pfn_first() helper already identifies the first valid data pfn, so
> export that helper for other code paths via pgmap_pfn_valid().
> 
> Other usage of get_dev_pagemap() are not a concern because those are
> operating on known data pfns having been looking up by get_user_pages().
> I.e. metadata pfns are never user mapped.
> 
> Cc: Naoya Horiguchi <naoya.horiguchi@nec.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Reported-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  include/linux/memremap.h |    6 ++++++
>  mm/memory-failure.c      |    6 ++++++
>  mm/memremap.c            |   15 +++++++++++++++
>  3 files changed, 27 insertions(+)
> 
> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
> index 79c49e7f5c30..f5b464daeeca 100644
> --- a/include/linux/memremap.h
> +++ b/include/linux/memremap.h
> @@ -137,6 +137,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
>  void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap);
>  struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
>  		struct dev_pagemap *pgmap);
> +bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
>  
>  unsigned long vmem_altmap_offset(struct vmem_altmap *altmap);
>  void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns);
> @@ -165,6 +166,11 @@ static inline struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
>  	return NULL;
>  }
>  
> +static inline bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn)
> +{
> +	return false;
> +}
> +
>  static inline unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
>  {
>  	return 0;
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 78b173c7190c..541569cb4a99 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1308,6 +1308,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
>  		 */
>  		put_page(page);
>  
> +	/* device metadata space is not recoverable */
> +	if (!pgmap_pfn_valid(pgmap, pfn)) {
> +		rc = -ENXIO;
> +		goto out;
> +	}
> +
>  	/*
>  	 * Prevent the inode from being freed while we are interrogating
>  	 * the address_space, typically this would be handled by
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 16b2fb482da1..2455bac89506 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -80,6 +80,21 @@ static unsigned long pfn_first(struct dev_pagemap *pgmap, int range_id)
>  	return pfn + vmem_altmap_offset(pgmap_altmap(pgmap));
>  }
>  
> +bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn)
> +{
> +	int i;
> +
> +	for (i = 0; i < pgmap->nr_range; i++) {
> +		struct range *range = &pgmap->ranges[i];
> +
> +		if (pfn >= PHYS_PFN(range->start) &&
> +		    pfn <= PHYS_PFN(range->end))
> +			return pfn >= pfn_first(pgmap, i);
> +	}
> +
> +	return false;
> +}
> +
>  static unsigned long pfn_end(struct dev_pagemap *pgmap, int range_id)
>  {
>  	const struct range *range = &pgmap->ranges[range_id];
> 

LGTM

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 6/6] libnvdimm/namespace: Fix visibility of namespace resource attribute
  2021-01-13  7:35 ` [PATCH v3 6/6] libnvdimm/namespace: Fix visibility of namespace resource attribute Dan Williams
@ 2021-01-13  8:35   ` Greg KH
  0 siblings, 0 replies; 12+ messages in thread
From: Greg KH @ 2021-01-13  8:35 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-mm, Vishal Verma, Dave Jiang, Ira Weiny, stable,
	linux-nvdimm, linux-kernel

On Tue, Jan 12, 2021 at 11:35:50PM -0800, Dan Williams wrote:
> Legacy pmem namespaces lost support for the "resource" attribute when
> the code was cleaned up to put the permission visibility in the
> declaration. Restore this by listing 'resource' in the default
> attributes.
> 
> A new ndctl regression test for pfn_to_online_page() corner cases builds
> on this fix.
> 
> Fixes: bfd2e9140656 ("libnvdimm: Simplify root read-only definition for the 'resource' attribute")
> Cc: Vishal Verma <vishal.l.verma@intel.com>
> Cc: Dave Jiang <dave.jiang@intel.com>
> Cc: Ira Weiny <ira.weiny@intel.com>
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/nvdimm/namespace_devs.c |   10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree.  Please read:
    https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

</formletter>


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3 2/6] mm: Teach pfn_to_online_page() to consider subsection validity
  2021-01-13  8:29   ` David Hildenbrand
@ 2021-01-13 21:52     ` Dan Williams
  0 siblings, 0 replies; 12+ messages in thread
From: Dan Williams @ 2021-01-13 21:52 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Linux MM, Qian Cai, Michal Hocko, Oscar Salvador, Vishal L Verma,
	linux-nvdimm, Linux Kernel Mailing List

On Wed, Jan 13, 2021 at 12:29 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 13.01.21 08:35, Dan Williams wrote:
> > pfn_section_valid() determines pfn validity on subsection granularity
> > where pfn_valid() may be limited to coarse section granularity.
> > Explicitly validate subsections after pfn_valid() succeeds.
> >
> > Fixes: b13bc35193d9 ("mm/hotplug: invalid PFNs from pfn_to_online_page()")
> > Cc: Qian Cai <cai@lca.pw>
> > Cc: Michal Hocko <mhocko@suse.com>
> > Cc: Oscar Salvador <osalvador@suse.de>
> > Reported-by: David Hildenbrand <david@redhat.com>
> > Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> > ---
> >  mm/memory_hotplug.c |   24 ++++++++++++++++++++----
> >  1 file changed, 20 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> > index 55a69d4396e7..9f37f8a68da4 100644
> > --- a/mm/memory_hotplug.c
> > +++ b/mm/memory_hotplug.c
> > @@ -308,11 +308,27 @@ static int check_hotplug_memory_addressable(unsigned long pfn,
> >  struct page *pfn_to_online_page(unsigned long pfn)
> >  {
> >       unsigned long nr = pfn_to_section_nr(pfn);
> > +     struct mem_section *ms;
> > +
> > +     if (nr >= NR_MEM_SECTIONS)
> > +             return NULL;
> > +
> > +     ms = __nr_to_section(nr);
> > +     if (!online_section(ms))
> > +             return NULL;
> > +
> > +     /*
> > +      * Save some code text when online_section() +
> > +      * pfn_section_valid() are sufficient.
> > +      */
> > +     if (IS_ENABLED(CONFIG_HAVE_ARCH_PFN_VALID))
> > +             if (!pfn_valid(pfn))
> > +                     return NULL;
>
> Nit:
>
> if (IS_ENABLED(CONFIG_HAVE_ARCH_PFN_VALID) &&
>     !pfn_valid(pfn))
>

Ok... I'll do a final resend "To: akpm" after the kbuild robot
finishes chewing on this series.


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2021-01-13 21:52 UTC | newest]

Thread overview: 12+ messages
2021-01-13  7:35 [PATCH v3 0/6] mm: Fix pfn_to_online_page() with respect to ZONE_DEVICE Dan Williams
2021-01-13  7:35 ` [PATCH v3 1/6] mm: Move pfn_to_online_page() out of line Dan Williams
2021-01-13  7:35 ` [PATCH v3 2/6] mm: Teach pfn_to_online_page() to consider subsection validity Dan Williams
2021-01-13  8:29   ` David Hildenbrand
2021-01-13 21:52     ` Dan Williams
2021-01-13  8:30   ` Oscar Salvador
2021-01-13  7:35 ` [PATCH v3 3/6] mm: Teach pfn_to_online_page() about ZONE_DEVICE section collisions Dan Williams
2021-01-13  7:35 ` [PATCH v3 4/6] mm: Fix page reference leak in soft_offline_page() Dan Williams
2021-01-13  7:35 ` [PATCH v3 5/6] mm: Fix memory_failure() handling of dax-namespace metadata Dan Williams
2021-01-13  8:31   ` David Hildenbrand
2021-01-13  7:35 ` [PATCH v3 6/6] libnvdimm/namespace: Fix visibility of namespace resource attribute Dan Williams
2021-01-13  8:35   ` Greg KH
