* [PATCH v3 0/4] arch, mm: improve robustness of direct map manipulation
@ 2020-11-01 17:08 Mike Rapoport
  2020-11-01 17:08 ` [PATCH v3 1/4] mm: introduce debug_pagealloc_map_pages() helper Mike Rapoport
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Mike Rapoport @ 2020-11-01 17:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt,
	Borislav Petkov, Catalin Marinas, Christian Borntraeger,
	Christoph Lameter, David S. Miller, Dave Hansen,
	David Hildenbrand, David Rientjes, Edgecombe, Rick P,
	H. Peter Anvin, Heiko Carstens, Ingo Molnar, Joonsoo Kim,
	Kirill A. Shutemov, Len Brown, Michael Ellerman, Mike Rapoport,
	Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley,
	Pavel Machek, Pekka Enberg, Peter Zijlstra, Rafael J. Wysocki,
	Thomas Gleixner, Vasily Gorbik, Will Deacon, linux-arm-kernel,
	linux-kernel, linux-mm, linux-pm, linux-riscv, linux-s390,
	linuxppc-dev, sparclinux, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Hi,

During recent discussion about KVM protected memory, David raised a concern
about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1].

Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
possible that __kernel_map_pages() would fail, but since this function is
void, the failure will go unnoticed.
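
For context, the relevant signatures as they appear in this thread are:

	void __kernel_map_pages(struct page *page, int numpages, int enable);
	int set_direct_map_invalid_noflush(struct page *page);
	int set_direct_map_default_noflush(struct page *page);

so any error returned by the set_direct_map helpers backing
__kernel_map_pages() is simply dropped on the floor.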

Moreover, there's a lack of consistency in __kernel_map_pages() semantics
across architectures: some guard this function with
#ifdef DEBUG_PAGEALLOC, some refuse to update the direct map if page
allocation debugging is disabled at run time, and some allow modifying the
direct map regardless of DEBUG_PAGEALLOC settings.
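
Schematically, the current flavors look like this (a condensed sketch of
the per-arch implementations touched later in this series, not verbatim
code):

	/* flavor 1: compiled out entirely unless DEBUG_PAGEALLOC is set */
	#ifdef CONFIG_DEBUG_PAGEALLOC
	void __kernel_map_pages(struct page *page, int numpages, int enable);
	#endif

	/* flavor 2 (e.g. riscv): always built, but a runtime no-op unless
	 * page allocation debugging is enabled on the command line
	 */
	void __kernel_map_pages(struct page *page, int numpages, int enable)
	{
		if (!debug_pagealloc_enabled())
			return;
		/* ... update the direct map ... */
	}

	/* flavor 3 (arm64): updates the direct map whenever rodata_full
	 * is set, regardless of DEBUG_PAGEALLOC
	 */
	void __kernel_map_pages(struct page *page, int numpages, int enable)
	{
		if (!debug_pagealloc_enabled() && !rodata_full)
			return;
		/* ... update the direct map ... */
	}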

This set straightens this out by restoring the dependency of
__kernel_map_pages() on DEBUG_PAGEALLOC and updating the call sites
accordingly.

Since currently the only user of __kernel_map_pages() outside
DEBUG_PAGEALLOC is hibernation, it is updated to make direct map accesses
there more explicit.
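
With the series applied, all DEBUG_PAGEALLOC call sites go through a
single helper that folds in the runtime check; quoting the helper added
in patch 1:

	static inline void debug_pagealloc_map_pages(struct page *page,
						     int numpages, int enable)
	{
		if (debug_pagealloc_enabled_static())
			__kernel_map_pages(page, numpages, enable);
	}

so that, for example, post_alloc_hook() simply does

	debug_pagealloc_map_pages(page, 1 << order, 1);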

[1] https://lore.kernel.org/lkml/2759b4bf-e1e3-d006-7d86-78a40348269d@redhat.com

v3 changes:
* update arm64 changes to avoid regression, per Rick's comments
* fix bisectability

v2 changes:
* Rephrase patch 2 changelog to better describe the change intentions and
implications
* Move removal of kernel_map_pages() from patch 1 to patch 2, per David
https://lore.kernel.org/lkml/20201029161902.19272-1-rppt@kernel.org

v1:
https://lore.kernel.org/lkml/20201025101555.3057-1-rppt@kernel.org

Mike Rapoport (4):
  mm: introduce debug_pagealloc_map_pages() helper
  PM: hibernate: make direct map manipulations more explicit
  arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC
  arch, mm: make kernel_page_present() always available

 arch/Kconfig                        |  3 +++
 arch/arm64/Kconfig                  |  4 +---
 arch/arm64/include/asm/cacheflush.h |  1 +
 arch/arm64/mm/pageattr.c            |  6 +++--
 arch/powerpc/Kconfig                |  5 +----
 arch/riscv/Kconfig                  |  4 +---
 arch/riscv/include/asm/pgtable.h    |  2 --
 arch/riscv/include/asm/set_memory.h |  1 +
 arch/riscv/mm/pageattr.c            | 31 +++++++++++++++++++++++++
 arch/s390/Kconfig                   |  4 +---
 arch/sparc/Kconfig                  |  4 +---
 arch/x86/Kconfig                    |  4 +---
 arch/x86/include/asm/set_memory.h   |  1 +
 arch/x86/mm/pat/set_memory.c        |  4 ++--
 include/linux/mm.h                  | 35 +++++++++++++----------------
 include/linux/set_memory.h          |  5 +++++
 kernel/power/snapshot.c             | 30 +++++++++++++++++++++++--
 mm/memory_hotplug.c                 |  3 +--
 mm/page_alloc.c                     |  6 ++---
 mm/slab.c                           |  8 +++----
 20 files changed, 103 insertions(+), 58 deletions(-)

-- 
2.28.0



* [PATCH v3 1/4] mm: introduce debug_pagealloc_map_pages() helper
  2020-11-01 17:08 [PATCH v3 0/4] arch, mm: improve robustness of direct map manipulation Mike Rapoport
@ 2020-11-01 17:08 ` Mike Rapoport
  2020-11-01 17:08 ` [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit Mike Rapoport
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Mike Rapoport @ 2020-11-01 17:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt,
	Borislav Petkov, Catalin Marinas, Christian Borntraeger,
	Christoph Lameter, David S. Miller, Dave Hansen,
	David Hildenbrand, David Rientjes, Edgecombe, Rick P,
	H. Peter Anvin, Heiko Carstens, Ingo Molnar, Joonsoo Kim,
	Kirill A. Shutemov, Len Brown, Michael Ellerman, Mike Rapoport,
	Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley,
	Pavel Machek, Pekka Enberg, Peter Zijlstra, Rafael J. Wysocki,
	Thomas Gleixner, Vasily Gorbik, Will Deacon, linux-arm-kernel,
	linux-kernel, linux-mm, linux-pm, linux-riscv, linux-s390,
	linuxppc-dev, sparclinux, x86

From: Mike Rapoport <rppt@linux.ibm.com>

When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel
direct mapping after free_pages(). The pages then need to be mapped back
before they can be used. These mapping operations use
__kernel_map_pages() guarded with debug_pagealloc_enabled().

The only place that calls __kernel_map_pages() without checking whether
DEBUG_PAGEALLOC is enabled is the hibernation code, which presumes
availability of this function when ARCH_HAS_SET_DIRECT_MAP is set.
Still, on arm64, __kernel_map_pages() will bail out when DEBUG_PAGEALLOC is
not enabled, but set_direct_map_invalid_noflush() may render some pages not
present in the direct map, and the hibernation code won't be able to save
such pages.

To make page allocation debugging and hibernation interaction more robust,
the dependency on DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP has to be made
more explicit.

Start with combining the guard condition and the call to
__kernel_map_pages() into a single debug_pagealloc_map_pages() function to
emphasize that __kernel_map_pages() should not be called without
DEBUG_PAGEALLOC, and use this new function to map/unmap pages when page
allocation debugging is enabled.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h  | 10 ++++++++++
 mm/memory_hotplug.c |  3 +--
 mm/page_alloc.c     |  6 ++----
 mm/slab.c           |  8 +++-----
 4 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef360fe70aaf..1fc0609056dc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2936,12 +2936,22 @@ kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	__kernel_map_pages(page, numpages, enable);
 }
+
+static inline void debug_pagealloc_map_pages(struct page *page,
+					     int numpages, int enable)
+{
+	if (debug_pagealloc_enabled_static())
+		__kernel_map_pages(page, numpages, enable);
+}
+
 #ifdef CONFIG_HIBERNATION
 extern bool kernel_page_present(struct page *page);
 #endif	/* CONFIG_HIBERNATION */
 #else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 static inline void
 kernel_map_pages(struct page *page, int numpages, int enable) {}
+static inline void debug_pagealloc_map_pages(struct page *page,
+					     int numpages, int enable) {}
 #ifdef CONFIG_HIBERNATION
 static inline bool kernel_page_present(struct page *page) { return true; }
 #endif	/* CONFIG_HIBERNATION */
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index b44d4c7ba73b..e2b6043a4428 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -614,8 +614,7 @@ void generic_online_page(struct page *page, unsigned int order)
 	 * so we should map it first. This is better than introducing a special
 	 * case in page freeing fast path.
 	 */
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 1);
+	debug_pagealloc_map_pages(page, 1 << order, 1);
 	__free_pages_core(page, order);
 	totalram_pages_add(1UL << order);
 #ifdef CONFIG_HIGHMEM
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23f5066bd4a5..9a66a1ff9193 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1272,8 +1272,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	 */
 	arch_free_page(page, order);
 
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 0);
+	debug_pagealloc_map_pages(page, 1 << order, 0);
 
 	kasan_free_nondeferred_pages(page, order);
 
@@ -2270,8 +2269,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	set_page_refcounted(page);
 
 	arch_alloc_page(page, order);
-	if (debug_pagealloc_enabled_static())
-		kernel_map_pages(page, 1 << order, 1);
+	debug_pagealloc_map_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
 	kernel_poison_pages(page, 1 << order, 1);
 	set_page_owner(page, order, gfp_flags);
diff --git a/mm/slab.c b/mm/slab.c
index b1113561b98b..340db0ce74c4 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1431,10 +1431,8 @@ static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int map)
 {
-	if (!is_debug_pagealloc_cache(cachep))
-		return;
-
-	kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, map);
+	debug_pagealloc_map_pages(virt_to_page(objp),
+				  cachep->size / PAGE_SIZE, map);
 }
 
 #else
@@ -2062,7 +2060,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
 
 #if DEBUG
 	/*
-	 * If we're going to use the generic kernel_map_pages()
+	 * If we're going to use the generic debug_pagealloc_map_pages()
 	 * poisoning, then it's going to smash the contents of
 	 * the redzone and userword anyhow, so switch them off.
 	 */
-- 
2.28.0



* [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit
  2020-11-01 17:08 [PATCH v3 0/4] arch, mm: improve robustness of direct map manipulation Mike Rapoport
  2020-11-01 17:08 ` [PATCH v3 1/4] mm: introduce debug_pagealloc_map_pages() helper Mike Rapoport
@ 2020-11-01 17:08 ` Mike Rapoport
  2020-11-02  9:19   ` David Hildenbrand
  2020-11-03 11:08   ` Kirill A. Shutemov
  2020-11-01 17:08 ` [PATCH v3 3/4] arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC Mike Rapoport
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 16+ messages in thread
From: Mike Rapoport @ 2020-11-01 17:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt,
	Borislav Petkov, Catalin Marinas, Christian Borntraeger,
	Christoph Lameter, David S. Miller, Dave Hansen,
	David Hildenbrand, David Rientjes, Edgecombe, Rick P,
	H. Peter Anvin, Heiko Carstens, Ingo Molnar, Joonsoo Kim,
	Kirill A. Shutemov, Len Brown, Michael Ellerman, Mike Rapoport,
	Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley,
	Pavel Machek, Pekka Enberg, Peter Zijlstra, Rafael J. Wysocki,
	Thomas Gleixner, Vasily Gorbik, Will Deacon, linux-arm-kernel,
	linux-kernel, linux-mm, linux-pm, linux-riscv, linux-s390,
	linuxppc-dev, sparclinux, x86, Rafael J . Wysocki

From: Mike Rapoport <rppt@linux.ibm.com>

When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled, a page may
not be present in the direct map and has to be explicitly mapped before
it can be copied.

Introduce hibernate_map_page() that will explicitly use
set_direct_map_{default,invalid}_noflush() for ARCH_HAS_SET_DIRECT_MAP case
and debug_pagealloc_map_pages() for DEBUG_PAGEALLOC case.

The remapping of the pages in safe_copy_page() presumes that it only
changes protection bits in an existing PTE and so it is safe to ignore
the return value of set_direct_map_{default,invalid}_noflush().

Still, add a WARN_ON() so that future changes in set_memory APIs will not
silently break hibernation.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 include/linux/mm.h      | 12 ------------
 kernel/power/snapshot.c | 30 ++++++++++++++++++++++++++++--
 2 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1fc0609056dc..14e397f3752c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2927,16 +2927,6 @@ static inline bool debug_pagealloc_enabled_static(void)
 #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
 extern void __kernel_map_pages(struct page *page, int numpages, int enable);
 
-/*
- * When called in DEBUG_PAGEALLOC context, the call should most likely be
- * guarded by debug_pagealloc_enabled() or debug_pagealloc_enabled_static()
- */
-static inline void
-kernel_map_pages(struct page *page, int numpages, int enable)
-{
-	__kernel_map_pages(page, numpages, enable);
-}
-
 static inline void debug_pagealloc_map_pages(struct page *page,
 					     int numpages, int enable)
 {
@@ -2948,8 +2938,6 @@ static inline void debug_pagealloc_map_pages(struct page *page,
 extern bool kernel_page_present(struct page *page);
 #endif	/* CONFIG_HIBERNATION */
 #else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
-static inline void
-kernel_map_pages(struct page *page, int numpages, int enable) {}
 static inline void debug_pagealloc_map_pages(struct page *page,
 					     int numpages, int enable) {}
 #ifdef CONFIG_HIBERNATION
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 46b1804c1ddf..054c8cce4236 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -76,6 +76,32 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
 static inline void hibernate_restore_unprotect_page(void *page_address) {}
 #endif /* CONFIG_STRICT_KERNEL_RWX  && CONFIG_ARCH_HAS_SET_MEMORY */
 
+static inline void hibernate_map_page(struct page *page, int enable)
+{
+	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
+		unsigned long addr = (unsigned long)page_address(page);
+		int ret;
+
+		/*
+		 * This should not fail because remapping a page here means
+		 * that we only update protection bits in an existing PTE.
+		 * It is still worth to have WARN_ON() here if something
+		 * changes and this will no longer be the case.
+		 */
+		if (enable)
+			ret = set_direct_map_default_noflush(page);
+		else
+			ret = set_direct_map_invalid_noflush(page);
+
+		if (WARN_ON(ret))
+			return;
+
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+	} else {
+		debug_pagealloc_map_pages(page, 1, enable);
+	}
+}
+
 static int swsusp_page_is_free(struct page *);
 static void swsusp_set_page_forbidden(struct page *);
 static void swsusp_unset_page_forbidden(struct page *);
@@ -1355,9 +1381,9 @@ static void safe_copy_page(void *dst, struct page *s_page)
 	if (kernel_page_present(s_page)) {
 		do_copy_page(dst, page_address(s_page));
 	} else {
-		kernel_map_pages(s_page, 1, 1);
+		hibernate_map_page(s_page, 1);
 		do_copy_page(dst, page_address(s_page));
-		kernel_map_pages(s_page, 1, 0);
+		hibernate_map_page(s_page, 0);
 	}
 }
 
-- 
2.28.0



* [PATCH v3 3/4] arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC
  2020-11-01 17:08 [PATCH v3 0/4] arch, mm: improve robustness of direct map manipulation Mike Rapoport
  2020-11-01 17:08 ` [PATCH v3 1/4] mm: introduce debug_pagealloc_map_pages() helper Mike Rapoport
  2020-11-01 17:08 ` [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit Mike Rapoport
@ 2020-11-01 17:08 ` Mike Rapoport
  2020-11-02  9:23   ` David Hildenbrand
  2020-11-01 17:08 ` [PATCH v3 4/4] arch, mm: make kernel_page_present() always available Mike Rapoport
  2020-11-03 11:15 ` [PATCH v3 0/4] arch, mm: improve robustness of direct map manipulation Kirill A. Shutemov
  4 siblings, 1 reply; 16+ messages in thread
From: Mike Rapoport @ 2020-11-01 17:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt,
	Borislav Petkov, Catalin Marinas, Christian Borntraeger,
	Christoph Lameter, David S. Miller, Dave Hansen,
	David Hildenbrand, David Rientjes, Edgecombe, Rick P,
	H. Peter Anvin, Heiko Carstens, Ingo Molnar, Joonsoo Kim,
	Kirill A. Shutemov, Len Brown, Michael Ellerman, Mike Rapoport,
	Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley,
	Pavel Machek, Pekka Enberg, Peter Zijlstra, Rafael J. Wysocki,
	Thomas Gleixner, Vasily Gorbik, Will Deacon, linux-arm-kernel,
	linux-kernel, linux-mm, linux-pm, linux-riscv, linux-s390,
	linuxppc-dev, sparclinux, x86

From: Mike Rapoport <rppt@linux.ibm.com>

The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.

Moreover, some architectures that implement __kernel_map_pages() have this
function guarded by #ifdef DEBUG_PAGEALLOC and some refuse to map/unmap
pages when page allocation debugging is disabled at runtime.

As all the users of __kernel_map_pages() were converted to use
debug_pagealloc_map_pages(), it is safe to make it available only when
DEBUG_PAGEALLOC is set.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 arch/Kconfig                     |  3 +++
 arch/arm64/Kconfig               |  4 +---
 arch/arm64/mm/pageattr.c         |  8 ++++++--
 arch/powerpc/Kconfig             |  5 +----
 arch/riscv/Kconfig               |  4 +---
 arch/riscv/include/asm/pgtable.h |  2 --
 arch/riscv/mm/pageattr.c         |  2 ++
 arch/s390/Kconfig                |  4 +---
 arch/sparc/Kconfig               |  4 +---
 arch/x86/Kconfig                 |  4 +---
 arch/x86/mm/pat/set_memory.c     |  2 ++
 include/linux/mm.h               | 10 +++++++---
 12 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 56b6ccc0e32d..56d4752b6db6 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1028,6 +1028,9 @@ config HAVE_STATIC_CALL_INLINE
 	bool
 	depends on HAVE_STATIC_CALL
 
+config ARCH_SUPPORTS_DEBUG_PAGEALLOC
+	bool
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f858c352f72a..5a01dfb77b93 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -71,6 +71,7 @@ config ARM64
 	select ARCH_USE_QUEUED_RWLOCKS
 	select ARCH_USE_QUEUED_SPINLOCKS
 	select ARCH_USE_SYM_ANNOTATIONS
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select ARCH_SUPPORTS_MEMORY_FAILURE
 	select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK
 	select ARCH_SUPPORTS_ATOMIC_RMW
@@ -1005,9 +1006,6 @@ config HOLES_IN_ZONE
 
 source "kernel/Kconfig.hz"
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y
-
 config ARCH_SPARSEMEM_ENABLE
 	def_bool y
 	select SPARSEMEM_VMEMMAP_ENABLE
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 1b94f5b82654..439325532be1 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -155,7 +155,7 @@ int set_direct_map_invalid_noflush(struct page *page)
 		.clear_mask = __pgprot(PTE_VALID),
 	};
 
-	if (!rodata_full)
+	if (!debug_pagealloc_enabled() && !rodata_full)
 		return 0;
 
 	return apply_to_page_range(&init_mm,
@@ -170,7 +170,7 @@ int set_direct_map_default_noflush(struct page *page)
 		.clear_mask = __pgprot(PTE_RDONLY),
 	};
 
-	if (!rodata_full)
+	if (!debug_pagealloc_enabled() && !rodata_full)
 		return 0;
 
 	return apply_to_page_range(&init_mm,
@@ -178,6 +178,7 @@ int set_direct_map_default_noflush(struct page *page)
 				   PAGE_SIZE, change_page_range, &data);
 }
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (!debug_pagealloc_enabled() && !rodata_full)
@@ -186,6 +187,7 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 	set_memory_valid((unsigned long)page_address(page), numpages, enable);
 }
 
+#ifdef CONFIG_HIBERNATION
 /*
  * This function is used to determine if a linear map page has been marked as
  * not-valid. Walk the page table and check the PTE_VALID bit. This is based
@@ -232,3 +234,5 @@ bool kernel_page_present(struct page *page)
 	ptep = pte_offset_kernel(pmdp, addr);
 	return pte_valid(READ_ONCE(*ptep));
 }
+#endif /* CONFIG_HIBERNATION */
+#endif /* CONFIG_DEBUG_PAGEALLOC */
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index e9f13fe08492..ad8a83f3ddca 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -146,6 +146,7 @@ config PPC
 	select ARCH_MIGHT_HAVE_PC_SERIO
 	select ARCH_OPTIONAL_KERNEL_RWX		if ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC	if PPC32 || PPC_BOOK3S_64
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF		if PPC64
 	select ARCH_USE_QUEUED_RWLOCKS		if PPC_QUEUED_SPINLOCKS
@@ -355,10 +356,6 @@ config PPC_OF_PLATFORM_PCI
 	depends on PCI
 	depends on PPC64 # not supported on 32 bits yet
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	depends on PPC32 || PPC_BOOK3S_64
-	def_bool y
-
 config ARCH_SUPPORTS_UPROBES
 	def_bool y
 
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 44377fd7860e..9283c6f9ae2a 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -14,6 +14,7 @@ config RISCV
 	def_bool y
 	select ARCH_CLOCKSOURCE_INIT
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
 	select ARCH_HAS_BINFMT_FLAT
 	select ARCH_HAS_DEBUG_VM_PGTABLE
 	select ARCH_HAS_DEBUG_VIRTUAL if MMU
@@ -153,9 +154,6 @@ config ARCH_SELECT_MEMORY_MODEL
 config ARCH_WANT_GENERAL_HUGETLB
 	def_bool y
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y
-
 config SYS_SUPPORTS_HUGETLBFS
 	depends on MMU
 	def_bool y
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 183f1f4b2ae6..41a72861987c 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -461,8 +461,6 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
 #define VMALLOC_START		0
 #define VMALLOC_END		TASK_SIZE
 
-static inline void __kernel_map_pages(struct page *page, int numpages, int enable) {}
-
 #endif /* !CONFIG_MMU */
 
 #define kern_addr_valid(addr)   (1) /* FIXME */
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 19fecb362d81..321b09d2e2ea 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -184,6 +184,7 @@ int set_direct_map_default_noflush(struct page *page)
 	return ret;
 }
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (!debug_pagealloc_enabled())
@@ -196,3 +197,4 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 		__set_memory((unsigned long)page_address(page), numpages,
 			     __pgprot(0), __pgprot(_PAGE_PRESENT));
 }
+#endif
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 4a2a12be04c9..991a850a6c0b 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -35,9 +35,6 @@ config GENERIC_LOCKBREAK
 config PGSTE
 	def_bool y if KVM
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y
-
 config AUDIT_ARCH
 	def_bool y
 
@@ -106,6 +103,7 @@ config S390
 	select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE
 	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index a6ca135442f9..2c729b8d097a 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -88,6 +88,7 @@ config SPARC64
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_ARCH_AUDITSYSCALL
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select HAVE_NMI
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select ARCH_USE_QUEUED_RWLOCKS
@@ -148,9 +149,6 @@ config GENERIC_ISA_DMA
 	bool
 	default y if SPARC32
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y if SPARC64
-
 config PGTABLE_LEVELS
 	default 4 if 64BIT
 	default 3
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f6946b81f74a..0db3fb1da70c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -91,6 +91,7 @@ config X86
 	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ACPI
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_QUEUED_RWLOCKS
@@ -329,9 +330,6 @@ config ZONE_DMA32
 config AUDIT_ARCH
 	def_bool y if X86_64
 
-config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	def_bool y
-
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 40baa90e74f4..bc9be96b777f 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2194,6 +2194,7 @@ int set_direct_map_default_noflush(struct page *page)
 	return __set_pages_p(page, 1);
 }
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (PageHighMem(page))
@@ -2239,6 +2240,7 @@ bool kernel_page_present(struct page *page)
 	return (pte_val(*pte) & _PAGE_PRESENT);
 }
 #endif /* CONFIG_HIBERNATION */
+#endif /* CONFIG_DEBUG_PAGEALLOC */
 
 int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
 				   unsigned numpages, unsigned long page_flags)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 14e397f3752c..ab0ef6bd351d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2924,7 +2924,11 @@ static inline bool debug_pagealloc_enabled_static(void)
 	return static_branch_unlikely(&_debug_pagealloc_enabled);
 }
 
-#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
+#ifdef CONFIG_DEBUG_PAGEALLOC
+/*
+ * To support DEBUG_PAGEALLOC architecture must ensure that
+ * __kernel_map_pages() never fails
+ */
 extern void __kernel_map_pages(struct page *page, int numpages, int enable);
 
 static inline void debug_pagealloc_map_pages(struct page *page,
@@ -2937,13 +2941,13 @@ static inline void debug_pagealloc_map_pages(struct page *page,
 #ifdef CONFIG_HIBERNATION
 extern bool kernel_page_present(struct page *page);
 #endif	/* CONFIG_HIBERNATION */
-#else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+#else	/* CONFIG_DEBUG_PAGEALLOC */
 static inline void debug_pagealloc_map_pages(struct page *page,
 					     int numpages, int enable) {}
 #ifdef CONFIG_HIBERNATION
 static inline bool kernel_page_present(struct page *page) { return true; }
 #endif	/* CONFIG_HIBERNATION */
-#endif	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+#endif	/* CONFIG_DEBUG_PAGEALLOC */
 
 #ifdef __HAVE_ARCH_GATE_AREA
 extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm);
-- 
2.28.0



* [PATCH v3 4/4] arch, mm: make kernel_page_present() always available
  2020-11-01 17:08 [PATCH v3 0/4] arch, mm: improve robustness of direct map manipulation Mike Rapoport
                   ` (2 preceding siblings ...)
  2020-11-01 17:08 ` [PATCH v3 3/4] arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC Mike Rapoport
@ 2020-11-01 17:08 ` Mike Rapoport
  2020-11-02  9:28   ` David Hildenbrand
  2020-11-03 11:15 ` [PATCH v3 0/4] arch, mm: improve robustness of direct map manipulation Kirill A. Shutemov
  4 siblings, 1 reply; 16+ messages in thread
From: Mike Rapoport @ 2020-11-01 17:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt,
	Borislav Petkov, Catalin Marinas, Christian Borntraeger,
	Christoph Lameter, David S. Miller, Dave Hansen,
	David Hildenbrand, David Rientjes, Edgecombe, Rick P,
	H. Peter Anvin, Heiko Carstens, Ingo Molnar, Joonsoo Kim,
	Kirill A. Shutemov, Len Brown, Michael Ellerman, Mike Rapoport,
	Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley,
	Pavel Machek, Pekka Enberg, Peter Zijlstra, Rafael J. Wysocki,
	Thomas Gleixner, Vasily Gorbik, Will Deacon, linux-arm-kernel,
	linux-kernel, linux-mm, linux-pm, linux-riscv, linux-s390,
	linuxppc-dev, sparclinux, x86

From: Mike Rapoport <rppt@linux.ibm.com>

For architectures that enable ARCH_HAS_SET_MEMORY having the ability to
verify that a page is mapped in the kernel direct map can be useful
regardless of hibernation.

Add RISC-V implementation of kernel_page_present(), update its forward
declarations and stubs to be a part of the set_memory API and remove ugly
ifdefery in include/linux/mm.h around current declarations of
kernel_page_present().

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 arch/arm64/include/asm/cacheflush.h |  1 +
 arch/arm64/mm/pageattr.c            |  4 +---
 arch/riscv/include/asm/set_memory.h |  1 +
 arch/riscv/mm/pageattr.c            | 29 +++++++++++++++++++++++++++++
 arch/x86/include/asm/set_memory.h   |  1 +
 arch/x86/mm/pat/set_memory.c        |  4 +---
 include/linux/mm.h                  |  7 -------
 include/linux/set_memory.h          |  5 +++++
 8 files changed, 39 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 9384fd8fc13c..45217f21f1fe 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -140,6 +140,7 @@ int set_memory_valid(unsigned long addr, int numpages, int enable);
 
 int set_direct_map_invalid_noflush(struct page *page);
 int set_direct_map_default_noflush(struct page *page);
+bool kernel_page_present(struct page *page);
 
 #include <asm-generic/cacheflush.h>
 
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 439325532be1..92eccaf595c8 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -186,8 +186,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 
 	set_memory_valid((unsigned long)page_address(page), numpages, enable);
 }
+#endif /* CONFIG_DEBUG_PAGEALLOC */
 
-#ifdef CONFIG_HIBERNATION
 /*
  * This function is used to determine if a linear map page has been marked as
  * not-valid. Walk the page table and check the PTE_VALID bit. This is based
@@ -234,5 +234,3 @@ bool kernel_page_present(struct page *page)
 	ptep = pte_offset_kernel(pmdp, addr);
 	return pte_valid(READ_ONCE(*ptep));
 }
-#endif /* CONFIG_HIBERNATION */
-#endif /* CONFIG_DEBUG_PAGEALLOC */
diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
index 4c5bae7ca01c..d690b08dff2a 100644
--- a/arch/riscv/include/asm/set_memory.h
+++ b/arch/riscv/include/asm/set_memory.h
@@ -24,6 +24,7 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
 
 int set_direct_map_invalid_noflush(struct page *page);
 int set_direct_map_default_noflush(struct page *page);
+bool kernel_page_present(struct page *page);
 
 #endif /* __ASSEMBLY__ */
 
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 321b09d2e2ea..87ba5a68bbb8 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -198,3 +198,32 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 			     __pgprot(0), __pgprot(_PAGE_PRESENT));
 }
 #endif
+
+bool kernel_page_present(struct page *page)
+{
+	unsigned long addr = (unsigned long)page_address(page);
+	pgd_t *pgd;
+	pud_t *pud;
+	p4d_t *p4d;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pgd = pgd_offset_k(addr);
+	if (!pgd_present(*pgd))
+		return false;
+
+	p4d = p4d_offset(pgd, addr);
+	if (!p4d_present(*p4d))
+		return false;
+
+	pud = pud_offset(p4d, addr);
+	if (!pud_present(*pud))
+		return false;
+
+	pmd = pmd_offset(pud, addr);
+	if (!pmd_present(*pmd))
+		return false;
+
+	pte = pte_offset_kernel(pmd, addr);
+	return pte_present(*pte);
+}
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 5948218f35c5..4352f08bfbb5 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -82,6 +82,7 @@ int set_pages_rw(struct page *page, int numpages);
 
 int set_direct_map_invalid_noflush(struct page *page);
 int set_direct_map_default_noflush(struct page *page);
+bool kernel_page_present(struct page *page);
 
 extern int kernel_set_to_readonly;
 
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index bc9be96b777f..16f878c26667 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2226,8 +2226,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 
 	arch_flush_lazy_mmu_mode();
 }
+#endif /* CONFIG_DEBUG_PAGEALLOC */
 
-#ifdef CONFIG_HIBERNATION
 bool kernel_page_present(struct page *page)
 {
 	unsigned int level;
@@ -2239,8 +2239,6 @@ bool kernel_page_present(struct page *page)
 	pte = lookup_address((unsigned long)page_address(page), &level);
 	return (pte_val(*pte) & _PAGE_PRESENT);
 }
-#endif /* CONFIG_HIBERNATION */
-#endif /* CONFIG_DEBUG_PAGEALLOC */
 
 int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
 				   unsigned numpages, unsigned long page_flags)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ab0ef6bd351d..44b82f22e76a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2937,16 +2937,9 @@ static inline void debug_pagealloc_map_pages(struct page *page,
 	if (debug_pagealloc_enabled_static())
 		__kernel_map_pages(page, numpages, enable);
 }
-
-#ifdef CONFIG_HIBERNATION
-extern bool kernel_page_present(struct page *page);
-#endif	/* CONFIG_HIBERNATION */
 #else	/* CONFIG_DEBUG_PAGEALLOC */
 static inline void debug_pagealloc_map_pages(struct page *page,
 					     int numpages, int enable) {}
-#ifdef CONFIG_HIBERNATION
-static inline bool kernel_page_present(struct page *page) { return true; }
-#endif	/* CONFIG_HIBERNATION */
 #endif	/* CONFIG_DEBUG_PAGEALLOC */
 
 #ifdef __HAVE_ARCH_GATE_AREA
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index 860e0f843c12..fe1aa4e54680 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -23,6 +23,11 @@ static inline int set_direct_map_default_noflush(struct page *page)
 {
 	return 0;
 }
+
+static inline bool kernel_page_present(struct page *page)
+{
+	return true;
+}
 #endif
 
 #ifndef set_mce_nospec
-- 
2.28.0



* Re: [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit
  2020-11-01 17:08 ` [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit Mike Rapoport
@ 2020-11-02  9:19   ` David Hildenbrand
  2020-11-02 15:12     ` Mike Rapoport
  2020-11-03 11:08   ` Kirill A. Shutemov
  1 sibling, 1 reply; 16+ messages in thread
From: David Hildenbrand @ 2020-11-02  9:19 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton
  Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt,
	Borislav Petkov, Catalin Marinas, Christian Borntraeger,
	Christoph Lameter, David S. Miller, Dave Hansen, David Rientjes,
	Edgecombe, Rick P, H. Peter Anvin, Heiko Carstens, Ingo Molnar,
	Joonsoo Kim, Kirill A. Shutemov, Len Brown, Michael Ellerman,
	Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley,
	Pavel Machek, Pekka Enberg, Peter Zijlstra, Rafael J. Wysocki,
	Thomas Gleixner, Vasily Gorbik, Will Deacon, linux-arm-kernel,
	linux-kernel, linux-mm, linux-pm, linux-riscv, linux-s390,
	linuxppc-dev, sparclinux, x86, Rafael J . Wysocki

On 01.11.20 18:08, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled, a page may
> not be present in the direct map and has to be explicitly mapped before
> it can be copied.
> 
> Introduce hibernate_map_page() that will explicitly use
> set_direct_map_{default,invalid}_noflush() for ARCH_HAS_SET_DIRECT_MAP case
> and debug_pagealloc_map_pages() for DEBUG_PAGEALLOC case.
> 
> The remapping of the pages in safe_copy_page() presumes that it only
> changes protection bits in an existing PTE and so it is safe to ignore
> the return value of set_direct_map_{default,invalid}_noflush().
> 
> Still, add a WARN_ON() so that future changes in set_memory APIs will not
> silently break hibernation.
> 
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>   include/linux/mm.h      | 12 ------------
>   kernel/power/snapshot.c | 30 ++++++++++++++++++++++++++++--
>   2 files changed, 28 insertions(+), 14 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 1fc0609056dc..14e397f3752c 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2927,16 +2927,6 @@ static inline bool debug_pagealloc_enabled_static(void)
>   #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
>   extern void __kernel_map_pages(struct page *page, int numpages, int enable);
>   
> -/*
> - * When called in DEBUG_PAGEALLOC context, the call should most likely be
> - * guarded by debug_pagealloc_enabled() or debug_pagealloc_enabled_static()
> - */
> -static inline void
> -kernel_map_pages(struct page *page, int numpages, int enable)
> -{
> -	__kernel_map_pages(page, numpages, enable);
> -}
> -
>   static inline void debug_pagealloc_map_pages(struct page *page,
>   					     int numpages, int enable)
>   {
> @@ -2948,8 +2938,6 @@ static inline void debug_pagealloc_map_pages(struct page *page,
>   extern bool kernel_page_present(struct page *page);
>   #endif	/* CONFIG_HIBERNATION */
>   #else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
> -static inline void
> -kernel_map_pages(struct page *page, int numpages, int enable) {}
>   static inline void debug_pagealloc_map_pages(struct page *page,
>   					     int numpages, int enable) {}
>   #ifdef CONFIG_HIBERNATION
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 46b1804c1ddf..054c8cce4236 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -76,6 +76,32 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
>   static inline void hibernate_restore_unprotect_page(void *page_address) {}
>   #endif /* CONFIG_STRICT_KERNEL_RWX  && CONFIG_ARCH_HAS_SET_MEMORY */
>   
> +static inline void hibernate_map_page(struct page *page, int enable)
> +{
> +	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
> +		unsigned long addr = (unsigned long)page_address(page);
> +		int ret;
> +
> +		/*
> +		 * This should not fail because remapping a page here means
> +		 * that we only update protection bits in an existing PTE.
> +		 * It is still worth to have WARN_ON() here if something
> +		 * changes and this will no longer be the case.
> +		 */
> +		if (enable)
> +			ret = set_direct_map_default_noflush(page);
> +		else
> +			ret = set_direct_map_invalid_noflush(page);
> +
> +		if (WARN_ON(ret))
> +			return;

People seem to prefer pr_warn() now that production kernels have panic 
on warn enabled. It's weird.

> +
> +		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +	} else {
> +		debug_pagealloc_map_pages(page, 1, enable);

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



* Re: [PATCH v3 3/4] arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC
  2020-11-01 17:08 ` [PATCH v3 3/4] arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC Mike Rapoport
@ 2020-11-02  9:23   ` David Hildenbrand
  2020-11-02 15:15     ` Mike Rapoport
  0 siblings, 1 reply; 16+ messages in thread
From: David Hildenbrand @ 2020-11-02  9:23 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton
  Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt,
	Borislav Petkov, Catalin Marinas, Christian Borntraeger,
	Christoph Lameter, David S. Miller, Dave Hansen, David Rientjes,
	Edgecombe, Rick P, H. Peter Anvin, Heiko Carstens, Ingo Molnar,
	Joonsoo Kim, Kirill A. Shutemov, Len Brown, Michael Ellerman,
	Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley,
	Pavel Machek, Pekka Enberg, Peter Zijlstra, Rafael J. Wysocki,
	Thomas Gleixner, Vasily Gorbik, Will Deacon, linux-arm-kernel,
	linux-kernel, linux-mm, linux-pm, linux-riscv, linux-s390,
	linuxppc-dev, sparclinux, x86


>   int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
>   				   unsigned numpages, unsigned long page_flags)
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 14e397f3752c..ab0ef6bd351d 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2924,7 +2924,11 @@ static inline bool debug_pagealloc_enabled_static(void)
>   	return static_branch_unlikely(&_debug_pagealloc_enabled);
>   }
>   
> -#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
> +#ifdef CONFIG_DEBUG_PAGEALLOC
> +/*
> + * To support DEBUG_PAGEALLOC architecture must ensure that
> + * __kernel_map_pages() never fails

Maybe add here, that this implies mapping everything via PTEs during boot.
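
E.g. something like this (wording mine, adjust as you see fit):

/*
 * To support DEBUG_PAGEALLOC architecture must ensure that
 * __kernel_map_pages() never fails. In practice this means the direct
 * map has to be mapped with base-page PTEs during boot, so that mapping
 * or unmapping a single page never has to split a large mapping (and
 * thus never needs a page table allocation that could fail).
 */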

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



* Re: [PATCH v3 4/4] arch, mm: make kernel_page_present() always available
  2020-11-01 17:08 ` [PATCH v3 4/4] arch, mm: make kernel_page_present() always available Mike Rapoport
@ 2020-11-02  9:28   ` David Hildenbrand
  2020-11-02 15:18     ` Mike Rapoport
  0 siblings, 1 reply; 16+ messages in thread
From: David Hildenbrand @ 2020-11-02  9:28 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton
  Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt,
	Borislav Petkov, Catalin Marinas, Christian Borntraeger,
	Christoph Lameter, David S. Miller, Dave Hansen, David Rientjes,
	Edgecombe, Rick P, H. Peter Anvin, Heiko Carstens, Ingo Molnar,
	Joonsoo Kim, Kirill A. Shutemov, Len Brown, Michael Ellerman,
	Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley,
	Pavel Machek, Pekka Enberg, Peter Zijlstra, Rafael J. Wysocki,
	Thomas Gleixner, Vasily Gorbik, Will Deacon, linux-arm-kernel,
	linux-kernel, linux-mm, linux-pm, linux-riscv, linux-s390,
	linuxppc-dev, sparclinux, x86

On 01.11.20 18:08, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> For architectures that enable ARCH_HAS_SET_MEMORY having the ability to
> verify that a page is mapped in the kernel direct map can be useful
> regardless of hibernation.
> 
> Add RISC-V implementation of kernel_page_present(), update its forward
> declarations and stubs to be a part of the set_memory API and remove ugly
> ifdefery in include/linux/mm.h around current declarations of
> kernel_page_present().
> 
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
>   arch/arm64/include/asm/cacheflush.h |  1 +
>   arch/arm64/mm/pageattr.c            |  4 +---
>   arch/riscv/include/asm/set_memory.h |  1 +
>   arch/riscv/mm/pageattr.c            | 29 +++++++++++++++++++++++++++++
>   arch/x86/include/asm/set_memory.h   |  1 +
>   arch/x86/mm/pat/set_memory.c        |  4 +---
>   include/linux/mm.h                  |  7 -------
>   include/linux/set_memory.h          |  5 +++++
>   8 files changed, 39 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 9384fd8fc13c..45217f21f1fe 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -140,6 +140,7 @@ int set_memory_valid(unsigned long addr, int numpages, int enable);
>   
>   int set_direct_map_invalid_noflush(struct page *page);
>   int set_direct_map_default_noflush(struct page *page);
> +bool kernel_page_present(struct page *page);
>   
>   #include <asm-generic/cacheflush.h>
>   
> diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> index 439325532be1..92eccaf595c8 100644
> --- a/arch/arm64/mm/pageattr.c
> +++ b/arch/arm64/mm/pageattr.c
> @@ -186,8 +186,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
>   
>   	set_memory_valid((unsigned long)page_address(page), numpages, enable);
>   }
> +#endif /* CONFIG_DEBUG_PAGEALLOC */
>   
> -#ifdef CONFIG_HIBERNATION
>   /*
>    * This function is used to determine if a linear map page has been marked as
>    * not-valid. Walk the page table and check the PTE_VALID bit. This is based
> @@ -234,5 +234,3 @@ bool kernel_page_present(struct page *page)
>   	ptep = pte_offset_kernel(pmdp, addr);
>   	return pte_valid(READ_ONCE(*ptep));
>   }
> -#endif /* CONFIG_HIBERNATION */
> -#endif /* CONFIG_DEBUG_PAGEALLOC */
> diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> index 4c5bae7ca01c..d690b08dff2a 100644
> --- a/arch/riscv/include/asm/set_memory.h
> +++ b/arch/riscv/include/asm/set_memory.h
> @@ -24,6 +24,7 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
>   
>   int set_direct_map_invalid_noflush(struct page *page);
>   int set_direct_map_default_noflush(struct page *page);
> +bool kernel_page_present(struct page *page);
>   
>   #endif /* __ASSEMBLY__ */
>   
> diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> index 321b09d2e2ea..87ba5a68bbb8 100644
> --- a/arch/riscv/mm/pageattr.c
> +++ b/arch/riscv/mm/pageattr.c
> @@ -198,3 +198,32 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
>   			     __pgprot(0), __pgprot(_PAGE_PRESENT));
>   }
>   #endif
> +
> +bool kernel_page_present(struct page *page)
> +{
> +	unsigned long addr = (unsigned long)page_address(page);
> +	pgd_t *pgd;
> +	pud_t *pud;
> +	p4d_t *p4d;
> +	pmd_t *pmd;
> +	pte_t *pte;
> +
> +	pgd = pgd_offset_k(addr);
> +	if (!pgd_present(*pgd))
> +		return false;
> +
> +	p4d = p4d_offset(pgd, addr);
> +	if (!p4d_present(*p4d))
> +		return false;
> +
> +	pud = pud_offset(p4d, addr);
> +	if (!pud_present(*pud))
> +		return false;
> +
> +	pmd = pmd_offset(pud, addr);
> +	if (!pmd_present(*pmd))
> +		return false;
> +
> +	pte = pte_offset_kernel(pmd, addr);
> +	return pte_present(*pte);
> +}
> diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
> index 5948218f35c5..4352f08bfbb5 100644
> --- a/arch/x86/include/asm/set_memory.h
> +++ b/arch/x86/include/asm/set_memory.h
> @@ -82,6 +82,7 @@ int set_pages_rw(struct page *page, int numpages);
>   
>   int set_direct_map_invalid_noflush(struct page *page);
>   int set_direct_map_default_noflush(struct page *page);
> +bool kernel_page_present(struct page *page);
>   
>   extern int kernel_set_to_readonly;
>   
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index bc9be96b777f..16f878c26667 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -2226,8 +2226,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
>   
>   	arch_flush_lazy_mmu_mode();
>   }
> +#endif /* CONFIG_DEBUG_PAGEALLOC */
>   
> -#ifdef CONFIG_HIBERNATION
>   bool kernel_page_present(struct page *page)
>   {
>   	unsigned int level;
> @@ -2239,8 +2239,6 @@ bool kernel_page_present(struct page *page)
>   	pte = lookup_address((unsigned long)page_address(page), &level);
>   	return (pte_val(*pte) & _PAGE_PRESENT);
>   }
> -#endif /* CONFIG_HIBERNATION */
> -#endif /* CONFIG_DEBUG_PAGEALLOC */
>   
>   int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
>   				   unsigned numpages, unsigned long page_flags)
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ab0ef6bd351d..44b82f22e76a 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2937,16 +2937,9 @@ static inline void debug_pagealloc_map_pages(struct page *page,
>   	if (debug_pagealloc_enabled_static())
>   		__kernel_map_pages(page, numpages, enable);
>   }
> -
> -#ifdef CONFIG_HIBERNATION
> -extern bool kernel_page_present(struct page *page);
> -#endif	/* CONFIG_HIBERNATION */
>   #else	/* CONFIG_DEBUG_PAGEALLOC */
>   static inline void debug_pagealloc_map_pages(struct page *page,
>   					     int numpages, int enable) {}
> -#ifdef CONFIG_HIBERNATION
> -static inline bool kernel_page_present(struct page *page) { return true; }
> -#endif	/* CONFIG_HIBERNATION */
>   #endif	/* CONFIG_DEBUG_PAGEALLOC */
>   
>   #ifdef __HAVE_ARCH_GATE_AREA
> diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
> index 860e0f843c12..fe1aa4e54680 100644
> --- a/include/linux/set_memory.h
> +++ b/include/linux/set_memory.h
> @@ -23,6 +23,11 @@ static inline int set_direct_map_default_noflush(struct page *page)
>   {
>   	return 0;
>   }
> +
> +static inline bool kernel_page_present(struct page *page)
> +{
> +	return true;
> +}
>   #endif
>   
>   #ifndef set_mce_nospec
> 

It's somewhat weird to move this to set_memory.h - it's only one 
possible user. I think include/linux/mm.h is a better fit. Ack to making 
it independent of CONFIG_HIBERNATION.

in include/linux/mm.h , I'd prefer:

#if defined(CONFIG_DEBUG_PAGEALLOC) || \
     defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
bool kernel_page_present(struct page *page);
#else
static inline bool kernel_page_present(struct page *page)
{
	return true;
}
#endif

-- 
Thanks,

David / dhildenb



* Re: [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit
  2020-11-02  9:19   ` David Hildenbrand
@ 2020-11-02 15:12     ` Mike Rapoport
  0 siblings, 0 replies; 16+ messages in thread
From: Mike Rapoport @ 2020-11-02 15:12 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Albert Ou, Andy Lutomirski,
	Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas,
	Christian Borntraeger, Christoph Lameter, David S. Miller,
	Dave Hansen, David Rientjes, Edgecombe, Rick P, H. Peter Anvin,
	Heiko Carstens, Ingo Molnar, Joonsoo Kim, Kirill A. Shutemov,
	Len Brown, Michael Ellerman, Mike Rapoport, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg,
	Peter Zijlstra, Rafael J. Wysocki, Thomas Gleixner,
	Vasily Gorbik, Will Deacon, linux-arm-kernel, linux-kernel,
	linux-mm, linux-pm, linux-riscv, linux-s390, linuxppc-dev,
	sparclinux, x86, Rafael J . Wysocki

On Mon, Nov 02, 2020 at 10:19:36AM +0100, David Hildenbrand wrote:
> On 01.11.20 18:08, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> > 
> > When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled, a page may
> > not be present in the direct map and has to be explicitly mapped before
> > it can be copied.
> > 
> > Introduce hibernate_map_page() that will explicitly use
> > set_direct_map_{default,invalid}_noflush() for ARCH_HAS_SET_DIRECT_MAP case
> > and debug_pagealloc_map_pages() for DEBUG_PAGEALLOC case.
> > 
> > The remapping of the pages in safe_copy_page() presumes that it only
> > changes protection bits in an existing PTE and so it is safe to ignore
> > the return value of set_direct_map_{default,invalid}_noflush().
> > 
> > Still, add a WARN_ON() so that future changes in set_memory APIs will not
> > silently break hibernation.
> > 
> > Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> > Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > ---
> >   include/linux/mm.h      | 12 ------------
> >   kernel/power/snapshot.c | 30 ++++++++++++++++++++++++++++--
> >   2 files changed, 28 insertions(+), 14 deletions(-)
> > 
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 1fc0609056dc..14e397f3752c 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2927,16 +2927,6 @@ static inline bool debug_pagealloc_enabled_static(void)
> >   #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
> >   extern void __kernel_map_pages(struct page *page, int numpages, int enable);
> > -/*
> > - * When called in DEBUG_PAGEALLOC context, the call should most likely be
> > - * guarded by debug_pagealloc_enabled() or debug_pagealloc_enabled_static()
> > - */
> > -static inline void
> > -kernel_map_pages(struct page *page, int numpages, int enable)
> > -{
> > -	__kernel_map_pages(page, numpages, enable);
> > -}
> > -
> >   static inline void debug_pagealloc_map_pages(struct page *page,
> >   					     int numpages, int enable)
> >   {
> > @@ -2948,8 +2938,6 @@ static inline void debug_pagealloc_map_pages(struct page *page,
> >   extern bool kernel_page_present(struct page *page);
> >   #endif	/* CONFIG_HIBERNATION */
> >   #else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
> > -static inline void
> > -kernel_map_pages(struct page *page, int numpages, int enable) {}
> >   static inline void debug_pagealloc_map_pages(struct page *page,
> >   					     int numpages, int enable) {}
> >   #ifdef CONFIG_HIBERNATION
> > diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> > index 46b1804c1ddf..054c8cce4236 100644
> > --- a/kernel/power/snapshot.c
> > +++ b/kernel/power/snapshot.c
> > @@ -76,6 +76,32 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
> >   static inline void hibernate_restore_unprotect_page(void *page_address) {}
> >   #endif /* CONFIG_STRICT_KERNEL_RWX  && CONFIG_ARCH_HAS_SET_MEMORY */
> > +static inline void hibernate_map_page(struct page *page, int enable)
> > +{
> > +	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
> > +		unsigned long addr = (unsigned long)page_address(page);
> > +		int ret;
> > +
> > +		/*
> > +		 * This should not fail because remapping a page here means
> > +		 * that we only update protection bits in an existing PTE.
> > +		 * It is still worth to have WARN_ON() here if something
> > +		 * changes and this will no longer be the case.
> > +		 */
> > +		if (enable)
> > +			ret = set_direct_map_default_noflush(page);
> > +		else
> > +			ret = set_direct_map_invalid_noflush(page);
> > +
> > +		if (WARN_ON(ret))
> > +			return;
> 
> People seem to prefer pr_warn() now that production kernels have panic on
> warn enabled. It's weird.

Weird indeed, as the whole point of WARN is to yell without causing a
crash...
I can change to pr_warn though...
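
i.e. something along these lines (untested):

		if (ret) {
			pr_warn("%s: failed to remap the page at %#lx: %d\n",
				__func__, addr, ret);
			return;
		}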

> > +
> > +		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> > +	} else {
> > +		debug_pagealloc_map_pages(page, 1, enable);
> 
> Reviewed-by: David Hildenbrand <david@redhat.com>

Thanks!

> -- 
> Thanks,
> 
> David / dhildenb
> 

-- 
Sincerely yours,
Mike.


* Re: [PATCH v3 3/4] arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC
  2020-11-02  9:23   ` David Hildenbrand
@ 2020-11-02 15:15     ` Mike Rapoport
  0 siblings, 0 replies; 16+ messages in thread
From: Mike Rapoport @ 2020-11-02 15:15 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Albert Ou, Andy Lutomirski,
	Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas,
	Christian Borntraeger, Christoph Lameter, David S. Miller,
	Dave Hansen, David Rientjes, Edgecombe, Rick P, H. Peter Anvin,
	Heiko Carstens, Ingo Molnar, Joonsoo Kim, Kirill A. Shutemov,
	Len Brown, Michael Ellerman, Mike Rapoport, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg,
	Peter Zijlstra, Rafael J. Wysocki, Thomas Gleixner,
	Vasily Gorbik, Will Deacon, linux-arm-kernel, linux-kernel,
	linux-mm, linux-pm, linux-riscv, linux-s390, linuxppc-dev,
	sparclinux, x86

On Mon, Nov 02, 2020 at 10:23:20AM +0100, David Hildenbrand wrote:
> 
> >   int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
> >   				   unsigned numpages, unsigned long page_flags)
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 14e397f3752c..ab0ef6bd351d 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2924,7 +2924,11 @@ static inline bool debug_pagealloc_enabled_static(void)
> >   	return static_branch_unlikely(&_debug_pagealloc_enabled);
> >   }
> > -#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
> > +#ifdef CONFIG_DEBUG_PAGEALLOC
> > +/*
> > + * To support DEBUG_PAGEALLOC, the architecture must ensure that
> > + * __kernel_map_pages() never fails
> 
> Maybe add here that this implies mapping everything via PTEs during boot.

This is more of an implementation detail, while the assumption that
__kernel_map_pages() does not fail is somewhat a requirement :)
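
To illustrate the distinction: with everything mapped via base-page
PTEs, the update boils down to rewriting an existing PTE, with no page
table allocation on the way, so there is nothing to fail. A rough
x86-flavoured sketch (the helper name is made up for illustration):

	/* assumes the page is already mapped by a base-page PTE */
	static void toggle_page_present(pte_t *ptep, int enable)
	{
		pte_t pte = READ_ONCE(*ptep);

		if (enable)
			set_pte(ptep, __pte(pte_val(pte) | _PAGE_PRESENT));
		else
			set_pte(ptep, __pte(pte_val(pte) & ~_PAGE_PRESENT));
	}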

> Acked-by: David Hildenbrand <david@redhat.com>

Thanks!

> -- 
> Thanks,
> 
> David / dhildenb
> 

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 4/4] arch, mm: make kernel_page_present() always available
  2020-11-02  9:28   ` David Hildenbrand
@ 2020-11-02 15:18     ` Mike Rapoport
  0 siblings, 0 replies; 16+ messages in thread
From: Mike Rapoport @ 2020-11-02 15:18 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Albert Ou, Andy Lutomirski,
	Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas,
	Christian Borntraeger, Christoph Lameter, David S. Miller,
	Dave Hansen, David Rientjes, Edgecombe, Rick P, H. Peter Anvin,
	Heiko Carstens, Ingo Molnar, Joonsoo Kim, Kirill A. Shutemov,
	Len Brown, Michael Ellerman, Mike Rapoport, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg,
	Peter Zijlstra, Rafael J. Wysocki, Thomas Gleixner,
	Vasily Gorbik, Will Deacon, linux-arm-kernel, linux-kernel,
	linux-mm, linux-pm, linux-riscv, linux-s390, linuxppc-dev,
	sparclinux, x86

On Mon, Nov 02, 2020 at 10:28:14AM +0100, David Hildenbrand wrote:
> On 01.11.20 18:08, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> > 
> > For architectures that enable ARCH_HAS_SET_MEMORY, having the ability
> > to verify that a page is mapped in the kernel direct map can be useful
> > regardless of hibernation.
> > 
> > Add a RISC-V implementation of kernel_page_present(), update its
> > forward declarations and stubs to be part of the set_memory API, and
> > remove the ugly ifdefery in include/linux/mm.h around the current
> > declarations of kernel_page_present().
> > 
> > Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> > ---
> >   arch/arm64/include/asm/cacheflush.h |  1 +
> >   arch/arm64/mm/pageattr.c            |  4 +---
> >   arch/riscv/include/asm/set_memory.h |  1 +
> >   arch/riscv/mm/pageattr.c            | 29 +++++++++++++++++++++++++++++
> >   arch/x86/include/asm/set_memory.h   |  1 +
> >   arch/x86/mm/pat/set_memory.c        |  4 +---
> >   include/linux/mm.h                  |  7 -------
> >   include/linux/set_memory.h          |  5 +++++
> >   8 files changed, 39 insertions(+), 13 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> > index 9384fd8fc13c..45217f21f1fe 100644
> > --- a/arch/arm64/include/asm/cacheflush.h
> > +++ b/arch/arm64/include/asm/cacheflush.h
> > @@ -140,6 +140,7 @@ int set_memory_valid(unsigned long addr, int numpages, int enable);
> >   int set_direct_map_invalid_noflush(struct page *page);
> >   int set_direct_map_default_noflush(struct page *page);
> > +bool kernel_page_present(struct page *page);
> >   #include <asm-generic/cacheflush.h>
> > diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
> > index 439325532be1..92eccaf595c8 100644
> > --- a/arch/arm64/mm/pageattr.c
> > +++ b/arch/arm64/mm/pageattr.c
> > @@ -186,8 +186,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
> >   	set_memory_valid((unsigned long)page_address(page), numpages, enable);
> >   }
> > +#endif /* CONFIG_DEBUG_PAGEALLOC */
> > -#ifdef CONFIG_HIBERNATION
> >   /*
> >    * This function is used to determine if a linear map page has been marked as
> >    * not-valid. Walk the page table and check the PTE_VALID bit. This is based
> > @@ -234,5 +234,3 @@ bool kernel_page_present(struct page *page)
> >   	ptep = pte_offset_kernel(pmdp, addr);
> >   	return pte_valid(READ_ONCE(*ptep));
> >   }
> > -#endif /* CONFIG_HIBERNATION */
> > -#endif /* CONFIG_DEBUG_PAGEALLOC */
> > diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> > index 4c5bae7ca01c..d690b08dff2a 100644
> > --- a/arch/riscv/include/asm/set_memory.h
> > +++ b/arch/riscv/include/asm/set_memory.h
> > @@ -24,6 +24,7 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
> >   int set_direct_map_invalid_noflush(struct page *page);
> >   int set_direct_map_default_noflush(struct page *page);
> > +bool kernel_page_present(struct page *page);
> >   #endif /* __ASSEMBLY__ */
> > diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> > index 321b09d2e2ea..87ba5a68bbb8 100644
> > --- a/arch/riscv/mm/pageattr.c
> > +++ b/arch/riscv/mm/pageattr.c
> > @@ -198,3 +198,32 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
> >   			     __pgprot(0), __pgprot(_PAGE_PRESENT));
> >   }
> >   #endif
> > +
> > +bool kernel_page_present(struct page *page)
> > +{
> > +	unsigned long addr = (unsigned long)page_address(page);
> > +	pgd_t *pgd;
> > +	pud_t *pud;
> > +	p4d_t *p4d;
> > +	pmd_t *pmd;
> > +	pte_t *pte;
> > +
> > +	pgd = pgd_offset_k(addr);
> > +	if (!pgd_present(*pgd))
> > +		return false;
> > +
> > +	p4d = p4d_offset(pgd, addr);
> > +	if (!p4d_present(*p4d))
> > +		return false;
> > +
> > +	pud = pud_offset(p4d, addr);
> > +	if (!pud_present(*pud))
> > +		return false;
> > +
> > +	pmd = pmd_offset(pud, addr);
> > +	if (!pmd_present(*pmd))
> > +		return false;
> > +
> > +	pte = pte_offset_kernel(pmd, addr);
> > +	return pte_present(*pte);
> > +}
> > diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
> > index 5948218f35c5..4352f08bfbb5 100644
> > --- a/arch/x86/include/asm/set_memory.h
> > +++ b/arch/x86/include/asm/set_memory.h
> > @@ -82,6 +82,7 @@ int set_pages_rw(struct page *page, int numpages);
> >   int set_direct_map_invalid_noflush(struct page *page);
> >   int set_direct_map_default_noflush(struct page *page);
> > +bool kernel_page_present(struct page *page);
> >   extern int kernel_set_to_readonly;
> > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> > index bc9be96b777f..16f878c26667 100644
> > --- a/arch/x86/mm/pat/set_memory.c
> > +++ b/arch/x86/mm/pat/set_memory.c
> > @@ -2226,8 +2226,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
> >   	arch_flush_lazy_mmu_mode();
> >   }
> > +#endif /* CONFIG_DEBUG_PAGEALLOC */
> > -#ifdef CONFIG_HIBERNATION
> >   bool kernel_page_present(struct page *page)
> >   {
> >   	unsigned int level;
> > @@ -2239,8 +2239,6 @@ bool kernel_page_present(struct page *page)
> >   	pte = lookup_address((unsigned long)page_address(page), &level);
> >   	return (pte_val(*pte) & _PAGE_PRESENT);
> >   }
> > -#endif /* CONFIG_HIBERNATION */
> > -#endif /* CONFIG_DEBUG_PAGEALLOC */
> >   int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
> >   				   unsigned numpages, unsigned long page_flags)
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index ab0ef6bd351d..44b82f22e76a 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2937,16 +2937,9 @@ static inline void debug_pagealloc_map_pages(struct page *page,
> >   	if (debug_pagealloc_enabled_static())
> >   		__kernel_map_pages(page, numpages, enable);
> >   }
> > -
> > -#ifdef CONFIG_HIBERNATION
> > -extern bool kernel_page_present(struct page *page);
> > -#endif	/* CONFIG_HIBERNATION */
> >   #else	/* CONFIG_DEBUG_PAGEALLOC */
> >   static inline void debug_pagealloc_map_pages(struct page *page,
> >   					     int numpages, int enable) {}
> > -#ifdef CONFIG_HIBERNATION
> > -static inline bool kernel_page_present(struct page *page) { return true; }
> > -#endif	/* CONFIG_HIBERNATION */
> >   #endif	/* CONFIG_DEBUG_PAGEALLOC */
> >   #ifdef __HAVE_ARCH_GATE_AREA
> > diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
> > index 860e0f843c12..fe1aa4e54680 100644
> > --- a/include/linux/set_memory.h
> > +++ b/include/linux/set_memory.h
> > @@ -23,6 +23,11 @@ static inline int set_direct_map_default_noflush(struct page *page)
> >   {
> >   	return 0;
> >   }
> > +
> > +static inline bool kernel_page_present(struct page *page)
> > +{
> > +	return true;
> > +}
> >   #endif
> >   #ifndef set_mce_nospec
> > 
> 
> It's somewhat weird to move this to set_memory.h - it's only one possible
> user. I think include/linux/mm.h is a better fit. Ack to making it
> independent of CONFIG_HIBERNATION.

Semantically this is part of direct map manipulation; that's primarily
why I put it into set_memory.h.
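
Besides, from the caller's side it already reads as part of the direct
map API; e.g. the hibernation call site with patch 2 applied (a sketch
reconstructed from that patch, not a new hunk):

	static void safe_copy_page(void *dst, struct page *s_page)
	{
		if (kernel_page_present(s_page)) {
			do_copy_page(dst, page_address(s_page));
		} else {
			hibernate_map_page(s_page, 1);
			do_copy_page(dst, page_address(s_page));
			hibernate_map_page(s_page, 0);
		}
	}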

> in include/linux/mm.h , I'd prefer:
> 
> #if defined(CONFIG_DEBUG_PAGEALLOC) || \
>     defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)

The second reason was to avoid this ^
and the third is -7 lines in include/linux/mm.h :)

> bool kernel_page_present(struct page *page);
> #else
> static inline bool kernel_page_present(struct page *page)
> {
> 	return true;
> }
> #endif
> 
> -- 
> Thanks,
> 
> David / dhildenb
> 
> 

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit
  2020-11-01 17:08 ` [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit Mike Rapoport
  2020-11-02  9:19   ` David Hildenbrand
@ 2020-11-03 11:08   ` Kirill A. Shutemov
  2020-11-03 12:13     ` Mike Rapoport
  1 sibling, 1 reply; 16+ messages in thread
From: Kirill A. Shutemov @ 2020-11-03 11:08 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Albert Ou, Andy Lutomirski,
	Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas,
	Christian Borntraeger, Christoph Lameter, David S. Miller,
	Dave Hansen, David Hildenbrand, David Rientjes, Edgecombe,
	Rick P, H. Peter Anvin, Heiko Carstens, Ingo Molnar, Joonsoo Kim,
	Len Brown, Michael Ellerman, Mike Rapoport, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg,
	Peter Zijlstra, Rafael J. Wysocki, Thomas Gleixner,
	Vasily Gorbik, Will Deacon, linux-arm-kernel, linux-kernel,
	linux-mm, linux-pm, linux-riscv, linux-s390, linuxppc-dev,
	sparclinux, x86, Rafael J . Wysocki

On Sun, Nov 01, 2020 at 07:08:13PM +0200, Mike Rapoport wrote:
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 46b1804c1ddf..054c8cce4236 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -76,6 +76,32 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
>  static inline void hibernate_restore_unprotect_page(void *page_address) {}
>  #endif /* CONFIG_STRICT_KERNEL_RWX  && CONFIG_ARCH_HAS_SET_MEMORY */
>  
> +static inline void hibernate_map_page(struct page *page, int enable)
> +{
> +	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
> +		unsigned long addr = (unsigned long)page_address(page);
> +		int ret;
> +
> +		/*
> +		 * This should not fail because remapping a page here means
> +		 * that we only update protection bits in an existing PTE.
> +		 * It is still worth having WARN_ON() here in case something
> +		 * changes and this is no longer true.
> +		 */
> +		if (enable)
> +			ret = set_direct_map_default_noflush(page);
> +		else
> +			ret = set_direct_map_invalid_noflush(page);
> +
> +		if (WARN_ON(ret))

_ONCE?
> +			return;
> +
> +		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +	} else {
> +		debug_pagealloc_map_pages(page, 1, enable);
> +	}
> +}
> +
>  static int swsusp_page_is_free(struct page *);
>  static void swsusp_set_page_forbidden(struct page *);
>  static void swsusp_unset_page_forbidden(struct page *);

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 0/4] arch, mm: improve robustness of direct map manipulation
  2020-11-01 17:08 [PATCH v3 0/4] arch, mm: improve robustness of direct map manipulation Mike Rapoport
                   ` (3 preceding siblings ...)
  2020-11-01 17:08 ` [PATCH v3 4/4] arch, mm: make kernel_page_present() always available Mike Rapoport
@ 2020-11-03 11:15 ` Kirill A. Shutemov
  4 siblings, 0 replies; 16+ messages in thread
From: Kirill A. Shutemov @ 2020-11-03 11:15 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Albert Ou, Andy Lutomirski,
	Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas,
	Christian Borntraeger, Christoph Lameter, David S. Miller,
	Dave Hansen, David Hildenbrand, David Rientjes, Edgecombe,
	Rick P, H. Peter Anvin, Heiko Carstens, Ingo Molnar, Joonsoo Kim,
	Len Brown, Michael Ellerman, Mike Rapoport, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg,
	Peter Zijlstra, Rafael J. Wysocki, Thomas Gleixner,
	Vasily Gorbik, Will Deacon, linux-arm-kernel, linux-kernel,
	linux-mm, linux-pm, linux-riscv, linux-s390, linuxppc-dev,
	sparclinux, x86

On Sun, Nov 01, 2020 at 07:08:11PM +0200, Mike Rapoport wrote:
> Mike Rapoport (4):
>   mm: introduce debug_pagealloc_map_pages() helper
>   PM: hibernate: make direct map manipulations more explicit
>   arch, mm: restore dependency of __kernel_map_pages() of DEBUG_PAGEALLOC
>   arch, mm: make kernel_page_present() always available

The series looks good to me (apart from the minor nit):

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit
  2020-11-03 11:08   ` Kirill A. Shutemov
@ 2020-11-03 12:13     ` Mike Rapoport
  2020-11-03 14:39       ` Kirill A. Shutemov
  0 siblings, 1 reply; 16+ messages in thread
From: Mike Rapoport @ 2020-11-03 12:13 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Andrew Morton, Albert Ou, Andy Lutomirski,
	Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas,
	Christian Borntraeger, Christoph Lameter, David S. Miller,
	Dave Hansen, David Hildenbrand, David Rientjes, Edgecombe,
	Rick P, H. Peter Anvin, Heiko Carstens, Ingo Molnar, Joonsoo Kim,
	Len Brown, Michael Ellerman, Mike Rapoport, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg,
	Peter Zijlstra, Rafael J. Wysocki, Thomas Gleixner,
	Vasily Gorbik, Will Deacon, linux-arm-kernel, linux-kernel,
	linux-mm, linux-pm, linux-riscv, linux-s390, linuxppc-dev,
	sparclinux, x86, Rafael J . Wysocki

On Tue, Nov 03, 2020 at 02:08:16PM +0300, Kirill A. Shutemov wrote:
> On Sun, Nov 01, 2020 at 07:08:13PM +0200, Mike Rapoport wrote:
> > diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> > index 46b1804c1ddf..054c8cce4236 100644
> > --- a/kernel/power/snapshot.c
> > +++ b/kernel/power/snapshot.c
> > @@ -76,6 +76,32 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
> >  static inline void hibernate_restore_unprotect_page(void *page_address) {}
> >  #endif /* CONFIG_STRICT_KERNEL_RWX  && CONFIG_ARCH_HAS_SET_MEMORY */
> >  
> > +static inline void hibernate_map_page(struct page *page, int enable)
> > +{
> > +	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
> > +		unsigned long addr = (unsigned long)page_address(page);
> > +		int ret;
> > +
> > +		/*
> > +		 * This should not fail because remapping a page here means
> > +		 * that we only update protection bits in an existing PTE.
> > +		 * It is still worth having WARN_ON() here in case something
> > +		 * changes and this is no longer true.
> > +		 */
> > +		if (enable)
> > +			ret = set_direct_map_default_noflush(page);
> > +		else
> > +			ret = set_direct_map_invalid_noflush(page);
> > +
> > +		if (WARN_ON(ret))
> 
> _ONCE?

I've changed it to pr_warn() after David said people enable panic on
warn in production kernels.

> > +			return;
> > +
> > +		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> > +	} else {
> > +		debug_pagealloc_map_pages(page, 1, enable);
> > +	}
> > +}
> > +
> >  static int swsusp_page_is_free(struct page *);
> >  static void swsusp_set_page_forbidden(struct page *);
> >  static void swsusp_unset_page_forbidden(struct page *);
> 
> -- 
>  Kirill A. Shutemov

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit
  2020-11-03 12:13     ` Mike Rapoport
@ 2020-11-03 14:39       ` Kirill A. Shutemov
  2020-11-03 15:56         ` Mike Rapoport
  0 siblings, 1 reply; 16+ messages in thread
From: Kirill A. Shutemov @ 2020-11-03 14:39 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Albert Ou, Andy Lutomirski,
	Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas,
	Christian Borntraeger, Christoph Lameter, David S. Miller,
	Dave Hansen, David Hildenbrand, David Rientjes, Edgecombe,
	Rick P, H. Peter Anvin, Heiko Carstens, Ingo Molnar, Joonsoo Kim,
	Len Brown, Michael Ellerman, Mike Rapoport, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg,
	Peter Zijlstra, Rafael J. Wysocki, Thomas Gleixner,
	Vasily Gorbik, Will Deacon, linux-arm-kernel, linux-kernel,
	linux-mm, linux-pm, linux-riscv, linux-s390, linuxppc-dev,
	sparclinux, x86, Rafael J . Wysocki

On Tue, Nov 03, 2020 at 02:13:50PM +0200, Mike Rapoport wrote:
> On Tue, Nov 03, 2020 at 02:08:16PM +0300, Kirill A. Shutemov wrote:
> > On Sun, Nov 01, 2020 at 07:08:13PM +0200, Mike Rapoport wrote:
> > > diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> > > index 46b1804c1ddf..054c8cce4236 100644
> > > --- a/kernel/power/snapshot.c
> > > +++ b/kernel/power/snapshot.c
> > > @@ -76,6 +76,32 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
> > >  static inline void hibernate_restore_unprotect_page(void *page_address) {}
> > >  #endif /* CONFIG_STRICT_KERNEL_RWX  && CONFIG_ARCH_HAS_SET_MEMORY */
> > >  
> > > +static inline void hibernate_map_page(struct page *page, int enable)
> > > +{
> > > +	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
> > > +		unsigned long addr = (unsigned long)page_address(page);
> > > +		int ret;
> > > +
> > > +		/*
> > > +		 * This should not fail because remapping a page here means
> > > +		 * that we only update protection bits in an existing PTE.
> > > +		 * It is still worth having WARN_ON() here in case something
> > > +		 * changes and this is no longer true.
> > > +		 */
> > > +		if (enable)
> > > +			ret = set_direct_map_default_noflush(page);
> > > +		else
> > > +			ret = set_direct_map_invalid_noflush(page);
> > > +
> > > +		if (WARN_ON(ret))
> > 
> > _ONCE?
> 
> I've changed it to pr_warn() after David said people enable panic on
> warn in production kernels.

pr_warn_once()? :P

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit
  2020-11-03 14:39       ` Kirill A. Shutemov
@ 2020-11-03 15:56         ` Mike Rapoport
  0 siblings, 0 replies; 16+ messages in thread
From: Mike Rapoport @ 2020-11-03 15:56 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Andrew Morton, Albert Ou, Andy Lutomirski,
	Benjamin Herrenschmidt, Borislav Petkov, Catalin Marinas,
	Christian Borntraeger, Christoph Lameter, David S. Miller,
	Dave Hansen, David Hildenbrand, David Rientjes, Edgecombe,
	Rick P, H. Peter Anvin, Heiko Carstens, Ingo Molnar, Joonsoo Kim,
	Len Brown, Michael Ellerman, Mike Rapoport, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Pavel Machek, Pekka Enberg,
	Peter Zijlstra, Rafael J. Wysocki, Thomas Gleixner,
	Vasily Gorbik, Will Deacon, linux-arm-kernel, linux-kernel,
	linux-mm, linux-pm, linux-riscv, linux-s390, linuxppc-dev,
	sparclinux, x86, Rafael J . Wysocki

On Tue, Nov 03, 2020 at 05:39:16PM +0300, Kirill A. Shutemov wrote:
> On Tue, Nov 03, 2020 at 02:13:50PM +0200, Mike Rapoport wrote:
> > > > +
> > > > +		if (WARN_ON(ret))
> > > 
> > > _ONCE?
> > 
> > I've changed it to pr_warn() after David said people enable panic on
> > warn in production kernels.
> 
> pr_warn_once()? :P

Sure :)
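
With both comments folded in, the hunk would presumably end up reading
roughly like this (a sketch of the agreed direction, not the final
committed text):

	if (ret) {
		pr_warn_once("%s: failed to update the direct map\n",
			     __func__);
		return;
	}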

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 16+ messages in thread


Thread overview: 16+ messages
2020-11-01 17:08 [PATCH v3 0/4] arch, mm: improve robustness of direct map manipulation Mike Rapoport
2020-11-01 17:08 ` [PATCH v3 1/4] mm: introduce debug_pagealloc_map_pages() helper Mike Rapoport
2020-11-01 17:08 ` [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit Mike Rapoport
2020-11-02  9:19   ` David Hildenbrand
2020-11-02 15:12     ` Mike Rapoport
2020-11-03 11:08   ` Kirill A. Shutemov
2020-11-03 12:13     ` Mike Rapoport
2020-11-03 14:39       ` Kirill A. Shutemov
2020-11-03 15:56         ` Mike Rapoport
2020-11-01 17:08 ` [PATCH v3 3/4] arch, mm: restore dependency of __kernel_map_pages() of DEBUG_PAGEALLOC Mike Rapoport
2020-11-02  9:23   ` David Hildenbrand
2020-11-02 15:15     ` Mike Rapoport
2020-11-01 17:08 ` [PATCH v3 4/4] arch, mm: make kernel_page_present() always available Mike Rapoport
2020-11-02  9:28   ` David Hildenbrand
2020-11-02 15:18     ` Mike Rapoport
2020-11-03 11:15 ` [PATCH v3 0/4] arch, mm: improve robustness of direct map manipulation Kirill A. Shutemov
