* [PATCH v2 0/5] cleanup page poisoning
@ 2020-11-03 15:22 Vlastimil Babka
  2020-11-03 15:22 ` [PATCH v2 1/5] mm, page_alloc: do not rely on the order of page_poison and init_on_alloc/free parameters Vlastimil Babka
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Vlastimil Babka @ 2020-11-03 15:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, David Hildenbrand, Mateusz Nosek, Laura Abbott,
	Vlastimil Babka, Len Brown, linux-pm, Mike Rapoport,
	Pavel Machek, Rafael J. Wysocki

The first version was called "optimize handling of memory debugging parameters"
[1]; changes since then are:

- apply review feedback
- drop former Patch 3
- add new patches 3-5, change name and cover letter of series

I have identified a number of issues and opportunities for cleanup with
CONFIG_PAGE_POISONING and friends:
- interaction with the init_on_alloc and init_on_free parameters depends on
  their order on the command line (Patch 1)
- the boot time enabling uses a static key, but inefficiently (Patch 2)
- sanity checking is incompatible with hibernation (Patch 3)
- CONFIG_PAGE_POISONING_NO_SANITY can be removed now that we have init_on_free
  (Patch 4)
- CONFIG_PAGE_POISONING_ZERO can most likely be removed now that we have
  init_on_free (Patch 5)

[1] https://lore.kernel.org/r/20201026173358.14704-1-vbabka@suse.cz

Vlastimil Babka (5):
  mm, page_alloc: do not rely on the order of page_poison and
    init_on_alloc/free parameters
  mm, page_poison: use static key more efficiently
  kernel/power: allow hibernation with page_poison sanity checking
  mm, page_poison: remove CONFIG_PAGE_POISONING_NO_SANITY
  mm, page_poison: remove CONFIG_PAGE_POISONING_ZERO

 drivers/virtio/virtio_balloon.c |   4 +-
 include/linux/mm.h              |  43 ++++++------
 include/linux/poison.h          |   4 --
 init/main.c                     |   2 +-
 kernel/power/hibernate.c        |   2 +-
 kernel/power/power.h            |   2 +-
 kernel/power/snapshot.c         |  14 ++--
 mm/Kconfig.debug                |  28 ++------
 mm/page_alloc.c                 | 113 ++++++++++++++++----------------
 mm/page_poison.c                |  56 ++--------------
 tools/include/linux/poison.h    |   6 +-
 11 files changed, 106 insertions(+), 168 deletions(-)

-- 
2.29.1



* [PATCH v2 1/5] mm, page_alloc: do not rely on the order of page_poison and init_on_alloc/free parameters
  2020-11-03 15:22 [PATCH v2 0/5] cleanup page poisoning Vlastimil Babka
@ 2020-11-03 15:22 ` Vlastimil Babka
  2020-11-03 15:22 ` [PATCH v2 2/5] mm, page_poison: use static key more efficiently Vlastimil Babka
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Vlastimil Babka @ 2020-11-03 15:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, David Hildenbrand, Mateusz Nosek, Laura Abbott,
	Vlastimil Babka, Mike Rapoport

Enabling page_poison=1 together with init_on_alloc=1 or init_on_free=1 produces
a warning in dmesg that page_poison takes precedence. However, since these
warnings are printed in the early_param handlers for init_on_alloc/free, they
are not printed if page_poison is enabled later on the command line (handlers
are called in the order of their parameters), nor when init_on_alloc/free is
unconditionally enabled by the respective config option - before the
page_poison early param handler is called, poisoning is not yet considered to
be enabled. This is inconsistent.
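
For example (hypothetical command lines), booting with

	page_poison=1 init_on_alloc=1

prints the warning, while the equivalent

	init_on_alloc=1 page_poison=1

does not - even though page_poison takes precedence in both cases.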

We can remove the dependency on order by making the init_on_* parameters only
set a boolean variable, and by postponing the evaluation until after all early
params have been processed. Introduce a new init_mem_debugging_and_hardening()
function for that, and move the related debug_pagealloc processing there as
well.

As a result, init_mem_debugging_and_hardening() always knows accurately
whether the init_on_* and/or page_poison options were enabled. Thus we can also
optimize want_init_on_alloc() and want_init_on_free(): we don't need to check
page_poisoning_enabled() there, we can instead simply not enable the init_on_*
static keys at all if page poisoning is enabled. This results in simpler and
more efficient code.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
---
 include/linux/mm.h | 20 ++---------
 init/main.c        |  2 +-
 mm/page_alloc.c    | 88 ++++++++++++++++++++++------------------------
 3 files changed, 46 insertions(+), 64 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef360fe70aaf..e0bc1c8aa2a6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2865,6 +2865,7 @@ extern int apply_to_existing_page_range(struct mm_struct *mm,
 				   unsigned long address, unsigned long size,
 				   pte_fn_t fn, void *data);
 
+extern void init_mem_debugging_and_hardening(void);
 #ifdef CONFIG_PAGE_POISONING
 extern bool page_poisoning_enabled(void);
 extern void kernel_poison_pages(struct page *page, int numpages, int enable);
@@ -2874,35 +2875,20 @@ static inline void kernel_poison_pages(struct page *page, int numpages,
 					int enable) { }
 #endif
 
-#ifdef CONFIG_INIT_ON_ALLOC_DEFAULT_ON
-DECLARE_STATIC_KEY_TRUE(init_on_alloc);
-#else
 DECLARE_STATIC_KEY_FALSE(init_on_alloc);
-#endif
 static inline bool want_init_on_alloc(gfp_t flags)
 {
-	if (static_branch_unlikely(&init_on_alloc) &&
-	    !page_poisoning_enabled())
+	if (static_branch_unlikely(&init_on_alloc))
 		return true;
 	return flags & __GFP_ZERO;
 }
 
-#ifdef CONFIG_INIT_ON_FREE_DEFAULT_ON
-DECLARE_STATIC_KEY_TRUE(init_on_free);
-#else
 DECLARE_STATIC_KEY_FALSE(init_on_free);
-#endif
 static inline bool want_init_on_free(void)
 {
-	return static_branch_unlikely(&init_on_free) &&
-	       !page_poisoning_enabled();
+	return static_branch_unlikely(&init_on_free);
 }
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
-extern void init_debug_pagealloc(void);
-#else
-static inline void init_debug_pagealloc(void) {}
-#endif
 extern bool _debug_pagealloc_enabled_early;
 DECLARE_STATIC_KEY_FALSE(_debug_pagealloc_enabled);
 
diff --git a/init/main.c b/init/main.c
index 130376ec10ba..14da5458baf6 100644
--- a/init/main.c
+++ b/init/main.c
@@ -815,7 +815,7 @@ static void __init mm_init(void)
 	 * bigger than MAX_ORDER unless SPARSEMEM.
 	 */
 	page_ext_init_flatmem();
-	init_debug_pagealloc();
+	init_mem_debugging_and_hardening();
 	report_meminit();
 	mem_init();
 	kmem_cache_init();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23f5066bd4a5..44d596c9c764 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -165,53 +165,26 @@ unsigned long totalcma_pages __read_mostly;
 
 int percpu_pagelist_fraction;
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
-#ifdef CONFIG_INIT_ON_ALLOC_DEFAULT_ON
-DEFINE_STATIC_KEY_TRUE(init_on_alloc);
-#else
 DEFINE_STATIC_KEY_FALSE(init_on_alloc);
-#endif
 EXPORT_SYMBOL(init_on_alloc);
 
-#ifdef CONFIG_INIT_ON_FREE_DEFAULT_ON
-DEFINE_STATIC_KEY_TRUE(init_on_free);
-#else
 DEFINE_STATIC_KEY_FALSE(init_on_free);
-#endif
 EXPORT_SYMBOL(init_on_free);
 
+static bool _init_on_alloc_enabled_early __read_mostly
+				= IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
 static int __init early_init_on_alloc(char *buf)
 {
-	int ret;
-	bool bool_result;
 
-	ret = kstrtobool(buf, &bool_result);
-	if (ret)
-		return ret;
-	if (bool_result && page_poisoning_enabled())
-		pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, will take precedence over init_on_alloc\n");
-	if (bool_result)
-		static_branch_enable(&init_on_alloc);
-	else
-		static_branch_disable(&init_on_alloc);
-	return 0;
+	return kstrtobool(buf, &_init_on_alloc_enabled_early);
 }
 early_param("init_on_alloc", early_init_on_alloc);
 
+static bool _init_on_free_enabled_early __read_mostly
+				= IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON);
 static int __init early_init_on_free(char *buf)
 {
-	int ret;
-	bool bool_result;
-
-	ret = kstrtobool(buf, &bool_result);
-	if (ret)
-		return ret;
-	if (bool_result && page_poisoning_enabled())
-		pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, will take precedence over init_on_free\n");
-	if (bool_result)
-		static_branch_enable(&init_on_free);
-	else
-		static_branch_disable(&init_on_free);
-	return 0;
+	return kstrtobool(buf, &_init_on_free_enabled_early);
 }
 early_param("init_on_free", early_init_on_free);
 
@@ -728,19 +701,6 @@ static int __init early_debug_pagealloc(char *buf)
 }
 early_param("debug_pagealloc", early_debug_pagealloc);
 
-void init_debug_pagealloc(void)
-{
-	if (!debug_pagealloc_enabled())
-		return;
-
-	static_branch_enable(&_debug_pagealloc_enabled);
-
-	if (!debug_guardpage_minorder())
-		return;
-
-	static_branch_enable(&_debug_guardpage_enabled);
-}
-
 static int __init debug_guardpage_minorder_setup(char *buf)
 {
 	unsigned long res;
@@ -792,6 +752,42 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
 				unsigned int order, int migratetype) {}
 #endif
 
+/*
+ * Enable static keys related to various memory debugging and hardening options.
+ * Some override others, and depend on early params that are evaluated in the
+ * order of appearance. So we need to first gather the full picture of what was
+ * enabled, and then make decisions.
+ */
+void init_mem_debugging_and_hardening(void)
+{
+	if (_init_on_alloc_enabled_early) {
+		if (page_poisoning_enabled())
+			pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
+				"will take precedence over init_on_alloc\n");
+		else
+			static_branch_enable(&init_on_alloc);
+	}
+	if (_init_on_free_enabled_early) {
+		if (page_poisoning_enabled())
+			pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
+				"will take precedence over init_on_free\n");
+		else
+			static_branch_enable(&init_on_free);
+	}
+
+#ifdef CONFIG_DEBUG_PAGEALLOC
+	if (!debug_pagealloc_enabled())
+		return;
+
+	static_branch_enable(&_debug_pagealloc_enabled);
+
+	if (!debug_guardpage_minorder())
+		return;
+
+	static_branch_enable(&_debug_guardpage_enabled);
+#endif
+}
+
 static inline void set_buddy_order(struct page *page, unsigned int order)
 {
 	set_page_private(page, order);
-- 
2.29.1



* [PATCH v2 2/5] mm, page_poison: use static key more efficiently
  2020-11-03 15:22 [PATCH v2 0/5] cleanup page poisoning Vlastimil Babka
  2020-11-03 15:22 ` [PATCH v2 1/5] mm, page_alloc: do not rely on the order of page_poison and init_on_alloc/free parameters Vlastimil Babka
@ 2020-11-03 15:22 ` Vlastimil Babka
  2020-11-11 15:38   ` David Hildenbrand
  2020-11-03 15:22 ` [PATCH v2 3/5] kernel/power: allow hibernation with page_poison sanity checking Vlastimil Babka
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 16+ messages in thread
From: Vlastimil Babka @ 2020-11-03 15:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, David Hildenbrand, Mateusz Nosek, Laura Abbott,
	Vlastimil Babka

Commit 11c9c7edae06 ("mm/page_poison.c: replace bool variable with static key")
changed page_poisoning_enabled() to a static key check. However, the function
is not inlined, so each check still involves a function call whose overhead is
not eliminated when page poisoning is disabled.

Analogously to how debug_pagealloc is handled, this patch converts
page_poisoning_enabled() back to a boolean check, and introduces
page_poisoning_enabled_static() for fast paths. Both functions are inlined.
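
In short, the pattern (mirroring debug_pagealloc, see the diff below) pairs a
plain boolean for early boot with an inline static-branch wrapper for the fast
paths:

	/* reads the boot-time boolean; usable before static keys are set up */
	static inline bool page_poisoning_enabled(void)
	{
		return _page_poisoning_enabled_early;
	}

	/* compiles to a patched jump/no-op, no function call and no load */
	static inline bool page_poisoning_enabled_static(void)
	{
		return static_branch_unlikely(&_page_poisoning_enabled);
	}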

The function kernel_poison_pages() is also called unconditionally and does the
static key check inside. Remove the check from there and move it to the
callers. Also split the function into kernel_poison_pages() and
kernel_unpoison_pages() instead of using the confusing bool parameter.

Also optimize the check that enables page poisoning instead of debug_pagealloc
for architectures without proper debug_pagealloc support: move it to
init_mem_debugging_and_hardening() to enable a single static key, instead of
having two static branches in page_poisoning_enabled_static().

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 drivers/virtio/virtio_balloon.c |  2 +-
 include/linux/mm.h              | 23 +++++++++++---
 mm/page_alloc.c                 | 19 ++++++++++--
 mm/page_poison.c                | 53 +++++----------------------------
 4 files changed, 43 insertions(+), 54 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 481611c09dae..e53faed6ba93 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -1116,7 +1116,7 @@ static int virtballoon_validate(struct virtio_device *vdev)
 	 */
 	if (!want_init_on_free() &&
 	    (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY) ||
-	     !page_poisoning_enabled()))
+	     !page_poisoning_enabled_static()))
 		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
 	else if (!virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON))
 		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_REPORTING);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e0bc1c8aa2a6..4d6dd9f44571 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2867,12 +2867,27 @@ extern int apply_to_existing_page_range(struct mm_struct *mm,
 
 extern void init_mem_debugging_and_hardening(void);
 #ifdef CONFIG_PAGE_POISONING
-extern bool page_poisoning_enabled(void);
-extern void kernel_poison_pages(struct page *page, int numpages, int enable);
+extern void kernel_poison_pages(struct page *page, int numpages);
+extern void kernel_unpoison_pages(struct page *page, int numpages);
+extern bool _page_poisoning_enabled_early;
+DECLARE_STATIC_KEY_FALSE(_page_poisoning_enabled);
+static inline bool page_poisoning_enabled(void)
+{
+	return _page_poisoning_enabled_early;
+}
+/*
+ * For use in fast paths after init_mem_debugging() has run, or when a
+ * false negative result is not harmful when called too early.
+ */
+static inline bool page_poisoning_enabled_static(void)
+{
+	return (static_branch_unlikely(&_page_poisoning_enabled));
+}
 #else
 static inline bool page_poisoning_enabled(void) { return false; }
-static inline void kernel_poison_pages(struct page *page, int numpages,
-					int enable) { }
+static inline bool page_poisoning_enabled_static(void) { return false; }
+static inline void kernel_poison_pages(struct page *page, int numpages) { }
+static inline void kernel_unpoison_pages(struct page *page, int numpages) { }
 #endif
 
 DECLARE_STATIC_KEY_FALSE(init_on_alloc);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 44d596c9c764..fd7f9345adc0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -775,6 +775,17 @@ void init_mem_debugging_and_hardening(void)
 			static_branch_enable(&init_on_free);
 	}
 
+#ifdef CONFIG_PAGE_POISONING
+	/*
+	 * Page poisoning is debug page alloc for some arches. If
+	 * either of those options are enabled, enable poisoning.
+	 */
+	if (page_poisoning_enabled() ||
+	     (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
+	      debug_pagealloc_enabled()))
+		static_branch_enable(&_page_poisoning_enabled);
+#endif
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 	if (!debug_pagealloc_enabled())
 		return;
@@ -1260,7 +1271,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	if (want_init_on_free())
 		kernel_init_free_pages(page, 1 << order);
 
-	kernel_poison_pages(page, 1 << order, 0);
+	if (page_poisoning_enabled_static())
+		kernel_poison_pages(page, 1 << order);
 	/*
 	 * arch_free_page() can make the page's contents inaccessible.  s390
 	 * does this.  So nothing which can access the page's contents should
@@ -2206,7 +2218,7 @@ static inline int check_new_page(struct page *page)
 static inline bool free_pages_prezeroed(void)
 {
 	return (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
-		page_poisoning_enabled()) || want_init_on_free();
+		page_poisoning_enabled_static()) || want_init_on_free();
 }
 
 #ifdef CONFIG_DEBUG_VM
@@ -2269,7 +2281,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	if (debug_pagealloc_enabled_static())
 		kernel_map_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
-	kernel_poison_pages(page, 1 << order, 1);
+	if (page_poisoning_enabled_static())
+		kernel_unpoison_pages(page, 1 << order);
 	set_page_owner(page, order, gfp_flags);
 }
 
diff --git a/mm/page_poison.c b/mm/page_poison.c
index ae0482cded87..dd7aeada036f 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -8,45 +8,17 @@
 #include <linux/ratelimit.h>
 #include <linux/kasan.h>
 
-static DEFINE_STATIC_KEY_FALSE_RO(want_page_poisoning);
+bool _page_poisoning_enabled_early;
+EXPORT_SYMBOL(_page_poisoning_enabled_early);
+DEFINE_STATIC_KEY_FALSE(_page_poisoning_enabled);
+EXPORT_SYMBOL(_page_poisoning_enabled);
 
 static int __init early_page_poison_param(char *buf)
 {
-	int ret;
-	bool tmp;
-
-	ret = strtobool(buf, &tmp);
-	if (ret)
-		return ret;
-
-	if (tmp)
-		static_branch_enable(&want_page_poisoning);
-	else
-		static_branch_disable(&want_page_poisoning);
-
-	return 0;
+	return kstrtobool(buf, &_page_poisoning_enabled_early);
 }
 early_param("page_poison", early_page_poison_param);
 
-/**
- * page_poisoning_enabled - check if page poisoning is enabled
- *
- * Return true if page poisoning is enabled, or false if not.
- */
-bool page_poisoning_enabled(void)
-{
-	/*
-	 * Assumes that debug_pagealloc_enabled is set before
-	 * memblock_free_all.
-	 * Page poisoning is debug page alloc for some arches. If
-	 * either of those options are enabled, enable poisoning.
-	 */
-	return (static_branch_unlikely(&want_page_poisoning) ||
-		(!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
-		debug_pagealloc_enabled()));
-}
-EXPORT_SYMBOL_GPL(page_poisoning_enabled);
-
 static void poison_page(struct page *page)
 {
 	void *addr = kmap_atomic(page);
@@ -58,7 +30,7 @@ static void poison_page(struct page *page)
 	kunmap_atomic(addr);
 }
 
-static void poison_pages(struct page *page, int n)
+void kernel_poison_pages(struct page *page, int n)
 {
 	int i;
 
@@ -117,7 +89,7 @@ static void unpoison_page(struct page *page)
 	kunmap_atomic(addr);
 }
 
-static void unpoison_pages(struct page *page, int n)
+void kernel_unpoison_pages(struct page *page, int n)
 {
 	int i;
 
@@ -125,17 +97,6 @@ static void unpoison_pages(struct page *page, int n)
 		unpoison_page(page + i);
 }
 
-void kernel_poison_pages(struct page *page, int numpages, int enable)
-{
-	if (!page_poisoning_enabled())
-		return;
-
-	if (enable)
-		unpoison_pages(page, numpages);
-	else
-		poison_pages(page, numpages);
-}
-
 #ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-- 
2.29.1



* [PATCH v2 3/5] kernel/power: allow hibernation with page_poison sanity checking
  2020-11-03 15:22 [PATCH v2 0/5] cleanup page poisoning Vlastimil Babka
  2020-11-03 15:22 ` [PATCH v2 1/5] mm, page_alloc: do not rely on the order of page_poison and init_on_alloc/free parameters Vlastimil Babka
  2020-11-03 15:22 ` [PATCH v2 2/5] mm, page_poison: use static key more efficiently Vlastimil Babka
@ 2020-11-03 15:22 ` Vlastimil Babka
  2020-11-05 18:36   ` Rafael J. Wysocki
  2020-11-11 15:42   ` David Hildenbrand
  2020-11-03 15:22 ` [PATCH v2 4/5] mm, page_poison: remove CONFIG_PAGE_POISONING_NO_SANITY Vlastimil Babka
  2020-11-03 15:22 ` [PATCH v2 5/5] mm, page_poison: remove CONFIG_PAGE_POISONING_ZERO Vlastimil Babka
  4 siblings, 2 replies; 16+ messages in thread
From: Vlastimil Babka @ 2020-11-03 15:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, David Hildenbrand, Mateusz Nosek, Laura Abbott,
	Vlastimil Babka, Rafael J. Wysocki, Len Brown, Pavel Machek,
	linux-pm

Page poisoning used to be incompatible with hibernation, as the state of
poisoned pages was lost after resume, so enabling CONFIG_HIBERNATION forces
CONFIG_PAGE_POISONING_NO_SANITY. For the same reason, the poisoning-with-zeroes
variant CONFIG_PAGE_POISONING_ZERO used to disable hibernation. The latter
restriction was removed by commit 1ad1410f632d ("PM / Hibernate: allow
hibernation with PAGE_POISONING_ZERO"), and similarly for init_on_free by
commit 18451f9f9e58 ("PM: hibernate: fix crashes with init_on_free=1"), by
making sure free pages are cleared after resume.

We can use the same mechanism to instead poison free pages with PAGE_POISON
after resume. This covers both zero and 0xAA patterns. Thus we can remove the
Kconfig restriction that disables page poison sanity checking when hibernation
is enabled.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Len Brown <len.brown@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: <linux-pm@vger.kernel.org>
---
 kernel/power/hibernate.c |  2 +-
 kernel/power/power.h     |  2 +-
 kernel/power/snapshot.c  | 14 ++++++++++----
 mm/Kconfig.debug         |  1 -
 4 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index 2fc7d509a34f..da0b41914177 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -326,7 +326,7 @@ static int create_image(int platform_mode)
 
 	if (!in_suspend) {
 		events_check_enabled = false;
-		clear_free_pages();
+		clear_or_poison_free_pages();
 	}
 
 	platform_leave(platform_mode);
diff --git a/kernel/power/power.h b/kernel/power/power.h
index 24f12d534515..778bf431ec02 100644
--- a/kernel/power/power.h
+++ b/kernel/power/power.h
@@ -106,7 +106,7 @@ extern int create_basic_memory_bitmaps(void);
 extern void free_basic_memory_bitmaps(void);
 extern int hibernate_preallocate_memory(void);
 
-extern void clear_free_pages(void);
+extern void clear_or_poison_free_pages(void);
 
 /**
  *	Auxiliary structure used for reading the snapshot image data and
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 46b1804c1ddf..6b1c84afa891 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1144,7 +1144,7 @@ void free_basic_memory_bitmaps(void)
 	pr_debug("Basic memory bitmaps freed\n");
 }
 
-void clear_free_pages(void)
+void clear_or_poison_free_pages(void)
 {
 	struct memory_bitmap *bm = free_pages_map;
 	unsigned long pfn;
@@ -1152,12 +1152,18 @@ void clear_free_pages(void)
 	if (WARN_ON(!(free_pages_map)))
 		return;
 
-	if (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) || want_init_on_free()) {
+	if (page_poisoning_enabled() || want_init_on_free()) {
 		memory_bm_position_reset(bm);
 		pfn = memory_bm_next_pfn(bm);
 		while (pfn != BM_END_OF_MAP) {
-			if (pfn_valid(pfn))
-				clear_highpage(pfn_to_page(pfn));
+			if (pfn_valid(pfn)) {
+				struct page *page = pfn_to_page(pfn);
+				if (page_poisoning_enabled_static())
+					kernel_poison_pages(page, 1);
+				else if (want_init_on_free())
+					clear_highpage(page);
+
+			}
 
 			pfn = memory_bm_next_pfn(bm);
 		}
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 864f129f1937..c57786ad5be9 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -64,7 +64,6 @@ config PAGE_OWNER
 
 config PAGE_POISONING
 	bool "Poison pages after freeing"
-	select PAGE_POISONING_NO_SANITY if HIBERNATION
 	help
 	  Fill the pages with poison patterns after free_pages() and verify
 	  the patterns before alloc_pages. The filling of the memory helps
-- 
2.29.1



* [PATCH v2 4/5] mm, page_poison: remove CONFIG_PAGE_POISONING_NO_SANITY
  2020-11-03 15:22 [PATCH v2 0/5] cleanup page poisoning Vlastimil Babka
                   ` (2 preceding siblings ...)
  2020-11-03 15:22 ` [PATCH v2 3/5] kernel/power: allow hibernation with page_poison sanity checking Vlastimil Babka
@ 2020-11-03 15:22 ` Vlastimil Babka
  2020-11-11 15:43   ` David Hildenbrand
  2020-11-03 15:22 ` [PATCH v2 5/5] mm, page_poison: remove CONFIG_PAGE_POISONING_ZERO Vlastimil Babka
  4 siblings, 1 reply; 16+ messages in thread
From: Vlastimil Babka @ 2020-11-03 15:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, David Hildenbrand, Mateusz Nosek, Laura Abbott,
	Vlastimil Babka

CONFIG_PAGE_POISONING_NO_SANITY skips the check on page alloc of whether the
poison pattern was corrupted, which would suggest a use-after-free. The
motivation for introducing it in commit 8823b1dbc05f ("mm/page_poison.c: enable
PAGE_POISONING as a separate option") was to simply sanitize freed pages,
ideally together with CONFIG_PAGE_POISONING_ZERO.

These days we have the init_on_free=1 boot option, which makes this use case of
page poisoning redundant. For sanitizing, writing zeroes is sufficient; there
is pretty much no benefit to writing the 0xAA poison pattern to freed pages
without checking it back on alloc. Thus, remove this option and suggest
init_on_free instead in the main config's help.
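
For example, a kernel previously built with PAGE_POISONING plus
PAGE_POISONING_NO_SANITY purely for sanitization can keep that behavior with
an unmodified config by booting with:

	init_on_free=1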

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 drivers/virtio/virtio_balloon.c |  4 +---
 mm/Kconfig.debug                | 15 ++++-----------
 mm/page_poison.c                |  3 ---
 3 files changed, 5 insertions(+), 17 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index e53faed6ba93..8985fc2cea86 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -1114,9 +1114,7 @@ static int virtballoon_validate(struct virtio_device *vdev)
 	 * page reporting as it could potentially change the contents
 	 * of our free pages.
 	 */
-	if (!want_init_on_free() &&
-	    (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY) ||
-	     !page_poisoning_enabled_static()))
+	if (!want_init_on_free() && !page_poisoning_enabled_static())
 		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
 	else if (!virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON))
 		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_REPORTING);
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index c57786ad5be9..14e29fe5bfa6 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -74,18 +74,11 @@ config PAGE_POISONING
 	  Note that "poison" here is not the same thing as the "HWPoison"
 	  for CONFIG_MEMORY_FAILURE. This is software poisoning only.
 
-	  If unsure, say N
+	  If you are only interested in sanitization of freed pages without
+	  checking the poison pattern on alloc, you can boot the kernel with
+	  "init_on_free=1" instead of enabling this.
 
-config PAGE_POISONING_NO_SANITY
-	depends on PAGE_POISONING
-	bool "Only poison, don't sanity check"
-	help
-	   Skip the sanity checking on alloc, only fill the pages with
-	   poison on free. This reduces some of the overhead of the
-	   poisoning feature.
-
-	   If you are only interested in sanitization, say Y. Otherwise
-	   say N.
+	  If unsure, say N
 
 config PAGE_POISONING_ZERO
 	bool "Use zero for poisoning instead of debugging value"
diff --git a/mm/page_poison.c b/mm/page_poison.c
index dd7aeada036f..084fc3ff4c15 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -51,9 +51,6 @@ static void check_poison_mem(unsigned char *mem, size_t bytes)
 	unsigned char *start;
 	unsigned char *end;
 
-	if (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY))
-		return;
-
 	start = memchr_inv(mem, PAGE_POISON, bytes);
 	if (!start)
 		return;
-- 
2.29.1



* [PATCH v2 5/5] mm, page_poison: remove CONFIG_PAGE_POISONING_ZERO
  2020-11-03 15:22 [PATCH v2 0/5] cleanup page poisoning Vlastimil Babka
                   ` (3 preceding siblings ...)
  2020-11-03 15:22 ` [PATCH v2 4/5] mm, page_poison: remove CONFIG_PAGE_POISONING_NO_SANITY Vlastimil Babka
@ 2020-11-03 15:22 ` Vlastimil Babka
  2020-11-11 15:45   ` David Hildenbrand
  4 siblings, 1 reply; 16+ messages in thread
From: Vlastimil Babka @ 2020-11-03 15:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, David Hildenbrand, Mateusz Nosek, Laura Abbott,
	Vlastimil Babka

CONFIG_PAGE_POISONING_ZERO uses the zero pattern instead of 0xAA. It was
introduced by commit 1414c7f4f7d7 ("mm/page_poisoning.c: allow for zero
poisoning"), noting that using zeroes retains the sanitizing of the content of
freed pages, adds the benefit of not having to zero them again on alloc, and
has the downside of making some forms of corruption (stray writes of NULLs)
harder to detect than with the 0xAA pattern. Together with
CONFIG_PAGE_POISONING_NO_SANITY it made it possible to sanitize the contents
on free without checking them back on alloc.

These days we have the init_on_free boot option to achieve sanitization with
zeroes and to save clearing on alloc (and without checking on alloc). Arguably,
if someone does choose to check the poison for corruption on alloc, the savings
of not clearing the page are secondary, and it makes sense to always use the
0xAA poison pattern. Thus, remove the CONFIG_PAGE_POISONING_ZERO option for
being redundant.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/poison.h       |  4 ----
 mm/Kconfig.debug             | 12 ------------
 mm/page_alloc.c              |  8 +-------
 tools/include/linux/poison.h |  6 +-----
 4 files changed, 2 insertions(+), 28 deletions(-)

diff --git a/include/linux/poison.h b/include/linux/poison.h
index dc8ae5d8db03..aff1c9250c82 100644
--- a/include/linux/poison.h
+++ b/include/linux/poison.h
@@ -27,11 +27,7 @@
 #define TIMER_ENTRY_STATIC	((void *) 0x300 + POISON_POINTER_DELTA)
 
 /********** mm/page_poison.c **********/
-#ifdef CONFIG_PAGE_POISONING_ZERO
-#define PAGE_POISON 0x00
-#else
 #define PAGE_POISON 0xaa
-#endif
 
 /********** mm/page_alloc.c ************/
 
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 14e29fe5bfa6..1e73717802f8 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -80,18 +80,6 @@ config PAGE_POISONING
 
 	  If unsure, say N
 
-config PAGE_POISONING_ZERO
-	bool "Use zero for poisoning instead of debugging value"
-	depends on PAGE_POISONING
-	help
-	   Instead of using the existing poison value, fill the pages with
-	   zeros. This makes it harder to detect when errors are occurring
-	   due to sanitization but the zeroing at free means that it is
-	   no longer necessary to write zeros when GFP_ZERO is used on
-	   allocation.
-
-	   If unsure, say N
-
 config DEBUG_PAGE_REF
 	bool "Enable tracepoint to track down page reference manipulation"
 	depends on DEBUG_KERNEL
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fd7f9345adc0..2054320d3c1f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2215,12 +2215,6 @@ static inline int check_new_page(struct page *page)
 	return 1;
 }
 
-static inline bool free_pages_prezeroed(void)
-{
-	return (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
-		page_poisoning_enabled_static()) || want_init_on_free();
-}
-
 #ifdef CONFIG_DEBUG_VM
 /*
  * With DEBUG_VM enabled, order-0 pages are checked for expected state when
@@ -2291,7 +2285,7 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
 {
 	post_alloc_hook(page, order, gfp_flags);
 
-	if (!free_pages_prezeroed() && want_init_on_alloc(gfp_flags))
+	if (!want_init_on_free() && want_init_on_alloc(gfp_flags))
 		kernel_init_free_pages(page, 1 << order);
 
 	if (order && (gfp_flags & __GFP_COMP))
diff --git a/tools/include/linux/poison.h b/tools/include/linux/poison.h
index d29725769107..2e6338ac5eed 100644
--- a/tools/include/linux/poison.h
+++ b/tools/include/linux/poison.h
@@ -35,12 +35,8 @@
  */
 #define TIMER_ENTRY_STATIC	((void *) 0x300 + POISON_POINTER_DELTA)
 
-/********** mm/debug-pagealloc.c **********/
-#ifdef CONFIG_PAGE_POISONING_ZERO
-#define PAGE_POISON 0x00
-#else
+/********** mm/page_poison.c **********/
 #define PAGE_POISON 0xaa
-#endif
 
 /********** mm/page_alloc.c ************/
 
-- 
2.29.1



* Re: [PATCH v2 3/5] kernel/power: allow hibernation with page_poison sanity checking
  2020-11-03 15:22 ` [PATCH v2 3/5] kernel/power: allow hibernation with page_poison sanity checking Vlastimil Babka
@ 2020-11-05 18:36   ` Rafael J. Wysocki
  2020-11-11 15:42   ` David Hildenbrand
  1 sibling, 0 replies; 16+ messages in thread
From: Rafael J. Wysocki @ 2020-11-05 18:36 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Linux Memory Management List,
	Linux Kernel Mailing List, Alexander Potapenko, Kees Cook,
	Michal Hocko, David Hildenbrand, Mateusz Nosek, Laura Abbott,
	Rafael J. Wysocki, Len Brown, Pavel Machek, Linux PM

On Tue, Nov 3, 2020 at 4:24 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> Page poisoning used to be incompatible with hibernation, as the state of
> poisoned pages was lost after resume, so enabling CONFIG_HIBERNATION forces
> CONFIG_PAGE_POISONING_NO_SANITY. For the same reason, the poisoning-with-zeroes
> variant CONFIG_PAGE_POISONING_ZERO used to disable hibernation. The latter
> restriction was removed by commit 1ad1410f632d ("PM / Hibernate: allow
> hibernation with PAGE_POISONING_ZERO"), and similarly for init_on_free by
> commit 18451f9f9e58 ("PM: hibernate: fix crashes with init_on_free=1"), by
> making sure free pages are cleared after resume.
>
> We can use the same mechanism to instead poison free pages with PAGE_POISON
> after resume. This covers both zero and 0xAA patterns. Thus we can remove the
> Kconfig restriction that disables page poison sanity checking when hibernation
> is enabled.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
> Cc: Len Brown <len.brown@intel.com>
> Cc: Pavel Machek <pavel@ucw.cz>
> Cc: <linux-pm@vger.kernel.org>

Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

for the hibernation part and I'm assuming that this patch will be
routed through the mm tree.

> ---
>  kernel/power/hibernate.c |  2 +-
>  kernel/power/power.h     |  2 +-
>  kernel/power/snapshot.c  | 14 ++++++++++----
>  mm/Kconfig.debug         |  1 -
>  4 files changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
> index 2fc7d509a34f..da0b41914177 100644
> --- a/kernel/power/hibernate.c
> +++ b/kernel/power/hibernate.c
> @@ -326,7 +326,7 @@ static int create_image(int platform_mode)
>
>         if (!in_suspend) {
>                 events_check_enabled = false;
> -               clear_free_pages();
> +               clear_or_poison_free_pages();
>         }
>
>         platform_leave(platform_mode);
> diff --git a/kernel/power/power.h b/kernel/power/power.h
> index 24f12d534515..778bf431ec02 100644
> --- a/kernel/power/power.h
> +++ b/kernel/power/power.h
> @@ -106,7 +106,7 @@ extern int create_basic_memory_bitmaps(void);
>  extern void free_basic_memory_bitmaps(void);
>  extern int hibernate_preallocate_memory(void);
>
> -extern void clear_free_pages(void);
> +extern void clear_or_poison_free_pages(void);
>
>  /**
>   *     Auxiliary structure used for reading the snapshot image data and
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 46b1804c1ddf..6b1c84afa891 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -1144,7 +1144,7 @@ void free_basic_memory_bitmaps(void)
>         pr_debug("Basic memory bitmaps freed\n");
>  }
>
> -void clear_free_pages(void)
> +void clear_or_poison_free_pages(void)
>  {
>         struct memory_bitmap *bm = free_pages_map;
>         unsigned long pfn;
> @@ -1152,12 +1152,18 @@ void clear_free_pages(void)
>         if (WARN_ON(!(free_pages_map)))
>                 return;
>
> -       if (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) || want_init_on_free()) {
> +       if (page_poisoning_enabled() || want_init_on_free()) {
>                 memory_bm_position_reset(bm);
>                 pfn = memory_bm_next_pfn(bm);
>                 while (pfn != BM_END_OF_MAP) {
> -                       if (pfn_valid(pfn))
> -                               clear_highpage(pfn_to_page(pfn));
> +                       if (pfn_valid(pfn)) {
> +                               struct page *page = pfn_to_page(pfn);
> +                               if (page_poisoning_enabled_static())
> +                                       kernel_poison_pages(page, 1);
> +                               else if (want_init_on_free())
> +                                       clear_highpage(page);
> +
> +                       }
>
>                         pfn = memory_bm_next_pfn(bm);
>                 }
> diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
> index 864f129f1937..c57786ad5be9 100644
> --- a/mm/Kconfig.debug
> +++ b/mm/Kconfig.debug
> @@ -64,7 +64,6 @@ config PAGE_OWNER
>
>  config PAGE_POISONING
>         bool "Poison pages after freeing"
> -       select PAGE_POISONING_NO_SANITY if HIBERNATION
>         help
>           Fill the pages with poison patterns after free_pages() and verify
>           the patterns before alloc_pages. The filling of the memory helps
> --
> 2.29.1
>


* Re: [PATCH v2 2/5] mm, page_poison: use static key more efficiently
  2020-11-03 15:22 ` [PATCH v2 2/5] mm, page_poison: use static key more efficiently Vlastimil Babka
@ 2020-11-11 15:38   ` David Hildenbrand
  2020-11-12 14:37     ` Vlastimil Babka
  0 siblings, 1 reply; 16+ messages in thread
From: David Hildenbrand @ 2020-11-11 15:38 UTC (permalink / raw)
  To: Vlastimil Babka, Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, Mateusz Nosek, Laura Abbott

On 03.11.20 16:22, Vlastimil Babka wrote:
> Commit 11c9c7edae06 ("mm/page_poison.c: replace bool variable with static key")
> changed page_poisoning_enabled() to a static key check. However, the function
> is not inlined, so each check still involves a function call whose overhead is
> not eliminated when page poisoning is disabled.
> 
> Analogously to how debug_pagealloc is handled, this patch converts
> page_poisoning_enabled() back to a boolean check, and introduces
> page_poisoning_enabled_static() for fast paths. Both functions are inlined.
> 
> The function kernel_poison_pages() is also called unconditionally and does the
> static key check inside. Remove the check from there and move it to the
> callers. Also split the function into kernel_poison_pages() and
> kernel_unpoison_pages() instead of using the confusing bool parameter.
> 
> Also optimize the check that enables page poisoning instead of debug_pagealloc
> for architectures without proper debug_pagealloc support: move it to
> init_mem_debugging_and_hardening() to enable a single static key, instead of
> having two static branches in page_poisoning_enabled_static().

[...]

> + * For use in fast paths after init_mem_debugging() has run, or when a
> + * false negative result is not harmful when called too early.
> + */
> +static inline bool page_poisoning_enabled_static(void)
> +{
> +	return (static_branch_unlikely(&_page_poisoning_enabled));

As already mentioned IIRC:

return static_branch_unlikely(&_page_poisoning_enabled);

> +}
>   #else
>   static inline bool page_poisoning_enabled(void) { return false; }
> -static inline void kernel_poison_pages(struct page *page, int numpages,
> -					int enable) { }
> +static inline bool page_poisoning_enabled_static(void) { return false; }
> +static inline void kernel_poison_pages(struct page *page, int numpages) { }
> +static inline void kernel_unpoison_pages(struct page *page, int numpages) { }
>   #endif
>   
>   DECLARE_STATIC_KEY_FALSE(init_on_alloc);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 44d596c9c764..fd7f9345adc0 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -775,6 +775,17 @@ void init_mem_debugging_and_hardening(void)
>   			static_branch_enable(&init_on_free);
>   	}
>   
> +#ifdef CONFIG_PAGE_POISONING
> +	/*
> +	 * Page poisoning is debug page alloc for some arches. If
> +	 * either of those options are enabled, enable poisoning.
> +	 */
> +	if (page_poisoning_enabled() ||
> +	     (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
> +	      debug_pagealloc_enabled()))
> +		static_branch_enable(&_page_poisoning_enabled);
> +#endif
> +
>   #ifdef CONFIG_DEBUG_PAGEALLOC
>   	if (!debug_pagealloc_enabled())
>   		return;
> @@ -1260,7 +1271,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
>   	if (want_init_on_free())
>   		kernel_init_free_pages(page, 1 << order);
>   
> -	kernel_poison_pages(page, 1 << order, 0);
> +	if (page_poisoning_enabled_static())
> +		kernel_poison_pages(page, 1 << order);

This would look much better by having kernel_poison_pages() simply be 
implemented in a header, where the static check is performed.

Take a look at how it's handled in mm/shuffle.h
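
Roughly (from memory, abbreviated), the pattern there looks like:

	DECLARE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
	extern void __shuffle_free_memory(pg_data_t *pgdat);

	static inline void shuffle_free_memory(pg_data_t *pgdat)
	{
		if (!static_branch_unlikely(&page_alloc_shuffle_key))
			return;
		__shuffle_free_memory(pgdat);
	}

i.e. the inline wrapper does the static key check, and only the out-of-line
__-prefixed implementation does the real work.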

-- 
Thanks,

David / dhildenb



* Re: [PATCH v2 3/5] kernel/power: allow hibernation with page_poison sanity checking
  2020-11-03 15:22 ` [PATCH v2 3/5] kernel/power: allow hibernation with page_poison sanity checking Vlastimil Babka
  2020-11-05 18:36   ` Rafael J. Wysocki
@ 2020-11-11 15:42   ` David Hildenbrand
  2020-11-12 14:39     ` Vlastimil Babka
  1 sibling, 1 reply; 16+ messages in thread
From: David Hildenbrand @ 2020-11-11 15:42 UTC (permalink / raw)
  To: Vlastimil Babka, Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, Mateusz Nosek, Laura Abbott, Rafael J. Wysocki,
	Len Brown, Pavel Machek, linux-pm

On 03.11.20 16:22, Vlastimil Babka wrote:
> Page poisoning used to be incompatible with hibernation, as the state of
> poisoned pages was lost after resume, so enabling CONFIG_HIBERNATION forces
> CONFIG_PAGE_POISONING_NO_SANITY. For the same reason, the poisoning-with-zeroes
> variant CONFIG_PAGE_POISONING_ZERO used to disable hibernation. The latter
> restriction was removed by commit 1ad1410f632d ("PM / Hibernate: allow
> hibernation with PAGE_POISONING_ZERO"), and similarly for init_on_free by
> commit 18451f9f9e58 ("PM: hibernate: fix crashes with init_on_free=1"), by
> making sure free pages are cleared after resume.
> 
> We can use the same mechanism to instead poison free pages with PAGE_POISON
> after resume. This covers both zero and 0xAA patterns. Thus we can remove the
> Kconfig restriction that disables page poison sanity checking when hibernation
> is enabled.
> 
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
> Cc: Len Brown <len.brown@intel.com>
> Cc: Pavel Machek <pavel@ucw.cz>
> Cc: <linux-pm@vger.kernel.org>
> ---
>   kernel/power/hibernate.c |  2 +-
>   kernel/power/power.h     |  2 +-
>   kernel/power/snapshot.c  | 14 ++++++++++----
>   mm/Kconfig.debug         |  1 -
>   4 files changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
> index 2fc7d509a34f..da0b41914177 100644
> --- a/kernel/power/hibernate.c
> +++ b/kernel/power/hibernate.c
> @@ -326,7 +326,7 @@ static int create_image(int platform_mode)
>   
>   	if (!in_suspend) {
>   		events_check_enabled = false;
> -		clear_free_pages();
> +		clear_or_poison_free_pages();
>   	}
>   
>   	platform_leave(platform_mode);
> diff --git a/kernel/power/power.h b/kernel/power/power.h
> index 24f12d534515..778bf431ec02 100644
> --- a/kernel/power/power.h
> +++ b/kernel/power/power.h
> @@ -106,7 +106,7 @@ extern int create_basic_memory_bitmaps(void);
>   extern void free_basic_memory_bitmaps(void);
>   extern int hibernate_preallocate_memory(void);
>   
> -extern void clear_free_pages(void);
> +extern void clear_or_poison_free_pages(void);
>   
>   /**
>    *	Auxiliary structure used for reading the snapshot image data and
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 46b1804c1ddf..6b1c84afa891 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -1144,7 +1144,7 @@ void free_basic_memory_bitmaps(void)
>   	pr_debug("Basic memory bitmaps freed\n");
>   }
>   
> -void clear_free_pages(void)
> +void clear_or_poison_free_pages(void)
>   {
>   	struct memory_bitmap *bm = free_pages_map;
>   	unsigned long pfn;
> @@ -1152,12 +1152,18 @@ void clear_free_pages(void)
>   	if (WARN_ON(!(free_pages_map)))
>   		return;
>   
> -	if (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) || want_init_on_free()) {
> +	if (page_poisoning_enabled() || want_init_on_free()) {
>   		memory_bm_position_reset(bm);
>   		pfn = memory_bm_next_pfn(bm);
>   		while (pfn != BM_END_OF_MAP) {
> -			if (pfn_valid(pfn))
> -				clear_highpage(pfn_to_page(pfn));
> +			if (pfn_valid(pfn)) {
> +				struct page *page = pfn_to_page(pfn);

^ empty line missing. And at least I prefer to declare all variables in 
the function header.

I'd even suggest moving this into a separate function like

clear_or_poison_free_page(struct page *page)

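e.g. something like (just a sketch, reusing the helpers this series
introduces):

	static inline void clear_or_poison_free_page(struct page *page)
	{
		if (page_poisoning_enabled_static())
			kernel_poison_pages(page, 1);
		else if (want_init_on_free())
			clear_highpage(page);
	}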

> +				if (page_poisoning_enabled_static())
> +					kernel_poison_pages(page, 1);
> +				else if (want_init_on_free())
> +					clear_highpage(page);
> +
> +			}
>   
>   			pfn = memory_bm_next_pfn(bm);
>   		}
> diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
> index 864f129f1937..c57786ad5be9 100644
> --- a/mm/Kconfig.debug
> +++ b/mm/Kconfig.debug
> @@ -64,7 +64,6 @@ config PAGE_OWNER
>   
>   config PAGE_POISONING
>   	bool "Poison pages after freeing"
> -	select PAGE_POISONING_NO_SANITY if HIBERNATION
>   	help
>   	  Fill the pages with poison patterns after free_pages() and verify
>   	  the patterns before alloc_pages. The filling of the memory helps
> 

Unless I am missing something important, this should work! Thanks!

-- 
Thanks,

David / dhildenb



* Re: [PATCH v2 4/5] mm, page_poison: remove CONFIG_PAGE_POISONING_NO_SANITY
  2020-11-03 15:22 ` [PATCH v2 4/5] mm, page_poison: remove CONFIG_PAGE_POISONING_NO_SANITY Vlastimil Babka
@ 2020-11-11 15:43   ` David Hildenbrand
  0 siblings, 0 replies; 16+ messages in thread
From: David Hildenbrand @ 2020-11-11 15:43 UTC (permalink / raw)
  To: Vlastimil Babka, Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, Mateusz Nosek, Laura Abbott

On 03.11.20 16:22, Vlastimil Babka wrote:
> CONFIG_PAGE_POISONING_NO_SANITY skips the check on page alloc of whether the
> poison pattern was corrupted, which would suggest a use-after-free. The
> motivation for introducing it in commit 8823b1dbc05f ("mm/page_poison.c: enable
> PAGE_POISONING as a separate option") was to simply sanitize freed pages,
> ideally together with CONFIG_PAGE_POISONING_ZERO.
> 
> These days we have the init_on_free=1 boot option, which makes this use case of
> page poisoning redundant. For sanitizing, writing zeroes is sufficient; there
> is pretty much no benefit to writing the 0xAA poison pattern to freed pages
> without checking it back on alloc. Thus, remove this option and suggest
> init_on_free instead in the main config's help.
> 
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>   drivers/virtio/virtio_balloon.c |  4 +---
>   mm/Kconfig.debug                | 15 ++++-----------
>   mm/page_poison.c                |  3 ---
>   3 files changed, 5 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index e53faed6ba93..8985fc2cea86 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -1114,9 +1114,7 @@ static int virtballoon_validate(struct virtio_device *vdev)
>   	 * page reporting as it could potentially change the contents
>   	 * of our free pages.
>   	 */
> -	if (!want_init_on_free() &&
> -	    (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY) ||
> -	     !page_poisoning_enabled_static()))
> +	if (!want_init_on_free() && !page_poisoning_enabled_static())
>   		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
>   	else if (!virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON))
>   		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_REPORTING);
> diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
> index c57786ad5be9..14e29fe5bfa6 100644
> --- a/mm/Kconfig.debug
> +++ b/mm/Kconfig.debug
> @@ -74,18 +74,11 @@ config PAGE_POISONING
>   	  Note that "poison" here is not the same thing as the "HWPoison"
>   	  for CONFIG_MEMORY_FAILURE. This is software poisoning only.
>   
> -	  If unsure, say N
> +	  If you are only interested in sanitization of freed pages without
> +	  checking the poison pattern on alloc, you can boot the kernel with
> +	  "init_on_free=1" instead of enabling this.
>   
> -config PAGE_POISONING_NO_SANITY
> -	depends on PAGE_POISONING
> -	bool "Only poison, don't sanity check"
> -	help
> -	   Skip the sanity checking on alloc, only fill the pages with
> -	   poison on free. This reduces some of the overhead of the
> -	   poisoning feature.
> -
> -	   If you are only interested in sanitization, say Y. Otherwise
> -	   say N.
> +	  If unsure, say N
>   
>   config PAGE_POISONING_ZERO
>   	bool "Use zero for poisoning instead of debugging value"
> diff --git a/mm/page_poison.c b/mm/page_poison.c
> index dd7aeada036f..084fc3ff4c15 100644
> --- a/mm/page_poison.c
> +++ b/mm/page_poison.c
> @@ -51,9 +51,6 @@ static void check_poison_mem(unsigned char *mem, size_t bytes)
>   	unsigned char *start;
>   	unsigned char *end;
>   
> -	if (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY))
> -		return;
> -
>   	start = memchr_inv(mem, PAGE_POISON, bytes);
>   	if (!start)
>   		return;
> 

This clearly simplifies things.

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



* Re: [PATCH v2 5/5] mm, page_poison: remove CONFIG_PAGE_POISONING_ZERO
  2020-11-03 15:22 ` [PATCH v2 5/5] mm, page_poison: remove CONFIG_PAGE_POISONING_ZERO Vlastimil Babka
@ 2020-11-11 15:45   ` David Hildenbrand
  0 siblings, 0 replies; 16+ messages in thread
From: David Hildenbrand @ 2020-11-11 15:45 UTC (permalink / raw)
  To: Vlastimil Babka, Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, Mateusz Nosek, Laura Abbott

On 03.11.20 16:22, Vlastimil Babka wrote:
> CONFIG_PAGE_POISONING_ZERO uses the zero pattern instead of 0xAA. It was
> introduced by commit 1414c7f4f7d7 ("mm/page_poisoning.c: allow for zero
> poisoning"), noting that using zeroes retains the sanitizing of the content of
> freed pages, adds the benefit of not having to zero them again on alloc, and
> has the downside of making some forms of corruption (stray writes of NULLs)
> harder to detect than with the 0xAA pattern. Together with
> CONFIG_PAGE_POISONING_NO_SANITY it made it possible to sanitize the contents
> on free without checking them back on alloc.
> 
> These days we have the init_on_free boot option to achieve sanitization with
> zeroes and to save clearing on alloc (and without checking on alloc). Arguably,
> if someone does choose to check the poison for corruption on alloc, the savings
> of not clearing the page are secondary, and it makes sense to always use the
> 0xAA poison pattern. Thus, remove the CONFIG_PAGE_POISONING_ZERO option for
> being redundant.

I agree, this simplifies things ... and I don't see a need to complicate 
things to speed up corner-case debug mechanisms. Thanks!

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



* Re: [PATCH v2 2/5] mm, page_poison: use static key more efficiently
  2020-11-11 15:38   ` David Hildenbrand
@ 2020-11-12 14:37     ` Vlastimil Babka
  2020-11-12 16:06       ` David Hildenbrand
  0 siblings, 1 reply; 16+ messages in thread
From: Vlastimil Babka @ 2020-11-12 14:37 UTC (permalink / raw)
  To: David Hildenbrand, Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, Mateusz Nosek, Laura Abbott

On 11/11/20 4:38 PM, David Hildenbrand wrote:
> On 03.11.20 16:22, Vlastimil Babka wrote:
>> Commit 11c9c7edae06 ("mm/page_poison.c: replace bool variable with static key")
>> changed page_poisoning_enabled() to a static key check. However, the function
>> is not inlined, so each check still involves a function call whose overhead is
>> not eliminated when page poisoning is disabled.
>> 
>> Analogously to how debug_pagealloc is handled, this patch converts
>> page_poisoning_enabled() back to a boolean check, and introduces
>> page_poisoning_enabled_static() for fast paths. Both functions are inlined.
>> 
>> The function kernel_poison_pages() is also called unconditionally and does the
>> static key check inside. Remove the check from there and move it to the
>> callers. Also split the function into kernel_poison_pages() and
>> kernel_unpoison_pages() instead of using the confusing bool parameter.
>> 
>> Also optimize the check that enables page poisoning instead of debug_pagealloc
>> for architectures without proper debug_pagealloc support: move it to
>> init_mem_debugging_and_hardening() to enable a single static key, instead of
>> having two static branches in page_poisoning_enabled_static().
> 
> [...]
> 
>> + * For use in fast paths after init_mem_debugging() has run, or when a
>> + * false negative result is not harmful when called too early.
>> + */
>> +static inline bool page_poisoning_enabled_static(void)
>> +{
>> +	return (static_branch_unlikely(&_page_poisoning_enabled));
> 
> As already mentioned IIRC:

Yes, it was, and I thought I fixed it. Guess not.

> return static_branch_unlikely(&_page_poisoning_enabled);
> 
>> +}

>> @@ -1260,7 +1271,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
>>   	if (want_init_on_free())
>>   		kernel_init_free_pages(page, 1 << order);
>>   
>> -	kernel_poison_pages(page, 1 << order, 0);
>> +	if (page_poisoning_enabled_static())
>> +		kernel_poison_pages(page, 1 << order);
> 
> This would look much better by having kernel_poison_pages() simply be
> implemented in a header, where the static check is performed.
> 
> Take a look at how it's handled in mm/shuffle.h
  
Ok. Fixup below.

----8<----

 From 7ce26ba61296f583f0f9089e7887f07424f25d2c Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@suse.cz>
Date: Thu, 12 Nov 2020 15:20:58 +0100
Subject: [PATCH] mm, page_poison: use static key more efficiently-fix

Non-functional cleanups, per David Hildenbrand.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
  include/linux/mm.h | 16 +++++++++++++---
  mm/page_alloc.c    |  7 +++----
  mm/page_poison.c   |  4 ++--
  3 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4d6dd9f44571..861b9392b5dc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2867,8 +2867,8 @@ extern int apply_to_existing_page_range(struct mm_struct *mm,
  
  extern void init_mem_debugging_and_hardening(void);
  #ifdef CONFIG_PAGE_POISONING
-extern void kernel_poison_pages(struct page *page, int numpages);
-extern void kernel_unpoison_pages(struct page *page, int numpages);
+extern void __kernel_poison_pages(struct page *page, int numpages);
+extern void __kernel_unpoison_pages(struct page *page, int numpages);
  extern bool _page_poisoning_enabled_early;
  DECLARE_STATIC_KEY_FALSE(_page_poisoning_enabled);
  static inline bool page_poisoning_enabled(void)
@@ -2881,7 +2881,17 @@ static inline bool page_poisoning_enabled(void)
   */
  static inline bool page_poisoning_enabled_static(void)
  {
-	return (static_branch_unlikely(&_page_poisoning_enabled));
+	return static_branch_unlikely(&_page_poisoning_enabled);
+}
+static inline void kernel_poison_pages(struct page *page, int numpages)
+{
+	if (page_poisoning_enabled_static())
+		__kernel_poison_pages(page, numpages);
+}
+static inline void kernel_unpoison_pages(struct page *page, int numpages)
+{
+	if (page_poisoning_enabled_static())
+		__kernel_unpoison_pages(page, numpages);
  }
  #else
  static inline bool page_poisoning_enabled(void) { return false; }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fd7f9345adc0..1388b5939551 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1271,8 +1271,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
  	if (want_init_on_free())
  		kernel_init_free_pages(page, 1 << order);
  
-	if (page_poisoning_enabled_static())
-		kernel_poison_pages(page, 1 << order);
+	kernel_poison_pages(page, 1 << order);
+
  	/*
  	 * arch_free_page() can make the page's contents inaccessible.  s390
  	 * does this.  So nothing which can access the page's contents should
@@ -2281,8 +2281,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
  	if (debug_pagealloc_enabled_static())
  		kernel_map_pages(page, 1 << order, 1);
  	kasan_alloc_pages(page, order);
-	if (page_poisoning_enabled_static())
-		kernel_unpoison_pages(page, 1 << order);
+	kernel_unpoison_pages(page, 1 << order);
  	set_page_owner(page, order, gfp_flags);
  }
  
diff --git a/mm/page_poison.c b/mm/page_poison.c
index dd7aeada036f..4d75fc9ccc7a 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -30,7 +30,7 @@ static void poison_page(struct page *page)
  	kunmap_atomic(addr);
  }
  
-void kernel_poison_pages(struct page *page, int n)
+void __kernel_poison_pages(struct page *page, int n)
  {
  	int i;
  
@@ -89,7 +89,7 @@ static void unpoison_page(struct page *page)
  	kunmap_atomic(addr);
  }
  
-void kernel_unpoison_pages(struct page *page, int n)
+void __kernel_unpoison_pages(struct page *page, int n)
  {
  	int i;
  
-- 
2.29.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 3/5] kernel/power: allow hibernation with page_poison sanity checking
  2020-11-11 15:42   ` David Hildenbrand
@ 2020-11-12 14:39     ` Vlastimil Babka
  2020-11-12 15:52       ` Rafael J. Wysocki
  2020-11-12 16:07       ` David Hildenbrand
  0 siblings, 2 replies; 16+ messages in thread
From: Vlastimil Babka @ 2020-11-12 14:39 UTC (permalink / raw)
  To: David Hildenbrand, Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, Mateusz Nosek, Laura Abbott, Rafael J. Wysocki,
	Len Brown, Pavel Machek, linux-pm

On 11/11/20 4:42 PM, David Hildenbrand wrote:
...
>> @@ -1152,12 +1152,18 @@ void clear_free_pages(void)
>>   	if (WARN_ON(!(free_pages_map)))
>>   		return;
>>   
>> -	if (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) || want_init_on_free()) {
>> +	if (page_poisoning_enabled() || want_init_on_free()) {
>>   		memory_bm_position_reset(bm);
>>   		pfn = memory_bm_next_pfn(bm);
>>   		while (pfn != BM_END_OF_MAP) {
>> -			if (pfn_valid(pfn))
>> -				clear_highpage(pfn_to_page(pfn));
>> +			if (pfn_valid(pfn)) {
>> +				struct page *page = pfn_to_page(pfn);
> 
> ^ empty line missing. And at least I prefer to declare all variables in
> the function header.
> 
> I'd even suggest to move this into a separate function like
> 
> clear_or_poison_free_page(struct page *page)
> 
> 

Ok, fixup below.

----8<----
 From cae1e8ccfa57c28ed1b2f5f8a47319b86cbdcfbf Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@suse.cz>
Date: Thu, 12 Nov 2020 15:33:07 +0100
Subject: [PATCH] kernel/power: allow hibernation with page_poison sanity
  checking-fix

Adapt to __kernel_unpoison_pages fixup. Split out clear_or_poison_free_page()
per David Hildenbrand.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
  include/linux/mm.h      |  1 +
  kernel/power/snapshot.c | 18 ++++++++++--------
  2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 861b9392b5dc..d4cfb06a611e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2896,6 +2896,7 @@ static inline void kernel_unpoison_pages(struct page *page, int numpages)
  #else
  static inline bool page_poisoning_enabled(void) { return false; }
  static inline bool page_poisoning_enabled_static(void) { return false; }
+static inline void __kernel_poison_pages(struct page *page, int numpages) { }
  static inline void kernel_poison_pages(struct page *page, int numpages) { }
  static inline void kernel_unpoison_pages(struct page *page, int numpages) { }
  #endif
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 6b1c84afa891..a3491b29c5cc 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1144,6 +1144,14 @@ void free_basic_memory_bitmaps(void)
  	pr_debug("Basic memory bitmaps freed\n");
  }
  
+static void clear_or_poison_free_page(struct page *page)
+{
+	if (page_poisoning_enabled_static())
+		__kernel_poison_pages(page, 1);
+	else if (want_init_on_free())
+		clear_highpage(page);
+}
+
  void clear_or_poison_free_pages(void)
  {
  	struct memory_bitmap *bm = free_pages_map;
@@ -1156,14 +1164,8 @@ void clear_or_poison_free_pages(void)
  		memory_bm_position_reset(bm);
  		pfn = memory_bm_next_pfn(bm);
  		while (pfn != BM_END_OF_MAP) {
-			if (pfn_valid(pfn)) {
-				struct page *page = pfn_to_page(pfn);
-				if (page_poisoning_enabled_static())
-					kernel_poison_pages(page, 1);
-				else if (want_init_on_free())
-					clear_highpage(page);
-
-			}
+			if (pfn_valid(pfn))
+				clear_or_poison_free_page(pfn_to_page(pfn));
  
  			pfn = memory_bm_next_pfn(bm);
  		}
-- 
2.29.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 3/5] kernel/power: allow hibernation with page_poison sanity checking
  2020-11-12 14:39     ` Vlastimil Babka
@ 2020-11-12 15:52       ` Rafael J. Wysocki
  2020-11-12 16:07       ` David Hildenbrand
  1 sibling, 0 replies; 16+ messages in thread
From: Rafael J. Wysocki @ 2020-11-12 15:52 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: David Hildenbrand, Andrew Morton, Linux Memory Management List,
	Linux Kernel Mailing List, Alexander Potapenko, Kees Cook,
	Michal Hocko, Mateusz Nosek, Laura Abbott, Rafael J. Wysocki,
	Len Brown, Pavel Machek, Linux PM

On Thu, Nov 12, 2020 at 3:42 PM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 11/11/20 4:42 PM, David Hildenbrand wrote:
> ...
> >> @@ -1152,12 +1152,18 @@ void clear_free_pages(void)
> >>      if (WARN_ON(!(free_pages_map)))
> >>              return;
> >>
> >> -    if (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) || want_init_on_free()) {
> >> +    if (page_poisoning_enabled() || want_init_on_free()) {
> >>              memory_bm_position_reset(bm);
> >>              pfn = memory_bm_next_pfn(bm);
> >>              while (pfn != BM_END_OF_MAP) {
> >> -                    if (pfn_valid(pfn))
> >> -                            clear_highpage(pfn_to_page(pfn));
> >> +                    if (pfn_valid(pfn)) {
> >> +                            struct page *page = pfn_to_page(pfn);
> >
> > ^ empty line missing. And at least I prefer to declare all variables in
> > the function header.
> >
> > I'd even suggest to move this into a separate function like
> >
> > clear_or_poison_free_page(struct page *page)
> >
> >
>
> Ok, fixup below.
>
> ----8<----
>  From cae1e8ccfa57c28ed1b2f5f8a47319b86cbdcfbf Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka <vbabka@suse.cz>
> Date: Thu, 12 Nov 2020 15:33:07 +0100
> Subject: [PATCH] kernel/power: allow hibernation with page_poison sanity
>   checking-fix
>
> Adapt to __kernel_unpoison_pages fixup. Split out clear_or_poison_free_page()
> per David Hildenbrand.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Still

Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

> ---
>   include/linux/mm.h      |  1 +
>   kernel/power/snapshot.c | 18 ++++++++++--------
>   2 files changed, 11 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 861b9392b5dc..d4cfb06a611e 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2896,6 +2896,7 @@ static inline void kernel_unpoison_pages(struct page *page, int numpages)
>   #else
>   static inline bool page_poisoning_enabled(void) { return false; }
>   static inline bool page_poisoning_enabled_static(void) { return false; }
> +static inline void __kernel_poison_pages(struct page *page, int numpages) { }
>   static inline void kernel_poison_pages(struct page *page, int numpages) { }
>   static inline void kernel_unpoison_pages(struct page *page, int numpages) { }
>   #endif
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 6b1c84afa891..a3491b29c5cc 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -1144,6 +1144,14 @@ void free_basic_memory_bitmaps(void)
>         pr_debug("Basic memory bitmaps freed\n");
>   }
>
> +static void clear_or_poison_free_page(struct page *page)
> +{
> +       if (page_poisoning_enabled_static())
> +               __kernel_poison_pages(page, 1);
> +       else if (want_init_on_free())
> +               clear_highpage(page);
> +}
> +
>   void clear_or_poison_free_pages(void)
>   {
>         struct memory_bitmap *bm = free_pages_map;
> @@ -1156,14 +1164,8 @@ void clear_or_poison_free_pages(void)
>                 memory_bm_position_reset(bm);
>                 pfn = memory_bm_next_pfn(bm);
>                 while (pfn != BM_END_OF_MAP) {
> -                       if (pfn_valid(pfn)) {
> -                               struct page *page = pfn_to_page(pfn);
> -                               if (page_poisoning_enabled_static())
> -                                       kernel_poison_pages(page, 1);
> -                               else if (want_init_on_free())
> -                                       clear_highpage(page);
> -
> -                       }
> +                       if (pfn_valid(pfn))
> +                               clear_or_poison_free_page(pfn_to_page(pfn));
>
>                         pfn = memory_bm_next_pfn(bm);
>                 }
> --
> 2.29.1
>
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 2/5] mm, page_poison: use static key more efficiently
  2020-11-12 14:37     ` Vlastimil Babka
@ 2020-11-12 16:06       ` David Hildenbrand
  0 siblings, 0 replies; 16+ messages in thread
From: David Hildenbrand @ 2020-11-12 16:06 UTC (permalink / raw)
  To: Vlastimil Babka, Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, Mateusz Nosek, Laura Abbott

On 12.11.20 15:37, Vlastimil Babka wrote:
> On 11/11/20 4:38 PM, David Hildenbrand wrote:
>> On 03.11.20 16:22, Vlastimil Babka wrote:
>>> Commit 11c9c7edae06 ("mm/page_poison.c: replace bool variable with static key")
>>> changed page_poisoning_enabled() to a static key check. However, the function
>>> is not inlined, so each check still involves a function call with overhead not
>>> eliminated when page poisoning is disabled.
>>>
>>> Analogically to how debug_pagealloc is handled, this patch converts
>>> page_poisoning_enabled() back to boolean check, and introduces
>>> page_poisoning_enabled_static() for fast paths. Both functions are inlined.
>>>
>>> The function kernel_poison_pages() is also called unconditionally and does
>>> the static key check inside. Remove it from there and put it to callers. Also
>>> split it to two functions kernel_poison_pages() and kernel_unpoison_pages()
>>> instead of the confusing bool parameter.
>>>
>>> Also optimize the check that enables page poisoning instead of debug_pagealloc
>>> for architectures without proper debug_pagealloc support. Move the check to
>>> init_mem_debugging_and_hardening() to enable a single static key instead of
>>> having two static branches in page_poisoning_enabled_static().
>>
>> [...]
>>
>>> + * For use in fast paths after init_mem_debugging() has run, or when a
>>> + * false negative result is not harmful when called too early.
>>> + */
>>> +static inline bool page_poisoning_enabled_static(void)
>>> +{
>>> +	return (static_branch_unlikely(&_page_poisoning_enabled));
>>
>> As already mentioned IIRC:
> 
> Yes, it was, and I thought I fixed it. Guess not.
> 
>> return static_branch_unlikely(&_page_poisoning_enabled);
>>
>>> +}
> 
>>> @@ -1260,7 +1271,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
>>>    	if (want_init_on_free())
>>>    		kernel_init_free_pages(page, 1 << order);
>>>    
>>> -	kernel_poison_pages(page, 1 << order, 0);
>>> +	if (page_poisoning_enabled_static())
>>> +		kernel_poison_pages(page, 1 << order);
>>
>> This would look much better by having kernel_poison_pages() simply be
>> implemented in a header, where the static check is performed.
>>
>> Take a look at how it's handled in mm/shuffle.h
>    
> Ok. Fixup below.
> 
> ----8<----
> 
>   From 7ce26ba61296f583f0f9089e7887f07424f25d2c Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka <vbabka@suse.cz>
> Date: Thu, 12 Nov 2020 15:20:58 +0100
> Subject: [PATCH] mm, page_poison: use static key more efficiently-fix
> 
> Non-functional cleanups, per David Hildenbrand.
> 
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>    include/linux/mm.h | 16 +++++++++++++---
>    mm/page_alloc.c    |  7 +++----
>    mm/page_poison.c   |  4 ++--
>    3 files changed, 18 insertions(+), 9 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 4d6dd9f44571..861b9392b5dc 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2867,8 +2867,8 @@ extern int apply_to_existing_page_range(struct mm_struct *mm,
>    
>    extern void init_mem_debugging_and_hardening(void);
>    #ifdef CONFIG_PAGE_POISONING
> -extern void kernel_poison_pages(struct page *page, int numpages);
> -extern void kernel_unpoison_pages(struct page *page, int numpages);
> +extern void __kernel_poison_pages(struct page *page, int numpages);
> +extern void __kernel_unpoison_pages(struct page *page, int numpages);
>    extern bool _page_poisoning_enabled_early;
>    DECLARE_STATIC_KEY_FALSE(_page_poisoning_enabled);
>    static inline bool page_poisoning_enabled(void)
> @@ -2881,7 +2881,17 @@ static inline bool page_poisoning_enabled(void)
>     */
>    static inline bool page_poisoning_enabled_static(void)
>    {
> -	return (static_branch_unlikely(&_page_poisoning_enabled));
> +	return static_branch_unlikely(&_page_poisoning_enabled);
> +}
> +static inline void kernel_poison_pages(struct page *page, int numpages)
> +{
> +	if (page_poisoning_enabled_static())
> +		__kernel_poison_pages(page, numpages);
> +}
> +static inline void kernel_unpoison_pages(struct page *page, int numpages)
> +{
> +	if (page_poisoning_enabled_static())
> +		__kernel_unpoison_pages(page, numpages);
>    }
>    #else
>    static inline bool page_poisoning_enabled(void) { return false; }
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index fd7f9345adc0..1388b5939551 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1271,8 +1271,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
>    	if (want_init_on_free())
>    		kernel_init_free_pages(page, 1 << order);
>    
> -	if (page_poisoning_enabled_static())
> -		kernel_poison_pages(page, 1 << order);
> +	kernel_poison_pages(page, 1 << order);
> +
>    	/*
>    	 * arch_free_page() can make the page's contents inaccessible.  s390
>    	 * does this.  So nothing which can access the page's contents should
> @@ -2281,8 +2281,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>    	if (debug_pagealloc_enabled_static())
>    		kernel_map_pages(page, 1 << order, 1);
>    	kasan_alloc_pages(page, order);
> -	if (page_poisoning_enabled_static())
> -		kernel_unpoison_pages(page, 1 << order);
> +	kernel_unpoison_pages(page, 1 << order);
>    	set_page_owner(page, order, gfp_flags);
>    }
>    
> diff --git a/mm/page_poison.c b/mm/page_poison.c
> index dd7aeada036f..4d75fc9ccc7a 100644
> --- a/mm/page_poison.c
> +++ b/mm/page_poison.c
> @@ -30,7 +30,7 @@ static void poison_page(struct page *page)
>    	kunmap_atomic(addr);
>    }
>    
> -void kernel_poison_pages(struct page *page, int n)
> +void __kernel_poison_pages(struct page *page, int n)
>    {
>    	int i;
>    
> @@ -89,7 +89,7 @@ static void unpoison_page(struct page *page)
>    	kunmap_atomic(addr);
>    }
>    
> -void kernel_unpoison_pages(struct page *page, int n)
> +void __kernel_unpoison_pages(struct page *page, int n)
>    {
>    	int i;
>    
> 

LGTM.

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v2 3/5] kernel/power: allow hibernation with page_poison sanity checking
  2020-11-12 14:39     ` Vlastimil Babka
  2020-11-12 15:52       ` Rafael J. Wysocki
@ 2020-11-12 16:07       ` David Hildenbrand
  1 sibling, 0 replies; 16+ messages in thread
From: David Hildenbrand @ 2020-11-12 16:07 UTC (permalink / raw)
  To: Vlastimil Babka, Andrew Morton
  Cc: linux-mm, linux-kernel, Alexander Potapenko, Kees Cook,
	Michal Hocko, Mateusz Nosek, Laura Abbott, Rafael J. Wysocki,
	Len Brown, Pavel Machek, linux-pm

On 12.11.20 15:39, Vlastimil Babka wrote:
> On 11/11/20 4:42 PM, David Hildenbrand wrote:
> ...
>>> @@ -1152,12 +1152,18 @@ void clear_free_pages(void)
>>>    	if (WARN_ON(!(free_pages_map)))
>>>    		return;
>>>    
>>> -	if (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) || want_init_on_free()) {
>>> +	if (page_poisoning_enabled() || want_init_on_free()) {
>>>    		memory_bm_position_reset(bm);
>>>    		pfn = memory_bm_next_pfn(bm);
>>>    		while (pfn != BM_END_OF_MAP) {
>>> -			if (pfn_valid(pfn))
>>> -				clear_highpage(pfn_to_page(pfn));
>>> +			if (pfn_valid(pfn)) {
>>> +				struct page *page = pfn_to_page(pfn);
>>
>> ^ empty line missing. And at least I prefer to declare all variables in
>> the function header.
>>
>> I'd even suggest to move this into a separate function like
>>
>> clear_or_poison_free_page(struct page *page)
>>
>>
> 
> Ok, fixup below.
> 
> ----8<----
>   From cae1e8ccfa57c28ed1b2f5f8a47319b86cbdcfbf Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka <vbabka@suse.cz>
> Date: Thu, 12 Nov 2020 15:33:07 +0100
> Subject: [PATCH] kernel/power: allow hibernation with page_poison sanity
>    checking-fix
> 
> Adapt to __kernel_unpoison_pages fixup. Split out clear_or_poison_free_page()
> per David Hildenbrand.
> 
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>    include/linux/mm.h      |  1 +
>    kernel/power/snapshot.c | 18 ++++++++++--------
>    2 files changed, 11 insertions(+), 8 deletions(-)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 861b9392b5dc..d4cfb06a611e 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2896,6 +2896,7 @@ static inline void kernel_unpoison_pages(struct page *page, int numpages)
>    #else
>    static inline bool page_poisoning_enabled(void) { return false; }
>    static inline bool page_poisoning_enabled_static(void) { return false; }
> +static inline void __kernel_poison_pages(struct page *page, int numpages) { }
>    static inline void kernel_poison_pages(struct page *page, int numpages) { }
>    static inline void kernel_unpoison_pages(struct page *page, int numpages) { }
>    #endif
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 6b1c84afa891..a3491b29c5cc 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -1144,6 +1144,14 @@ void free_basic_memory_bitmaps(void)
>    	pr_debug("Basic memory bitmaps freed\n");
>    }
>    
> +static void clear_or_poison_free_page(struct page *page)
> +{
> +	if (page_poisoning_enabled_static())
> +		__kernel_poison_pages(page, 1);
> +	else if (want_init_on_free())
> +		clear_highpage(page);
> +}
> +
>    void clear_or_poison_free_pages(void)
>    {
>    	struct memory_bitmap *bm = free_pages_map;
> @@ -1156,14 +1164,8 @@ void clear_or_poison_free_pages(void)
>    		memory_bm_position_reset(bm);
>    		pfn = memory_bm_next_pfn(bm);
>    		while (pfn != BM_END_OF_MAP) {
> -			if (pfn_valid(pfn)) {
> -				struct page *page = pfn_to_page(pfn);
> -				if (page_poisoning_enabled_static())
> -					kernel_poison_pages(page, 1);
> -				else if (want_init_on_free())
> -					clear_highpage(page);
> -
> -			}
> +			if (pfn_valid(pfn))
> +				clear_or_poison_free_page(pfn_to_page(pfn));
>    
>    			pfn = memory_bm_next_pfn(bm);
>    		}
> 

LGTM, thanks!

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread

Thread overview: 16+ messages
2020-11-03 15:22 [PATCH v2 0/5] cleanup page poisoning Vlastimil Babka
2020-11-03 15:22 ` [PATCH v2 1/5] mm, page_alloc: do not rely on the order of page_poison and init_on_alloc/free parameters Vlastimil Babka
2020-11-03 15:22 ` [PATCH v2 2/5] mm, page_poison: use static key more efficiently Vlastimil Babka
2020-11-11 15:38   ` David Hildenbrand
2020-11-12 14:37     ` Vlastimil Babka
2020-11-12 16:06       ` David Hildenbrand
2020-11-03 15:22 ` [PATCH v2 3/5] kernel/power: allow hibernation with page_poison sanity checking Vlastimil Babka
2020-11-05 18:36   ` Rafael J. Wysocki
2020-11-11 15:42   ` David Hildenbrand
2020-11-12 14:39     ` Vlastimil Babka
2020-11-12 15:52       ` Rafael J. Wysocki
2020-11-12 16:07       ` David Hildenbrand
2020-11-03 15:22 ` [PATCH v2 4/5] mm, page_poison: remove CONFIG_PAGE_POISONING_NO_SANITY Vlastimil Babka
2020-11-11 15:43   ` David Hildenbrand
2020-11-03 15:22 ` [PATCH v2 5/5] mm, page_poison: remove CONFIG_PAGE_POISONING_ZERO Vlastimil Babka
2020-11-11 15:45   ` David Hildenbrand
