* [RFC PATCH] mm: enable page poisoning early at boot
@ 2017-03-06 14:16 Vinayak Menon
From: Vinayak Menon @ 2017-03-06 14:16 UTC (permalink / raw)
To: iamjoonsoo.kim, labbott, mhocko, akpm; +Cc: shashim, linux-mm, Vinayak Menon
On SPARSEMEM systems, page poisoning is enabled only after the buddy
allocator is up, because of the dependency on page extension init. As a
result, the pages released by free_all_bootmem are not poisoned. This
delays or misses the detection of some issues, because a page must
undergo another alloc-free-alloc cycle before any corruption can be
detected.
Enable page poisoning early by getting rid of the PAGE_EXT_DEBUG_POISON
flag. Since every free page will now be poisoned, the flag need not be
checked before verifying the poison pattern during an alloc.
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
---
Sending this as an RFC because I am not sure whether I have missed a
code path that frees pages to buddy while skipping kernel_poison_pages,
which would make the PAGE_EXT_DEBUG_POISON flag a necessity.
include/linux/mm.h | 1 -
mm/page_alloc.c | 13 +++------
mm/page_ext.c | 3 ---
mm/page_poison.c | 77 +++++++++---------------------------------------------
4 files changed, 15 insertions(+), 79 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0d65dd7..b881966 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2473,7 +2473,6 @@ extern long copy_huge_page_from_user(struct page *dst_page,
#endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
extern struct page_ext_operations debug_guardpage_ops;
-extern struct page_ext_operations page_poisoning_ops;
#ifdef CONFIG_DEBUG_PAGEALLOC
extern unsigned int _debug_guardpage_minorder;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fc5db1b..860b36f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1694,10 +1694,10 @@ static inline int check_new_page(struct page *page)
return 1;
}
-static inline bool free_pages_prezeroed(bool poisoned)
+static inline bool free_pages_prezeroed(void)
{
return IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
- page_poisoning_enabled() && poisoned;
+ page_poisoning_enabled();
}
#ifdef CONFIG_DEBUG_VM
@@ -1751,17 +1751,10 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags
unsigned int alloc_flags)
{
int i;
- bool poisoned = true;
-
- for (i = 0; i < (1 << order); i++) {
- struct page *p = page + i;
- if (poisoned)
- poisoned &= page_is_poisoned(p);
- }
post_alloc_hook(page, order, gfp_flags);
- if (!free_pages_prezeroed(poisoned) && (gfp_flags & __GFP_ZERO))
+ if (!free_pages_prezeroed() && (gfp_flags & __GFP_ZERO))
for (i = 0; i < (1 << order); i++)
clear_highpage(page + i);
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 121dcff..fc3e7ff 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -59,9 +59,6 @@
static struct page_ext_operations *page_ext_ops[] = {
&debug_guardpage_ops,
-#ifdef CONFIG_PAGE_POISONING
- &page_poisoning_ops,
-#endif
#ifdef CONFIG_PAGE_OWNER
&page_owner_ops,
#endif
diff --git a/mm/page_poison.c b/mm/page_poison.c
index 2e647c6..be19e98 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -6,7 +6,6 @@
#include <linux/poison.h>
#include <linux/ratelimit.h>
-static bool __page_poisoning_enabled __read_mostly;
static bool want_page_poisoning __read_mostly;
static int early_page_poison_param(char *buf)
@@ -19,74 +18,21 @@ static int early_page_poison_param(char *buf)
bool page_poisoning_enabled(void)
{
- return __page_poisoning_enabled;
-}
-
-static bool need_page_poisoning(void)
-{
- return want_page_poisoning;
-}
-
-static void init_page_poisoning(void)
-{
/*
- * page poisoning is debug page alloc for some arches. If either
- * of those options are enabled, enable poisoning
+ * Assumes that debug_pagealloc_enabled is set before
+ * free_all_bootmem.
+ * Page poisoning is debug page alloc for some arches. If
+ * either of those options is enabled, enable poisoning.
*/
- if (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC)) {
- if (!want_page_poisoning && !debug_pagealloc_enabled())
- return;
- } else {
- if (!want_page_poisoning)
- return;
- }
-
- __page_poisoning_enabled = true;
-}
-
-struct page_ext_operations page_poisoning_ops = {
- .need = need_page_poisoning,
- .init = init_page_poisoning,
-};
-
-static inline void set_page_poison(struct page *page)
-{
- struct page_ext *page_ext;
-
- page_ext = lookup_page_ext(page);
- if (unlikely(!page_ext))
- return;
-
- __set_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static inline void clear_page_poison(struct page *page)
-{
- struct page_ext *page_ext;
-
- page_ext = lookup_page_ext(page);
- if (unlikely(!page_ext))
- return;
-
- __clear_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-bool page_is_poisoned(struct page *page)
-{
- struct page_ext *page_ext;
-
- page_ext = lookup_page_ext(page);
- if (unlikely(!page_ext))
- return false;
-
- return test_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+ return (want_page_poisoning ||
+ (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
+ debug_pagealloc_enabled()));
}
static void poison_page(struct page *page)
{
void *addr = kmap_atomic(page);
- set_page_poison(page);
memset(addr, PAGE_POISON, PAGE_SIZE);
kunmap_atomic(addr);
}
@@ -140,12 +86,13 @@ static void unpoison_page(struct page *page)
{
void *addr;
- if (!page_is_poisoned(page))
- return;
-
addr = kmap_atomic(page);
+ /*
+ * When page poisoning is enabled, every page freed to
+ * buddy is poisoned, so no extra check is needed to
+ * see whether a page was poisoned.
+ */
check_poison_mem(addr, PAGE_SIZE);
- clear_page_poison(page);
kunmap_atomic(addr);
}
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
member of the Code Aurora Forum, hosted by The Linux Foundation
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org
* Re: [RFC PATCH] mm: enable page poisoning early at boot
From: Laura Abbott @ 2017-03-06 23:17 UTC (permalink / raw)
To: Vinayak Menon, iamjoonsoo.kim, mhocko, akpm; +Cc: shashim, linux-mm
On 03/06/2017 06:16 AM, Vinayak Menon wrote:
> On SPARSEMEM systems, page poisoning is enabled only after the buddy
> allocator is up, because of the dependency on page extension init. As a
> result, the pages released by free_all_bootmem are not poisoned. This
> delays or misses the detection of some issues, because a page must
> undergo another alloc-free-alloc cycle before any corruption can be
> detected.
> Enable page poisoning early by getting rid of the PAGE_EXT_DEBUG_POISON
> flag. Since every free page will now be poisoned, the flag need not be
> checked before verifying the poison pattern during an alloc.
>
> Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
> ---
>
> Sending it as an RFC because I am not sure if I have missed a code path
> that can free pages to buddy skipping kernel_poison_pages, making
> the flag PAGE_EXT_DEBUG_POISON a necessity.
>
Have you tested this with hibernation? That's one place which tends
to cause problems with poisoning.
I'm curious what issues you've caught with this patch.
Thanks,
Laura
* Re: [RFC PATCH] mm: enable page poisoning early at boot
From: Vinayak Menon @ 2017-03-07 14:08 UTC (permalink / raw)
To: Laura Abbott, iamjoonsoo.kim, mhocko, akpm; +Cc: shashim, linux-mm
On 3/7/2017 4:47 AM, Laura Abbott wrote:
> Have you tested this with hibernation? That's one place which tends
> to cause problems with poisoning.
Today I tried that on QEMU and did not see any issues. I did not expect
to hit any problems, since CONFIG_PAGE_POISONING_NO_SANITY gets enabled
with HIBERNATION.
>
> I'm curious what issues you've caught with this patch.
This patch is used to catch corruption during or soon after boot caused
by DDR instability. The free list can be traversed in a memory dump and
the corrupted physical addresses analyzed for patterns. But I think this
would also be useful for catching software-induced corruption that can
currently be missed, because a page has to undergo an alloc-free-alloc
cycle before any corruption is detected. And on systems with more RAM,
it can take a long time for every page to complete this cycle.
Thanks,
Vinayak