* [RFC PATCH] mm, page_alloc: drop should_suppress_show_mem
@ 2018-09-07 11:43 ` Michal Hocko
0 siblings, 0 replies; 3+ messages in thread
From: Michal Hocko @ 2018-09-07 11:43 UTC (permalink / raw)
To: linux-mm
Cc: Vlastimil Babka, Andrew Morton, David Rientjes, Mel Gorman, LKML,
Michal Hocko
From: Michal Hocko <mhocko@suse.com>
should_suppress_show_mem was introduced to reduce the overhead of
show_mem on large NUMA systems. Things have changed since then, though.
Namely, c78e93630d15 ("mm: do not walk all of system memory during
show_mem") has reduced the overhead considerably.
Moreover, warn_alloc_show_mem already clears SHOW_MEM_FILTER_NODES when
called from IRQ context, so we are not printing per-node stats anyway.
Remove should_suppress_show_mem because it loses potentially
interesting information about allocation failures. We have seen a bug
report where the system becomes unresponsive under memory pressure and
the only output is
kernel: [2032243.696888] qlge 0000:8b:00.1 ql1: Could not get a page chunk, i=8, clean_idx =200 .
kernel: [2032243.710725] swapper/7: page allocation failure: order:1, mode:0x1084120(GFP_ATOMIC|__GFP_COLD|__GFP_COMP)
without any additional information for debugging. It would be great to
see the state of the page allocator at that moment.
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
mm/page_alloc.c | 16 +---------------
1 file changed, 1 insertion(+), 15 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 89d2a2ab3fe6..025f23dc282e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3366,26 +3366,12 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
return NULL;
}
-/*
- * Large machines with many possible nodes should not always dump per-node
- * meminfo in irq context.
- */
-static inline bool should_suppress_show_mem(void)
-{
- bool ret = false;
-
-#if NODES_SHIFT > 8
- ret = in_interrupt();
-#endif
- return ret;
-}
-
static void warn_alloc_show_mem(gfp_t gfp_mask, nodemask_t *nodemask)
{
unsigned int filter = SHOW_MEM_FILTER_NODES;
static DEFINE_RATELIMIT_STATE(show_mem_rs, HZ, 1);
- if (should_suppress_show_mem() || !__ratelimit(&show_mem_rs))
+ if (!__ratelimit(&show_mem_rs))
return;
/*
--
2.18.0
* Re: [RFC PATCH] mm, page_alloc: drop should_suppress_show_mem
2018-09-07 11:43 ` Michal Hocko
@ 2018-09-07 12:16 ` Vlastimil Babka
-1 siblings, 0 replies; 3+ messages in thread
From: Vlastimil Babka @ 2018-09-07 12:16 UTC (permalink / raw)
To: Michal Hocko, linux-mm
Cc: Andrew Morton, David Rientjes, Mel Gorman, LKML, Michal Hocko
On 09/07/2018 01:43 PM, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
>
> should_suppress_show_mem was introduced to reduce the overhead of
> show_mem on large NUMA systems. Things have changed since then, though.
> Namely, c78e93630d15 ("mm: do not walk all of system memory during
> show_mem") has reduced the overhead considerably.
>
> Moreover, warn_alloc_show_mem already clears SHOW_MEM_FILTER_NODES when
> called from IRQ context, so we are not printing per-node stats anyway.
>
> Remove should_suppress_show_mem because it loses potentially
> interesting information about allocation failures. We have seen a bug
> report where the system becomes unresponsive under memory pressure and
> the only output is
> kernel: [2032243.696888] qlge 0000:8b:00.1 ql1: Could not get a page chunk, i=8, clean_idx =200 .
> kernel: [2032243.710725] swapper/7: page allocation failure: order:1, mode:0x1084120(GFP_ATOMIC|__GFP_COLD|__GFP_COMP)
>
> without any additional information for debugging. It would be great to
> see the state of the page allocator at that moment.
>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
The dependency on a build-time constant instead of the real system size
is also unfortunate. Maybe the overhead depended on the number of
*possible* nodes in the past, but I don't think that's the case today.
Thanks.