linux-mm.kvack.org archive mirror
* [PATCH v5] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order
@ 2020-07-04  2:26 Long Li
  2020-07-04  2:32 ` Matthew Wilcox
  0 siblings, 1 reply; 2+ messages in thread
From: Long Li @ 2020-07-04  2:26 UTC (permalink / raw)
  To: willy, cl, penberg, rientjes, iamjoonsoo.kim, akpm; +Cc: linux-mm

kmalloc cannot allocate memory from HIGHMEM.  Allocating large amounts
of memory currently bypasses the GFP_SLAB_BUG_MASK check and will
simply leak the pages when page_address() returns NULL.  To fix this,
factor the GFP_SLAB_BUG_MASK check out of slab & slub and call it from
kmalloc_order() as well.  To keep the code clear, the warning message
is emitted in one place.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Long Li <lonuxli.64@gmail.com>
---

changes in v5:
-Changed the check function name to kmalloc_fix_flags(); this name
may be more appropriate.

changes in v4:
-Changed the check function name to kmalloc_check_flags()
-Put the flags check into kmalloc_check_flags()

changes in v3:
-Put the warning message in one place
-Updated the changelog to be clearer

 mm/slab.c        |  8 +-------
 mm/slab.h        |  1 +
 mm/slab_common.c | 18 +++++++++++++++++-
 mm/slub.c        |  8 +-------
 4 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index ac7a223d9ac3..f2f150bd180b 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2573,13 +2573,7 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep,
 	 * Be lazy and only check for valid flags here,  keeping it out of the
 	 * critical path in kmem_cache_alloc().
 	 */
-	if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
-		gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;
-		flags &= ~GFP_SLAB_BUG_MASK;
-		pr_warn("Unexpected gfp: %#x (%pGg). Fixing up to gfp: %#x (%pGg). Fix your code!\n",
-				invalid_mask, &invalid_mask, flags, &flags);
-		dump_stack();
-	}
+	flags = kmalloc_fix_flags(flags);
 	WARN_ON_ONCE(cachep->ctor && (flags & __GFP_ZERO));
 	local_flags = flags & (GFP_CONSTRAINT_MASK|GFP_RECLAIM_MASK);
 
diff --git a/mm/slab.h b/mm/slab.h
index a06f3313e4a0..8cd2bf391725 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -90,6 +90,7 @@ void create_kmalloc_caches(slab_flags_t);
 struct kmem_cache *kmalloc_slab(size_t, gfp_t);
 #endif
 
+gfp_t kmalloc_fix_flags(gfp_t flags);
 
 /* Functions provided by the slab allocators */
 int __kmem_cache_create(struct kmem_cache *, slab_flags_t flags);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index a143a8c8f874..16d63f6dad05 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -26,6 +26,8 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/kmem.h>
 
+#include "internal.h"
+
 #include "slab.h"
 
 enum slab_state slab_state;
@@ -805,6 +807,20 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 }
 #endif /* !CONFIG_SLOB */
 
+gfp_t kmalloc_fix_flags(gfp_t flags)
+{
+	if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
+		gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;
+
+		flags &= ~GFP_SLAB_BUG_MASK;
+		pr_warn("Unexpected gfp: %#x (%pGg). Fixing up to gfp: %#x (%pGg). Fix your code!\n",
+				invalid_mask, &invalid_mask, flags, &flags);
+		dump_stack();
+	}
+
+	return flags;
+}
+
 /*
  * To avoid unnecessary overhead, we pass through large allocation requests
  * directly to the page allocator. We use __GFP_COMP, because we will need to
@@ -815,7 +831,7 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	void *ret = NULL;
 	struct page *page;
 
-	flags |= __GFP_COMP;
+	flags = kmalloc_fix_flags(flags) | __GFP_COMP;
 	page = alloc_pages(flags, order);
 	if (likely(page)) {
 		ret = page_address(page);
diff --git a/mm/slub.c b/mm/slub.c
index 62d2de56549e..dfaad93163d5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1817,13 +1817,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 {
-	if (unlikely(flags & GFP_SLAB_BUG_MASK)) {
-		gfp_t invalid_mask = flags & GFP_SLAB_BUG_MASK;
-		flags &= ~GFP_SLAB_BUG_MASK;
-		pr_warn("Unexpected gfp: %#x (%pGg). Fixing up to gfp: %#x (%pGg). Fix your code!\n",
-				invalid_mask, &invalid_mask, flags, &flags);
-		dump_stack();
-	}
+	flags = kmalloc_fix_flags(flags);
 
 	return allocate_slab(s,
 		flags & (GFP_RECLAIM_MASK | GFP_CONSTRAINT_MASK), node);
-- 
2.17.1




* Re: [PATCH v5] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order
  2020-07-04  2:26 [PATCH v5] mm, slab: Check GFP_SLAB_BUG_MASK before alloc_pages in kmalloc_order Long Li
@ 2020-07-04  2:32 ` Matthew Wilcox
  0 siblings, 0 replies; 2+ messages in thread
From: Matthew Wilcox @ 2020-07-04  2:32 UTC (permalink / raw)
  To: Long Li; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, akpm, linux-mm

On Sat, Jul 04, 2020 at 02:26:07AM +0000, Long Li wrote:
> kmalloc cannot allocate memory from HIGHMEM.  Allocating large amounts
> of memory currently bypasses the GFP_SLAB_BUG_MASK check and will
> simply leak the pages when page_address() returns NULL.  To fix this,
> factor the GFP_SLAB_BUG_MASK check out of slab & slub and call it from
> kmalloc_order() as well.  To keep the code clear, the warning message
> is emitted in one place.
> 
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reviewed-by: Pekka Enberg <penberg@kernel.org>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Long Li <lonuxli.64@gmail.com>
> ---
> 
> changes in v5:
> -Changed the check function name to kmalloc_fix_flags(); this name
> may be more appropriate.
> 
> changes in v4:
> -Changed the check function name to kmalloc_check_flags()
> -Put the flags check into kmalloc_check_flags()

No.  As I said:

The point of not doing that was that this is unlikely().  With your
change there is now a function call to check something that's (extremely)
unlikely().


