From: David Rientjes <rientjes@google.com>
To: Rasmus Villemoes <linux@rasmusvillemoes.dk>, Dave Kleikamp <shaggy@kernel.org>, Christoph Hellwig <hch@lst.de>
Cc: Andrew Morton <akpm@linux-foundation.org>, Sebastian Ott <sebott@linux.vnet.ibm.com>, Mikulas Patocka <mpatocka@redhat.com>, Catalin Marinas <catalin.marinas@arm.com>, jfs-discussion@lists.sourceforge.net, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch 1/2] mm, mempool: poison elements backed by slab allocator
Date: Thu, 19 Mar 2015 16:20:37 -0700 (PDT)
Message-ID: <alpine.DEB.2.10.1503191602330.513@chino.kir.corp.google.com>
In-Reply-To: <8761a1dxsv.fsf@rasmusvillemoes.dk>

On Mon, 16 Mar 2015, Rasmus Villemoes wrote:

> > Mempools keep elements in a reserved pool for contexts in which
> > allocation may not be possible.  When an element is allocated from the
> > reserved pool, its memory contents is the same as when it was added to
> > the reserved pool.
> >
> > Because of this, elements lack any free poisoning to detect
> > use-after-free errors.
> >
> > This patch adds free poisoning for elements backed by the slab
> > allocator.  This is possible because the mempool layer knows the object
> > size of each element.
> >
> > When an element is added to the reserved pool, it is poisoned with
> > POISON_FREE.  When it is removed from the reserved pool, the contents
> > are checked for POISON_FREE.  If there is a mismatch, a warning is
> > emitted to the kernel log.
> >
> > +
> > +static void poison_slab_element(mempool_t *pool, void *element)
> > +{
> > +	if (pool->alloc == mempool_alloc_slab ||
> > +	    pool->alloc == mempool_kmalloc) {
> > +		size_t size = ksize(element);
> > +		u8 *obj = element;
> > +
> > +		memset(obj, POISON_FREE, size - 1);
> > +		obj[size - 1] = POISON_END;
> > +	}
> > +}
>
> Maybe a stupid question, but what happens if the underlying slab
> allocator has non-trivial ->ctor?
Not a stupid question at all, it's very legitimate, thanks for thinking about it!

Using slab constructors with mempools is inherently risky because you don't know where the element is coming from when it is returned by mempool_alloc(): was it allocated in GFP_NOFS context from the slab allocator, or is it coming from mempool's reserved pool of elements?  In the former case the constructor is properly called; in the latter it is not: we simply pop the element from the reserved pool and return it to the caller.

For that reason, without mempool element poisoning, we need to take care that objects are properly deconstructed when freed to the reserved pool so that they are in a state we expect when returned from mempool_alloc().  Thus, at least part of the slab constructor must be duplicated before calling mempool_free().

There's one user in the tree that actually does this: the mempool_create_slab_pool() in fs/jfs/jfs_metapage.c.  It does exactly what is described above: before doing mempool_free(), it clears the necessary fields that are duplicated in the slab object constructor.

We'd like to be able to do this:

diff --git a/mm/mempool.c b/mm/mempool.c
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -15,6 +15,7 @@
 #include <linux/mempool.h>
 #include <linux/blkdev.h>
 #include <linux/writeback.h>
+#include "slab.h"
 
 static void add_element(mempool_t *pool, void *element)
 {
@@ -332,6 +333,7 @@ EXPORT_SYMBOL(mempool_free);
 void *mempool_alloc_slab(gfp_t gfp_mask, void *pool_data)
 {
 	struct kmem_cache *mem = pool_data;
+	BUG_ON(mem->ctor);
 	return kmem_cache_alloc(mem, gfp_mask);
 }
 EXPORT_SYMBOL(mempool_alloc_slab);

since it would be difficult to reproduce an error with an improperly deconstructed mempool element when used with a mempool based on a slab cache that has a constructor: normally, slab objects are allocatable even with GFP_NOFS since there are free objects available, so there's less of a likelihood that we'll need to use the mempool reserved pool.
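[ To illustrate the poison/check pair the patch implements, here is a small
  user-space sketch.  The helper names are invented for the example and the
  poison values mirror the ones slab free poisoning uses; this is not the
  kernel code itself, just the idea behind it: ]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define POISON_FREE 0x6b	/* byte written over a freed object */
#define POISON_END  0xa5	/* sentinel for the last poisoned byte */

/* Poison an element as it is added to the reserved pool. */
static void poison_element(void *element, size_t size)
{
	unsigned char *obj = element;

	memset(obj, POISON_FREE, size - 1);
	obj[size - 1] = POISON_END;
}

/*
 * Check an element as it leaves the reserved pool.  Returns 0 if the
 * poison is intact, -1 if any byte was modified, which would indicate a
 * write to the element while it sat unused in the pool.
 */
static int check_element(const void *element, size_t size)
{
	const unsigned char *obj = element;
	size_t i;

	for (i = 0; i < size - 1; i++)
		if (obj[i] != POISON_FREE)
			return -1;
	return obj[size - 1] == POISON_END ? 0 : -1;
}
```

[ The point of the ->ctor concern is visible here: poison_element()
  overwrites every byte, so any state a constructor set up is destroyed
  while the element sits in the pool. ]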
But we obviously can't do that if jfs is actively using mempools based on slab caches with object constructors.  I think it would be much better to simply initialize objects as they are allocated, regardless of whether they come from the slab allocator or the mempool reserved pool, and avoid trying to set the state up properly before mempool_free().

This patch properly initializes all fields that are currently set by the object constructor (with the exception of mp->flags since it is immediately overwritten by the caller anyway).  It also removes META_free since nothing is checking for it.

Jfs folks, would this be acceptable to you?
---
diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c
--- a/fs/jfs/jfs_metapage.c
+++ b/fs/jfs/jfs_metapage.c
@@ -183,30 +183,23 @@ static inline void remove_metapage(struct page *page, struct metapage *mp)
 
 #endif
 
-static void init_once(void *foo)
-{
-	struct metapage *mp = (struct metapage *)foo;
-
-	mp->lid = 0;
-	mp->lsn = 0;
-	mp->flag = 0;
-	mp->data = NULL;
-	mp->clsn = 0;
-	mp->log = NULL;
-	set_bit(META_free, &mp->flag);
-	init_waitqueue_head(&mp->wait);
-}
-
 static inline struct metapage *alloc_metapage(gfp_t gfp_mask)
 {
-	return mempool_alloc(metapage_mempool, gfp_mask);
+	struct metapage *mp = mempool_alloc(metapage_mempool, gfp_mask);
+
+	if (mp) {
+		mp->lid = 0;
+		mp->lsn = 0;
+		mp->data = NULL;
+		mp->clsn = 0;
+		mp->log = NULL;
+		init_waitqueue_head(&mp->wait);
+	}
+	return mp;
 }
 
 static inline void free_metapage(struct metapage *mp)
 {
-	mp->flag = 0;
-	set_bit(META_free, &mp->flag);
-
 	mempool_free(mp, metapage_mempool);
 }
 
@@ -216,7 +209,7 @@ int __init metapage_init(void)
 	 * Allocate the metapage structures
 	 */
 	metapage_cache = kmem_cache_create("jfs_mp", sizeof(struct metapage),
-					   0, 0, init_once);
+					   0, 0, NULL);
 	if (metapage_cache == NULL)
 		return -ENOMEM;
 
diff --git a/fs/jfs/jfs_metapage.h b/fs/jfs/jfs_metapage.h
index a78beda..337e9e5 100644
--- a/fs/jfs/jfs_metapage.h
+++ b/fs/jfs/jfs_metapage.h
@@ -48,7 +48,6 @@
 struct metapage {
 	/* metapage flag */
 #define META_locked	0
-#define META_free	1
 #define META_dirty	2
 #define META_sync	3
 #define META_discard	4
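[ The pattern the patch switches to (initialize on every allocation instead
  of relying on a slab constructor) can be sketched in user space like
  this.  The struct and function names below are stand-ins invented for the
  example, not jfs code; only the field names follow the patch: ]

```c
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-in for struct metapage. */
struct metapage_like {
	unsigned long lid;
	long lsn;
	void *data;
	long clsn;
	void *log;
};

/*
 * Allocate and initialize in one place, mirroring the patched
 * alloc_metapage(): every field the removed constructor used to set is
 * now set on each allocation, so it no longer matters whether the object
 * came fresh from the allocator or was recycled from a reserve pool.
 */
static struct metapage_like *alloc_metapage_like(void)
{
	struct metapage_like *mp = malloc(sizeof(*mp));

	if (mp) {
		mp->lid = 0;
		mp->lsn = 0;
		mp->data = NULL;
		mp->clsn = 0;
		mp->log = NULL;
	}
	return mp;
}
```

[ The cost is a handful of stores on every allocation; the benefit is that
  the object's initial state no longer depends on where it came from, which
  is what makes the mempool poisoning above safe. ]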