From: Michal Hocko <mhocko@kernel.org>
To: Kees Cook <keescook@chromium.org>
Cc: Christoph Lameter <cl@linux.com>, Andrew Morton <akpm@linux-foundation.org>,
	Pekka Enberg <penberg@kernel.org>, David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>, Linux-MM <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] mm: Add additional consistency check
Date: Fri, 28 Apr 2017 08:16:38 +0200
Message-ID: <20170428061637.GB8143@dhcp22.suse.cz> (raw)
In-Reply-To: <CAGXu5j+vVn02Vsx5TzWPz3MS7Jow1gi+m3ojwMXrL-w6aaZhtw@mail.gmail.com>

On Thu 27-04-17 18:11:28, Kees Cook wrote:
> On Tue, Apr 11, 2017 at 7:19 AM, Michal Hocko <mhocko@kernel.org> wrote:
> > I would do something like...
> > ---
> > diff --git a/mm/slab.c b/mm/slab.c
> > index bd63450a9b16..87c99a5e9e18 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -393,10 +393,15 @@ static inline void set_store_user_dirty(struct kmem_cache *cachep) {}
> >  static int slab_max_order = SLAB_MAX_ORDER_LO;
> >  static bool slab_max_order_set __initdata;
> >  
> > +static inline struct kmem_cache *page_to_cache(struct page *page)
> > +{
> > +	return page->slab_cache;
> > +}
> > +
> >  static inline struct kmem_cache *virt_to_cache(const void *obj)
> >  {
> >  	struct page *page = virt_to_head_page(obj);
> > -	return page->slab_cache;
> > +	return page_to_cache(page);
> >  }
> >  
> >  static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
> > @@ -3813,14 +3818,18 @@ void kfree(const void *objp)
> >  {
> >  	struct kmem_cache *c;
> >  	unsigned long flags;
> > +	struct page *page;
> >  
> >  	trace_kfree(_RET_IP_, objp);
> >  
> >  	if (unlikely(ZERO_OR_NULL_PTR(objp)))
> >  		return;
> > +	page = virt_to_head_page(objp);
> > +	if (CHECK_DATA_CORRUPTION(!PageSlab(page)))
> > +		return;
> >  	local_irq_save(flags);
> >  	kfree_debugcheck(objp);
> > -	c = virt_to_cache(objp);
> > +	c = page_to_cache(page);
> >  	debug_check_no_locks_freed(objp, c->object_size);
> >  	debug_check_no_obj_freed(objp, c->object_size);
>
> Sorry for the delay, I've finally had time to look at this again.
>
> So, this only handles the kfree() case, not the kmem_cache_free() nor
> kmem_cache_free_bulk() cases, so it misses all the non-kmalloc
> allocations (and kfree() ultimately calls down to kmem_cache_free()).
> Similarly, my proposed patch missed the kfree() path. :P

yes

> As I work on a replacement, is the goal to avoid the checks while
> under local_irq_save()? (i.e. I can't just put the check in
> virt_to_cache(), etc.)

You would have to check all the callers of virt_to_cache. I would simply
replace the BUG_ON(!PageSlab()) in cache_from_obj. kmem_cache_free
already handles a NULL cache; kmem_cache_free_bulk and
build_detached_freelist can be made to do so as well.
-- 
Michal Hocko
SUSE Labs