From: Andrey Konovalov <andreyknvl@gmail.com>
To: Feng Tang <feng.tang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	 Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	 Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	 Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Dmitry Vyukov <dvyukov@google.com>,
	 Jonathan Corbet <corbet@lwn.net>,
	Dave Hansen <dave.hansen@intel.com>,
	 Linux Memory Management List <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>,
	 kasan-dev <kasan-dev@googlegroups.com>,
	Kees Cook <keescook@chromium.org>
Subject: Re: [PATCH v6 2/4] mm/slub: only zero the requested size of buffer for kzalloc
Date: Mon, 26 Sep 2022 21:11:24 +0200	[thread overview]
Message-ID: <CA+fCnZfSv98uvxop7YN_L-F=WNVkb5rcwa6Nmf5yN-59p8Sr4Q@mail.gmail.com> (raw)
In-Reply-To: <20220913065423.520159-3-feng.tang@intel.com>

On Tue, Sep 13, 2022 at 8:54 AM Feng Tang <feng.tang@intel.com> wrote:
>

Hi Feng,

> kzalloc/kmalloc round up the requested size to a fixed size
> (mostly a power of 2), so the allocated memory can be larger
> than requested. Currently, the kzalloc family of APIs zeroes
> all of the allocated memory.
>
> To detect out-of-bounds usage of the extra allocated memory,
> zero only the requested part, so that sanity checks can be
> added for the extra space later.
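
For concreteness, a minimal sketch of what this means for a
hypothetical caller (assuming the default kmalloc size buckets,
where a 24-byte request is served from kmalloc-32):

    void *buf = kzalloc(24, GFP_KERNEL); /* object comes from kmalloc-32 */
    /* Before the patch: all 32 bytes of the object are zeroed.   */
    /* After the patch:  only the requested 24 bytes are zeroed;  */
    /* the 8 trailing bytes are left for later sanity checking.   */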

I still don't like the idea of only zeroing the requested memory
and not the whole object, considering potential info-leak
vulnerabilities.

Can we only do this when SLAB_DEBUG is enabled?
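
To make the info-leak concern concrete, a hypothetical pattern
(not from this patch) where the unzeroed tail could escape to
userspace:

    p = kzalloc(24, GFP_KERNEL);       /* only 24 bytes zeroed now    */
    /* ... fill in the fields of p ... */
    copy_to_user(ubuf, p, ksize(p));   /* ksize(p) == 32: bytes 24..31
                                          may leak stale heap data    */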

> Performance-wise, the smaller zeroing length also brings shorter
> execution time, as shown by test data from various server/desktop
> platforms.
>
> For kzalloc users who will call ksize() later and utilize this
> extra space, please be aware that the space is no longer zeroed.
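
A sketch of what such a caller would now have to do itself
(hypothetical; assumes the caller grows into the slack reported
by ksize()):

    buf = kzalloc(req, GFP_KERNEL);    /* only req bytes are zeroed   */
    total = ksize(buf);                /* usable size, >= req         */
    if (total > req)
            memset(buf + req, 0, total - req);  /* caller's job now   */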

CC Kees

>
> Signed-off-by: Feng Tang <feng.tang@intel.com>
> ---
>  mm/slab.c |  7 ++++---
>  mm/slab.h |  5 +++--
>  mm/slub.c | 10 +++++++---
>  3 files changed, 14 insertions(+), 8 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index a5486ff8362a..4594de0e3d6b 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3253,7 +3253,8 @@ slab_alloc_node(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
>         init = slab_want_init_on_alloc(flags, cachep);
>
>  out:
> -       slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init);
> +       slab_post_alloc_hook(cachep, objcg, flags, 1, &objp, init,
> +                               cachep->object_size);
>         return objp;
>  }
>
> @@ -3506,13 +3507,13 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>          * Done outside of the IRQ disabled section.
>          */
>         slab_post_alloc_hook(s, objcg, flags, size, p,
> -                               slab_want_init_on_alloc(flags, s));
> +                       slab_want_init_on_alloc(flags, s), s->object_size);
>         /* FIXME: Trace call missing. Christoph would like a bulk variant */
>         return size;
>  error:
>         local_irq_enable();
>         cache_alloc_debugcheck_after_bulk(s, flags, i, p, _RET_IP_);
> -       slab_post_alloc_hook(s, objcg, flags, i, p, false);
> +       slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
>         kmem_cache_free_bulk(s, i, p);
>         return 0;
>  }
> diff --git a/mm/slab.h b/mm/slab.h
> index d0ef9dd44b71..3cf5adf63f48 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -730,7 +730,8 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
>
>  static inline void slab_post_alloc_hook(struct kmem_cache *s,
>                                         struct obj_cgroup *objcg, gfp_t flags,
> -                                       size_t size, void **p, bool init)
> +                                       size_t size, void **p, bool init,
> +                                       unsigned int orig_size)
>  {
>         size_t i;
>
> @@ -746,7 +747,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
>         for (i = 0; i < size; i++) {
>                 p[i] = kasan_slab_alloc(s, p[i], flags, init);
>                 if (p[i] && init && !kasan_has_integrated_init())
> -                       memset(p[i], 0, s->object_size);
> +                       memset(p[i], 0, orig_size);

Note that when KASAN is enabled and has integrated init (the
HW_TAGS mode), it initializes the whole object, which is
inconsistent with this change.
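
In other words, for the hunk above (sketch; kasan_has_integrated_init()
returns true in the HW_TAGS mode):

    p[i] = kasan_slab_alloc(s, p[i], flags, init);
    /* HW_TAGS: the call above already zeroed s->object_size bytes    */
    if (p[i] && init && !kasan_has_integrated_init())
            memset(p[i], 0, orig_size);  /* other modes: orig_size only */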

>                 kmemleak_alloc_recursive(p[i], s->object_size, 1,
>                                          s->flags, flags);
>         }
> diff --git a/mm/slub.c b/mm/slub.c
> index c8ba16b3a4db..6f823e99d8b4 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3376,7 +3376,11 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, struct list_l
>         init = slab_want_init_on_alloc(gfpflags, s);
>
>  out:
> -       slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init);
> +       /*
> +        * When init equals 'true', like for kzalloc() family, only
> +        * @orig_size bytes will be zeroed instead of s->object_size
> +        */
> +       slab_post_alloc_hook(s, objcg, gfpflags, 1, &object, init, orig_size);
>
>         return object;
>  }
> @@ -3833,11 +3837,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>          * Done outside of the IRQ disabled fastpath loop.
>          */
>         slab_post_alloc_hook(s, objcg, flags, size, p,
> -                               slab_want_init_on_alloc(flags, s));
> +                       slab_want_init_on_alloc(flags, s), s->object_size);
>         return i;
>  error:
>         slub_put_cpu_ptr(s->cpu_slab);
> -       slab_post_alloc_hook(s, objcg, flags, i, p, false);
> +       slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
>         kmem_cache_free_bulk(s, i, p);
>         return 0;
>  }
> --
> 2.34.1
>

