kernel-hardening.lists.openwall.com archive mirror
* Re: [PATCH] mm/slab: always use cache from obj
       [not found] <20230214101949.7461-1-jiazi.li@transsion.com>
@ 2023-02-14 10:33 ` Vlastimil Babka
  2023-02-15  5:49   ` lijiazi
  0 siblings, 1 reply; 4+ messages in thread
From: Vlastimil Babka @ 2023-02-14 10:33 UTC (permalink / raw)
  To: Jiazi.Li, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo
  Cc: Jiazi.Li, linux-mm, Kees Cook, Kernel Hardening

On 2/14/23 11:19, Jiazi.Li wrote:
> If an object is freed to the wrong cache then, besides the random
> obfuscation, differing offset and object_size values also cause problems:
> 1. A cache with a ctor has a non-zero free pointer offset; freeing one of
> its objects into a cache whose offset is zero writes the next free pointer
> to the wrong location, corrupting the freelist.

Kernels hardened against freelist corruption will enable
CONFIG_SLAB_FREELIST_HARDENED, so that's already covered, no?
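
(With that option the stored free pointer is also obfuscated; a simplified
sketch of mm/slub.c's freelist_ptr(), not verbatim kernel code:

	static inline void *freelist_ptr(const struct kmem_cache *s, void *ptr,
					 unsigned long ptr_addr)
	{
	#ifdef CONFIG_SLAB_FREELIST_HARDENED
		/* XOR with a per-cache random value and the swabbed slot address */
		return (void *)((unsigned long)ptr ^ s->random ^
				swab((unsigned long)ptr_addr));
	#else
		return ptr;
	#endif
	}

so a pointer written at the wrong offset cannot silently masquerade as a
valid freelist entry.)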

> 2. If the wrong cache wants init-on-free and its object_size is larger
> than the object's real size, the wipe may overwrite adjacent memory.

In general, being defensive against usage errors is part of either hardening
or debugging, which is what the existing code takes into account.
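
(For point 2, the init-on-free wipe in question is essentially a memset over
the destination cache's object_size; roughly, simplified from the slab free
hooks:

	if (slab_want_init_on_free(s))
		memset(object, 0, s->object_size);

so a mismatched, larger object_size makes the wipe run past the real object.)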

> Rather than adding a lot of if-else checks, it may be better to simply use
> the object's own cache.
> 
> Signed-off-by: Jiazi.Li <jiazi.li@transsion.com>
> ---
>  mm/slab.h | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/mm/slab.h b/mm/slab.h
> index 63fb4c00d529..ed39b2e4f27b 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -670,10 +670,6 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
>  {
>  	struct kmem_cache *cachep;
>  
> -	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> -	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
> -		return s;
> -
>  	cachep = virt_to_cache(x);
>  	if (WARN(cachep && cachep != s,
>  		  "%s: Wrong slab cache. %s but object is from %s\n",


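For reference, with the hunk above applied the whole helper would reduce to
roughly this (a sketch, not the actual committed code):

	static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
	{
		struct kmem_cache *cachep;

		cachep = virt_to_cache(x);
		if (WARN(cachep && cachep != s,
			 "%s: Wrong slab cache. %s but object is from %s\n",
			 __func__, s->name, cachep->name))
			print_tracking(cachep, x);
		return cachep;
	}

i.e. virt_to_cache() and the WARN would then run on every kmem_cache_free(),
which is exactly the overhead the removed early return avoids on
non-hardened, non-debug configs.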

* Re: [PATCH] mm/slab: always use cache from obj
  2023-02-14 10:33 ` [PATCH] mm/slab: always use cache from obj Vlastimil Babka
@ 2023-02-15  5:49   ` lijiazi
  2023-02-15  9:41     ` Vlastimil Babka
  0 siblings, 1 reply; 4+ messages in thread
From: lijiazi @ 2023-02-15  5:49 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Roman Gushchin, Hyeonggon Yoo, Jiazi.Li, linux-mm,
	Kees Cook, Kernel Hardening

On Tue, Feb 14, 2023 at 11:33:21AM +0100, Vlastimil Babka wrote:
> On 2/14/23 11:19, Jiazi.Li wrote:
> > If an object is freed to the wrong cache then, besides the random
> > obfuscation, differing offset and object_size values also cause problems:
> > 1. A cache with a ctor has a non-zero free pointer offset; freeing one of
> > its objects into a cache whose offset is zero writes the next free pointer
> > to the wrong location, corrupting the freelist.
> 
> Kernels hardened against freelist corruption will enable
> CONFIG_SLAB_FREELIST_HARDENED, so that's already covered, no?
> 
Yes, HARDENED already covers this.
> > 2. If the wrong cache wants init-on-free and its object_size is larger
> > than the object's real size, the wipe may overwrite adjacent memory.
> 
> In general, being defensive against usage errors is part of either hardening
> or debugging, which is what the existing code takes into account.
> 
My concern is the wrong-cache problem on a build without HARDENED or
debugging: there it is likely to cause a kernel panic, and such a problem is
difficult to analyze on a non-debug build.
When reproducing the problem on a debug build, it does not panic; it only
prints the WARN log and then frees the object via the correct cache.
Because we were trying to reproduce a kernel panic, we may overlook the WARN
log and conclude that the problem cannot be reproduced on the debug build.

Thanks for your reply. I will enable CONFIG_SLAB_FREELIST_HARDENED on the
non-debug build later.
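
Concretely, that means something like the following in the production config
(illustrative fragment):

	CONFIG_SLAB_FREELIST_HARDENED=y

which makes cache_from_obj() perform the virt_to_cache() check and also
obfuscates the stored free pointers.
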
> > Rather than adding a lot of if-else checks, it may be better to simply use
> > the object's own cache.
> > 
> > Signed-off-by: Jiazi.Li <jiazi.li@transsion.com>
> > ---
> >  mm/slab.h | 4 ----
> >  1 file changed, 4 deletions(-)
> > 
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 63fb4c00d529..ed39b2e4f27b 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -670,10 +670,6 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> >  {
> >  	struct kmem_cache *cachep;
> >  
> > -	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> > -	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
> > -		return s;
> > -
> >  	cachep = virt_to_cache(x);
> >  	if (WARN(cachep && cachep != s,
> >  		  "%s: Wrong slab cache. %s but object is from %s\n",
> 


* Re: [PATCH] mm/slab: always use cache from obj
  2023-02-15  5:49   ` lijiazi
@ 2023-02-15  9:41     ` Vlastimil Babka
  2023-02-15 10:03       ` lijiazi
  0 siblings, 1 reply; 4+ messages in thread
From: Vlastimil Babka @ 2023-02-15  9:41 UTC (permalink / raw)
  To: lijiazi
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Roman Gushchin, Hyeonggon Yoo, Jiazi.Li, linux-mm,
	Kees Cook, Kernel Hardening

On 2/15/23 06:49, lijiazi wrote:
> On Tue, Feb 14, 2023 at 11:33:21AM +0100, Vlastimil Babka wrote:
>> On 2/14/23 11:19, Jiazi.Li wrote:
>> > If an object is freed to the wrong cache then, besides the random
>> > obfuscation, differing offset and object_size values also cause problems:
>> > 1. A cache with a ctor has a non-zero free pointer offset; freeing one of
>> > its objects into a cache whose offset is zero writes the next free pointer
>> > to the wrong location, corrupting the freelist.
>> 
>> Kernels hardened against freelist corruption will enable
>> CONFIG_SLAB_FREELIST_HARDENED, so that's already covered, no?
>> 
> Yes, HARDENED already covers this.
>> > 2. If the wrong cache wants init-on-free and its object_size is larger
>> > than the object's real size, the wipe may overwrite adjacent memory.
>> 
>> In general, being defensive against usage errors is part of either hardening
>> or debugging, which is what the existing code takes into account.
>> 
> My concern is the wrong-cache problem on a build without HARDENED or
> debugging: there it is likely to cause a kernel panic, and such a problem is
> difficult to analyze on a non-debug build.
> When reproducing the problem on a debug build, it does not panic; it only
> prints the WARN log and then frees the object via the correct cache.
> Because we were trying to reproduce a kernel panic, we may overlook the WARN
> log and conclude that the problem cannot be reproduced on the debug build.

If you need the panic in order to e.g. capture a crash dump, you could
enable slab debugging and boot with panic_on_warn to make the WARN result in
a panic.
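
For example (illustrative, SLUB assumed), booting with something like:

	slub_debug=F panic_on_warn=1

turns on sanity/consistency checks globally, so the wrong-cache WARN in
cache_from_obj() fires, and panic_on_warn then turns it into a panic you can
take a crash dump from. A per-cache form like slub_debug=F,<cache-name> also
works if the affected cache is known.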

> Thanks for your reply. I will enable CONFIG_SLAB_FREELIST_HARDENED on the
> non-debug build later.
>> > Rather than adding a lot of if-else checks, it may be better to simply use
>> > the object's own cache.
>> > 
>> > Signed-off-by: Jiazi.Li <jiazi.li@transsion.com>
>> > ---
>> >  mm/slab.h | 4 ----
>> >  1 file changed, 4 deletions(-)
>> > 
>> > diff --git a/mm/slab.h b/mm/slab.h
>> > index 63fb4c00d529..ed39b2e4f27b 100644
>> > --- a/mm/slab.h
>> > +++ b/mm/slab.h
>> > @@ -670,10 +670,6 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
>> >  {
>> >  	struct kmem_cache *cachep;
>> >  
>> > -	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
>> > -	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
>> > -		return s;
>> > -
>> >  	cachep = virt_to_cache(x);
>> >  	if (WARN(cachep && cachep != s,
>> >  		  "%s: Wrong slab cache. %s but object is from %s\n",
>> 



* Re: [PATCH] mm/slab: always use cache from obj
  2023-02-15  9:41     ` Vlastimil Babka
@ 2023-02-15 10:03       ` lijiazi
  0 siblings, 0 replies; 4+ messages in thread
From: lijiazi @ 2023-02-15 10:03 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Roman Gushchin, Hyeonggon Yoo, Jiazi.Li, linux-mm,
	Kees Cook, Kernel Hardening

On Wed, Feb 15, 2023 at 10:41:32AM +0100, Vlastimil Babka wrote:
> On 2/15/23 06:49, lijiazi wrote:
> > On Tue, Feb 14, 2023 at 11:33:21AM +0100, Vlastimil Babka wrote:
> >> On 2/14/23 11:19, Jiazi.Li wrote:
> >> > If an object is freed to the wrong cache then, besides the random
> >> > obfuscation, differing offset and object_size values also cause problems:
> >> > 1. A cache with a ctor has a non-zero free pointer offset; freeing one of
> >> > its objects into a cache whose offset is zero writes the next free pointer
> >> > to the wrong location, corrupting the freelist.
> >> 
> >> Kernels hardened against freelist corruption will enable
> >> CONFIG_SLAB_FREELIST_HARDENED, so that's already covered, no?
> >> 
> > Yes, HARDENED already covers this.
> >> > 2. If the wrong cache wants init-on-free and its object_size is larger
> >> > than the object's real size, the wipe may overwrite adjacent memory.
> >> 
> >> In general, being defensive against usage errors is part of either hardening
> >> or debugging, which is what the existing code takes into account.
> >> 
> > My concern is the wrong-cache problem on a build without HARDENED or
> > debugging: there it is likely to cause a kernel panic, and such a problem is
> > difficult to analyze on a non-debug build.
> > When reproducing the problem on a debug build, it does not panic; it only
> > prints the WARN log and then frees the object via the correct cache.
> > Because we were trying to reproduce a kernel panic, we may overlook the WARN
> > log and conclude that the problem cannot be reproduced on the debug build.
> 
> If you need the panic in order to e.g. capture a crash dump, you could
> enable slab debugging and boot with panic_on_warn to make the WARN result in
> a panic.
>

Thank you for your suggestion.
I had assumed that the debug build should be less tolerant of errors than the
non-debug build anyway, so I did not add panic_on_warn.

> > Thanks for your reply. I will enable CONFIG_SLAB_FREELIST_HARDENED on the
> > non-debug build later.
> >> > Rather than adding a lot of if-else checks, it may be better to simply use
> >> > the object's own cache.
> >> > 
> >> > Signed-off-by: Jiazi.Li <jiazi.li@transsion.com>
> >> > ---
> >> >  mm/slab.h | 4 ----
> >> >  1 file changed, 4 deletions(-)
> >> > 
> >> > diff --git a/mm/slab.h b/mm/slab.h
> >> > index 63fb4c00d529..ed39b2e4f27b 100644
> >> > --- a/mm/slab.h
> >> > +++ b/mm/slab.h
> >> > @@ -670,10 +670,6 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> >> >  {
> >> >  	struct kmem_cache *cachep;
> >> >  
> >> > -	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> >> > -	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
> >> > -		return s;
> >> > -
> >> >  	cachep = virt_to_cache(x);
> >> >  	if (WARN(cachep && cachep != s,
> >> >  		  "%s: Wrong slab cache. %s but object is from %s\n",
> >> 
> 

