* [PATCH 0/5] Fixups for slub
@ 2021-09-16 12:39 Miaohe Lin
2021-09-16 12:39 ` [PATCH 1/5] mm, slub: fix two bugs in slab_debug_trace_open() Miaohe Lin
` (4 more replies)
0 siblings, 5 replies; 13+ messages in thread
From: Miaohe Lin @ 2021-09-16 12:39 UTC (permalink / raw)
To: akpm, cl, penberg, rientjes, iamjoonsoo.kim, vbabka
Cc: gregkh, faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook,
bharata, guro, linux-mm, linux-kernel, linmiaohe
Hi all,
This series contains various bug fixes for slub. We fix a memory leak, a
use-after-free, a NULL pointer dereference, and so on. More details can
be found in the respective changelogs. Thanks!
Miaohe Lin (5):
mm, slub: fix two bugs in slab_debug_trace_open()
mm, slub: fix mismatch between reconstructed freelist depth and cnt
mm, slub: fix potential memoryleak in kmem_cache_open()
mm, slub: fix potential use-after-free in slab_debugfs_fops
mm, slub: fix incorrect memcg slab count for bulk free
mm/slub.c | 31 ++++++++++++++++++++++++-------
1 file changed, 24 insertions(+), 7 deletions(-)
--
2.23.0
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH 1/5] mm, slub: fix two bugs in slab_debug_trace_open()
2021-09-16 12:39 [PATCH 0/5] Fixups for slub Miaohe Lin
@ 2021-09-16 12:39 ` Miaohe Lin
2021-10-05 9:46 ` Vlastimil Babka
2021-09-16 12:39 ` [PATCH 2/5] mm, slub: fix mismatch between reconstructed freelist depth and cnt Miaohe Lin
` (3 subsequent siblings)
4 siblings, 1 reply; 13+ messages in thread
From: Miaohe Lin @ 2021-09-16 12:39 UTC (permalink / raw)
To: akpm, cl, penberg, rientjes, iamjoonsoo.kim, vbabka
Cc: gregkh, faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook,
bharata, guro, linux-mm, linux-kernel, linmiaohe
It's possible that __seq_open_private() will return NULL, so we should
check t before using it, lest we dereference a NULL pointer. Also, in the
error paths we forgot to release the private buffer via
seq_release_private(), so memory leaks on those paths.
Fixes: 64dd68497be7 ("mm: slub: move sysfs slab alloc/free interfaces to debugfs")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/slub.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 3d2025f7163b..ed160b6c54f8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -6108,9 +6108,14 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep)
struct kmem_cache *s = file_inode(filep)->i_private;
unsigned long *obj_map;
+ if (!t)
+ return -ENOMEM;
+
obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL);
- if (!obj_map)
+ if (!obj_map) {
+ seq_release_private(inode, filep);
return -ENOMEM;
+ }
if (strcmp(filep->f_path.dentry->d_name.name, "alloc_traces") == 0)
alloc = TRACK_ALLOC;
@@ -6119,6 +6124,7 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep)
if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL)) {
bitmap_free(obj_map);
+ seq_release_private(inode, filep);
return -ENOMEM;
}
--
2.23.0
* [PATCH 2/5] mm, slub: fix mismatch between reconstructed freelist depth and cnt
2021-09-16 12:39 [PATCH 0/5] Fixups for slub Miaohe Lin
2021-09-16 12:39 ` [PATCH 1/5] mm, slub: fix two bugs in slab_debug_trace_open() Miaohe Lin
@ 2021-09-16 12:39 ` Miaohe Lin
2021-10-05 9:57 ` Vlastimil Babka
2021-09-16 12:39 ` [PATCH 3/5] mm, slub: fix potential memoryleak in kmem_cache_open() Miaohe Lin
` (2 subsequent siblings)
4 siblings, 1 reply; 13+ messages in thread
From: Miaohe Lin @ 2021-09-16 12:39 UTC (permalink / raw)
To: akpm, cl, penberg, rientjes, iamjoonsoo.kim, vbabka
Cc: gregkh, faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook,
bharata, guro, linux-mm, linux-kernel, linmiaohe
If an object's reuse is delayed, it will be excluded from the reconstructed
freelist, but we forgot to adjust cnt accordingly. The resulting mismatch
between the reconstructed freelist depth and cnt leads to
free_debug_processing() complaining about the freelist count, or to an
incorrect slub inuse count.
Fixes: c3895391df38 ("kasan, slub: fix handling of kasan_slab_free hook")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/slub.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index ed160b6c54f8..a56a6423d4e8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1701,7 +1701,8 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
}
static inline bool slab_free_freelist_hook(struct kmem_cache *s,
- void **head, void **tail)
+ void **head, void **tail,
+ int *cnt)
{
void *object;
@@ -1728,6 +1729,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
*head = object;
if (!*tail)
*tail = object;
+ } else {
+ /*
+ * Adjust the reconstructed freelist depth
+ * accordingly if object's reuse is delayed.
+ */
+ --(*cnt);
}
} while (object != old_tail);
@@ -3480,7 +3487,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
* With KASAN enabled slab_free_freelist_hook modifies the freelist
* to remove objects, whose reuse must be delayed.
*/
- if (slab_free_freelist_hook(s, &head, &tail))
+ if (slab_free_freelist_hook(s, &head, &tail, &cnt))
do_slab_free(s, page, head, tail, cnt, addr);
}
--
2.23.0
* [PATCH 3/5] mm, slub: fix potential memoryleak in kmem_cache_open()
2021-09-16 12:39 [PATCH 0/5] Fixups for slub Miaohe Lin
2021-09-16 12:39 ` [PATCH 1/5] mm, slub: fix two bugs in slab_debug_trace_open() Miaohe Lin
2021-09-16 12:39 ` [PATCH 2/5] mm, slub: fix mismatch between reconstructed freelist depth and cnt Miaohe Lin
@ 2021-09-16 12:39 ` Miaohe Lin
2021-10-05 10:02 ` Vlastimil Babka
2021-09-16 12:39 ` [PATCH 4/5] mm, slub: fix potential use-after-free in slab_debugfs_fops Miaohe Lin
2021-09-16 12:39 ` [PATCH 5/5] mm, slub: fix incorrect memcg slab count for bulk free Miaohe Lin
4 siblings, 1 reply; 13+ messages in thread
From: Miaohe Lin @ 2021-09-16 12:39 UTC (permalink / raw)
To: akpm, cl, penberg, rientjes, iamjoonsoo.kim, vbabka
Cc: gregkh, faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook,
bharata, guro, linux-mm, linux-kernel, linmiaohe
In the error path, the random_seq of the slub cache might be leaked. Fix
this by using __kmem_cache_release() to release all the relevant resources.
Fixes: 210e7a43fa90 ("mm: SLUB freelist randomization")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/slub.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index a56a6423d4e8..bf1793fb4ce5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4210,8 +4210,8 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
if (alloc_kmem_cache_cpus(s))
return 0;
- free_kmem_cache_nodes(s);
error:
+ __kmem_cache_release(s);
return -EINVAL;
}
--
2.23.0
* [PATCH 4/5] mm, slub: fix potential use-after-free in slab_debugfs_fops
2021-09-16 12:39 [PATCH 0/5] Fixups for slub Miaohe Lin
` (2 preceding siblings ...)
2021-09-16 12:39 ` [PATCH 3/5] mm, slub: fix potential memoryleak in kmem_cache_open() Miaohe Lin
@ 2021-09-16 12:39 ` Miaohe Lin
2021-10-05 10:36 ` Vlastimil Babka
2021-09-16 12:39 ` [PATCH 5/5] mm, slub: fix incorrect memcg slab count for bulk free Miaohe Lin
4 siblings, 1 reply; 13+ messages in thread
From: Miaohe Lin @ 2021-09-16 12:39 UTC (permalink / raw)
To: akpm, cl, penberg, rientjes, iamjoonsoo.kim, vbabka
Cc: gregkh, faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook,
bharata, guro, linux-mm, linux-kernel, linmiaohe
When sysfs_slab_add() fails, we shouldn't call debugfs_slab_add() for s,
because s will be freed soon and slab_debugfs_fops would then use s,
leading to a use-after-free.
Fixes: 64dd68497be7 ("mm: slub: move sysfs slab alloc/free interfaces to debugfs")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/slub.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index bf1793fb4ce5..f3df0f04a472 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4887,13 +4887,15 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
return 0;
err = sysfs_slab_add(s);
- if (err)
+ if (err) {
__kmem_cache_release(s);
+ return err;
+ }
if (s->flags & SLAB_STORE_USER)
debugfs_slab_add(s);
- return err;
+ return 0;
}
void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
--
2.23.0
* [PATCH 5/5] mm, slub: fix incorrect memcg slab count for bulk free
2021-09-16 12:39 [PATCH 0/5] Fixups for slub Miaohe Lin
` (3 preceding siblings ...)
2021-09-16 12:39 ` [PATCH 4/5] mm, slub: fix potential use-after-free in slab_debugfs_fops Miaohe Lin
@ 2021-09-16 12:39 ` Miaohe Lin
2021-10-05 10:50 ` Vlastimil Babka
4 siblings, 1 reply; 13+ messages in thread
From: Miaohe Lin @ 2021-09-16 12:39 UTC (permalink / raw)
To: akpm, cl, penberg, rientjes, iamjoonsoo.kim, vbabka
Cc: gregkh, faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook,
bharata, guro, linux-mm, linux-kernel, linmiaohe
kmem_cache_free_bulk() already calls memcg_slab_free_hook() for all objects
when doing a bulk free, so we shouldn't call memcg_slab_free_hook() again
in that path; doing so results in an incorrect memcg slab count.
Fixes: d1b2cf6cb84a ("mm: memcg/slab: uncharge during kmem_cache_free_bulk()")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/slub.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index f3df0f04a472..d8f77346376d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3420,7 +3420,9 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
struct kmem_cache_cpu *c;
unsigned long tid;
- memcg_slab_free_hook(s, &head, 1);
+ /* memcg_slab_free_hook() is already called for bulk free. */
+ if (!tail)
+ memcg_slab_free_hook(s, &head, 1);
redo:
/*
* Determine the currently cpus per cpu slab.
--
2.23.0
* Re: [PATCH 1/5] mm, slub: fix two bugs in slab_debug_trace_open()
2021-09-16 12:39 ` [PATCH 1/5] mm, slub: fix two bugs in slab_debug_trace_open() Miaohe Lin
@ 2021-10-05 9:46 ` Vlastimil Babka
0 siblings, 0 replies; 13+ messages in thread
From: Vlastimil Babka @ 2021-10-05 9:46 UTC (permalink / raw)
To: Miaohe Lin, akpm, cl, penberg, rientjes, iamjoonsoo.kim
Cc: gregkh, faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook,
bharata, guro, linux-mm, linux-kernel
On 9/16/21 14:39, Miaohe Lin wrote:
> It's possible that __seq_open_private() will return NULL, so we should
> check t before using it, lest we dereference a NULL pointer. Also, in the
> error paths we forgot to release the private buffer via
> seq_release_private(), so memory leaks on those paths.
>
> Fixes: 64dd68497be7 ("mm: slub: move sysfs slab alloc/free interfaces to debugfs")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/slub.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 3d2025f7163b..ed160b6c54f8 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -6108,9 +6108,14 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep)
> struct kmem_cache *s = file_inode(filep)->i_private;
> unsigned long *obj_map;
>
> + if (!t)
> + return -ENOMEM;
> +
> obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL);
> - if (!obj_map)
> + if (!obj_map) {
> + seq_release_private(inode, filep);
> return -ENOMEM;
> + }
>
> if (strcmp(filep->f_path.dentry->d_name.name, "alloc_traces") == 0)
> alloc = TRACK_ALLOC;
> @@ -6119,6 +6124,7 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep)
>
> if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL)) {
> bitmap_free(obj_map);
> + seq_release_private(inode, filep);
> return -ENOMEM;
> }
>
>
* Re: [PATCH 2/5] mm, slub: fix mismatch between reconstructed freelist depth and cnt
2021-09-16 12:39 ` [PATCH 2/5] mm, slub: fix mismatch between reconstructed freelist depth and cnt Miaohe Lin
@ 2021-10-05 9:57 ` Vlastimil Babka
2021-10-08 2:01 ` Miaohe Lin
0 siblings, 1 reply; 13+ messages in thread
From: Vlastimil Babka @ 2021-10-05 9:57 UTC (permalink / raw)
To: Miaohe Lin, akpm, cl, penberg, rientjes, iamjoonsoo.kim
Cc: gregkh, faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook,
bharata, guro, linux-mm, linux-kernel
On 9/16/21 14:39, Miaohe Lin wrote:
> If an object's reuse is delayed, it will be excluded from the reconstructed
> freelist, but we forgot to adjust cnt accordingly. The resulting mismatch
> between the reconstructed freelist depth and cnt leads to
> free_debug_processing() complaining about the freelist count, or to an
> incorrect slub inuse count.
>
> Fixes: c3895391df38 ("kasan, slub: fix handling of kasan_slab_free hook")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
I was worried that taking a pointer to the cnt parameter when it's
hardcoded to 1 would destroy inlining. Luckily it looks like it doesn't;
the function is just renamed:
> ./scripts/bloat-o-meter mm/slub.o slub.o.after
add/remove: 1/1 grow/shrink: 0/0 up/down: 292/-292 (0)
Function old new delta
slab_free_freelist_hook.constprop - 292 +292
slab_free_freelist_hook 292 - -292
> ---
> mm/slub.c | 11 +++++++++--
> 1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index ed160b6c54f8..a56a6423d4e8 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1701,7 +1701,8 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
> }
>
> static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> - void **head, void **tail)
> + void **head, void **tail,
> + int *cnt)
> {
>
> void *object;
> @@ -1728,6 +1729,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> *head = object;
> if (!*tail)
> *tail = object;
> + } else {
> + /*
> + * Adjust the reconstructed freelist depth
> + * accordingly if object's reuse is delayed.
> + */
> + --(*cnt);
> }
> } while (object != old_tail);
>
> @@ -3480,7 +3487,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
> * With KASAN enabled slab_free_freelist_hook modifies the freelist
> * to remove objects, whose reuse must be delayed.
> */
> - if (slab_free_freelist_hook(s, &head, &tail))
> + if (slab_free_freelist_hook(s, &head, &tail, &cnt))
> do_slab_free(s, page, head, tail, cnt, addr);
> }
>
>
* Re: [PATCH 3/5] mm, slub: fix potential memoryleak in kmem_cache_open()
2021-09-16 12:39 ` [PATCH 3/5] mm, slub: fix potential memoryleak in kmem_cache_open() Miaohe Lin
@ 2021-10-05 10:02 ` Vlastimil Babka
0 siblings, 0 replies; 13+ messages in thread
From: Vlastimil Babka @ 2021-10-05 10:02 UTC (permalink / raw)
To: Miaohe Lin, akpm, cl, penberg, rientjes, iamjoonsoo.kim
Cc: gregkh, faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook,
bharata, guro, linux-mm, linux-kernel
On 9/16/21 14:39, Miaohe Lin wrote:
> In the error path, the random_seq of the slub cache might be leaked. Fix
> this by using __kmem_cache_release() to release all the relevant resources.
>
> Fixes: 210e7a43fa90 ("mm: SLUB freelist randomization")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/slub.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index a56a6423d4e8..bf1793fb4ce5 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4210,8 +4210,8 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
> if (alloc_kmem_cache_cpus(s))
> return 0;
>
> - free_kmem_cache_nodes(s);
> error:
> + __kmem_cache_release(s);
> return -EINVAL;
> }
>
>
* Re: [PATCH 4/5] mm, slub: fix potential use-after-free in slab_debugfs_fops
2021-09-16 12:39 ` [PATCH 4/5] mm, slub: fix potential use-after-free in slab_debugfs_fops Miaohe Lin
@ 2021-10-05 10:36 ` Vlastimil Babka
0 siblings, 0 replies; 13+ messages in thread
From: Vlastimil Babka @ 2021-10-05 10:36 UTC (permalink / raw)
To: Miaohe Lin, akpm, cl, penberg, rientjes, iamjoonsoo.kim
Cc: gregkh, faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook,
bharata, guro, linux-mm, linux-kernel
On 9/16/21 14:39, Miaohe Lin wrote:
> When sysfs_slab_add() fails, we shouldn't call debugfs_slab_add() for s,
> because s will be freed soon and slab_debugfs_fops would then use s,
> leading to a use-after-free.
>
> Fixes: 64dd68497be7 ("mm: slub: move sysfs slab alloc/free interfaces to debugfs")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/slub.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index bf1793fb4ce5..f3df0f04a472 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4887,13 +4887,15 @@ int __kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
> return 0;
>
> err = sysfs_slab_add(s);
> - if (err)
> + if (err) {
> __kmem_cache_release(s);
> + return err;
> + }
>
> if (s->flags & SLAB_STORE_USER)
> debugfs_slab_add(s);
>
> - return err;
> + return 0;
> }
>
> void *__kmalloc_track_caller(size_t size, gfp_t gfpflags, unsigned long caller)
>
* Re: [PATCH 5/5] mm, slub: fix incorrect memcg slab count for bulk free
2021-09-16 12:39 ` [PATCH 5/5] mm, slub: fix incorrect memcg slab count for bulk free Miaohe Lin
@ 2021-10-05 10:50 ` Vlastimil Babka
2021-10-05 21:43 ` Andrew Morton
0 siblings, 1 reply; 13+ messages in thread
From: Vlastimil Babka @ 2021-10-05 10:50 UTC (permalink / raw)
To: Miaohe Lin, akpm, cl, penberg, rientjes, iamjoonsoo.kim
Cc: gregkh, faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook,
bharata, guro, linux-mm, linux-kernel
On 9/16/21 14:39, Miaohe Lin wrote:
> kmem_cache_free_bulk() already calls memcg_slab_free_hook() for all objects
> when doing a bulk free, so we shouldn't call memcg_slab_free_hook() again
> in that path; doing so results in an incorrect memcg slab count.
>
> Fixes: d1b2cf6cb84a ("mm: memcg/slab: uncharge during kmem_cache_free_bulk()")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
I now noticed the series doesn't Cc: stable and it should, so I hope Andrew
can add those together with the review tags. Thanks.
> ---
> mm/slub.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index f3df0f04a472..d8f77346376d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3420,7 +3420,9 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
> struct kmem_cache_cpu *c;
> unsigned long tid;
>
> - memcg_slab_free_hook(s, &head, 1);
> + /* memcg_slab_free_hook() is already called for bulk free. */
> + if (!tail)
> + memcg_slab_free_hook(s, &head, 1);
> redo:
> /*
> * Determine the currently cpus per cpu slab.
>
* Re: [PATCH 5/5] mm, slub: fix incorrect memcg slab count for bulk free
2021-10-05 10:50 ` Vlastimil Babka
@ 2021-10-05 21:43 ` Andrew Morton
0 siblings, 0 replies; 13+ messages in thread
From: Andrew Morton @ 2021-10-05 21:43 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Miaohe Lin, cl, penberg, rientjes, iamjoonsoo.kim, gregkh,
faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook, bharata,
guro, linux-mm, linux-kernel
On Tue, 5 Oct 2021 12:50:08 +0200 Vlastimil Babka <vbabka@suse.cz> wrote:
> I now noticed the series doesn't Cc: stable and it should, so I hope Andrew
> can add those together with the review tags. Thanks.
Done, thanks.
* Re: [PATCH 2/5] mm, slub: fix mismatch between reconstructed freelist depth and cnt
2021-10-05 9:57 ` Vlastimil Babka
@ 2021-10-08 2:01 ` Miaohe Lin
0 siblings, 0 replies; 13+ messages in thread
From: Miaohe Lin @ 2021-10-08 2:01 UTC (permalink / raw)
To: Vlastimil Babka
Cc: gregkh, faiyazm, andreyknvl, ryabinin.a.a, thgarnie, keescook,
bharata, guro, linux-mm, linux-kernel, Andrew Morton,
Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim
On 2021/10/5 17:57, Vlastimil Babka wrote:
> On 9/16/21 14:39, Miaohe Lin wrote:
>> If an object's reuse is delayed, it will be excluded from the reconstructed
>> freelist, but we forgot to adjust cnt accordingly. The resulting mismatch
>> between the reconstructed freelist depth and cnt leads to
>> free_debug_processing() complaining about the freelist count, or to an
>> incorrect slub inuse count.
>>
>> Fixes: c3895391df38 ("kasan, slub: fix handling of kasan_slab_free hook")
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>
> I was worried that taking a pointer to the cnt parameter when it's
> hardcoded to 1 would destroy inlining. Luckily it looks like it doesn't;
> the function is just renamed:
>
Many thanks for your review and thoughtful consideration! :)
>> ./scripts/bloat-o-meter mm/slub.o slub.o.after
> add/remove: 1/1 grow/shrink: 0/0 up/down: 292/-292 (0)
> Function old new delta
> slab_free_freelist_hook.constprop - 292 +292
> slab_free_freelist_hook 292 - -292
>
>> ---
>> mm/slub.c | 11 +++++++++--
>> 1 file changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index ed160b6c54f8..a56a6423d4e8 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1701,7 +1701,8 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s,
>> }
>>
>> static inline bool slab_free_freelist_hook(struct kmem_cache *s,
>> - void **head, void **tail)
>> + void **head, void **tail,
>> + int *cnt)
>> {
>>
>> void *object;
>> @@ -1728,6 +1729,12 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
>> *head = object;
>> if (!*tail)
>> *tail = object;
>> + } else {
>> + /*
>> + * Adjust the reconstructed freelist depth
>> + * accordingly if object's reuse is delayed.
>> + */
>> + --(*cnt);
>> }
>> } while (object != old_tail);
>>
>> @@ -3480,7 +3487,7 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page,
>> * With KASAN enabled slab_free_freelist_hook modifies the freelist
>> * to remove objects, whose reuse must be delayed.
>> */
>> - if (slab_free_freelist_hook(s, &head, &tail))
>> + if (slab_free_freelist_hook(s, &head, &tail, &cnt))
>> do_slab_free(s, page, head, tail, cnt, addr);
>> }
>>
>>
>
> .
>