* [PATCH v2] mm, slub: emit the "free" trace report before freeing memory in kmem_cache_free()
@ 2021-11-02 11:43 Yunfeng Ye
  2021-11-02 13:53 ` Tang Yizhou
  2021-11-02 18:37 ` John Hubbard
  0 siblings, 2 replies; 6+ messages in thread
From: Yunfeng Ye @ 2021-11-02 11:43 UTC (permalink / raw)
  To: cl, penberg, rientjes, iamjoonsoo.kim, Andrew Morton, vbabka,
	linux-mm, linux-kernel
  Cc: vbabka, jhubbard, songmuchun, willy, wuxu.wu, Hewenliang

After the memory is freed, it can be immediately allocated by other
CPUs, before the "free" trace report has been emitted. This causes
inaccurate traces.

For example, if the following sequence of events occurs:

    CPU 0                 CPU 1

  (1) alloc xxxxxx
  (2) free  xxxxxx
                         (3) alloc xxxxxx
                         (4) free  xxxxxx

Then they will be inaccurately reported via tracing, so that they appear
to have happened in this order:

    CPU 0                 CPU 1

  (1) alloc xxxxxx
                         (2) alloc xxxxxx
  (3) free  xxxxxx
                         (4) free  xxxxxx

This makes it look like CPU 1 somehow managed to allocate mmemory that
CPU 0 still had allocated for itself.

In order to avoid this, emit the "free xxxxxx" tracing report just
before the actual call to free the memory, instead of just after it.

Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
---
v1 -> v2:
 - Modify the description
 - Add "Reviewed-by"

 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 432145d7b4ec..427e62034c3f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 	s = cache_from_obj(s, x);
 	if (!s)
 		return;
-	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
 	trace_kmem_cache_free(_RET_IP_, x, s->name);
+	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);

-- 
2.27.0


* Re: [PATCH v2] mm, slub: emit the "free" trace report before freeing memory in kmem_cache_free()
  2021-11-02 11:43 [PATCH v2] mm, slub: emit the "free" trace report before freeing memory in kmem_cache_free() Yunfeng Ye
@ 2021-11-02 13:53 ` Tang Yizhou
  2021-11-02 14:39   ` Vlastimil Babka
  2021-11-02 18:37 ` John Hubbard
  1 sibling, 1 reply; 6+ messages in thread
From: Tang Yizhou @ 2021-11-02 13:53 UTC (permalink / raw)
  To: Yunfeng Ye, cl, penberg, rientjes, iamjoonsoo.kim, Andrew Morton,
	vbabka, linux-mm, linux-kernel
  Cc: jhubbard, songmuchun, willy, wuxu.wu, Hewenliang

On 2021/11/2 19:43, Yunfeng Ye wrote:
> After the memory is freed, it can be immediately allocated by other
> CPUs, before the "free" trace report has been emitted. This causes
> inaccurate traces.
> 
> For example, if the following sequence of events occurs:
> 
>     CPU 0                 CPU 1
> 
>   (1) alloc xxxxxx
>   (2) free  xxxxxx
>                          (3) alloc xxxxxx
>                          (4) free  xxxxxx
> 
> Then they will be inaccurately reported via tracing, so that they appear
> to have happened in this order:
> 
>     CPU 0                 CPU 1
> 
>   (1) alloc xxxxxx
>                          (2) alloc xxxxxx
>   (3) free  xxxxxx
>                          (4) free  xxxxxx
> 
> This makes it look like CPU 1 somehow managed to allocate mmemory that
> CPU 0 still had allocated for itself.
> 
> In order to avoid this, emit the "free xxxxxx" tracing report just
> before the actual call to free the memory, instead of just after it.
> 
> Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> v1 -> v2:
>  - Modify the description
>  - Add "Reviewed-by"
> 
>  mm/slub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 432145d7b4ec..427e62034c3f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
>  	s = cache_from_obj(s, x);
>  	if (!s)
>  		return;
> -	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>  	trace_kmem_cache_free(_RET_IP_, x, s->name);
> +	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>  }

It seems that kmem_cache_free() in mm/slab.c has the same problem.
We can fix it. Thanks.

>  EXPORT_SYMBOL(kmem_cache_free);
> 


* Re: [PATCH v2] mm, slub: emit the "free" trace report before freeing memory in kmem_cache_free()
  2021-11-02 13:53 ` Tang Yizhou
@ 2021-11-02 14:39   ` Vlastimil Babka
  2021-11-03  3:39     ` Yunfeng Ye
  0 siblings, 1 reply; 6+ messages in thread
From: Vlastimil Babka @ 2021-11-02 14:39 UTC (permalink / raw)
  To: Tang Yizhou, Yunfeng Ye, cl, penberg, rientjes, iamjoonsoo.kim,
	Andrew Morton, linux-mm, linux-kernel
  Cc: jhubbard, songmuchun, willy, wuxu.wu, Hewenliang

On 11/2/21 14:53, Tang Yizhou wrote:
> On 2021/11/2 19:43, Yunfeng Ye wrote:
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
>>  	s = cache_from_obj(s, x);
>>  	if (!s)
>>  		return;
>> -	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>>  	trace_kmem_cache_free(_RET_IP_, x, s->name);
>> +	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>>  }
> 
> It seems that kmem_cache_free() in mm/slab.c has the same problem.
> We can fix it. Thanks.

Doh, true. Should go best before the local_irq_save() there.
And also kmem_cache_free() in mm/slob.c.

Interestingly kfree() is already OK in all 3 implementations.
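For reference, the suggested mm/slab.c change might look roughly like this against the kmem_cache_free() of that era (a hypothetical sketch only; the actual v3 patch may differ):

```diff
 void kmem_cache_free(struct kmem_cache *cachep, void *objp)
 {
 	unsigned long flags;
 	cachep = cache_from_obj(cachep, objp);
 	if (!cachep)
 		return;
 
+	trace_kmem_cache_free(_RET_IP_, objp, cachep->name);
 	local_irq_save(flags);
 	debug_check_no_locks_freed(objp, cachep->object_size);
 	if (!(cachep->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(objp, cachep->object_size);
 	__cache_free(cachep, objp, _RET_IP_);
 	local_irq_restore(flags);
-
-	trace_kmem_cache_free(_RET_IP_, objp, cachep->name);
 }
```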

>>  EXPORT_SYMBOL(kmem_cache_free);
>> 
> 



* Re: [PATCH v2] mm, slub: emit the "free" trace report before freeing memory in kmem_cache_free()
  2021-11-02 11:43 [PATCH v2] mm, slub: emit the "free" trace report before freeing memory in kmem_cache_free() Yunfeng Ye
  2021-11-02 13:53 ` Tang Yizhou
@ 2021-11-02 18:37 ` John Hubbard
  2021-11-03  3:41   ` Yunfeng Ye
  1 sibling, 1 reply; 6+ messages in thread
From: John Hubbard @ 2021-11-02 18:37 UTC (permalink / raw)
  To: Yunfeng Ye, cl, penberg, rientjes, iamjoonsoo.kim, Andrew Morton,
	vbabka, linux-mm, linux-kernel
  Cc: songmuchun, willy, wuxu.wu, Hewenliang

On 11/2/21 04:43, Yunfeng Ye wrote:
> After the memory is freed, it can be immediately allocated by other
> CPUs, before the "free" trace report has been emitted. This causes
> inaccurate traces.
> 
> For example, if the following sequence of events occurs:
> 
>      CPU 0                 CPU 1
> 
>    (1) alloc xxxxxx
>    (2) free  xxxxxx
>                           (3) alloc xxxxxx
>                           (4) free  xxxxxx
> 
> Then they will be inaccurately reported via tracing, so that they appear
> to have happened in this order:
> 
>      CPU 0                 CPU 1
> 
>    (1) alloc xxxxxx
>                           (2) alloc xxxxxx
>    (3) free  xxxxxx
>                           (4) free  xxxxxx
> 
> This makes it look like CPU 1 somehow managed to allocate mmemory that


I see I created a typo for you, sorry about that: s/mmemory/memory/

But anyway, the wording looks good now. Please feel free to add:

Reviewed-by: John Hubbard <jhubbard@nvidia.com>


thanks,
-- 
John Hubbard
NVIDIA

> CPU 0 still had allocated for itself.
> 
> In order to avoid this, emit the "free xxxxxx" tracing report just
> before the actual call to free the memory, instead of just after it.
> 
> Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> v1 -> v2:
>   - Modify the description
>   - Add "Reviewed-by"
> 
>   mm/slub.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 432145d7b4ec..427e62034c3f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
>   	s = cache_from_obj(s, x);
>   	if (!s)
>   		return;
> -	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>   	trace_kmem_cache_free(_RET_IP_, x, s->name);
> +	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>   }
>   EXPORT_SYMBOL(kmem_cache_free);
> 



* Re: [PATCH v2] mm, slub: emit the "free" trace report before freeing memory in kmem_cache_free()
  2021-11-02 14:39   ` Vlastimil Babka
@ 2021-11-03  3:39     ` Yunfeng Ye
  0 siblings, 0 replies; 6+ messages in thread
From: Yunfeng Ye @ 2021-11-03  3:39 UTC (permalink / raw)
  To: Vlastimil Babka, Tang Yizhou, cl, penberg, rientjes,
	iamjoonsoo.kim, Andrew Morton, linux-mm, linux-kernel
  Cc: jhubbard, songmuchun, willy, wuxu.wu, Hewenliang



On 2021/11/2 22:39, Vlastimil Babka wrote:
> On 11/2/21 14:53, Tang Yizhou wrote:
>> On 2021/11/2 19:43, Yunfeng Ye wrote:
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -3526,8 +3526,8 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
>>>  	s = cache_from_obj(s, x);
>>>  	if (!s)
>>>  		return;
>>> -	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>>>  	trace_kmem_cache_free(_RET_IP_, x, s->name);
>>> +	slab_free(s, virt_to_head_page(x), x, NULL, 1, _RET_IP_);
>>>  }
>>
>> It seems that kmem_cache_free() in mm/slab.c has the same problem.
>> We can fix it. Thanks.
> 
> Doh, true. Should go best before the local_irq_save() there.
> And also kmem_cache_free() in mm/slob.c.
> 
Yes, I will fix the same problem in mm/slab.c and mm/slob.c together in the v3 patch.

Thanks.


> Interestingly kfree() is already OK in all 3 implementations.
> 
>>>  EXPORT_SYMBOL(kmem_cache_free);
>>>
>>
> 
> .
> 


* Re: [PATCH v2] mm, slub: emit the "free" trace report before freeing memory in kmem_cache_free()
  2021-11-02 18:37 ` John Hubbard
@ 2021-11-03  3:41   ` Yunfeng Ye
  0 siblings, 0 replies; 6+ messages in thread
From: Yunfeng Ye @ 2021-11-03  3:41 UTC (permalink / raw)
  To: John Hubbard, cl, penberg, rientjes, iamjoonsoo.kim,
	Andrew Morton, vbabka, linux-mm, linux-kernel
  Cc: songmuchun, willy, wuxu.wu, Hewenliang



On 2021/11/3 2:37, John Hubbard wrote:
> On 11/2/21 04:43, Yunfeng Ye wrote:
>> After the memory is freed, it can be immediately allocated by other
>> CPUs, before the "free" trace report has been emitted. This causes
>> inaccurate traces.
>>
>> For example, if the following sequence of events occurs:
>>
>>      CPU 0                 CPU 1
>>
>>    (1) alloc xxxxxx
>>    (2) free  xxxxxx
>>                           (3) alloc xxxxxx
>>                           (4) free  xxxxxx
>>
>> Then they will be inaccurately reported via tracing, so that they appear
>> to have happened in this order:
>>
>>      CPU 0                 CPU 1
>>
>>    (1) alloc xxxxxx
>>                           (2) alloc xxxxxx
>>    (3) free  xxxxxx
>>                           (4) free  xxxxxx
>>
>> This makes it look like CPU 1 somehow managed to allocate mmemory that
> 
> 
> I see I created a typo for you, sorry about that: s/mmemory/memory/
> 
> But anyway, the wording looks good now. Please feel free to add:
> 
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> 
Ok, I will fix the typo in the v3 patch.

Thanks.

> 
> thanks,


Thread overview: 6 messages (newest: 2021-11-03  3:41 UTC)

2021-11-02 11:43 [PATCH v2] mm, slub: emit the "free" trace report before freeing memory in kmem_cache_free() Yunfeng Ye
2021-11-02 13:53 ` Tang Yizhou
2021-11-02 14:39   ` Vlastimil Babka
2021-11-03  3:39     ` Yunfeng Ye
2021-11-02 18:37 ` John Hubbard
2021-11-03  3:41   ` Yunfeng Ye
