* [PATCH] mm/slub: remove useless kmem_cache_debug
From: wuyun.wu @ 2020-08-10 8:07 UTC
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton
Cc: liu.xiang6, Abel Wu, open list:SLAB ALLOCATOR, open list
From: Abel Wu <wuyun.wu@huawei.com>
The commit below is incomplete, as it didn't handle the add_full() part.
commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
---
mm/slub.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index fe81773..0b021b7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
}
} else {
m = M_FULL;
- if (kmem_cache_debug(s) && !lock) {
+#ifdef CONFIG_SLUB_DEBUG
+ if (!lock) {
lock = 1;
/*
* This also ensures that the scanning of full
@@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
*/
spin_lock(&n->list_lock);
}
+#endif
}
if (l != m) {
--
1.8.3.1
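For context, kmem_cache_debug(s) at that point reduces to a test of the cache's
debug flags, compiled out entirely without CONFIG_SLUB_DEBUG; the snippet below is
a rough paraphrase of its behaviour rather than the exact mm/slub.c source. It
shows why the hunk above is functionally safe, and also why it slightly widens the
condition: with the #ifdef alone, the list_lock is taken for every cache on a
CONFIG_SLUB_DEBUG kernel, not only for caches that actually have debug flags set.

/* Rough paraphrase of kmem_cache_debug(); see mm/slub.c and mm/slab.h
 * for the authoritative definitions. */
static inline int kmem_cache_debug(struct kmem_cache *s)
{
#ifdef CONFIG_SLUB_DEBUG
	/* SLAB_DEBUG_FLAGS covers SLAB_RED_ZONE, SLAB_POISON,
	 * SLAB_STORE_USER, SLAB_TRACE and SLAB_CONSISTENCY_CHECKS. */
	return unlikely(s->flags & SLAB_DEBUG_FLAGS);
#else
	return 0;
#endif
}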
* Re: [PATCH] mm/slub: remove useless kmem_cache_debug
From: David Rientjes @ 2020-08-10 19:44 UTC
To: Abel Wu
Cc: Christoph Lameter, Pekka Enberg, Joonsoo Kim, Andrew Morton,
liu.xiang6, open list:SLAB ALLOCATOR, open list
On Mon, 10 Aug 2020, wuyun.wu@huawei.com wrote:
> From: Abel Wu <wuyun.wu@huawei.com>
>
> The commit below is incomplete, as it didn't handle the add_full() part.
> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>
> Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
> ---
> mm/slub.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index fe81773..0b021b7 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
> }
> } else {
> m = M_FULL;
> - if (kmem_cache_debug(s) && !lock) {
> +#ifdef CONFIG_SLUB_DEBUG
> + if (!lock) {
> lock = 1;
> /*
> * This also ensures that the scanning of full
> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
> */
> spin_lock(&n->list_lock);
> }
> +#endif
> }
>
> if (l != m) {
This should be functionally safe. I wonder, however, whether it would make
sense to only check for SLAB_STORE_USER here instead of kmem_cache_debug(),
since that should be the only context in which we need the list_lock for
add_full()? It seems more explicit.
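Concretely, the suggestion is to gate the lock on the one flag add_full()
actually uses (add_full() returns early unless SLAB_STORE_USER is set). A
minimal sketch of that variant of the hunk, as an illustration of the idea
rather than a submitted patch:

	} else {
		m = M_FULL;
		/* take the list_lock only when add_full() will really use it */
		if ((s->flags & SLAB_STORE_USER) && !lock) {
			lock = 1;
			spin_lock(&n->list_lock);
		}
	}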
* Re: [PATCH] mm/slub: remove useless kmem_cache_debug
From: Abel Wu @ 2020-08-11 1:29 UTC
To: David Rientjes
Cc: Christoph Lameter, Pekka Enberg, Joonsoo Kim, Andrew Morton,
liu.xiang6, open list:SLAB ALLOCATOR, open list
On 2020/8/11 3:44, David Rientjes wrote:
> On Mon, 10 Aug 2020, wuyun.wu@huawei.com wrote:
>
>> From: Abel Wu <wuyun.wu@huawei.com>
>>
>> The commit below is incomplete, as it didn't handle the add_full() part.
>> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>>
>> Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
>> ---
>> mm/slub.c | 4 +++-
>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index fe81773..0b021b7 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>> }
>> } else {
>> m = M_FULL;
>> - if (kmem_cache_debug(s) && !lock) {
>> +#ifdef CONFIG_SLUB_DEBUG
>> + if (!lock) {
>> lock = 1;
>> /*
>> * This also ensures that the scanning of full
>> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>> */
>> spin_lock(&n->list_lock);
>> }
>> +#endif
>> }
>>
>> if (l != m) {
>
> This should be functionally safe. I wonder, however, whether it would make
> sense to only check for SLAB_STORE_USER here instead of kmem_cache_debug(),
> since that should be the only context in which we need the list_lock for
> add_full()? It seems more explicit.
> .
>
Yes, checking for SLAB_STORE_USER here also gets rid of the noisy macro guards.
I will resend the patch later.
Thanks,
Abel
* Re: [PATCH] mm/slub: remove useless kmem_cache_debug
From: Abel Wu @ 2020-08-11 1:50 UTC
To: David Rientjes
Cc: Christoph Lameter, Pekka Enberg, Joonsoo Kim, Andrew Morton,
liu.xiang6, open list:SLAB ALLOCATOR, open list
On 2020/8/11 9:29, Abel Wu wrote:
>
>
> On 2020/8/11 3:44, David Rientjes wrote:
>> On Mon, 10 Aug 2020, wuyun.wu@huawei.com wrote:
>>
>>> From: Abel Wu <wuyun.wu@huawei.com>
>>>
>>> The commit below is incomplete, as it didn't handle the add_full() part.
>>> commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")
>>>
>>> Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
>>> ---
>>> mm/slub.c | 4 +++-
>>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index fe81773..0b021b7 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -2182,7 +2182,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>> }
>>> } else {
>>> m = M_FULL;
>>> - if (kmem_cache_debug(s) && !lock) {
>>> +#ifdef CONFIG_SLUB_DEBUG
>>> + if (!lock) {
>>> lock = 1;
>>> /*
>>> * This also ensures that the scanning of full
>>> @@ -2191,6 +2192,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
>>> */
>>> spin_lock(&n->list_lock);
>>> }
>>> +#endif
>>> }
>>>
>>> if (l != m) {
>>
>> This should be functionally safe. I wonder, however, whether it would make
>> sense to only check for SLAB_STORE_USER here instead of kmem_cache_debug(),
>> since that should be the only context in which we need the list_lock for
>> add_full()? It seems more explicit.
>> .
>>
> Yes, checking for SLAB_STORE_USER here also gets rid of the noisy macro guards.
> I will resend the patch later.
>
> Thanks,
> Abel
> .
>
Wait... It still needs the CONFIG_SLUB_DEBUG guard around it, but the
SLAB_STORE_USER check avoids the locking overhead when that flag is not set
(as you said). I will keep CONFIG_SLUB_DEBUG in my new patch.
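A sketch of what that resend might look like, combining the CONFIG_SLUB_DEBUG
guard (so the block is compiled out entirely on !SLUB_DEBUG kernels) with the
SLAB_STORE_USER test (so caches that do not track full slabs skip the lock);
this is an illustration of the plan above, not the final merged patch:

	} else {
		m = M_FULL;
#ifdef CONFIG_SLUB_DEBUG
		/* add_full() only does real work for SLAB_STORE_USER caches,
		 * so only those need the list_lock here. */
		if ((s->flags & SLAB_STORE_USER) && !lock) {
			lock = 1;
			/*
			 * This also ensures that the scanning of full
			 * slabs from diagnostic functions will not see
			 * any frozen slabs.
			 */
			spin_lock(&n->list_lock);
		}
#endif
	}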