* [PATCH 1/3] mm/page_alloc: use might_alloc()
From: Daniel Vetter @ 2022-06-05 15:25 UTC (permalink / raw)
To: LKML
Cc: DRI Development, Daniel Vetter, Daniel Vetter, Andrew Morton, linux-mm
... instead of open coding it. Completely equivalent code, just
a notch more meaningful when reading.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
---
mm/page_alloc.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2db95780e003..277774d170cb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5177,10 +5177,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
*alloc_flags |= ALLOC_CPUSET;
}
- fs_reclaim_acquire(gfp_mask);
- fs_reclaim_release(gfp_mask);
-
- might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
+ might_alloc(gfp_mask);
if (should_fail_alloc_page(gfp_mask, order))
return false;
--
2.36.0
* [PATCH 2/3] mm/slab: delete cache_alloc_debugcheck_before()
From: Daniel Vetter @ 2022-06-05 15:25 UTC (permalink / raw)
To: LKML
Cc: DRI Development, Daniel Vetter, Daniel Vetter, Christoph Lameter,
Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
Vlastimil Babka, Roman Gushchin, linux-mm
It only does a might_sleep_if(GFP_RECLAIM) check, which is already
covered by the might_alloc() in slab_pre_alloc_hook(). And all callers
of cache_alloc_debugcheck_before() call that beforehand already.
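For context, the check being removed is already made on every slab allocation path before these functions run; slab_pre_alloc_hook() in mm/slab.h opens roughly like this (sketch, details elided):

```
static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, ...)
{
	flags &= gfp_allowed_mask;

	might_alloc(flags);
	/* ... should_failslab() and cgroup handling ... */
}
```

Since might_alloc() includes a might_sleep_if() on __GFP_DIRECT_RECLAIM, the deleted helper adds no coverage.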
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: linux-mm@kvack.org
---
mm/slab.c | 10 ----------
1 file changed, 10 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index b04e40078bdf..75779ac5f5ba 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2973,12 +2973,6 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
return ac->entry[--ac->avail];
}
-static inline void cache_alloc_debugcheck_before(struct kmem_cache *cachep,
- gfp_t flags)
-{
- might_sleep_if(gfpflags_allow_blocking(flags));
-}
-
#if DEBUG
static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep,
gfp_t flags, void *objp, unsigned long caller)
@@ -3219,7 +3213,6 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
if (unlikely(ptr))
goto out_hooks;
- cache_alloc_debugcheck_before(cachep, flags);
local_irq_save(save_flags);
if (nodeid == NUMA_NO_NODE)
@@ -3304,7 +3297,6 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
if (unlikely(objp))
goto out;
- cache_alloc_debugcheck_before(cachep, flags);
local_irq_save(save_flags);
objp = __do_cache_alloc(cachep, flags);
local_irq_restore(save_flags);
@@ -3541,8 +3533,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
if (!s)
return 0;
- cache_alloc_debugcheck_before(s, flags);
-
local_irq_disable();
for (i = 0; i < size; i++) {
void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);
--
2.36.0
* [PATCH 3/3] mm/mempool: use might_alloc()
From: Daniel Vetter @ 2022-06-05 15:25 UTC (permalink / raw)
To: LKML
Cc: DRI Development, Daniel Vetter, Daniel Vetter, Andrew Morton, linux-mm
Mempools are generally used for GFP_NOIO, so this won't benefit all that
much because might_alloc() currently only checks GFP_NOFS. But it does
validate against mmu notifier pte zapping, so it might catch some
drivers doing really silly things, plus it's a bit more meaningful in
what we're checking for here.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
---
mm/mempool.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/mempool.c b/mm/mempool.c
index b933d0fc21b8..96488b13a1ef 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -379,7 +379,7 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
gfp_t gfp_temp;
VM_WARN_ON_ONCE(gfp_mask & __GFP_ZERO);
- might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
+ might_alloc(gfp_mask);
gfp_mask |= __GFP_NOMEMALLOC; /* don't allocate emergency reserves */
gfp_mask |= __GFP_NORETRY; /* don't loop in __alloc_pages */
--
2.36.0
* Re: [PATCH 1/3] mm/page_alloc: use might_alloc()
From: David Hildenbrand @ 2022-06-07 12:57 UTC (permalink / raw)
To: Daniel Vetter, LKML
Cc: DRI Development, Daniel Vetter, Andrew Morton, linux-mm
On 05.06.22 17:25, Daniel Vetter wrote:
> ... instead of open coding it. Completely equivalent code, just
> a notch more meaningful when reading.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
> ---
> mm/page_alloc.c | 5 +----
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2db95780e003..277774d170cb 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5177,10 +5177,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
> *alloc_flags |= ALLOC_CPUSET;
> }
>
> - fs_reclaim_acquire(gfp_mask);
> - fs_reclaim_release(gfp_mask);
> -
> - might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
> + might_alloc(gfp_mask);
>
> if (should_fail_alloc_page(gfp_mask, order))
> return false;
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb
* Re: [PATCH 2/3] mm/slab: delete cache_alloc_debugcheck_before()
From: David Hildenbrand @ 2022-06-07 12:59 UTC (permalink / raw)
To: Daniel Vetter, LKML
Cc: Andrew Morton, Roman Gushchin, DRI Development, Pekka Enberg,
linux-mm, David Rientjes, Daniel Vetter, Christoph Lameter,
Joonsoo Kim, Vlastimil Babka
On 05.06.22 17:25, Daniel Vetter wrote:
> It only does a might_sleep_if(GFP_RECLAIM) check, which is already
> covered by the might_alloc() in slab_pre_alloc_hook(). And all callers
> of cache_alloc_debugcheck_before() call that beforehand already.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: linux-mm@kvack.org
> ---
LGTM
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb
* Re: [PATCH 2/3] mm/slab: delete cache_alloc_debugcheck_before()
From: David Rientjes @ 2022-06-12 23:00 UTC (permalink / raw)
To: Daniel Vetter
Cc: LKML, DRI Development, Daniel Vetter, Christoph Lameter,
Pekka Enberg, Joonsoo Kim, Andrew Morton, Vlastimil Babka,
Roman Gushchin, linux-mm
On Sun, 5 Jun 2022, Daniel Vetter wrote:
> It only does a might_sleep_if(GFP_RECLAIM) check, which is already
> covered by the might_alloc() in slab_pre_alloc_hook(). And all callers
> of cache_alloc_debugcheck_before() call that beforehand already.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: linux-mm@kvack.org
Acked-by: David Rientjes <rientjes@google.com>
* Re: [PATCH 2/3] mm/slab: delete cache_alloc_debugcheck_before()
From: Muchun Song @ 2022-06-13 3:21 UTC (permalink / raw)
To: Daniel Vetter
Cc: LKML, DRI Development, Daniel Vetter, Christoph Lameter,
Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
Vlastimil Babka, Roman Gushchin, linux-mm
On Sun, Jun 05, 2022 at 05:25:38PM +0200, Daniel Vetter wrote:
> It only does a might_sleep_if(GFP_RECLAIM) check, which is already
> covered by the might_alloc() in slab_pre_alloc_hook(). And all callers
> of cache_alloc_debugcheck_before() call that beforehand already.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Nice cleanup.
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Thanks.
* Re: [PATCH 2/3] mm/slab: delete cache_alloc_debugcheck_before()
From: Vlastimil Babka @ 2022-06-14 13:05 UTC (permalink / raw)
To: Daniel Vetter, LKML
Cc: DRI Development, Daniel Vetter, Christoph Lameter, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin,
linux-mm
On 6/5/22 17:25, Daniel Vetter wrote:
> It only does a might_sleep_if(GFP_RECLAIM) check, which is already
> covered by the might_alloc() in slab_pre_alloc_hook(). And all callers
> of cache_alloc_debugcheck_before() call that beforehand already.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: linux-mm@kvack.org
Thanks, added to slab/for-5.20/cleanup as it's slab-specific and independent
from 1/3 and 3/3.
> ---
> mm/slab.c | 10 ----------
> 1 file changed, 10 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index b04e40078bdf..75779ac5f5ba 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2973,12 +2973,6 @@ static void *cache_alloc_refill(struct kmem_cache *cachep, gfp_t flags)
> return ac->entry[--ac->avail];
> }
>
> -static inline void cache_alloc_debugcheck_before(struct kmem_cache *cachep,
> - gfp_t flags)
> -{
> - might_sleep_if(gfpflags_allow_blocking(flags));
> -}
> -
> #if DEBUG
> static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep,
> gfp_t flags, void *objp, unsigned long caller)
> @@ -3219,7 +3213,6 @@ slab_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_
> if (unlikely(ptr))
> goto out_hooks;
>
> - cache_alloc_debugcheck_before(cachep, flags);
> local_irq_save(save_flags);
>
> if (nodeid == NUMA_NO_NODE)
> @@ -3304,7 +3297,6 @@ slab_alloc(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags,
> if (unlikely(objp))
> goto out;
>
> - cache_alloc_debugcheck_before(cachep, flags);
> local_irq_save(save_flags);
> objp = __do_cache_alloc(cachep, flags);
> local_irq_restore(save_flags);
> @@ -3541,8 +3533,6 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> if (!s)
> return 0;
>
> - cache_alloc_debugcheck_before(s, flags);
> -
> local_irq_disable();
> for (i = 0; i < size; i++) {
> void *objp = kfence_alloc(s, s->object_size, flags) ?: __do_cache_alloc(s, flags);
* Re: [PATCH 1/3] mm/page_alloc: use might_alloc()
From: Vlastimil Babka (SUSE) @ 2022-06-14 13:07 UTC (permalink / raw)
To: Daniel Vetter, LKML
Cc: DRI Development, Daniel Vetter, Andrew Morton, linux-mm
On 6/5/22 17:25, Daniel Vetter wrote:
> ... instead of open coding it. Completely equivalent code, just
> a notch more meaningful when reading.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/page_alloc.c | 5 +----
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2db95780e003..277774d170cb 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5177,10 +5177,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
> *alloc_flags |= ALLOC_CPUSET;
> }
>
> - fs_reclaim_acquire(gfp_mask);
> - fs_reclaim_release(gfp_mask);
> -
> - might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
> + might_alloc(gfp_mask);
>
> if (should_fail_alloc_page(gfp_mask, order))
> return false;
* Re: [PATCH 3/3] mm/mempool: use might_alloc()
From: Vlastimil Babka (SUSE) @ 2022-06-14 13:08 UTC (permalink / raw)
To: Daniel Vetter, LKML
Cc: DRI Development, Daniel Vetter, Andrew Morton, linux-mm
On 6/5/22 17:25, Daniel Vetter wrote:
> Mempools are generally used for GFP_NOIO, so this won't benefit all that
> much because might_alloc() currently only checks GFP_NOFS. But it does
> validate against mmu notifier pte zapping, so it might catch some
> drivers doing really silly things, plus it's a bit more meaningful in
> what we're checking for here.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: linux-mm@kvack.org
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/mempool.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/mempool.c b/mm/mempool.c
> index b933d0fc21b8..96488b13a1ef 100644
> --- a/mm/mempool.c
> +++ b/mm/mempool.c
> @@ -379,7 +379,7 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
> gfp_t gfp_temp;
>
> VM_WARN_ON_ONCE(gfp_mask & __GFP_ZERO);
> - might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
> + might_alloc(gfp_mask);
>
> gfp_mask |= __GFP_NOMEMALLOC; /* don't allocate emergency reserves */
> gfp_mask |= __GFP_NORETRY; /* don't loop in __alloc_pages */