* [PATCH] mm/slab.c: add node spinlock protect in __cache_free_alien
From: qiang.zhang @ 2020-07-28 9:55 UTC
To: cl, penberg, rientjes, iamjoonsoo.kim, akpm; +Cc: linux-mm, linux-kernel
From: Zhang Qiang <qiang.zhang@windriver.com>
Take the node's list_lock to protect the access to "n->alien", which
may be set to NULL by cpuup_canceled(); otherwise the unprotected
access can lead to a NULL pointer dereference.
Fixes: 18bf854117c6 ("slab: use get_node() and kmem_cache_node() functions")
Signed-off-by: Zhang Qiang <qiang.zhang@windriver.com>
---
mm/slab.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index a89633603b2d..290523c90b4e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -759,8 +759,10 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
 
 	n = get_node(cachep, node);
 	STATS_INC_NODEFREES(cachep);
+	spin_lock(&n->list_lock);
 	if (n->alien && n->alien[page_node]) {
 		alien = n->alien[page_node];
+		spin_unlock(&n->list_lock);
 		ac = &alien->ac;
 		spin_lock(&alien->lock);
 		if (unlikely(ac->avail == ac->limit)) {
@@ -769,14 +771,15 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
 		}
 		ac->entry[ac->avail++] = objp;
 		spin_unlock(&alien->lock);
-		slabs_destroy(cachep, &list);
 	} else {
+		spin_unlock(&n->list_lock);
 		n = get_node(cachep, page_node);
 		spin_lock(&n->list_lock);
 		free_block(cachep, &objp, 1, page_node, &list);
 		spin_unlock(&n->list_lock);
-		slabs_destroy(cachep, &list);
 	}
+
+	slabs_destroy(cachep, &list);
 	return 1;
 }
--
2.26.2
* Re: [PATCH] mm/slab.c: add node spinlock protect in __cache_free_alien
From: David Rientjes @ 2020-07-28 19:46 UTC
To: qiang.zhang; +Cc: cl, penberg, iamjoonsoo.kim, akpm, linux-mm, linux-kernel
On Tue, 28 Jul 2020, qiang.zhang@windriver.com wrote:
> From: Zhang Qiang <qiang.zhang@windriver.com>
>
> Take the node's list_lock to protect the access to "n->alien", which
> may be set to NULL by cpuup_canceled(); otherwise the unprotected
> access can lead to a NULL pointer dereference.
>
Hi, do you have an example NULL pointer dereference where you have hit
this?
This rather looks like something to fix up in cpuup_canceled() since it's
currently manipulating the alien cache for the canceled cpu's node.
> Fixes: 18bf854117c6 ("slab: use get_node() and kmem_cache_node() functions")
> Signed-off-by: Zhang Qiang <qiang.zhang@windriver.com>
> ---
> mm/slab.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index a89633603b2d..290523c90b4e 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -759,8 +759,10 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
>
> n = get_node(cachep, node);
> STATS_INC_NODEFREES(cachep);
> + spin_lock(&n->list_lock);
> if (n->alien && n->alien[page_node]) {
> alien = n->alien[page_node];
> + spin_unlock(&n->list_lock);
> ac = &alien->ac;
> spin_lock(&alien->lock);
> if (unlikely(ac->avail == ac->limit)) {
> @@ -769,14 +771,15 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
> }
> ac->entry[ac->avail++] = objp;
> spin_unlock(&alien->lock);
> - slabs_destroy(cachep, &list);
> } else {
> + spin_unlock(&n->list_lock);
> n = get_node(cachep, page_node);
> spin_lock(&n->list_lock);
> free_block(cachep, &objp, 1, page_node, &list);
> spin_unlock(&n->list_lock);
> - slabs_destroy(cachep, &list);
> }
> +
> + slabs_destroy(cachep, &list);
> return 1;
> }
>
> --
> 2.26.2
>
>
* Re: [PATCH] mm/slab.c: add node spinlock protect in __cache_free_alien
From: Zhang, Qiang @ 2020-07-29 1:25 UTC
To: David Rientjes; +Cc: cl, penberg, iamjoonsoo.kim, akpm, linux-mm, linux-kernel
________________________________________
From: David Rientjes <rientjes@google.com>
Sent: July 29, 2020 3:46
To: Zhang, Qiang
Cc: cl@linux.com; penberg@kernel.org; iamjoonsoo.kim@lge.com; akpm@linux-foundation.org; linux-mm@kvack.org; linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/slab.c: add node spinlock protect in __cache_free_alien
On Tue, 28 Jul 2020, qiang.zhang@windriver.com wrote:
> From: Zhang Qiang <qiang.zhang@windriver.com>
>
> Take the node's list_lock to protect the access to "n->alien", which
> may be set to NULL by cpuup_canceled(); otherwise the unprotected
> access can lead to a NULL pointer dereference.
>
>Hi, do you have an example NULL pointer dereference where you have hit
>this?
>This rather looks like something to fix up in cpuup_canceled() since it's
>currently manipulating the alien cache for the canceled cpu's node.
Yes, it could also be fixed in cpuup_canceled(): it currently manipulates
the alien cache for the canceled cpu's node, and that node may be the same
node that __cache_free_alien() is operating on.
static void cpuup_canceled(long cpu)
{
	...
	n = get_node(cachep, node);
	spin_lock_irq(&n->list_lock);
	...
	n->alien = NULL;	/* a concurrent __cache_free_alien() does not take this lock */
	spin_unlock_irq(&n->list_lock);
	...
}
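
For clarity, the interleaving of concern looks roughly like this (an
illustrative sketch based on the code paths above and in the patch, not
an observed crash trace):

  CPU A: __cache_free_alien()              CPU B: cpuup_canceled()
  ---------------------------              -----------------------
  n = get_node(cachep, node);
  if (n->alien && n->alien[page_node])
          /* sees a non-NULL n->alien */
                                           spin_lock_irq(&n->list_lock);
                                           n->alien = NULL;
                                           spin_unlock_irq(&n->list_lock);
  alien = n->alien[page_node];
          /* NULL pointer dereference */
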
> Fixes: 18bf854117c6 ("slab: use get_node() and kmem_cache_node() functions")
> Signed-off-by: Zhang Qiang <qiang.zhang@windriver.com>
> ---
> mm/slab.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index a89633603b2d..290523c90b4e 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -759,8 +759,10 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
>
> n = get_node(cachep, node);
> STATS_INC_NODEFREES(cachep);
> + spin_lock(&n->list_lock);
> if (n->alien && n->alien[page_node]) {
> alien = n->alien[page_node];
> + spin_unlock(&n->list_lock);
> ac = &alien->ac;
> spin_lock(&alien->lock);
> if (unlikely(ac->avail == ac->limit)) {
> @@ -769,14 +771,15 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
> }
> ac->entry[ac->avail++] = objp;
> spin_unlock(&alien->lock);
> - slabs_destroy(cachep, &list);
> } else {
> + spin_unlock(&n->list_lock);
> n = get_node(cachep, page_node);
> spin_lock(&n->list_lock);
> free_block(cachep, &objp, 1, page_node, &list);
> spin_unlock(&n->list_lock);
> - slabs_destroy(cachep, &list);
> }
> +
> + slabs_destroy(cachep, &list);
> return 1;
> }
>
> --
> 2.26.2
>
>
* Re: Re: [PATCH] mm/slab.c: add node spinlock protect in __cache_free_alien
From: David Rientjes @ 2020-07-29 23:32 UTC
To: Zhang, Qiang; +Cc: cl, penberg, iamjoonsoo.kim, akpm, linux-mm, linux-kernel
On Wed, 29 Jul 2020, Zhang, Qiang wrote:
> > From: Zhang Qiang <qiang.zhang@windriver.com>
> >
> > Take the node's list_lock to protect the access to "n->alien", which
> > may be set to NULL by cpuup_canceled(); otherwise the unprotected
> > access can lead to a NULL pointer dereference.
> >
>
> >Hi, do you have an example NULL pointer dereference where you have hit
> >this?
>
If you have a NULL pointer dereference or a GPF that occurred because of
this, it would be helpful to provide it as rationale.
> >This rather looks like something to fix up in cpuup_canceled() since it's
> >currently manipulating the alien cache for the canceled cpu's node.
>
> Yes, it could also be fixed in cpuup_canceled(): it currently manipulates
> the alien cache for the canceled cpu's node, and that node may be the same
> node that __cache_free_alien() is operating on.
>
> static void cpuup_canceled(long cpu)
> {
> 	...
> 	n = get_node(cachep, node);
> 	spin_lock_irq(&n->list_lock);
> 	...
> 	n->alien = NULL;	/* a concurrent __cache_free_alien() does not take this lock */
> 	spin_unlock_irq(&n->list_lock);
> 	...
> }
>
Right, so the idea is that this should be fixed in cpuup_canceled()
instead -- why would we invalidate the entire node's alien cache because a
single cpu failed to come online?
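
As an untested sketch only, to illustrate the direction (the condition for
tearing down node-level state when the node really goes empty would still
need thought), the cancel path could undo just the per-cpu state it set up
and leave n->alien alone, so __cache_free_alien() never sees it change:

	static void cpuup_canceled(long cpu)
	{
		...
		n = get_node(cachep, node);
		spin_lock_irq(&n->list_lock);
		/* undo only what cpuup_prepare() set up for this cpu */
		nc = per_cpu_ptr(cachep->cpu_cache, cpu);
		free_block(cachep, nc->entry, nc->avail, node, &list);
		nc->avail = 0;
		spin_unlock_irq(&n->list_lock);
		/* n->shared and n->alien are deliberately left untouched */
		slabs_destroy(cachep, &list);
		...
	}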