linux-mm.kvack.org archive mirror
* [PATCH v3] mm/slab.c: add node spinlock protect in __cache_free_alien
@ 2020-07-30 10:19 qiang.zhang
  2020-07-30 23:45 ` David Rientjes
  0 siblings, 1 reply; 4+ messages in thread
From: qiang.zhang @ 2020-07-30 10:19 UTC (permalink / raw)
  To: cl, penberg, rientjes, iamjoonsoo.kim, akpm; +Cc: linux-mm, linux-kernel

From: Zhang Qiang <qiang.zhang@windriver.com>

For example, consider the following race:
			        node0
	cpu0				                cpu1
slab_dead_cpu
   >mutex_lock(&slab_mutex)
     >cpuup_canceled                            slab_dead_cpu
       >mask = cpumask_of_node(node)               >mutex_lock(&slab_mutex)
       >n = get_node(cachep0, node0)
       >spin_lock_irq(&n->list_lock)
       >if (!cpumask_empty(mask)) == true
       	>spin_unlock_irq(&n->list_lock)
	>goto free_slab
       ....
   >mutex_unlock(&slab_mutex)

....						   >cpuup_canceled
						     >mask = cpumask_of_node(node)
kmem_cache_free(cachep0)			     >n = get_node(cachep0, node0)
 >__cache_free_alien(cachep0)			     >spin_lock_irq(&n->list_lock)
   >n = get_node(cachep0, node0)		     >if (!cpumask_empty(mask)) == false
   >if (n->alien && n->alien[page_node])	     >alien = n->alien
     >alien = n->alien[page_node]	             >n->alien = NULL
     >....					     >spin_unlock_irq(&n->list_lock)
						     >....

Because multiple CPUs may be taken offline, the same kmem_cache_node can
be operated on in parallel, so access to "n->alien" should be protected
by n->list_lock.

Fixes: 6731d4f12315 ("slab: Convert to hotplug state machine")
Signed-off-by: Zhang Qiang <qiang.zhang@windriver.com>
---
 v1->v2->v3:
 Changed submission information and the Fixes tag.

 mm/slab.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index a89633603b2d..290523c90b4e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -759,8 +759,10 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
 
 	n = get_node(cachep, node);
 	STATS_INC_NODEFREES(cachep);
+	spin_lock(&n->list_lock);
 	if (n->alien && n->alien[page_node]) {
 		alien = n->alien[page_node];
+		spin_unlock(&n->list_lock);
 		ac = &alien->ac;
 		spin_lock(&alien->lock);
 		if (unlikely(ac->avail == ac->limit)) {
@@ -769,14 +771,15 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
 		}
 		ac->entry[ac->avail++] = objp;
 		spin_unlock(&alien->lock);
-		slabs_destroy(cachep, &list);
 	} else {
+		spin_unlock(&n->list_lock);
 		n = get_node(cachep, page_node);
 		spin_lock(&n->list_lock);
 		free_block(cachep, &objp, 1, page_node, &list);
 		spin_unlock(&n->list_lock);
-		slabs_destroy(cachep, &list);
 	}
+
+	slabs_destroy(cachep, &list);
 	return 1;
 }
 
-- 
2.26.2




* Re: [PATCH v3] mm/slab.c: add node spinlock protect in __cache_free_alien
  2020-07-30 10:19 [PATCH v3] mm/slab.c: add node spinlock protect in __cache_free_alien qiang.zhang
@ 2020-07-30 23:45 ` David Rientjes
  2020-07-31  1:27   ` Re: " Zhang, Qiang
  0 siblings, 1 reply; 4+ messages in thread
From: David Rientjes @ 2020-07-30 23:45 UTC (permalink / raw)
  To: qiang.zhang; +Cc: cl, penberg, iamjoonsoo.kim, akpm, linux-mm, linux-kernel

On Thu, 30 Jul 2020, qiang.zhang@windriver.com wrote:

> From: Zhang Qiang <qiang.zhang@windriver.com>
> 
> for example:
> 			        node0
> 	cpu0				                cpu1
> slab_dead_cpu
>    >mutex_lock(&slab_mutex)
>      >cpuup_canceled                            slab_dead_cpu
>        >mask = cpumask_of_node(node)               >mutex_lock(&slab_mutex)
>        >n = get_node(cachep0, node0)
>        >spin_lock_irq(&n->list_lock)
>        >if (!cpumask_empty(mask)) == true
>        	>spin_unlock_irq(&n->list_lock)
> 	>goto free_slab
>        ....
>    >mutex_unlock(&slab_mutex)
> 
> ....						   >cpuup_canceled
> 						     >mask = cpumask_of_node(node)
> kmem_cache_free(cachep0)			     >n = get_node(cachep0, node0)
>  >__cache_free_alien(cachep0)			     >spin_lock_irq(&n->list_lock)
>    >n = get_node(cachep0, node0)		     >if (!cpumask_empty(mask)) == false
>    >if (n->alien && n->alien[page_node])	     >alien = n->alien
>      >alien = n->alien[page_node]	             >n->alien = NULL
>      >....					     >spin_unlock_irq(&n->list_lock)
> 						     >....
> 

As mentioned in the review of v1 of this patch, we likely want to do a fix 
for cpuup_canceled() instead.
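
For reference, a heavily abridged sketch of the cpuup_canceled() path the
diagram walks through, reconstructed from the steps shown above (~v5.8-era
mm/slab.c layout assumed; not the verbatim source):

	/*
	 * Abridged sketch of cpuup_canceled(), reconstructed from the
	 * steps in the diagram above; not verbatim mm/slab.c source.
	 */
	static void cpuup_canceled(long cpu)
	{
		struct kmem_cache *cachep;
		int node = cpu_to_mem(cpu);
		const struct cpumask *mask = cpumask_of_node(node);

		list_for_each_entry(cachep, &slab_caches, list) {
			struct alien_cache **alien;
			struct kmem_cache_node *n = get_node(cachep, node);
			LIST_HEAD(list);

			if (!n)
				continue;

			spin_lock_irq(&n->list_lock);
			if (!cpumask_empty(mask)) {
				/* Other CPUs of this node are still online. */
				spin_unlock_irq(&n->list_lock);
				goto free_slab;
			}

			alien = n->alien;
			n->alien = NULL;	/* cleared under n->list_lock... */
			spin_unlock_irq(&n->list_lock);

			if (alien) {
				/*
				 * ...but, before the patch above,
				 * __cache_free_alien() read n->alien
				 * without taking that lock.
				 */
				drain_alien_cache(cachep, alien);
				free_alien_cache(alien);
			}
	free_slab:
			slabs_destroy(cachep, &list);
		}
	}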



* Re: [PATCH v3] mm/slab.c: add node spinlock protect in __cache_free_alien
  2020-07-30 23:45 ` David Rientjes
@ 2020-07-31  1:27   ` Zhang, Qiang
  2020-07-31  8:10     ` Zhang, Qiang
  0 siblings, 1 reply; 4+ messages in thread
From: Zhang, Qiang @ 2020-07-31  1:27 UTC (permalink / raw)
  To: David Rientjes; +Cc: cl, penberg, iamjoonsoo.kim, akpm, linux-mm, linux-kernel



________________________________________
From: David Rientjes <rientjes@google.com>
Sent: 31 July 2020 7:45
To: Zhang, Qiang
Cc: cl@linux.com; penberg@kernel.org; iamjoonsoo.kim@lge.com; akpm@linux-foundation.org; linux-mm@kvack.org; linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] mm/slab.c: add node spinlock protect in __cache_free_alien

On Thu, 30 Jul 2020, qiang.zhang@windriver.com wrote:

> From: Zhang Qiang <qiang.zhang@windriver.com>
>
> for example:
>                               node0
>       cpu0                                            cpu1
> slab_dead_cpu
>    >mutex_lock(&slab_mutex)
>      >cpuup_canceled                            slab_dead_cpu
>        >mask = cpumask_of_node(node)               >mutex_lock(&slab_mutex)
>        >n = get_node(cachep0, node0)
>        >spin_lock_irq(&n->list_lock)
>        >if (!cpumask_empty(mask)) == true
>               >spin_unlock_irq(&n->list_lock)
>       >goto free_slab
>        ....
>    >mutex_unlock(&slab_mutex)
>
> ....                                             >cpuup_canceled
>                                                    >mask = cpumask_of_node(node)
> kmem_cache_free(cachep0)                          >n = get_node(cachep0, node0)
>  >__cache_free_alien(cachep0)                     >spin_lock_irq(&n->list_lock)
>    >n = get_node(cachep0, node0)                   >if (!cpumask_empty(mask)) == false
>    >if (n->alien && n->alien[page_node])           >alien = n->alien
>      >alien = n->alien[page_node]                  >n->alien = NULL
>      >....                                         >spin_unlock_irq(&n->list_lock)
>                                                    >....
>

>As mentioned in the review of v1 of this patch, we likely want to do a fix
>for cpuup_canceled() instead.

I see, you mean the fix should be done in cpuup_canceled() instead?


* Re: [PATCH v3] mm/slab.c: add node spinlock protect in __cache_free_alien
  2020-07-31  1:27   ` Re: " Zhang, Qiang
@ 2020-07-31  8:10     ` Zhang, Qiang
  0 siblings, 0 replies; 4+ messages in thread
From: Zhang, Qiang @ 2020-07-31  8:10 UTC (permalink / raw)
  To: David Rientjes; +Cc: cl, penberg, iamjoonsoo.kim, akpm, linux-mm, linux-kernel



________________________________________
From: Zhang, Qiang <Qiang.Zhang@windriver.com>
Sent: 31 July 2020 9:27
To: David Rientjes
Cc: cl@linux.com; penberg@kernel.org; iamjoonsoo.kim@lge.com; akpm@linux-foundation.org; linux-mm@kvack.org; linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] mm/slab.c: add node spinlock protect in __cache_free_alien



________________________________________
From: David Rientjes <rientjes@google.com>
Sent: 31 July 2020 7:45
To: Zhang, Qiang
Cc: cl@linux.com; penberg@kernel.org; iamjoonsoo.kim@lge.com; akpm@linux-foundation.org; linux-mm@kvack.org; linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] mm/slab.c: add node spinlock protect in __cache_free_alien

On Thu, 30 Jul 2020, qiang.zhang@windriver.com wrote:

> [original patch description and race diagram snipped; see above]

>As mentioned in the review of v1 of this patch, we likely want to do a fix
>for cpuup_canceled() instead.

>I see, you mean the fix should be done in cpuup_canceled() instead?

 I'm very sorry: because cpu_down() takes the global "cpu_hotplug_lock" for writing, multiple CPU offlines are serialized, so the scenario I described above does not exist.
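
For reference, that serialization comes from the hotplug core; a minimal
sketch of the relevant locking, assuming the v5.8-era kernel/cpu.c layout
(abridged, not verbatim source):

	/*
	 * Minimal sketch of why CPU offlines are serial; abridged from
	 * the kernel/cpu.c hotplug path, not verbatim source.
	 */
	void cpus_write_lock(void)
	{
		/* Exclusive (writer) side of the global hotplug lock. */
		percpu_down_write(&cpu_hotplug_lock);
	}

	static int _cpu_down(unsigned int cpu, int tasks_frozen,
			     enum cpuhp_state target)
	{
		cpus_write_lock();
		/*
		 * Teardown callbacks, including slab_dead_cpu(), run
		 * here, so two offlining CPUs can never execute
		 * cpuup_canceled() concurrently as the diagram assumed.
		 */
		cpus_write_unlock();
		return 0;
	}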


end of thread, other threads:[~2020-07-31  8:10 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-30 10:19 [PATCH v3] mm/slab.c: add node spinlock protect in __cache_free_alien qiang.zhang
2020-07-30 23:45 ` David Rientjes
2020-07-31  1:27   ` Re: " Zhang, Qiang
2020-07-31  8:10     ` Zhang, Qiang
