* [PATCH] [0/4] SLAB: Fix a couple of slab memory hotadd issues
@ 2010-02-03 21:39 ` Andi Kleen
  0 siblings, 0 replies; 46+ messages in thread
From: Andi Kleen @ 2010-02-03 21:39 UTC (permalink / raw)
  To: submit, linux-kernel, haicheng.li, penberg, linux-mm


This fixes various problems in slab found during memory hotadd testing.

All straightforward bug fixes, so they could still be .33 candidates.

-Andi


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [PATCH] [1/4] SLAB: Handle node-not-up case in fallback_alloc()
  2010-02-03 21:39 ` Andi Kleen
@ 2010-02-03 21:39   ` Andi Kleen
  -1 siblings, 0 replies; 46+ messages in thread
From: Andi Kleen @ 2010-02-03 21:39 UTC (permalink / raw)
  To: submit, linux-kernel, haicheng.li, penberg, linux-mm


When fallback_alloc() runs, the node of the CPU might not be initialized yet.
Handle this case by allocating from another node.

Signed-off-by: Andi Kleen <ak@linux.intel.com>

---
 mm/slab.c |   19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

Index: linux-2.6.33-rc3-ak/mm/slab.c
===================================================================
--- linux-2.6.33-rc3-ak.orig/mm/slab.c
+++ linux-2.6.33-rc3-ak/mm/slab.c
@@ -3210,7 +3210,24 @@ retry:
 		if (local_flags & __GFP_WAIT)
 			local_irq_enable();
 		kmem_flagcheck(cache, flags);
-		obj = kmem_getpages(cache, local_flags, numa_node_id());
+
+		/*
+		 * Node not set up yet? Try one that the cache has been set up
+		 * for.
+		 */
+		nid = numa_node_id();
+		if (cache->nodelists[nid] == NULL) {
+			for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
+				nid = zone_to_nid(zone);
+				if (cache->nodelists[nid])
+					break;
+			}
+			if (!cache->nodelists[nid])
+				return NULL;
+		}
+
+
+		obj = kmem_getpages(cache, local_flags, nid);
 		if (local_flags & __GFP_WAIT)
 			local_irq_disable();
 		if (obj) {

^ permalink raw reply	[flat|nested] 46+ messages in thread

* [PATCH] [2/4] SLAB: Set up the l3 lists for freshly added memory
  2010-02-03 21:39 ` Andi Kleen
@ 2010-02-03 21:39   ` Andi Kleen
  -1 siblings, 0 replies; 46+ messages in thread
From: Andi Kleen @ 2010-02-03 21:39 UTC (permalink / raw)
  To: submit, linux-kernel, haicheng.li, penberg, linux-mm


So kmalloc_node() works even if no CPU is up yet on the new node.

Requires previous refactor patch.

Signed-off-by: Andi Kleen <ak@linux.intel.com>

---
 mm/slab.c |   17 +++++++++++++++++
 1 file changed, 17 insertions(+)

Index: linux-2.6.33-rc3-ak/mm/slab.c
===================================================================
--- linux-2.6.33-rc3-ak.orig/mm/slab.c
+++ linux-2.6.33-rc3-ak/mm/slab.c
@@ -115,6 +115,7 @@
 #include	<linux/reciprocal_div.h>
 #include	<linux/debugobjects.h>
 #include	<linux/kmemcheck.h>
+#include	<linux/memory.h>
 
 #include	<asm/cacheflush.h>
 #include	<asm/tlbflush.h>
@@ -1560,6 +1561,20 @@ void __init kmem_cache_init(void)
 	g_cpucache_up = EARLY;
 }
 
+static int slab_memory_callback(struct notifier_block *self,
+				unsigned long action, void *arg)
+{
+	struct memory_notify *mn = (struct memory_notify *)arg;
+
+	/*
+	 * When a node goes online allocate l3s early.	 This way
+	 * kmalloc_node() works for it.
+	 */
+	if (action == MEM_ONLINE && mn->status_change_nid >= 0)
+		slab_node_prepare(mn->status_change_nid);
+	return NOTIFY_OK;
+}
+
 void __init kmem_cache_init_late(void)
 {
 	struct kmem_cache *cachep;
@@ -1583,6 +1598,8 @@ void __init kmem_cache_init_late(void)
 	 */
 	register_cpu_notifier(&cpucache_notifier);
 
+	hotplug_memory_notifier(slab_memory_callback, SLAB_CALLBACK_PRI);
+
 	/*
 	 * The reap timers are started later, with a module init call: That part
 	 * of the kernel is not yet operational.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* [PATCH] [3/4] SLAB: Separate node initialization into its own function
  2010-02-03 21:39 ` Andi Kleen
@ 2010-02-03 21:39   ` Andi Kleen
  -1 siblings, 0 replies; 46+ messages in thread
From: Andi Kleen @ 2010-02-03 21:39 UTC (permalink / raw)
  To: submit, linux-kernel, haicheng.li, penberg, linux-mm


No functional changes.

Needed for next patch.

Signed-off-by: Andi Kleen <ak@linux.intel.com>

---
 mm/slab.c |   34 +++++++++++++++++++++-------------
 1 file changed, 21 insertions(+), 13 deletions(-)

Index: linux-2.6.33-rc3-ak/mm/slab.c
===================================================================
--- linux-2.6.33-rc3-ak.orig/mm/slab.c
+++ linux-2.6.33-rc3-ak/mm/slab.c
@@ -1171,19 +1171,9 @@ free_array_cache:
 	}
 }
 
-static int __cpuinit cpuup_prepare(long cpu)
+static int slab_node_prepare(int node)
 {
 	struct kmem_cache *cachep;
-	struct kmem_list3 *l3 = NULL;
-	int node = cpu_to_node(cpu);
-	const int memsize = sizeof(struct kmem_list3);
-
-	/*
-	 * We need to do this right in the beginning since
-	 * alloc_arraycache's are going to use this list.
-	 * kmalloc_node allows us to add the slab to the right
-	 * kmem_list3 and not this cpu's kmem_list3
-	 */
 
 	list_for_each_entry(cachep, &cache_chain, next) {
 		/*
@@ -1192,9 +1182,10 @@ static int __cpuinit cpuup_prepare(long
 		 * node has not already allocated this
 		 */
 		if (!cachep->nodelists[node]) {
-			l3 = kmalloc_node(memsize, GFP_KERNEL, node);
+			struct kmem_list3 *l3;
+			l3 = kmalloc_node(sizeof(struct kmem_list3), GFP_KERNEL, node);
 			if (!l3)
-				goto bad;
+				return -1;
 			kmem_list3_init(l3);
 			l3->next_reap = jiffies + REAPTIMEOUT_LIST3 +
 			    ((unsigned long)cachep) % REAPTIMEOUT_LIST3;
@@ -1213,6 +1204,23 @@ static int __cpuinit cpuup_prepare(long
 			cachep->batchcount + cachep->num;
 		spin_unlock_irq(&cachep->nodelists[node]->list_lock);
 	}
+	return 0;
+}
+
+static int __cpuinit cpuup_prepare(long cpu)
+{
+	struct kmem_cache *cachep;
+	struct kmem_list3 *l3 = NULL;
+	int node = cpu_to_node(cpu);
+
+	/*
+	 * We need to do this right in the beginning since
+	 * alloc_arraycache's are going to use this list.
+	 * kmalloc_node allows us to add the slab to the right
+	 * kmem_list3 and not this cpu's kmem_list3
+	 */
+	if (slab_node_prepare(node) < 0)
+		goto bad;
 
 	/*
 	 * Now we can go ahead with allocating the shared arrays and

^ permalink raw reply	[flat|nested] 46+ messages in thread

* [PATCH] [4/4] SLAB: Fix node add timer race in cache_reap
  2010-02-03 21:39 ` Andi Kleen
@ 2010-02-03 21:39   ` Andi Kleen
  -1 siblings, 0 replies; 46+ messages in thread
From: Andi Kleen @ 2010-02-03 21:39 UTC (permalink / raw)
  To: submit, linux-kernel, haicheng.li, penberg, linux-mm


cache_reap() can run before the node is set up and then reference a NULL
l3 list. Check for this explicitly and bail out; the node
will eventually be set up.

Signed-off-by: Andi Kleen <ak@linux.intel.com>

---
 mm/slab.c |    3 +++
 1 file changed, 3 insertions(+)

Index: linux-2.6.33-rc3-ak/mm/slab.c
===================================================================
--- linux-2.6.33-rc3-ak.orig/mm/slab.c
+++ linux-2.6.33-rc3-ak/mm/slab.c
@@ -4112,6 +4112,9 @@ static void cache_reap(struct work_struc
 		 * we can do some work if the lock was obtained.
 		 */
 		l3 = searchp->nodelists[node];
+		/* Node not yet set up */
+		if (!l3)
+			break;
 
 		reap_alien(searchp, l3);
 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [0/4] SLAB: Fix a couple of slab memory hotadd issues
  2010-02-03 21:39 ` Andi Kleen
@ 2010-02-05  8:27   ` Pekka Enberg
  -1 siblings, 0 replies; 46+ messages in thread
From: Pekka Enberg @ 2010-02-05  8:27 UTC (permalink / raw)
  To: Andi Kleen
  Cc: submit, linux-kernel, haicheng.li, linux-mm, Christoph Lameter,
	Nick Piggin

Hi Andi,

On Wed, Feb 3, 2010 at 11:39 PM, Andi Kleen <andi@firstfloor.org> wrote:
> This fixes various problems in slab found during memory hotadd testing.
>
> All straightforward bug fixes, so they could still be .33 candidates.

AFAICT, they aren't regression fixes, so they should wait for .34, no?
The patches look good to me. Nick, Christoph, you might want to take a
peek at them before I shove them in linux-next.

                        Pekka

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [2/4] SLAB: Set up the l3 lists for freshly added memory
  2010-02-03 21:39   ` Andi Kleen
@ 2010-02-05 19:12     ` Christoph Lameter
  -1 siblings, 0 replies; 46+ messages in thread
From: Christoph Lameter @ 2010-02-05 19:12 UTC (permalink / raw)
  To: Andi Kleen; +Cc: submit, linux-kernel, haicheng.li, penberg, linux-mm

On Wed, 3 Feb 2010, Andi Kleen wrote:

> Requires previous refactor patch.

Not in this series?

> +	if (action == MEM_ONLINE && mn->status_change_nid >= 0)
> +		slab_node_prepare(mn->status_change_nid);
> +	return NOTIFY_OK;

Never seen a slab_node_prepare function before.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [3/4] SLAB: Separate node initialization into its own function
  2010-02-03 21:39   ` Andi Kleen
@ 2010-02-05 19:15     ` Christoph Lameter
  -1 siblings, 0 replies; 46+ messages in thread
From: Christoph Lameter @ 2010-02-05 19:15 UTC (permalink / raw)
  To: Andi Kleen; +Cc: submit, linux-kernel, haicheng.li, penberg, linux-mm


Out of sequence. Last patch already uses a function introduced by this
one.

Acked-by: Christoph Lameter <cl@linux-foundation.org>



^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [4/4] SLAB: Fix node add timer race in cache_reap
  2010-02-03 21:39   ` Andi Kleen
@ 2010-02-05 19:16     ` Christoph Lameter
  -1 siblings, 0 replies; 46+ messages in thread
From: Christoph Lameter @ 2010-02-05 19:16 UTC (permalink / raw)
  To: Andi Kleen; +Cc: submit, linux-kernel, haicheng.li, penberg, linux-mm



Acked-by: Christoph Lameter <cl@linux-foundation.org>


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [0/4] SLAB: Fix a couple of slab memory hotadd issues
  2010-02-03 21:39 ` Andi Kleen
@ 2010-02-05 19:19   ` Christoph Lameter
  -1 siblings, 0 replies; 46+ messages in thread
From: Christoph Lameter @ 2010-02-05 19:19 UTC (permalink / raw)
  To: Andi Kleen; +Cc: submit, linux-kernel, haicheng.li, penberg, linux-mm

On Wed, 3 Feb 2010, Andi Kleen wrote:

> This fixes various problems in slab found during memory hotadd testing.

It changes the bootstrap semantics. The requirement so far was that slab
initialization must be complete before slab operations can be used.

This patchset allows such use before bootstrap on a node is complete and
also allows the running of cache reaper before bootstrap is done.

I have a bad feeling that this could be the result of Pekka's changes to
the bootstrap.



^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [0/4] SLAB: Fix a couple of slab memory hotadd issues
  2010-02-05 19:19   ` Christoph Lameter
@ 2010-02-05 20:22     ` Andi Kleen
  -1 siblings, 0 replies; 46+ messages in thread
From: Andi Kleen @ 2010-02-05 20:22 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: linux-kernel, haicheng.li, penberg, linux-mm

Christoph Lameter <cl@linux-foundation.org> writes:

> On Wed, 3 Feb 2010, Andi Kleen wrote:
>
>> This fixes various problems in slab found during memory hotadd testing.
>
> It changes the bootstrap semantics. The requirement so far was that slab
> initialization must be complete before slab operations can be used.

The problem is that slab itself uses slab to initialize itself.

> This patchset allows such use before bootstrap on a node is complete and
> also allows the running of cache reaper before bootstrap is done.
>
> I have a bad feeling that this could be the result of Pekka's changes to
> the bootstrap.

Not sure I fully follow what you're saying.

Are you saying this is a regression fix after all?

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [0/4] SLAB: Fix a couple of slab memory hotadd issues
  2010-02-05 20:22     ` Andi Kleen
@ 2010-02-05 20:55       ` Christoph Lameter
  -1 siblings, 0 replies; 46+ messages in thread
From: Christoph Lameter @ 2010-02-05 20:55 UTC (permalink / raw)
  To: Andi Kleen; +Cc: linux-kernel, haicheng.li, penberg, linux-mm

On Fri, 5 Feb 2010, Andi Kleen wrote:

> > It changes the bootstrap semantics. The requirement so far was that slab
> > initialization must be complete before slab operations can be used.
>
> The problem is that slab itself uses slab to initialize itself.

slab uses itself and also the page allocator to bootstrap itself. The
sequence was always a bit fragile. The page allocator also needs to use
the slab allocator in turn to bootstrap itself.

> > This patchset allows such use before bootstrap on a node is complete and
> > also allows the running of cache reaper before bootstrap is done.
> >
> > I have a bad feeling that this could be the result of Pekka's changes to
> > the bootstrap.
>
> Not sure I fully follow what you're saying.
>
> Are you saying this is a regression fix after all?

I am saying that we may have more trouble lurking here.

If we fix it this way then the bootstrap of a node is different from
system bootstrap, which so far does not need these special measures.


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [1/4] SLAB: Handle node-not-up case in fallback_alloc()
  2010-02-03 21:39   ` Andi Kleen
@ 2010-02-05 21:06     ` David Rientjes
  -1 siblings, 0 replies; 46+ messages in thread
From: David Rientjes @ 2010-02-05 21:06 UTC (permalink / raw)
  To: Andi Kleen; +Cc: submit, linux-kernel, haicheng.li, Pekka Enberg, linux-mm

On Wed, 3 Feb 2010, Andi Kleen wrote:

> When fallback_alloc() runs, the node of the CPU might not be initialized yet.
> Handle this case by allocating from another node.
> 

That other node must be allowed by current's cpuset, otherwise 
kmem_getpages() will fail when get_page_from_freelist() iterates only over 
unallowed nodes.

> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> 
> ---
>  mm/slab.c |   19 ++++++++++++++++++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
> 
> Index: linux-2.6.33-rc3-ak/mm/slab.c
> ===================================================================
> --- linux-2.6.33-rc3-ak.orig/mm/slab.c
> +++ linux-2.6.33-rc3-ak/mm/slab.c
> @@ -3210,7 +3210,24 @@ retry:
>  		if (local_flags & __GFP_WAIT)
>  			local_irq_enable();
>  		kmem_flagcheck(cache, flags);
> -		obj = kmem_getpages(cache, local_flags, numa_node_id());
> +
> +		/*
> +		 * Node not set up yet? Try one that the cache has been set up
> +		 * for.
> +		 */
> +		nid = numa_node_id();
> +		if (cache->nodelists[nid] == NULL) {
> +			for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
> +				nid = zone_to_nid(zone);
> +				if (cache->nodelists[nid])
> +					break;

If you set a bit in a nodemask_t every time ____cache_alloc_node() fails in 
the previous for_each_zone_zonelist() iteration, you could just iterate 
that nodemask here without duplicating the zone_to_nid() and 
cache->nodelists[nid] != NULL check.

	nid = numa_node_id();
	if (!cache->nodelists[nid])
		for_each_node_mask(nid, allowed_nodes) {
			obj = kmem_getpages(cache, local_flags, nid);
			if (obj)
				break;
		}
	else
		obj = kmem_getpages(cache, local_flags, nid);

This way you can try all allowed nodes for memory instead of just one when 
cache->nodelists[numa_node_id()] == NULL.

> +			}
> +			if (!cache->nodelists[nid])
> +				return NULL;
> +		}
> +
> +
> +		obj = kmem_getpages(cache, local_flags, nid);
>  		if (local_flags & __GFP_WAIT)
>  			local_irq_disable();
>  		if (obj) {

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [2/4] SLAB: Set up the l3 lists for freshly added memory
  2010-02-03 21:39   ` Andi Kleen
@ 2010-02-05 21:17     ` David Rientjes
  -1 siblings, 0 replies; 46+ messages in thread
From: David Rientjes @ 2010-02-05 21:17 UTC (permalink / raw)
  To: Andi Kleen; +Cc: submit, linux-kernel, haicheng.li, penberg, linux-mm

On Wed, 3 Feb 2010, Andi Kleen wrote:

> Index: linux-2.6.33-rc3-ak/mm/slab.c
> ===================================================================
> --- linux-2.6.33-rc3-ak.orig/mm/slab.c
> +++ linux-2.6.33-rc3-ak/mm/slab.c
> @@ -115,6 +115,7 @@
>  #include	<linux/reciprocal_div.h>
>  #include	<linux/debugobjects.h>
>  #include	<linux/kmemcheck.h>
> +#include	<linux/memory.h>
>  
>  #include	<asm/cacheflush.h>
>  #include	<asm/tlbflush.h>
> @@ -1560,6 +1561,20 @@ void __init kmem_cache_init(void)
>  	g_cpucache_up = EARLY;
>  }
>  
> +static int slab_memory_callback(struct notifier_block *self,
> +				unsigned long action, void *arg)
> +{
> +	struct memory_notify *mn = (struct memory_notify *)arg;

No cast necessary.

> +
> +	/*
> +	 * When a node goes online allocate l3s early.	 This way
> +	 * kmalloc_node() works for it.
> +	 */
> +	if (action == MEM_ONLINE && mn->status_change_nid >= 0)
> +		slab_node_prepare(mn->status_change_nid);
> +	return NOTIFY_OK;
> +}
> +
>  void __init kmem_cache_init_late(void)
>  {
>  	struct kmem_cache *cachep;
> @@ -1583,6 +1598,8 @@ void __init kmem_cache_init_late(void)
>  	 */
>  	register_cpu_notifier(&cpucache_notifier);
>  
> +	hotplug_memory_notifier(slab_memory_callback, SLAB_CALLBACK_PRI);

Only needed for CONFIG_NUMA.
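
For example (just a sketch; whether a bare #ifdef or a combined block is
preferable is a separate question):

	#ifdef CONFIG_NUMA
		hotplug_memory_notifier(slab_memory_callback, SLAB_CALLBACK_PRI);
	#endif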

> +
>  	/*
>  	 * The reap timers are started later, with a module init call: That part
>  	 * of the kernel is not yet operational.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [3/4] SLAB: Separate node initialization into its own function
  2010-02-03 21:39   ` Andi Kleen
@ 2010-02-05 21:29     ` David Rientjes
  -1 siblings, 0 replies; 46+ messages in thread
From: David Rientjes @ 2010-02-05 21:29 UTC (permalink / raw)
  To: Andi Kleen; +Cc: submit, linux-kernel, haicheng.li, Pekka Enberg, linux-mm

On Wed, 3 Feb 2010, Andi Kleen wrote:

> 
> No functional changes.
> 
> Needed for next patch.
> 
> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> 
> ---
>  mm/slab.c |   34 +++++++++++++++++++++-------------
>  1 file changed, 21 insertions(+), 13 deletions(-)
> 
> Index: linux-2.6.33-rc3-ak/mm/slab.c
> ===================================================================
> --- linux-2.6.33-rc3-ak.orig/mm/slab.c
> +++ linux-2.6.33-rc3-ak/mm/slab.c
> @@ -1171,19 +1171,9 @@ free_array_cache:
>  	}
>  }
>  
> -static int __cpuinit cpuup_prepare(long cpu)
> +static int slab_node_prepare(int node)
>  {
>  	struct kmem_cache *cachep;
> -	struct kmem_list3 *l3 = NULL;
> -	int node = cpu_to_node(cpu);
> -	const int memsize = sizeof(struct kmem_list3);
> -
> -	/*
> -	 * We need to do this right in the beginning since
> -	 * alloc_arraycache's are going to use this list.
> -	 * kmalloc_node allows us to add the slab to the right
> -	 * kmem_list3 and not this cpu's kmem_list3
> -	 */
>  
>  	list_for_each_entry(cachep, &cache_chain, next) {
>  		/*

As Christoph mentioned, this patch is out of order with the previous one 
in the series; slab_node_prepare() is called in that previous patch by a 
memory hotplug callback without holding cache_chain_mutex (it's taken by 
the cpu hotplug callback prior to calling cpuup_prepare() currently).  So 
slab_node_prepare() should note that we require the mutex and the memory 
hotplug callback should take it in the previous patch.
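
A minimal sketch of that, reusing only the names the two patches already
introduce (slab_memory_callback(), slab_node_prepare(), cache_chain_mutex):

	static int slab_memory_callback(struct notifier_block *self,
					unsigned long action, void *arg)
	{
		struct memory_notify *mn = arg;
		int ret = 0;

		if (action == MEM_ONLINE && mn->status_change_nid >= 0) {
			/* slab_node_prepare() walks cache_chain */
			mutex_lock(&cache_chain_mutex);
			ret = slab_node_prepare(mn->status_change_nid);
			mutex_unlock(&cache_chain_mutex);
		}
		return ret < 0 ? notifier_from_errno(-ENOMEM) : NOTIFY_OK;
	}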

> @@ -1192,9 +1182,10 @@ static int __cpuinit cpuup_prepare(long
>  		 * node has not already allocated this
>  		 */
>  		if (!cachep->nodelists[node]) {
> -			l3 = kmalloc_node(memsize, GFP_KERNEL, node);
> +			struct kmem_list3 *l3;
> +			l3 = kmalloc_node(sizeof(struct kmem_list3), GFP_KERNEL, node);
>  			if (!l3)
> -				goto bad;
> +				return -1;
>  			kmem_list3_init(l3);
>  			l3->next_reap = jiffies + REAPTIMEOUT_LIST3 +
>  			    ((unsigned long)cachep) % REAPTIMEOUT_LIST3;
> @@ -1213,6 +1204,23 @@ static int __cpuinit cpuup_prepare(long
>  			cachep->batchcount + cachep->num;
>  		spin_unlock_irq(&cachep->nodelists[node]->list_lock);
>  	}
> +	return 0;
> +}
> +
> +static int __cpuinit cpuup_prepare(long cpu)
> +{
> +	struct kmem_cache *cachep;
> +	struct kmem_list3 *l3 = NULL;
> +	int node = cpu_to_node(cpu);
> +
> +	/*
> +	 * We need to do this right in the beginning since
> +	 * alloc_arraycache's are going to use this list.
> +	 * kmalloc_node allows us to add the slab to the right
> +	 * kmem_list3 and not this cpu's kmem_list3
> +	 */
> +	if (slab_node_prepare(node) < 0)
> +		goto bad;
>  
>  	/*
>  	 * Now we can go ahead with allocating the shared arrays and

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [1/4] SLAB: Handle node-not-up case in fallback_alloc()
  2010-02-05 21:06     ` David Rientjes
@ 2010-02-06  7:25       ` Andi Kleen
  -1 siblings, 0 replies; 46+ messages in thread
From: Andi Kleen @ 2010-02-06  7:25 UTC (permalink / raw)
  To: David Rientjes
  Cc: Andi Kleen, submit, linux-kernel, haicheng.li, Pekka Enberg, linux-mm

On Fri, Feb 05, 2010 at 01:06:56PM -0800, David Rientjes wrote:
> On Wed, 3 Feb 2010, Andi Kleen wrote:
> 
> > When fallback_alloc() runs, the node of the CPU might not be initialized yet.
> > Handle this case by allocating from another node.
> > 
> 
> That other node must be allowed by current's cpuset, otherwise 
> kmem_getpages() will fail when get_page_from_freelist() iterates only over 
> unallowed nodes.

All these cases are really only interesting in the memory hotplug path
itself (afterwards the slab is working anyway and memory is there)
and if someone sets funny cpusets for those he gets what he deserves ...

-Andi

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [2/4] SLAB: Set up the l3 lists for freshly added memory
  2010-02-05 21:17     ` David Rientjes
@ 2010-02-06  7:26       ` Andi Kleen
  -1 siblings, 0 replies; 46+ messages in thread
From: Andi Kleen @ 2010-02-06  7:26 UTC (permalink / raw)
  To: David Rientjes
  Cc: Andi Kleen, submit, linux-kernel, haicheng.li, penberg, linux-mm

On Fri, Feb 05, 2010 at 01:17:56PM -0800, David Rientjes wrote:
> > +static int slab_memory_callback(struct notifier_block *self,
> > +				unsigned long action, void *arg)
> > +{
> > +	struct memory_notify *mn = (struct memory_notify *)arg;
> 
> No cast necessary.

It's standard practice to cast void *.
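
(Both spellings compile, of course; C converts a void * implicitly, so

	struct memory_notify *mn = arg;

is equivalent to the cast version. The question is just house style.)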

> >  void __init kmem_cache_init_late(void)
> >  {
> >  	struct kmem_cache *cachep;
> > @@ -1583,6 +1598,8 @@ void __init kmem_cache_init_late(void)
> >  	 */
> >  	register_cpu_notifier(&cpucache_notifier);
> >  
> > +	hotplug_memory_notifier(slab_memory_callback, SLAB_CALLBACK_PRI);
> 
> Only needed for CONFIG_NUMA.

Ok.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [3/4] SLAB: Separate node initialization into its own function
  2010-02-05 21:29     ` David Rientjes
@ 2010-02-06  7:27       ` Andi Kleen
  -1 siblings, 0 replies; 46+ messages in thread
From: Andi Kleen @ 2010-02-06  7:27 UTC (permalink / raw)
  To: David Rientjes
  Cc: Andi Kleen, submit, linux-kernel, haicheng.li, Pekka Enberg, linux-mm

> As Christoph mentioned, this patch is out of order with the previous one 

Ok.

> in the series; slab_node_prepare() is called in that previous patch by a 
> memory hotplug callback without holding cache_chain_mutex (it's taken by 
> the cpu hotplug callback prior to calling cpuup_prepare() currently).  So 
> slab_node_prepare() should note that we require the mutex and the memory 
> hotplug callback should take it in the previous patch.

AFAIK the code is correct. If you feel the need for additional
documentation, feel free to send patches yourself.

-Andi

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [2/4] SLAB: Set up the l3 lists for freshly added memory
  2010-02-06  7:26       ` Andi Kleen
@ 2010-02-06  9:47         ` David Rientjes
  -1 siblings, 0 replies; 46+ messages in thread
From: David Rientjes @ 2010-02-06  9:47 UTC (permalink / raw)
  To: Andi Kleen; +Cc: submit, linux-kernel, haicheng.li, Pekka Enberg, linux-mm

On Sat, 6 Feb 2010, Andi Kleen wrote:

> > > +static int slab_memory_callback(struct notifier_block *self,
> > > +				unsigned long action, void *arg)
> > > +{
> > > +	struct memory_notify *mn = (struct memory_notify *)arg;
> > 
> > No cast necessary.
> 
> It's standard practice to cast void *.
> 

$ grep -r "struct memory_notify.*=" *
arch/powerpc/platforms/pseries/cmm.c:	struct memory_notify *marg = arg;
drivers/base/node.c:	struct memory_notify *mnb = arg;
drivers/net/ehea/ehea_main.c:	struct memory_notify *arg = data;
mm/ksm.c:	struct memory_notify *mn = arg;
mm/slub.c:	struct memory_notify *marg = arg;
mm/slub.c:	struct memory_notify *marg = arg;
mm/page_cgroup.c:	struct memory_notify *mn = arg;

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [1/4] SLAB: Handle node-not-up case in fallback_alloc()
  2010-02-06  7:25       ` Andi Kleen
@ 2010-02-06  9:53         ` David Rientjes
  -1 siblings, 0 replies; 46+ messages in thread
From: David Rientjes @ 2010-02-06  9:53 UTC (permalink / raw)
  To: Andi Kleen; +Cc: submit, linux-kernel, haicheng.li, Pekka Enberg, linux-mm

On Sat, 6 Feb 2010, Andi Kleen wrote:

> > That other node must be allowed by current's cpuset, otherwise 
> > kmem_getpages() will fail when get_page_from_freelist() iterates only over 
> > unallowed nodes.
> 
> All these cases are really only interesting in the memory hotplug path
> itself (afterwards the slab is working anyway and memory is there)
> and if someone sets funny cpusets for those he gets what he deserves ...
> 

If a hot-added node has not been initialized for the cache, your code is 
picking an existing one in zonelist order, which may be excluded by 
current's cpuset.  Thus, your code has a very real chance of having 
kmem_getpages() return NULL, because get_page_from_freelist() will reject 
non-atomic ALLOC_CPUSET allocations for prohibited nodes.  That isn't a 
scenario that requires a "funny cpuset"; it just has to not allow whatever 
initialized node comes first in the zonelist.

My suggested alternative does not pick a single initialized node; rather, 
it tries all nodes that actually have a chance of having kmem_getpages() 
succeed, which increases the probability that your patch actually has an 
effect for cpuset users.
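
Roughly, as a sketch against the existing fallback_alloc() loop (only the
nodemask bookkeeping and the else-branch are new; the rest is the code
that is already there):

	nodemask_t allowed_nodes = NODE_MASK_NONE;

	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
		nid = zone_to_nid(zone);
		if (cpuset_zone_allowed_hardwall(zone, flags) &&
		    cache->nodelists[nid] &&
		    cache->nodelists[nid]->free_objects) {
			obj = ____cache_alloc_node(cache,
						flags | GFP_THISNODE, nid);
			if (obj)
				break;
			/* allowed node, but allocation failed: remember it */
			node_set(nid, allowed_nodes);
		}
	}

	if (!obj) {
		nid = numa_node_id();
		if (cache->nodelists[nid]) {
			obj = kmem_getpages(cache, local_flags, nid);
		} else {
			/* node not set up: try any allowed node instead */
			for_each_node_mask(nid, allowed_nodes) {
				obj = kmem_getpages(cache, local_flags, nid);
				if (obj)
					break;
			}
		}
	}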

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [3/4] SLAB: Separate node initialization into its own function
  2010-02-06  7:27       ` Andi Kleen
@ 2010-02-06  9:55         ` David Rientjes
  -1 siblings, 0 replies; 46+ messages in thread
From: David Rientjes @ 2010-02-06  9:55 UTC (permalink / raw)
  To: Andi Kleen; +Cc: submit, linux-kernel, haicheng.li, Pekka Enberg, linux-mm

On Sat, 6 Feb 2010, Andi Kleen wrote:

> > in the series; slab_node_prepare() is called in that previous patch by a 
> > memory hotplug callback without holding cache_chain_mutex (it's taken by 
> > the cpu hotplug callback prior to calling cpuup_prepare() currently).  So 
> > slab_node_prepare() should note that we require the mutex and the memory 
> > hotplug callback should take it in the previous patch.
> 
> AFAIK the code is correct. If you feel the need for additional
> documentation, feel free to send patches yourself.
> 

Documentation?  You're required to take cache_chain_mutex before calling 
slab_node_prepare() in your memory hotplug notifier, since it iterates 
cache_chain.  Please look again.
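
Concretely, the expected call pattern is something like the following 
sketch (illustrative only -- the wrapper name is made up, and I'm assuming 
slab_node_prepare() takes the node id and returns an errno-style int; 
cache_chain_mutex and slab_node_prepare() are the names from this series):

	/*
	 * Illustrative sketch, not code from the series: the memory
	 * hotplug callback must hold cache_chain_mutex across
	 * slab_node_prepare(), because that function walks cache_chain.
	 * The wrapper name below is hypothetical.
	 */
	static int slab_mem_prepare(int nid)
	{
		int ret;

		mutex_lock(&cache_chain_mutex);
		ret = slab_node_prepare(nid);
		mutex_unlock(&cache_chain_mutex);
		return ret;
	}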

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [1/4] SLAB: Handle node-not-up case in fallback_alloc()
  2010-02-06  9:53         ` David Rientjes
@ 2010-02-06 15:56           ` Andi Kleen
  -1 siblings, 0 replies; 46+ messages in thread
From: Andi Kleen @ 2010-02-06 15:56 UTC (permalink / raw)
  To: David Rientjes
  Cc: Andi Kleen, submit, linux-kernel, haicheng.li, Pekka Enberg, linux-mm

> If a hot-added node has not been initialized for the cache, your code is 
> picking an existing one in zonelist order, which may be excluded by 
> current's cpuset.  Thus, your code has a very real chance of having 
> kmem_getpages() return NULL because get_page_from_freelist() will reject 
> non-atomic ALLOC_CPUSET allocations for prohibited nodes.  That isn't a 
> scenario that requires a "funny cpuset"; it just has to not allow whatever 
> initialized node comes first in the zonelist.

The point was that you would need to run whoever triggers the memory
hotadd in a cpuset with limitations.  That would be a clear
"don't do that if it hurts"(tm).
 
> My suggested alternative does not pick a single initialized node; rather, 
> it tries all nodes that actually have a chance of having kmem_getpages() 
> succeed, which increases the probability that your patch actually has an 
> effect for cpuset users.

cpuset users are unlikely to trigger memory hotadds from inside limiting
cpusets. Typically that's done from udev etc.

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH] [1/4] SLAB: Handle node-not-up case in fallback_alloc()
  2010-02-06 15:56           ` Andi Kleen
@ 2010-02-06 22:31             ` David Rientjes
  -1 siblings, 0 replies; 46+ messages in thread
From: David Rientjes @ 2010-02-06 22:31 UTC (permalink / raw)
  To: Andi Kleen; +Cc: submit, linux-kernel, haicheng.li, Pekka Enberg, linux-mm

On Sat, 6 Feb 2010, Andi Kleen wrote:

> > If a hot-added node has not been initialized for the cache, your code is 
> > picking an existing one in zonelist order, which may be excluded by 
> > current's cpuset.  Thus, your code has a very real chance of having 
> > kmem_getpages() return NULL because get_page_from_freelist() will reject 
> > non-atomic ALLOC_CPUSET allocations for prohibited nodes.  That isn't a 
> > scenario that requires a "funny cpuset"; it just has to not allow whatever 
> > initialized node comes first in the zonelist.
> 
> The point was that you would need to run whoever triggers the memory
> hotadd in a cpuset with limitations.  That would be a clear
> "don't do that if it hurts"(tm).
>  

With a subset of memory nodes, yes.  What else prohibits that except for 
your new code?  

There's a second issue with this approach that I alluded to above: you're 
picking the first initialized node for the cache based solely on whether 
it is allocated or not.  kmem_getpages() may still return NULL when it 
would return a new slab for any other initialized node, so you're better 
off trying them all.

In other words, my earlier (untested) suggestion:

diff --git a/mm/slab.c b/mm/slab.c
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3172,6 +3172,7 @@ static void *fallback_alloc(struct kmem_cache *cache, gfp_t flags)
 	gfp_t local_flags;
 	struct zoneref *z;
 	struct zone *zone;
+	nodemask_t allowed_nodes = NODE_MASK_NONE;
 	enum zone_type high_zoneidx = gfp_zone(flags);
 	void *obj = NULL;
 	int nid;
@@ -3197,6 +3198,7 @@ retry:
 					flags | GFP_THISNODE, nid);
 				if (obj)
 					break;
+				node_set(nid, allowed_nodes);
 		}
 	}
 
@@ -3210,7 +3212,15 @@ retry:
 		if (local_flags & __GFP_WAIT)
 			local_irq_enable();
 		kmem_flagcheck(cache, flags);
-		obj = kmem_getpages(cache, local_flags, numa_node_id());
+		nid = numa_node_id();
+		if (cache->nodelists[nid])
+			obj = kmem_getpages(cache, local_flags, nid);
+		else
+			for_each_node_mask(nid, allowed_nodes) {
+				obj = kmem_getpages(cache, local_flags, nid);
+				if (obj)
+					break;
+			}
 		if (local_flags & __GFP_WAIT)
 			local_irq_disable();
 		if (obj) {

Anyway, I'll leave these otherwise unnecessary limitations to Pekka.  
Thanks.

^ permalink raw reply	[flat|nested] 46+ messages in thread

end of thread, other threads:[~2010-02-06 22:31 UTC | newest]

Thread overview: 23 messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-02-03 21:39 [PATCH] [0/4] SLAB: Fix a couple of slab memory hotadd issues Andi Kleen
2010-02-03 21:39 ` [PATCH] [1/4] SLAB: Handle node-not-up case in fallback_alloc() Andi Kleen
2010-02-05 21:06   ` David Rientjes
2010-02-06  7:25     ` Andi Kleen
2010-02-06  9:53       ` David Rientjes
2010-02-06 15:56         ` Andi Kleen
2010-02-06 22:31           ` David Rientjes
2010-02-03 21:39 ` [PATCH] [2/4] SLAB: Set up the l3 lists for the memory of freshly added memory Andi Kleen
2010-02-05 19:12   ` Christoph Lameter
2010-02-05 21:17   ` David Rientjes
2010-02-06  7:26     ` Andi Kleen
2010-02-06  9:47       ` David Rientjes
2010-02-03 21:39 ` [PATCH] [3/4] SLAB: Separate node initialization into separate function Andi Kleen
2010-02-05 19:15   ` Christoph Lameter
2010-02-05 21:29   ` David Rientjes
2010-02-06  7:27     ` Andi Kleen
2010-02-06  9:55       ` David Rientjes
2010-02-03 21:39 ` [PATCH] [4/4] SLAB: Fix node add timer race in cache_reap Andi Kleen
2010-02-05 19:16   ` Christoph Lameter
2010-02-05  8:27 ` [PATCH] [0/4] SLAB: Fix a couple of slab memory hotadd issues Pekka Enberg
2010-02-05 19:19 ` Christoph Lameter
2010-02-05 20:22   ` Andi Kleen
2010-02-05 20:55     ` Christoph Lameter
