* [PATCH 0/3] slab: common kmem_cache_cpu functions V2
@ 2014-06-11 19:15 Christoph Lameter
  2014-06-11 19:15 ` [PATCH 1/3] slab common: Add functions for kmem_cache_node access Christoph Lameter
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Christoph Lameter @ 2014-06-11 19:15 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: Joonsoo Kim, David Rientjes, linux-mm, Andrew Morton

V1->V2
- Add some comments
- Use the new functions in more places to simplify code

The patchset adds two common accessors to mm/slab.h, get_node() and the
for_each_kmem_cache_node() iterator, and converts SLAB and SLUB to use them.
The kmem_cache_node structure is shared between both allocators, and using
common accessors will allow us to move more code into slab_common.c in the
future.
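
As an illustration of the conversion pattern (this sketch is reconstructed
from the any_slab_objects() hunk in patch 2 and is not itself part of the
series), an open-coded node walk with its explicit NULL check:

static int any_slab_objects(struct kmem_cache *s)
{
	int node;

	for_each_online_node(node) {
		struct kmem_cache_node *n = get_node(s, node);

		if (!n)
			continue;

		if (atomic_long_read(&n->total_objects))
			return 1;
	}
	return 0;
}

collapses into the new iterator, which visits only nodes that have a
kmem_cache_node allocated:

static int any_slab_objects(struct kmem_cache *s)
{
	int node;
	struct kmem_cache_node *n;

	for_each_kmem_cache_node(s, node, n)
		if (atomic_long_read(&n->total_objects))
			return 1;

	return 0;
}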



* [PATCH 1/3] slab common: Add functions for kmem_cache_node access
  2014-06-11 19:15 [PATCH 0/3] slab: common kmem_cache_cpu functions V2 Christoph Lameter
@ 2014-06-11 19:15 ` Christoph Lameter
  2014-06-11 23:07   ` David Rientjes
                     ` (2 more replies)
  2014-06-11 19:15 ` [PATCH 2/3] slub: Use new node functions Christoph Lameter
  2014-06-11 19:15 ` [PATCH 3/3] slab: Use get_node() and kmem_cache_node() functions Christoph Lameter
  2 siblings, 3 replies; 14+ messages in thread
From: Christoph Lameter @ 2014-06-11 19:15 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: Joonsoo Kim, David Rientjes, linux-mm, Andrew Morton

[-- Attachment #1: common_node_functions --]
[-- Type: text/plain, Size: 1919 bytes --]

These functions allow us to eliminate repeatedly used code in both
SLAB and SLUB and also allow for the insertion of debugging code
that may be needed in the development process.

Signed-off-by: Christoph Lameter <cl@linux.com>

Index: linux/mm/slab.h
===================================================================
--- linux.orig/mm/slab.h	2014-06-10 14:18:11.506956436 -0500
+++ linux/mm/slab.h	2014-06-10 14:21:51.279893231 -0500
@@ -294,5 +294,18 @@ struct kmem_cache_node {
 
 };
 
+static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
+{
+	return s->node[node];
+}
+
+/*
+ * Iterator over all nodes. The body will be executed for each node that has
+ * a kmem_cache_node structure allocated (which is true for all online nodes)
+ */
+#define for_each_kmem_cache_node(__s, __node, __n) \
+	for (__node = 0; __n = get_node(__s, __node), __node < nr_node_ids; __node++) \
+		 if (__n)
+
 void *slab_next(struct seq_file *m, void *p, loff_t *pos);
 void slab_stop(struct seq_file *m, void *p);
Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c	2014-06-10 14:18:11.506956436 -0500
+++ linux/mm/slub.c	2014-06-10 14:19:58.000000000 -0500
@@ -233,11 +233,6 @@ static inline void stat(const struct kme
  * 			Core slab cache functions
  *******************************************************************/
 
-static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
-{
-	return s->node[node];
-}
-
 /* Verify that a pointer has an address that is valid within a slab page */
 static inline int check_valid_pointer(struct kmem_cache *s,
 				struct page *page, const void *object)
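
A note for readers unfamiliar with the iterator idiom above (an explanatory
aside, not additional patch content): the comma operator in the loop
condition reloads __n on every pass, and the trailing "if (__n)" takes the
caller-supplied statement as its body, so that body runs only for nodes
whose kmem_cache_node pointer is non-NULL. Roughly, a use of the macro
behaves like:

	/* for_each_kmem_cache_node(s, node, n) { body; } roughly expands to: */
	for (node = 0; node < nr_node_ids; node++) {
		n = get_node(s, node);
		if (n) {
			/* caller-supplied body */
		}
	}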


* [PATCH 2/3] slub: Use new node functions
  2014-06-11 19:15 [PATCH 0/3] slab: common kmem_cache_cpu functions V2 Christoph Lameter
  2014-06-11 19:15 ` [PATCH 1/3] slab common: Add functions for kmem_cache_node access Christoph Lameter
@ 2014-06-11 19:15 ` Christoph Lameter
  2014-06-11 23:12   ` David Rientjes
  2014-06-11 19:15 ` [PATCH 3/3] slab: Use get_node() and kmem_cache_node() functions Christoph Lameter
  2 siblings, 1 reply; 14+ messages in thread
From: Christoph Lameter @ 2014-06-11 19:15 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: Joonsoo Kim, David Rientjes, linux-mm, Andrew Morton

[-- Attachment #1: common_slub_node --]
[-- Type: text/plain, Size: 6344 bytes --]

Make use of the new node functions in mm/slab.h to
reduce code size and simplify the code.

Signed-off-by: Christoph Lameter <cl@linux.com>

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c	2014-06-10 13:49:22.154458193 -0500
+++ linux/mm/slub.c	2014-06-10 13:51:03.959192299 -0500
@@ -2157,6 +2157,7 @@ slab_out_of_memory(struct kmem_cache *s,
 	static DEFINE_RATELIMIT_STATE(slub_oom_rs, DEFAULT_RATELIMIT_INTERVAL,
 				      DEFAULT_RATELIMIT_BURST);
 	int node;
+	struct kmem_cache_node *n;
 
 	if ((gfpflags & __GFP_NOWARN) || !__ratelimit(&slub_oom_rs))
 		return;
@@ -2171,15 +2172,11 @@ slab_out_of_memory(struct kmem_cache *s,
 		pr_warn("  %s debugging increased min order, use slub_debug=O to disable.\n",
 			s->name);
 
-	for_each_online_node(node) {
-		struct kmem_cache_node *n = get_node(s, node);
+	for_each_kmem_cache_node(s, node, n) {
 		unsigned long nr_slabs;
 		unsigned long nr_objs;
 		unsigned long nr_free;
 
-		if (!n)
-			continue;
-
 		nr_free  = count_partial(n, count_free);
 		nr_slabs = node_nr_slabs(n);
 		nr_objs  = node_nr_objs(n);
@@ -2923,13 +2920,10 @@ static void early_kmem_cache_node_alloc(
 static void free_kmem_cache_nodes(struct kmem_cache *s)
 {
 	int node;
+	struct kmem_cache_node *n;
 
-	for_each_node_state(node, N_NORMAL_MEMORY) {
-		struct kmem_cache_node *n = s->node[node];
-
-		if (n)
-			kmem_cache_free(kmem_cache_node, n);
-
+	for_each_kmem_cache_node(s, node, n) {
+		kmem_cache_free(kmem_cache_node, n);
 		s->node[node] = NULL;
 	}
 }
@@ -3217,11 +3211,11 @@ static void free_partial(struct kmem_cac
 static inline int kmem_cache_close(struct kmem_cache *s)
 {
 	int node;
+	struct kmem_cache_node *n;
 
 	flush_all(s);
 	/* Attempt to free all objects */
-	for_each_node_state(node, N_NORMAL_MEMORY) {
-		struct kmem_cache_node *n = get_node(s, node);
+	for_each_kmem_cache_node(s, node, n) {
 
 		free_partial(s, n);
 		if (n->nr_partial || slabs_node(s, node))
@@ -3407,11 +3401,7 @@ int __kmem_cache_shrink(struct kmem_cach
 		return -ENOMEM;
 
 	flush_all(s);
-	for_each_node_state(node, N_NORMAL_MEMORY) {
-		n = get_node(s, node);
-
-		if (!n->nr_partial)
-			continue;
+	for_each_kmem_cache_node(s, node, n) {
 
 		for (i = 0; i < objects; i++)
 			INIT_LIST_HEAD(slabs_by_inuse + i);
@@ -3581,6 +3571,7 @@ static struct kmem_cache * __init bootst
 {
 	int node;
 	struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
+	struct kmem_cache_node *n;
 
 	memcpy(s, static_cache, kmem_cache->object_size);
 
@@ -3590,19 +3581,16 @@ static struct kmem_cache * __init bootst
 	 * IPIs around.
 	 */
 	__flush_cpu_slab(s, smp_processor_id());
-	for_each_node_state(node, N_NORMAL_MEMORY) {
-		struct kmem_cache_node *n = get_node(s, node);
+	for_each_kmem_cache_node(s, node, n) {
 		struct page *p;
 
-		if (n) {
-			list_for_each_entry(p, &n->partial, lru)
-				p->slab_cache = s;
+		list_for_each_entry(p, &n->partial, lru)
+			p->slab_cache = s;
 
 #ifdef CONFIG_SLUB_DEBUG
-			list_for_each_entry(p, &n->full, lru)
-				p->slab_cache = s;
+		list_for_each_entry(p, &n->full, lru)
+			p->slab_cache = s;
 #endif
-		}
 	}
 	list_add(&s->list, &slab_caches);
 	return s;
@@ -3955,16 +3943,14 @@ static long validate_slab_cache(struct k
 	unsigned long count = 0;
 	unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
 				sizeof(unsigned long), GFP_KERNEL);
+	struct kmem_cache_node *n;
 
 	if (!map)
 		return -ENOMEM;
 
 	flush_all(s);
-	for_each_node_state(node, N_NORMAL_MEMORY) {
-		struct kmem_cache_node *n = get_node(s, node);
-
+	for_each_kmem_cache_node(s, node, n)
 		count += validate_slab_node(s, n, map);
-	}
 	kfree(map);
 	return count;
 }
@@ -4118,6 +4104,7 @@ static int list_locations(struct kmem_ca
 	int node;
 	unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
 				     sizeof(unsigned long), GFP_KERNEL);
+	struct kmem_cache_node *n;
 
 	if (!map || !alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
 				     GFP_TEMPORARY)) {
@@ -4127,8 +4114,7 @@ static int list_locations(struct kmem_ca
 	/* Push back cpu slabs */
 	flush_all(s);
 
-	for_each_node_state(node, N_NORMAL_MEMORY) {
-		struct kmem_cache_node *n = get_node(s, node);
+	for_each_kmem_cache_node(s, node, n) {
 		unsigned long flags;
 		struct page *page;
 
@@ -4327,8 +4313,9 @@ static ssize_t show_slab_objects(struct
 	get_online_mems();
 #ifdef CONFIG_SLUB_DEBUG
 	if (flags & SO_ALL) {
-		for_each_node_state(node, N_NORMAL_MEMORY) {
-			struct kmem_cache_node *n = get_node(s, node);
+		struct kmem_cache_node *n;
+
+		for_each_kmem_cache_node(s, node, n) {
 
 			if (flags & SO_TOTAL)
 				x = atomic_long_read(&n->total_objects);
@@ -4344,8 +4331,9 @@ static ssize_t show_slab_objects(struct
 	} else
 #endif
 	if (flags & SO_PARTIAL) {
-		for_each_node_state(node, N_NORMAL_MEMORY) {
-			struct kmem_cache_node *n = get_node(s, node);
+		struct kmem_cache_node *n;
+
+		for_each_kmem_cache_node(s, node, n) {
 
 			if (flags & SO_TOTAL)
 				x = count_partial(n, count_total);
@@ -4359,7 +4347,7 @@ static ssize_t show_slab_objects(struct
 	}
 	x = sprintf(buf, "%lu", total);
 #ifdef CONFIG_NUMA
-	for_each_node_state(node, N_NORMAL_MEMORY)
+	for(node = 0; node < nr_node_ids; node++)
 		if (nodes[node])
 			x += sprintf(buf + x, " N%d=%lu",
 					node, nodes[node]);
@@ -4373,16 +4361,12 @@ static ssize_t show_slab_objects(struct
 static int any_slab_objects(struct kmem_cache *s)
 {
 	int node;
+	struct kmem_cache_node *n;
 
-	for_each_online_node(node) {
-		struct kmem_cache_node *n = get_node(s, node);
-
-		if (!n)
-			continue;
-
+	for_each_kmem_cache_node(s, node, n)
 		if (atomic_long_read(&n->total_objects))
 			return 1;
-	}
+
 	return 0;
 }
 #endif
@@ -5337,12 +5321,9 @@ void get_slabinfo(struct kmem_cache *s,
 	unsigned long nr_objs = 0;
 	unsigned long nr_free = 0;
 	int node;
+	struct kmem_cache_node *n;
 
-	for_each_online_node(node) {
-		struct kmem_cache_node *n = get_node(s, node);
-
-		if (!n)
-			continue;
+	for_each_kmem_cache_node(s, node, n) {
 
 		nr_slabs += node_nr_slabs(n);
 		nr_objs += node_nr_objs(n);


* [PATCH 3/3] slab: Use get_node() and kmem_cache_node() functions
  2014-06-11 19:15 [PATCH 0/3] slab: common kmem_cache_cpu functions V2 Christoph Lameter
  2014-06-11 19:15 ` [PATCH 1/3] slab common: Add functions for kmem_cache_node access Christoph Lameter
  2014-06-11 19:15 ` [PATCH 2/3] slub: Use new node functions Christoph Lameter
@ 2014-06-11 19:15 ` Christoph Lameter
  2014-06-11 23:15   ` David Rientjes
  2014-06-12  6:35   ` Joonsoo Kim
  2 siblings, 2 replies; 14+ messages in thread
From: Christoph Lameter @ 2014-06-11 19:15 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: Joonsoo Kim, David Rientjes, linux-mm, Andrew Morton

[-- Attachment #1: common_slab_node --]
[-- Type: text/plain, Size: 13905 bytes --]

Use the two functions to simplify the code, avoiding numerous explicit
checks for whether a certain node is online.

Get rid of various repeated calculations of kmem_cache_node structures.

Signed-off-by: Christoph Lameter <cl@linux.com>

Index: linux/mm/slab.c
===================================================================
--- linux.orig/mm/slab.c	2014-06-10 13:51:07.751070658 -0500
+++ linux/mm/slab.c	2014-06-10 14:14:47.821503296 -0500
@@ -267,7 +267,7 @@ static void kmem_cache_node_init(struct
 #define MAKE_LIST(cachep, listp, slab, nodeid)				\
 	do {								\
 		INIT_LIST_HEAD(listp);					\
-		list_splice(&(cachep->node[nodeid]->slab), listp);	\
+		list_splice(&get_node(cachep, nodeid)->slab, listp);	\
 	} while (0)
 
 #define	MAKE_ALL_LISTS(cachep, ptr, nodeid)				\
@@ -455,16 +455,11 @@ static struct lock_class_key debugobj_al
 
 static void slab_set_lock_classes(struct kmem_cache *cachep,
 		struct lock_class_key *l3_key, struct lock_class_key *alc_key,
-		int q)
+		struct kmem_cache_node *n)
 {
 	struct array_cache **alc;
-	struct kmem_cache_node *n;
 	int r;
 
-	n = cachep->node[q];
-	if (!n)
-		return;
-
 	lockdep_set_class(&n->list_lock, l3_key);
 	alc = n->alien;
 	/*
@@ -482,17 +477,19 @@ static void slab_set_lock_classes(struct
 	}
 }
 
-static void slab_set_debugobj_lock_classes_node(struct kmem_cache *cachep, int node)
+static void slab_set_debugobj_lock_classes_node(struct kmem_cache *cachep,
+	struct kmem_cache_node *n)
 {
-	slab_set_lock_classes(cachep, &debugobj_l3_key, &debugobj_alc_key, node);
+	slab_set_lock_classes(cachep, &debugobj_l3_key, &debugobj_alc_key, n);
 }
 
 static void slab_set_debugobj_lock_classes(struct kmem_cache *cachep)
 {
 	int node;
+	struct kmem_cache_node *n;
 
-	for_each_online_node(node)
-		slab_set_debugobj_lock_classes_node(cachep, node);
+	for_each_kmem_cache_node(cachep, node, h)
+		slab_set_debugobj_lock_classes_node(cachep, n);
 }
 
 static void init_node_lock_keys(int q)
@@ -509,31 +506,30 @@ static void init_node_lock_keys(int q)
 		if (!cache)
 			continue;
 
-		n = cache->node[q];
+		n = get_node(cache, q);
 		if (!n || OFF_SLAB(cache))
 			continue;
 
 		slab_set_lock_classes(cache, &on_slab_l3_key,
-				&on_slab_alc_key, q);
+				&on_slab_alc_key, n);
 	}
 }
 
-static void on_slab_lock_classes_node(struct kmem_cache *cachep, int q)
+static void on_slab_lock_classes_node(struct kmem_cache *cachep,
+	struct kmem_cache_node *n)
 {
-	if (!cachep->node[q])
-		return;
-
 	slab_set_lock_classes(cachep, &on_slab_l3_key,
-			&on_slab_alc_key, q);
+			&on_slab_alc_key, n);
 }
 
 static inline void on_slab_lock_classes(struct kmem_cache *cachep)
 {
 	int node;
+	struct kmem_cache_node *n;
 
 	VM_BUG_ON(OFF_SLAB(cachep));
-	for_each_node(node)
-		on_slab_lock_classes_node(cachep, node);
+	for_each_kmem_cache_node(cachep, node, h)
+		on_slab_lock_classes_node(cachep, h);
 }
 
 static inline void init_lock_keys(void)
@@ -556,7 +552,8 @@ static inline void on_slab_lock_classes(
 {
 }
 
-static inline void on_slab_lock_classes_node(struct kmem_cache *cachep, int node)
+static inline void on_slab_lock_classes_node(struct kmem_cache *cachep,
+	int node, struct kmem_cache_node *n)
 {
 }
 
@@ -774,7 +771,7 @@ static inline bool is_slab_pfmemalloc(st
 static void recheck_pfmemalloc_active(struct kmem_cache *cachep,
 						struct array_cache *ac)
 {
-	struct kmem_cache_node *n = cachep->node[numa_mem_id()];
+	struct kmem_cache_node *n = get_node(cachep,numa_mem_id());
 	struct page *page;
 	unsigned long flags;
 
@@ -829,7 +826,7 @@ static void *__ac_get_obj(struct kmem_ca
 		 * If there are empty slabs on the slabs_free list and we are
 		 * being forced to refill the cache, mark this one !pfmemalloc.
 		 */
-		n = cachep->node[numa_mem_id()];
+		n = get_node(cachep, numa_mem_id());
 		if (!list_empty(&n->slabs_free) && force_refill) {
 			struct page *page = virt_to_head_page(objp);
 			ClearPageSlabPfmemalloc(page);
@@ -979,7 +976,7 @@ static void free_alien_cache(struct arra
 static void __drain_alien_cache(struct kmem_cache *cachep,
 				struct array_cache *ac, int node)
 {
-	struct kmem_cache_node *n = cachep->node[node];
+	struct kmem_cache_node *n = get_node(cachep, node);
 
 	if (ac->avail) {
 		spin_lock(&n->list_lock);
@@ -1047,7 +1044,7 @@ static inline int cache_free_alien(struc
 	if (likely(nodeid == node))
 		return 0;
 
-	n = cachep->node[node];
+	n = get_node(cachep, node);
 	STATS_INC_NODEFREES(cachep);
 	if (n->alien && n->alien[nodeid]) {
 		alien = n->alien[nodeid];
@@ -1059,9 +1056,10 @@ static inline int cache_free_alien(struc
 		ac_put_obj(cachep, alien, objp);
 		spin_unlock(&alien->lock);
 	} else {
-		spin_lock(&(cachep->node[nodeid])->list_lock);
+		n = get_node(cachep, nodeid);
+		spin_lock(&n->list_lock);
 		free_block(cachep, &objp, 1, nodeid);
-		spin_unlock(&(cachep->node[nodeid])->list_lock);
+		spin_unlock(&n->list_lock);
 	}
 	return 1;
 }
@@ -1088,7 +1086,8 @@ static int init_cache_node_node(int node
 		 * begin anything. Make sure some other cpu on this
 		 * node has not already allocated this
 		 */
-		if (!cachep->node[node]) {
+		n = get_node(cachep, node);
+		if (!n) {
 			n = kmalloc_node(memsize, GFP_KERNEL, node);
 			if (!n)
 				return -ENOMEM;
@@ -1104,11 +1103,11 @@ static int init_cache_node_node(int node
 			cachep->node[node] = n;
 		}
 
-		spin_lock_irq(&cachep->node[node]->list_lock);
-		cachep->node[node]->free_limit =
+		spin_lock_irq(&n->list_lock);
+		n->free_limit =
 			(1 + nr_cpus_node(node)) *
 			cachep->batchcount + cachep->num;
-		spin_unlock_irq(&cachep->node[node]->list_lock);
+		spin_unlock_irq(&n->list_lock);
 	}
 	return 0;
 }
@@ -1134,7 +1133,7 @@ static void cpuup_canceled(long cpu)
 		/* cpu is dead; no one can alloc from it. */
 		nc = cachep->array[cpu];
 		cachep->array[cpu] = NULL;
-		n = cachep->node[node];
+		n = get_node(cachep, node);
 
 		if (!n)
 			goto free_array_cache;
@@ -1177,7 +1176,7 @@ free_array_cache:
 	 * shrink each nodelist to its limit.
 	 */
 	list_for_each_entry(cachep, &slab_caches, list) {
-		n = cachep->node[node];
+		n = get_node(cachep, node);
 		if (!n)
 			continue;
 		drain_freelist(cachep, n, slabs_tofree(cachep, n));
@@ -1232,7 +1231,7 @@ static int cpuup_prepare(long cpu)
 			}
 		}
 		cachep->array[cpu] = nc;
-		n = cachep->node[node];
+		n = get_node(cachep, node);
 		BUG_ON(!n);
 
 		spin_lock_irq(&n->list_lock);
@@ -1257,7 +1256,7 @@ static int cpuup_prepare(long cpu)
 			slab_set_debugobj_lock_classes_node(cachep, node);
 		else if (!OFF_SLAB(cachep) &&
 			 !(cachep->flags & SLAB_DESTROY_BY_RCU))
-			on_slab_lock_classes_node(cachep, node);
+			on_slab_lock_classes_node(cachep, node, n);
 	}
 	init_node_lock_keys(node);
 
@@ -1343,7 +1342,7 @@ static int __meminit drain_cache_node_no
 	list_for_each_entry(cachep, &slab_caches, list) {
 		struct kmem_cache_node *n;
 
-		n = cachep->node[node];
+		n = get_node(cachep, node);
 		if (!n)
 			continue;
 
@@ -1638,14 +1637,10 @@ slab_out_of_memory(struct kmem_cache *ca
 	printk(KERN_WARNING "  cache: %s, object size: %d, order: %d\n",
 		cachep->name, cachep->size, cachep->gfporder);
 
-	for_each_online_node(node) {
+	for_each_kmem_cache_node(cachep, node, n) {
 		unsigned long active_objs = 0, num_objs = 0, free_objects = 0;
 		unsigned long active_slabs = 0, num_slabs = 0;
 
-		n = cachep->node[node];
-		if (!n)
-			continue;
-
 		spin_lock_irqsave(&n->list_lock, flags);
 		list_for_each_entry(page, &n->slabs_full, lru) {
 			active_objs += cachep->num;
@@ -2380,7 +2375,7 @@ static void check_spinlock_acquired(stru
 {
 #ifdef CONFIG_SMP
 	check_irq_off();
-	assert_spin_locked(&cachep->node[numa_mem_id()]->list_lock);
+	assert_spin_locked(&get_node(cachep, numa_mem_id())->list_lock);
 #endif
 }
 
@@ -2388,7 +2383,7 @@ static void check_spinlock_acquired_node
 {
 #ifdef CONFIG_SMP
 	check_irq_off();
-	assert_spin_locked(&cachep->node[node]->list_lock);
+	assert_spin_locked(&get_node(cachep, node)->list_lock);
 #endif
 }
 
@@ -2408,12 +2403,14 @@ static void do_drain(void *arg)
 	struct kmem_cache *cachep = arg;
 	struct array_cache *ac;
 	int node = numa_mem_id();
+	struct kmem_cache_node *n;
 
 	check_irq_off();
 	ac = cpu_cache_get(cachep);
-	spin_lock(&cachep->node[node]->list_lock);
+	n = get_node(cachep, node);
+	spin_lock(&n->list_lock);
 	free_block(cachep, ac->entry, ac->avail, node);
-	spin_unlock(&cachep->node[node]->list_lock);
+	spin_unlock(&n->list_lock);
 	ac->avail = 0;
 }
 
@@ -2424,17 +2421,12 @@ static void drain_cpu_caches(struct kmem
 
 	on_each_cpu(do_drain, cachep, 1);
 	check_irq_on();
-	for_each_online_node(node) {
-		n = cachep->node[node];
-		if (n && n->alien)
+	for_each_kmem_cache_node(cachep, node, n)
+		if (n->alien)
 			drain_alien_cache(cachep, n->alien);
-	}
 
-	for_each_online_node(node) {
-		n = cachep->node[node];
-		if (n)
-			drain_array(cachep, n, n->shared, 1, node);
-	}
+	for_each_kmem_cache_node(cachep, node, n)
+		drain_array(cachep, n, n->shared, 1, node);
 }
 
 /*
@@ -2480,17 +2472,14 @@ out:
 
 int __kmem_cache_shrink(struct kmem_cache *cachep)
 {
-	int ret = 0, i = 0;
+	int ret = 0;
+	int node;
 	struct kmem_cache_node *n;
 
 	drain_cpu_caches(cachep);
 
 	check_irq_on();
-	for_each_online_node(i) {
-		n = cachep->node[i];
-		if (!n)
-			continue;
-
+	for_each_kmem_cache_node(cachep, node, n) {
 		drain_freelist(cachep, n, slabs_tofree(cachep, n));
 
 		ret += !list_empty(&n->slabs_full) ||
@@ -2512,13 +2501,11 @@ int __kmem_cache_shutdown(struct kmem_ca
 	    kfree(cachep->array[i]);
 
 	/* NUMA: free the node structures */
-	for_each_online_node(i) {
-		n = cachep->node[i];
-		if (n) {
-			kfree(n->shared);
-			free_alien_cache(n->alien);
-			kfree(n);
-		}
+	for_each_kmem_cache_node(cachep, i, n) {
+		kfree(n->shared);
+		free_alien_cache(n->alien);
+		kfree(n);
+		cachep->node[i] = NULL;
 	}
 	return 0;
 }
@@ -2696,7 +2683,7 @@ static int cache_grow(struct kmem_cache
 
 	/* Take the node list lock to change the colour_next on this node */
 	check_irq_off();
-	n = cachep->node[nodeid];
+	n = get_node(cachep, nodeid);
 	spin_lock(&n->list_lock);
 
 	/* Get colour for the slab, and cal the next value. */
@@ -2864,7 +2851,7 @@ retry:
 		 */
 		batchcount = BATCHREFILL_LIMIT;
 	}
-	n = cachep->node[node];
+	n = get_node(cachep, node);
 
 	BUG_ON(ac->avail > 0 || !n);
 	spin_lock(&n->list_lock);
@@ -3108,8 +3095,8 @@ retry:
 		nid = zone_to_nid(zone);
 
 		if (cpuset_zone_allowed_hardwall(zone, flags) &&
-			cache->node[nid] &&
-			cache->node[nid]->free_objects) {
+			get_node(cache, nid) &&
+			get_node(cache, nid)->free_objects) {
 				obj = ____cache_alloc_node(cache,
 					flags | GFP_THISNODE, nid);
 				if (obj)
@@ -3172,7 +3159,7 @@ static void *____cache_alloc_node(struct
 	int x;
 
 	VM_BUG_ON(nodeid > num_online_nodes());
-	n = cachep->node[nodeid];
+	n = get_node(cachep, nodeid);
 	BUG_ON(!n);
 
 retry:
@@ -3243,7 +3230,7 @@ slab_alloc_node(struct kmem_cache *cache
 	if (nodeid == NUMA_NO_NODE)
 		nodeid = slab_node;
 
-	if (unlikely(!cachep->node[nodeid])) {
+	if (unlikely(!get_node(cachep, nodeid))) {
 		/* Node not bootstrapped yet */
 		ptr = fallback_alloc(cachep, flags);
 		goto out;
@@ -3359,7 +3346,7 @@ static void free_block(struct kmem_cache
 		objp = objpp[i];
 
 		page = virt_to_head_page(objp);
-		n = cachep->node[node];
+		n = get_node(cachep, node);
 		list_del(&page->lru);
 		check_spinlock_acquired_node(cachep, node);
 		slab_put_obj(cachep, page, objp, node);
@@ -3401,7 +3388,7 @@ static void cache_flusharray(struct kmem
 	BUG_ON(!batchcount || batchcount > ac->avail);
 #endif
 	check_irq_off();
-	n = cachep->node[node];
+	n = get_node(cachep, node);
 	spin_lock(&n->list_lock);
 	if (n->shared) {
 		struct array_cache *shared_array = n->shared;
@@ -3714,7 +3701,7 @@ static int alloc_kmem_cache_node(struct
 			}
 		}
 
-		n = cachep->node[node];
+		n = get_node(cachep, node);
 		if (n) {
 			struct array_cache *shared = n->shared;
 
@@ -3759,8 +3746,8 @@ fail:
 		/* Cache is not active yet. Roll back what we did */
 		node--;
 		while (node >= 0) {
-			if (cachep->node[node]) {
-				n = cachep->node[node];
+			if (get_node(cachep, node)) {
+				n = get_node(cachep, node);
 
 				kfree(n->shared);
 				free_alien_cache(n->alien);
@@ -3823,11 +3810,17 @@ static int __do_tune_cpucache(struct kme
 
 	for_each_online_cpu(i) {
 		struct array_cache *ccold = new->new[i];
+		int node;
+		struct kmem_cache_node *n;
+
 		if (!ccold)
 			continue;
-		spin_lock_irq(&cachep->node[cpu_to_mem(i)]->list_lock);
-		free_block(cachep, ccold->entry, ccold->avail, cpu_to_mem(i));
-		spin_unlock_irq(&cachep->node[cpu_to_mem(i)]->list_lock);
+
+		node = cpu_to_mem(i);
+		n = get_node(cachep, node);
+		spin_lock_irq(&n->list_lock);
+		free_block(cachep, ccold->entry, ccold->avail, node);
+		spin_unlock_irq(&n->list_lock);
 		kfree(ccold);
 	}
 	kfree(new);
@@ -3987,7 +3980,7 @@ static void cache_reap(struct work_struc
 		 * have established with reasonable certainty that
 		 * we can do some work if the lock was obtained.
 		 */
-		n = searchp->node[node];
+		n = get_node(searchp, node);
 
 		reap_alien(searchp, n);
 
@@ -4039,10 +4032,7 @@ void get_slabinfo(struct kmem_cache *cac
 
 	active_objs = 0;
 	num_slabs = 0;
-	for_each_online_node(node) {
-		n = cachep->node[node];
-		if (!n)
-			continue;
+	for_each_kmem_cache_node(cachep, node, n) {
 
 		check_irq_on();
 		spin_lock_irq(&n->list_lock);
@@ -4276,10 +4266,7 @@ static int leaks_show(struct seq_file *m
 
 	x[1] = 0;
 
-	for_each_online_node(node) {
-		n = cachep->node[node];
-		if (!n)
-			continue;
+	for_each_kmem_cache_node(cachep, node, n) {
 
 		check_irq_on();
 		spin_lock_irq(&n->list_lock);


* Re: [PATCH 1/3] slab common: Add functions for kmem_cache_node access
  2014-06-11 19:15 ` [PATCH 1/3] slab common: Add functions for kmem_cache_node access Christoph Lameter
@ 2014-06-11 23:07   ` David Rientjes
  2014-06-12  6:10   ` Joonsoo Kim
  2014-06-17 21:17   ` Andrew Morton
  2 siblings, 0 replies; 14+ messages in thread
From: David Rientjes @ 2014-06-11 23:07 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Pekka Enberg, Joonsoo Kim, linux-mm, Andrew Morton

On Wed, 11 Jun 2014, Christoph Lameter wrote:

> These functions allow us to eliminate repeatedly used code in both
> SLAB and SLUB and also allow for the insertion of debugging code
> that may be needed in the development process.
> 
> Signed-off-by: Christoph Lameter <cl@linux.com>

Acked-by: David Rientjes <rientjes@google.com>


* Re: [PATCH 2/3] slub: Use new node functions
  2014-06-11 19:15 ` [PATCH 2/3] slub: Use new node functions Christoph Lameter
@ 2014-06-11 23:12   ` David Rientjes
  2014-06-13 16:02     ` Christoph Lameter
  0 siblings, 1 reply; 14+ messages in thread
From: David Rientjes @ 2014-06-11 23:12 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Pekka Enberg, Joonsoo Kim, linux-mm, Andrew Morton

On Wed, 11 Jun 2014, Christoph Lameter wrote:

> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c	2014-06-10 13:49:22.154458193 -0500
> +++ linux/mm/slub.c	2014-06-10 13:51:03.959192299 -0500
> @@ -2157,6 +2157,7 @@ slab_out_of_memory(struct kmem_cache *s,
>  	static DEFINE_RATELIMIT_STATE(slub_oom_rs, DEFAULT_RATELIMIT_INTERVAL,
>  				      DEFAULT_RATELIMIT_BURST);
>  	int node;
> +	struct kmem_cache_node *n;
>  
>  	if ((gfpflags & __GFP_NOWARN) || !__ratelimit(&slub_oom_rs))
>  		return;
> @@ -2171,15 +2172,11 @@ slab_out_of_memory(struct kmem_cache *s,
>  		pr_warn("  %s debugging increased min order, use slub_debug=O to disable.\n",
>  			s->name);
>  
> -	for_each_online_node(node) {
> -		struct kmem_cache_node *n = get_node(s, node);
> +	for_each_kmem_cache_node(s, node, n) {
>  		unsigned long nr_slabs;
>  		unsigned long nr_objs;
>  		unsigned long nr_free;
>  
> -		if (!n)
> -			continue;
> -
>  		nr_free  = count_partial(n, count_free);
>  		nr_slabs = node_nr_slabs(n);
>  		nr_objs  = node_nr_objs(n);
> @@ -2923,13 +2920,10 @@ static void early_kmem_cache_node_alloc(
>  static void free_kmem_cache_nodes(struct kmem_cache *s)
>  {
>  	int node;
> +	struct kmem_cache_node *n;
>  
> -	for_each_node_state(node, N_NORMAL_MEMORY) {
> -		struct kmem_cache_node *n = s->node[node];
> -
> -		if (n)
> -			kmem_cache_free(kmem_cache_node, n);
> -
> +	for_each_kmem_cache_node(s, node, n) {
> +		kmem_cache_free(kmem_cache_node, n);
>  		s->node[node] = NULL;
>  	}
>  }
> @@ -3217,11 +3211,11 @@ static void free_partial(struct kmem_cac
>  static inline int kmem_cache_close(struct kmem_cache *s)
>  {
>  	int node;
> +	struct kmem_cache_node *n;
>  
>  	flush_all(s);
>  	/* Attempt to free all objects */
> -	for_each_node_state(node, N_NORMAL_MEMORY) {
> -		struct kmem_cache_node *n = get_node(s, node);
> +	for_each_kmem_cache_node(s, node, n) {
>  
>  		free_partial(s, n);
>  		if (n->nr_partial || slabs_node(s, node))

Newline not removed?

> @@ -3407,11 +3401,7 @@ int __kmem_cache_shrink(struct kmem_cach
>  		return -ENOMEM;
>  
>  	flush_all(s);
> -	for_each_node_state(node, N_NORMAL_MEMORY) {
> -		n = get_node(s, node);
> -
> -		if (!n->nr_partial)
> -			continue;
> +	for_each_kmem_cache_node(s, node, n) {
>  
>  		for (i = 0; i < objects; i++)
>  			INIT_LIST_HEAD(slabs_by_inuse + i);

Is there any reason not to keep the !n->nr_partial check to avoid taking 
n->list_lock unnecessarily?

> @@ -3581,6 +3571,7 @@ static struct kmem_cache * __init bootst
>  {
>  	int node;
>  	struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
> +	struct kmem_cache_node *n;
>  
>  	memcpy(s, static_cache, kmem_cache->object_size);
>  
> @@ -3590,19 +3581,16 @@ static struct kmem_cache * __init bootst
>  	 * IPIs around.
>  	 */
>  	__flush_cpu_slab(s, smp_processor_id());
> -	for_each_node_state(node, N_NORMAL_MEMORY) {
> -		struct kmem_cache_node *n = get_node(s, node);
> +	for_each_kmem_cache_node(s, node, n) {
>  		struct page *p;
>  
> -		if (n) {
> -			list_for_each_entry(p, &n->partial, lru)
> -				p->slab_cache = s;
> +		list_for_each_entry(p, &n->partial, lru)
> +			p->slab_cache = s;
>  
>  #ifdef CONFIG_SLUB_DEBUG
> -			list_for_each_entry(p, &n->full, lru)
> -				p->slab_cache = s;
> +		list_for_each_entry(p, &n->full, lru)
> +			p->slab_cache = s;
>  #endif
> -		}
>  	}
>  	list_add(&s->list, &slab_caches);
>  	return s;
> @@ -3955,16 +3943,14 @@ static long validate_slab_cache(struct k
>  	unsigned long count = 0;
>  	unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
>  				sizeof(unsigned long), GFP_KERNEL);
> +	struct kmem_cache_node *n;
>  
>  	if (!map)
>  		return -ENOMEM;
>  
>  	flush_all(s);
> -	for_each_node_state(node, N_NORMAL_MEMORY) {
> -		struct kmem_cache_node *n = get_node(s, node);
> -
> +	for_each_kmem_cache_node(s, node, n)
>  		count += validate_slab_node(s, n, map);
> -	}
>  	kfree(map);
>  	return count;
>  }
> @@ -4118,6 +4104,7 @@ static int list_locations(struct kmem_ca
>  	int node;
>  	unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
>  				     sizeof(unsigned long), GFP_KERNEL);
> +	struct kmem_cache_node *n;
>  
>  	if (!map || !alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
>  				     GFP_TEMPORARY)) {
> @@ -4127,8 +4114,7 @@ static int list_locations(struct kmem_ca
>  	/* Push back cpu slabs */
>  	flush_all(s);
>  
> -	for_each_node_state(node, N_NORMAL_MEMORY) {
> -		struct kmem_cache_node *n = get_node(s, node);
> +	for_each_kmem_cache_node(s, node, n) {
>  		unsigned long flags;
>  		struct page *page;
>  
> @@ -4327,8 +4313,9 @@ static ssize_t show_slab_objects(struct
>  	get_online_mems();
>  #ifdef CONFIG_SLUB_DEBUG
>  	if (flags & SO_ALL) {
> -		for_each_node_state(node, N_NORMAL_MEMORY) {
> -			struct kmem_cache_node *n = get_node(s, node);
> +		struct kmem_cache_node *n;
> +
> +		for_each_kmem_cache_node(s, node, n) {
>  
>  			if (flags & SO_TOTAL)
>  				x = atomic_long_read(&n->total_objects);

Another unnecessary newline?

> @@ -4344,8 +4331,9 @@ static ssize_t show_slab_objects(struct
>  	} else
>  #endif
>  	if (flags & SO_PARTIAL) {
> -		for_each_node_state(node, N_NORMAL_MEMORY) {
> -			struct kmem_cache_node *n = get_node(s, node);
> +		struct kmem_cache_node *n;
> +
> +		for_each_kmem_cache_node(s, node, n) {
>  
>  			if (flags & SO_TOTAL)
>  				x = count_partial(n, count_total);

Ditto.

> @@ -4359,7 +4347,7 @@ static ssize_t show_slab_objects(struct
>  	}
>  	x = sprintf(buf, "%lu", total);
>  #ifdef CONFIG_NUMA
> -	for_each_node_state(node, N_NORMAL_MEMORY)
> +	for(node = 0; node < nr_node_ids; node++)
>  		if (nodes[node])
>  			x += sprintf(buf + x, " N%d=%lu",
>  					node, nodes[node]);
> @@ -4373,16 +4361,12 @@ static ssize_t show_slab_objects(struct
>  static int any_slab_objects(struct kmem_cache *s)
>  {
>  	int node;
> +	struct kmem_cache_node *n;
>  
> -	for_each_online_node(node) {
> -		struct kmem_cache_node *n = get_node(s, node);
> -
> -		if (!n)
> -			continue;
> -
> +	for_each_kmem_cache_node(s, node, n)
>  		if (atomic_long_read(&n->total_objects))
>  			return 1;
> -	}
> +
>  	return 0;
>  }
>  #endif
> @@ -5337,12 +5321,9 @@ void get_slabinfo(struct kmem_cache *s,
>  	unsigned long nr_objs = 0;
>  	unsigned long nr_free = 0;
>  	int node;
> +	struct kmem_cache_node *n;
>  
> -	for_each_online_node(node) {
> -		struct kmem_cache_node *n = get_node(s, node);
> -
> -		if (!n)
> -			continue;
> +	for_each_kmem_cache_node(s, node, n) {
>  
>  		nr_slabs += node_nr_slabs(n);
>  		nr_objs += node_nr_objs(n);
> 
> 


* Re: [PATCH 3/3] slab: Use get_node() and kmem_cache_node() functions
  2014-06-11 19:15 ` [PATCH 3/3] slab: Use get_node() and kmem_cache_node() functions Christoph Lameter
@ 2014-06-11 23:15   ` David Rientjes
  2014-06-12  6:35   ` Joonsoo Kim
  1 sibling, 0 replies; 14+ messages in thread
From: David Rientjes @ 2014-06-11 23:15 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Pekka Enberg, Joonsoo Kim, linux-mm, Andrew Morton

On Wed, 11 Jun 2014, Christoph Lameter wrote:

> Use the two functions to simplify the code, avoiding numerous explicit
> checks for whether a certain node is online.
> 
> Get rid of various repeated calculations of kmem_cache_node structures.
> 

You're not bragging about your diffstat, which removes more lines than it
adds and shows why this change is helpful :)

 mm/slab.c |  167 +++++++++++++++++++++++++++++------------------------------------
 1 file changed, 77 insertions(+), 90 deletions(-)

> Signed-off-by: Christoph Lameter <cl@linux.com>
> 

Acked-by: David Rientjes <rientjes@google.com>


* Re: [PATCH 1/3] slab common: Add functions for kmem_cache_node access
  2014-06-11 19:15 ` [PATCH 1/3] slab common: Add functions for kmem_cache_node access Christoph Lameter
  2014-06-11 23:07   ` David Rientjes
@ 2014-06-12  6:10   ` Joonsoo Kim
  2014-06-17 21:17   ` Andrew Morton
  2 siblings, 0 replies; 14+ messages in thread
From: Joonsoo Kim @ 2014-06-12  6:10 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Pekka Enberg, David Rientjes, linux-mm, Andrew Morton

On Wed, Jun 11, 2014 at 02:15:11PM -0500, Christoph Lameter wrote:
> These functions allow us to eliminate repeatedly used code in both
> SLAB and SLUB and also allow for the insertion of debugging code
> that may be needed in the development process.
> 
> Signed-off-by: Christoph Lameter <cl@linux.com>

Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>


* Re: [PATCH 3/3] slab: Use get_node() and kmem_cache_node() functions
  2014-06-11 19:15 ` [PATCH 3/3] slab: Use get_node() and kmem_cache_node() functions Christoph Lameter
  2014-06-11 23:15   ` David Rientjes
@ 2014-06-12  6:35   ` Joonsoo Kim
  2014-06-13 16:32     ` Christoph Lameter
  1 sibling, 1 reply; 14+ messages in thread
From: Joonsoo Kim @ 2014-06-12  6:35 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Pekka Enberg, David Rientjes, linux-mm, Andrew Morton

On Wed, Jun 11, 2014 at 02:15:13PM -0500, Christoph Lameter wrote:
> Use the two functions to simplify the code, avoiding numerous explicit
> checks for whether a certain node is online.
> 
> Get rid of various repeated calculations of kmem_cache_node structures.
> 
> Signed-off-by: Christoph Lameter <cl@linux.com>
> 
> Index: linux/mm/slab.c
> ===================================================================
> --- linux.orig/mm/slab.c	2014-06-10 13:51:07.751070658 -0500
> +++ linux/mm/slab.c	2014-06-10 14:14:47.821503296 -0500
> @@ -267,7 +267,7 @@ static void kmem_cache_node_init(struct
>  #define MAKE_LIST(cachep, listp, slab, nodeid)				\
>  	do {								\
>  		INIT_LIST_HEAD(listp);					\
> -		list_splice(&(cachep->node[nodeid]->slab), listp);	\
> +		list_splice(&get_node(cachep, nodeid)->slab, listp);	\
>  	} while (0)
>  
>  #define	MAKE_ALL_LISTS(cachep, ptr, nodeid)				\
> @@ -455,16 +455,11 @@ static struct lock_class_key debugobj_al
>  
>  static void slab_set_lock_classes(struct kmem_cache *cachep,
>  		struct lock_class_key *l3_key, struct lock_class_key *alc_key,
> -		int q)
> +		struct kmem_cache_node *n)
>  {
>  	struct array_cache **alc;
> -	struct kmem_cache_node *n;
>  	int r;
>  
> -	n = cachep->node[q];
> -	if (!n)
> -		return;
> -
>  	lockdep_set_class(&n->list_lock, l3_key);
>  	alc = n->alien;
>  	/*
> @@ -482,17 +477,19 @@ static void slab_set_lock_classes(struct
>  	}
>  }
>  
> -static void slab_set_debugobj_lock_classes_node(struct kmem_cache *cachep, int node)
> +static void slab_set_debugobj_lock_classes_node(struct kmem_cache *cachep,
> +	struct kmem_cache_node *n)
>  {
> -	slab_set_lock_classes(cachep, &debugobj_l3_key, &debugobj_alc_key, node);
> +	slab_set_lock_classes(cachep, &debugobj_l3_key, &debugobj_alc_key, n);
>  }
>  
>  static void slab_set_debugobj_lock_classes(struct kmem_cache *cachep)
>  {
>  	int node;
> +	struct kmem_cache_node *n;
>  
> -	for_each_online_node(node)
> -		slab_set_debugobj_lock_classes_node(cachep, node);
> +	for_each_kmem_cache_node(cachep, node, h)
> +		slab_set_debugobj_lock_classes_node(cachep, n);
>  }
>  
>  static void init_node_lock_keys(int q)
> @@ -509,31 +506,30 @@ static void init_node_lock_keys(int q)
>  		if (!cache)
>  			continue;
>  
> -		n = cache->node[q];
> +		n = get_node(cache, q);
>  		if (!n || OFF_SLAB(cache))
>  			continue;
>  
>  		slab_set_lock_classes(cache, &on_slab_l3_key,
> -				&on_slab_alc_key, q);
> +				&on_slab_alc_key, n);
>  	}
>  }
>  
> -static void on_slab_lock_classes_node(struct kmem_cache *cachep, int q)
> +static void on_slab_lock_classes_node(struct kmem_cache *cachep,
> +	struct kmem_cache_node *n)
>  {
> -	if (!cachep->node[q])
> -		return;
> -
>  	slab_set_lock_classes(cachep, &on_slab_l3_key,
> -			&on_slab_alc_key, q);
> +			&on_slab_alc_key, n);
>  }

Hello,

The on_slab_lock_classes_node() definition now differs from the !LOCKDEP
variant, so a compile error occurs if lockdep is turned on.
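
Sketched from the hunks in this patch (paraphrased declarations, not meant
as verbatim quotes), the mismatch looks like this: the lockdep variant now
takes a kmem_cache_node pointer, while the non-lockdep stub and the call
site in cpuup_prepare() still use the three-argument form, so only the
lockdep build breaks:

	/* lockdep variant, as changed by this patch */
	static void on_slab_lock_classes_node(struct kmem_cache *cachep,
		struct kmem_cache_node *n);

	/* non-lockdep stub, as changed by this patch */
	static inline void on_slab_lock_classes_node(struct kmem_cache *cachep,
		int node, struct kmem_cache_node *n);

	/* call site in cpuup_prepare() after this patch */
	on_slab_lock_classes_node(cachep, node, n);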

>  
>  static inline void on_slab_lock_classes(struct kmem_cache *cachep)
>  {
>  	int node;
> +	struct kmem_cache_node *n;
>  
>  	VM_BUG_ON(OFF_SLAB(cachep));
> -	for_each_node(node)
> -		on_slab_lock_classes_node(cachep, node);
> +	for_each_kmem_cache_node(cachep, node, h)
> +		on_slab_lock_classes_node(cachep, h);
>  }

%s/h/n

>  
>  static inline void init_lock_keys(void)
> @@ -556,7 +552,8 @@ static inline void on_slab_lock_classes(
>  {
>  }
>  
> -static inline void on_slab_lock_classes_node(struct kmem_cache *cachep, int node)
> +static inline void on_slab_lock_classes_node(struct kmem_cache *cachep,
> +	int node, struct kmem_cache_node *n)
>  {
>  }

Here is different definition,

>  
> @@ -774,7 +771,7 @@ static inline bool is_slab_pfmemalloc(st
>  static void recheck_pfmemalloc_active(struct kmem_cache *cachep,
>  						struct array_cache *ac)
>  {
> -	struct kmem_cache_node *n = cachep->node[numa_mem_id()];
> +	struct kmem_cache_node *n = get_node(cachep,numa_mem_id());

A space is needed after the comma.

>  	struct page *page;
>  	unsigned long flags;
>  
> @@ -829,7 +826,7 @@ static void *__ac_get_obj(struct kmem_ca
>  		 * If there are empty slabs on the slabs_free list and we are
>  		 * being forced to refill the cache, mark this one !pfmemalloc.
>  		 */
> -		n = cachep->node[numa_mem_id()];
> +		n = get_node(cachep, numa_mem_id());
>  		if (!list_empty(&n->slabs_free) && force_refill) {
>  			struct page *page = virt_to_head_page(objp);
>  			ClearPageSlabPfmemalloc(page);
> @@ -979,7 +976,7 @@ static void free_alien_cache(struct arra
>  static void __drain_alien_cache(struct kmem_cache *cachep,
>  				struct array_cache *ac, int node)
>  {
> -	struct kmem_cache_node *n = cachep->node[node];
> +	struct kmem_cache_node *n = get_node(cachep, node);
>  
>  	if (ac->avail) {
>  		spin_lock(&n->list_lock);
> @@ -1047,7 +1044,7 @@ static inline int cache_free_alien(struc
>  	if (likely(nodeid == node))
>  		return 0;
>  
> -	n = cachep->node[node];
> +	n = get_node(cachep, node);
>  	STATS_INC_NODEFREES(cachep);
>  	if (n->alien && n->alien[nodeid]) {
>  		alien = n->alien[nodeid];
> @@ -1059,9 +1056,10 @@ static inline int cache_free_alien(struc
>  		ac_put_obj(cachep, alien, objp);
>  		spin_unlock(&alien->lock);
>  	} else {
> -		spin_lock(&(cachep->node[nodeid])->list_lock);
> +		n = get_node(cachep, nodeid);
> +		spin_lock(&n->list_lock);
>  		free_block(cachep, &objp, 1, nodeid);
> -		spin_unlock(&(cachep->node[nodeid])->list_lock);
> +		spin_unlock(&n->list_lock);
>  	}
>  	return 1;
>  }
> @@ -1088,7 +1086,8 @@ static int init_cache_node_node(int node
>  		 * begin anything. Make sure some other cpu on this
>  		 * node has not already allocated this
>  		 */
> -		if (!cachep->node[node]) {
> +		n = get_node(cachep, node);
> +		if (!n) {
>  			n = kmalloc_node(memsize, GFP_KERNEL, node);
>  			if (!n)
>  				return -ENOMEM;
> @@ -1104,11 +1103,11 @@ static int init_cache_node_node(int node
>  			cachep->node[node] = n;
>  		}
>  
> -		spin_lock_irq(&cachep->node[node]->list_lock);
> -		cachep->node[node]->free_limit =
> +		spin_lock_irq(&n->list_lock);
> +		n->free_limit =
>  			(1 + nr_cpus_node(node)) *
>  			cachep->batchcount + cachep->num;
> -		spin_unlock_irq(&cachep->node[node]->list_lock);
> +		spin_unlock_irq(&n->list_lock);
>  	}
>  	return 0;
>  }
> @@ -1134,7 +1133,7 @@ static void cpuup_canceled(long cpu)
>  		/* cpu is dead; no one can alloc from it. */
>  		nc = cachep->array[cpu];
>  		cachep->array[cpu] = NULL;
> -		n = cachep->node[node];
> +		n = get_node(cachep, node);
>  
>  		if (!n)
>  			goto free_array_cache;
> @@ -1177,7 +1176,7 @@ free_array_cache:
>  	 * shrink each nodelist to its limit.
>  	 */
>  	list_for_each_entry(cachep, &slab_caches, list) {
> -		n = cachep->node[node];
> +		n = get_node(cachep, node);
>  		if (!n)
>  			continue;
>  		drain_freelist(cachep, n, slabs_tofree(cachep, n));
> @@ -1232,7 +1231,7 @@ static int cpuup_prepare(long cpu)
>  			}
>  		}
>  		cachep->array[cpu] = nc;
> -		n = cachep->node[node];
> +		n = get_node(cachep, node);
>  		BUG_ON(!n);
>  
>  		spin_lock_irq(&n->list_lock);
> @@ -1257,7 +1256,7 @@ static int cpuup_prepare(long cpu)
>  			slab_set_debugobj_lock_classes_node(cachep, node);
>  		else if (!OFF_SLAB(cachep) &&
>  			 !(cachep->flags & SLAB_DESTROY_BY_RCU))
> -			on_slab_lock_classes_node(cachep, node);
> +			on_slab_lock_classes_node(cachep, node, n);
>  	}
>  	init_node_lock_keys(node);
>  
> @@ -1343,7 +1342,7 @@ static int __meminit drain_cache_node_no
>  	list_for_each_entry(cachep, &slab_caches, list) {
>  		struct kmem_cache_node *n;
>  
> -		n = cachep->node[node];
> +		n = get_node(cachep, node);
>  		if (!n)
>  			continue;
>  
> @@ -1638,14 +1637,10 @@ slab_out_of_memory(struct kmem_cache *ca
>  	printk(KERN_WARNING "  cache: %s, object size: %d, order: %d\n",
>  		cachep->name, cachep->size, cachep->gfporder);
>  
> -	for_each_online_node(node) {
> +	for_each_kmem_cache_node(cachep, node, n) {
>  		unsigned long active_objs = 0, num_objs = 0, free_objects = 0;
>  		unsigned long active_slabs = 0, num_slabs = 0;
>  
> -		n = cachep->node[node];
> -		if (!n)
> -			continue;
> -
>  		spin_lock_irqsave(&n->list_lock, flags);
>  		list_for_each_entry(page, &n->slabs_full, lru) {
>  			active_objs += cachep->num;
> @@ -2380,7 +2375,7 @@ static void check_spinlock_acquired(stru
>  {
>  #ifdef CONFIG_SMP
>  	check_irq_off();
> -	assert_spin_locked(&cachep->node[numa_mem_id()]->list_lock);
> +	assert_spin_locked(&get_node(cachep, numa_mem_id())->list_lock);
>  #endif
>  }
>  
> @@ -2388,7 +2383,7 @@ static void check_spinlock_acquired_node
>  {
>  #ifdef CONFIG_SMP
>  	check_irq_off();
> -	assert_spin_locked(&cachep->node[node]->list_lock);
> +	assert_spin_locked(&get_node(cachep, node)->list_lock);
>  #endif
>  }
>  
> @@ -2408,12 +2403,14 @@ static void do_drain(void *arg)
>  	struct kmem_cache *cachep = arg;
>  	struct array_cache *ac;
>  	int node = numa_mem_id();
> +	struct kmem_cache_node *n;
>  
>  	check_irq_off();
>  	ac = cpu_cache_get(cachep);
> -	spin_lock(&cachep->node[node]->list_lock);
> +	n = get_node(cachep, node);
> +	spin_lock(&n->list_lock);
>  	free_block(cachep, ac->entry, ac->avail, node);
> -	spin_unlock(&cachep->node[node]->list_lock);
> +	spin_unlock(&n->list_lock);
>  	ac->avail = 0;
>  }
>  
> @@ -2424,17 +2421,12 @@ static void drain_cpu_caches(struct kmem
>  
>  	on_each_cpu(do_drain, cachep, 1);
>  	check_irq_on();
> -	for_each_online_node(node) {
> -		n = cachep->node[node];
> -		if (n && n->alien)
> +	for_each_kmem_cache_node(cachep, node, n)
> +		if (n->alien)
>  			drain_alien_cache(cachep, n->alien);
> -	}
>  
> -	for_each_online_node(node) {
> -		n = cachep->node[node];
> -		if (n)
> -			drain_array(cachep, n, n->shared, 1, node);
> -	}
> +	for_each_kmem_cache_node(cachep, node, n)
> +		drain_array(cachep, n, n->shared, 1, node);
>  }
>  
>  /*
> @@ -2480,17 +2472,14 @@ out:
>  
>  int __kmem_cache_shrink(struct kmem_cache *cachep)
>  {
> -	int ret = 0, i = 0;
> +	int ret = 0;
> +	int node;
>  	struct kmem_cache_node *n;
>  
>  	drain_cpu_caches(cachep);
>  
>  	check_irq_on();
> -	for_each_online_node(i) {
> -		n = cachep->node[i];
> -		if (!n)
> -			continue;
> -
> +	for_each_kmem_cache_node(cachep, node, n) {
>  		drain_freelist(cachep, n, slabs_tofree(cachep, n));
>  
>  		ret += !list_empty(&n->slabs_full) ||
> @@ -2512,13 +2501,11 @@ int __kmem_cache_shutdown(struct kmem_ca
>  	    kfree(cachep->array[i]);
>  
>  	/* NUMA: free the node structures */
> -	for_each_online_node(i) {
> -		n = cachep->node[i];
> -		if (n) {
> -			kfree(n->shared);
> -			free_alien_cache(n->alien);
> -			kfree(n);
> -		}
> +	for_each_kmem_cache_node(cachep, i, n) {
> +		kfree(n->shared);
> +		free_alien_cache(n->alien);
> +		kfree(n);
> +		cachep->node[i] = NULL;
>  	}
>  	return 0;
>  }
> @@ -2696,7 +2683,7 @@ static int cache_grow(struct kmem_cache
>  
>  	/* Take the node list lock to change the colour_next on this node */
>  	check_irq_off();
> -	n = cachep->node[nodeid];
> +	n = get_node(cachep, nodeid);
>  	spin_lock(&n->list_lock);
>  
>  	/* Get colour for the slab, and cal the next value. */
> @@ -2864,7 +2851,7 @@ retry:
>  		 */
>  		batchcount = BATCHREFILL_LIMIT;
>  	}
> -	n = cachep->node[node];
> +	n = get_node(cachep, node);
>  
>  	BUG_ON(ac->avail > 0 || !n);
>  	spin_lock(&n->list_lock);
> @@ -3108,8 +3095,8 @@ retry:
>  		nid = zone_to_nid(zone);
>  
>  		if (cpuset_zone_allowed_hardwall(zone, flags) &&
> -			cache->node[nid] &&
> -			cache->node[nid]->free_objects) {
> +			get_node(cache, nid) &&
> +			get_node(cache, nid)->free_objects) {
>  				obj = ____cache_alloc_node(cache,
>  					flags | GFP_THISNODE, nid);
>  				if (obj)
> @@ -3172,7 +3159,7 @@ static void *____cache_alloc_node(struct
>  	int x;
>  
>  	VM_BUG_ON(nodeid > num_online_nodes());
> -	n = cachep->node[nodeid];
> +	n = get_node(cachep, nodeid);
>  	BUG_ON(!n);
>  
>  retry:
> @@ -3243,7 +3230,7 @@ slab_alloc_node(struct kmem_cache *cache
>  	if (nodeid == NUMA_NO_NODE)
>  		nodeid = slab_node;
>  
> -	if (unlikely(!cachep->node[nodeid])) {
> +	if (unlikely(!get_node(cachep, nodeid))) {
>  		/* Node not bootstrapped yet */
>  		ptr = fallback_alloc(cachep, flags);
>  		goto out;
> @@ -3359,7 +3346,7 @@ static void free_block(struct kmem_cache
>  		objp = objpp[i];
>  
>  		page = virt_to_head_page(objp);
> -		n = cachep->node[node];
> +		n = get_node(cachep, node);
>  		list_del(&page->lru);
>  		check_spinlock_acquired_node(cachep, node);
>  		slab_put_obj(cachep, page, objp, node);
> @@ -3401,7 +3388,7 @@ static void cache_flusharray(struct kmem
>  	BUG_ON(!batchcount || batchcount > ac->avail);
>  #endif
>  	check_irq_off();
> -	n = cachep->node[node];
> +	n = get_node(cachep, node);
>  	spin_lock(&n->list_lock);
>  	if (n->shared) {
>  		struct array_cache *shared_array = n->shared;
> @@ -3714,7 +3701,7 @@ static int alloc_kmem_cache_node(struct
>  			}
>  		}
>  
> -		n = cachep->node[node];
> +		n = get_node(cachep, node);
>  		if (n) {
>  			struct array_cache *shared = n->shared;
>  
> @@ -3759,8 +3746,8 @@ fail:
>  		/* Cache is not active yet. Roll back what we did */
>  		node--;
>  		while (node >= 0) {
> -			if (cachep->node[node]) {
> -				n = cachep->node[node];
> +			if (get_node(cachep, node)) {
> +				n = get_node(cachep, node);

Could you do this as follows?

n = get_node(cachep, node);
if (n) {
        ...
}

Thanks.


* Re: [PATCH 2/3] slub: Use new node functions
  2014-06-11 23:12   ` David Rientjes
@ 2014-06-13 16:02     ` Christoph Lameter
  2014-06-17 21:47       ` David Rientjes
  0 siblings, 1 reply; 14+ messages in thread
From: Christoph Lameter @ 2014-06-13 16:02 UTC (permalink / raw)
  To: David Rientjes; +Cc: Pekka Enberg, Joonsoo Kim, linux-mm, Andrew Morton

On Wed, 11 Jun 2014, David Rientjes wrote:

> > +	for_each_kmem_cache_node(s, node, n) {
> >
> >  		free_partial(s, n);
> >  		if (n->nr_partial || slabs_node(s, node))
>
> Newline not removed?

Ok, I went through the file and removed all the blank lines right after
for_each_kmem_cache_node.

>
> > @@ -3407,11 +3401,7 @@ int __kmem_cache_shrink(struct kmem_cach
> >  		return -ENOMEM;
> >
> >  	flush_all(s);
> > -	for_each_node_state(node, N_NORMAL_MEMORY) {
> > -		n = get_node(s, node);
> > -
> > -		if (!n->nr_partial)
> > -			continue;
> > +	for_each_kmem_cache_node(s, node, n) {
> >
> >  		for (i = 0; i < objects; i++)
> >  			INIT_LIST_HEAD(slabs_by_inuse + i);
>
> Is there any reason not to keep the !n->nr_partial check to avoid taking
> n->list_lock unnecessarily?

No, this was simply a mistake; the check needs to be preserved.


Subject: slub: Fix up earlier patch

Signed-off-by: Christoph Lameter <cl@linux.com>


Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c	2014-06-13 10:59:01.815583306 -0500
+++ linux/mm/slub.c	2014-06-13 10:58:45.444109563 -0500
@@ -3216,7 +3216,6 @@ static inline int kmem_cache_close(struc
 	flush_all(s);
 	/* Attempt to free all objects */
 	for_each_kmem_cache_node(s, node, n) {
-
 		free_partial(s, n);
 		if (n->nr_partial || slabs_node(s, node))
 			return 1;
@@ -3402,6 +3401,8 @@ int __kmem_cache_shrink(struct kmem_cach

 	flush_all(s);
 	for_each_kmem_cache_node(s, node, n) {
+		if (!n->nr_partial)
+			continue;

 		for (i = 0; i < objects; i++)
 			INIT_LIST_HEAD(slabs_by_inuse + i);
@@ -4334,7 +4335,6 @@ static ssize_t show_slab_objects(struct
 		struct kmem_cache_node *n;

 		for_each_kmem_cache_node(s, node, n) {
-
 			if (flags & SO_TOTAL)
 				x = count_partial(n, count_total);
 			else if (flags & SO_OBJECTS)
@@ -5324,7 +5324,6 @@ void get_slabinfo(struct kmem_cache *s,
 	struct kmem_cache_node *n;

 	for_each_kmem_cache_node(s, node, n) {
-
 		nr_slabs += node_nr_slabs(n);
 		nr_objs += node_nr_objs(n);
 		nr_free += count_partial(n, count_free);


* Re: [PATCH 3/3] slab: Use get_node() and kmem_cache_node() functions
  2014-06-12  6:35   ` Joonsoo Kim
@ 2014-06-13 16:32     ` Christoph Lameter
  0 siblings, 0 replies; 14+ messages in thread
From: Christoph Lameter @ 2014-06-13 16:32 UTC (permalink / raw)
  To: Joonsoo Kim; +Cc: Pekka Enberg, David Rientjes, linux-mm, Andrew Morton

On Thu, 12 Jun 2014, Joonsoo Kim wrote:

> > @@ -3759,8 +3746,8 @@ fail:
> >  		/* Cache is not active yet. Roll back what we did */
> >  		node--;
> >  		while (node >= 0) {
> > -			if (cachep->node[node]) {
> > -				n = cachep->node[node];
> > +			if (get_node(cachep, node)) {
> > +				n = get_node(cachep, node);
>
> Could you do this as following?
>
> n = get_node(cachep, node);
> if (n) {
>         ...
> }

Sure....

Subject: slab: Fixes to earlier patch

Signed-off-by: Christoph Lameter <cl@linux.com>

Index: linux/mm/slab.c
===================================================================
--- linux.orig/mm/slab.c	2014-06-13 11:12:05.018384359 -0500
+++ linux/mm/slab.c	2014-06-13 11:11:57.970611243 -0500
@@ -528,8 +528,8 @@ static inline void on_slab_lock_classes(
 	struct kmem_cache_node *n;

 	VM_BUG_ON(OFF_SLAB(cachep));
-	for_each_kmem_cache_node(cachep, node, h)
-		on_slab_lock_classes_node(cachep, h);
+	for_each_kmem_cache_node(cachep, node, n)
+		on_slab_lock_classes_node(cachep, n);
 }

 static inline void init_lock_keys(void)
@@ -553,7 +553,7 @@ static inline void on_slab_lock_classes(
 }

 static inline void on_slab_lock_classes_node(struct kmem_cache *cachep,
-	int node, struct kmem_cache_node *n)
+	struct kmem_cache_node *n)
 {
 }

@@ -771,7 +771,7 @@ static inline bool is_slab_pfmemalloc(st
 static void recheck_pfmemalloc_active(struct kmem_cache *cachep,
 						struct array_cache *ac)
 {
-	struct kmem_cache_node *n = get_node(cachep,numa_mem_id());
+	struct kmem_cache_node *n = get_node(cachep, numa_mem_id());
 	struct page *page;
 	unsigned long flags;

@@ -1256,7 +1256,7 @@ static int cpuup_prepare(long cpu)
 			slab_set_debugobj_lock_classes_node(cachep, node);
 		else if (!OFF_SLAB(cachep) &&
 			 !(cachep->flags & SLAB_DESTROY_BY_RCU))
-			on_slab_lock_classes_node(cachep, node, n);
+			on_slab_lock_classes_node(cachep, n);
 	}
 	init_node_lock_keys(node);

@@ -3746,9 +3746,8 @@ fail:
 		/* Cache is not active yet. Roll back what we did */
 		node--;
 		while (node >= 0) {
-			if (get_node(cachep, node)) {
-				n = get_node(cachep, node);
-
+			n = get_node(cachep, node);
+			if (n) {
 				kfree(n->shared);
 				free_alien_cache(n->alien);
 				kfree(n);


* Re: [PATCH 1/3] slab common: Add functions for kmem_cache_node access
  2014-06-11 19:15 ` [PATCH 1/3] slab common: Add functions for kmem_cache_node access Christoph Lameter
  2014-06-11 23:07   ` David Rientjes
  2014-06-12  6:10   ` Joonsoo Kim
@ 2014-06-17 21:17   ` Andrew Morton
  2014-06-17 21:45     ` David Rientjes
  2 siblings, 1 reply; 14+ messages in thread
From: Andrew Morton @ 2014-06-17 21:17 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Pekka Enberg, Joonsoo Kim, David Rientjes, linux-mm

On Wed, 11 Jun 2014 14:15:11 -0500 Christoph Lameter <cl@linux.com> wrote:

> These functions allow us to eliminate repeatedly used code in both
> SLAB and SLUB and also allow for the insertion of debugging code
> that may be needed in the development process.
> 
> ...
>
> --- linux.orig/mm/slab.h	2014-06-10 14:18:11.506956436 -0500
> +++ linux/mm/slab.h	2014-06-10 14:21:51.279893231 -0500
> @@ -294,5 +294,18 @@ struct kmem_cache_node {
>  
>  };
>  
> +static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
> +{
> +	return s->node[node];
> +}
> +
> +/*
> + * Iterator over all nodes. The body will be executed for each node that has
> + * a kmem_cache_node structure allocated (which is true for all online nodes)
> + */
> +#define for_each_kmem_cache_node(__s, __node, __n) \
> +	for (__node = 0; __n = get_node(__s, __node), __node < nr_node_ids; __node++) \
> +		 if (__n)

Clueless newbs would be aided if this comment were to describe the
iterator's locking requirements.



* Re: [PATCH 1/3] slab common: Add functions for kmem_cache_node access
  2014-06-17 21:17   ` Andrew Morton
@ 2014-06-17 21:45     ` David Rientjes
  0 siblings, 0 replies; 14+ messages in thread
From: David Rientjes @ 2014-06-17 21:45 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Christoph Lameter, Pekka Enberg, Joonsoo Kim, linux-mm

On Tue, 17 Jun 2014, Andrew Morton wrote:

> On Wed, 11 Jun 2014 14:15:11 -0500 Christoph Lameter <cl@linux.com> wrote:
> 
> > These functions allow us to eliminate repeatedly used code in both
> > SLAB and SLUB and also allow for the insertion of debugging code
> > that may be needed in the development process.
> > 
> > ...
> >
> > --- linux.orig/mm/slab.h	2014-06-10 14:18:11.506956436 -0500
> > +++ linux/mm/slab.h	2014-06-10 14:21:51.279893231 -0500
> > @@ -294,5 +294,18 @@ struct kmem_cache_node {
> >  
> >  };
> >  
> > +static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
> > +{
> > +	return s->node[node];
> > +}
> > +
> > +/*
> > + * Iterator over all nodes. The body will be executed for each node that has
> > + * a kmem_cache_node structure allocated (which is true for all online nodes)
> > + */
> > +#define for_each_kmem_cache_node(__s, __node, __n) \
> > +	for (__node = 0; __n = get_node(__s, __node), __node < nr_node_ids; __node++) \
> > +		 if (__n)
> 
> Clueless newbs would be aided if this comment were to describe the
> iterator's locking requirements.
> 

There are no locking requirements; if the nodelist is initialized then we
are good to go.
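
If the comment were extended along those lines, one possible wording (purely
illustrative, not something posted in this thread) would be:

	/*
	 * Iterator over all nodes. The body will be executed for each node
	 * that has a kmem_cache_node structure allocated (which is true for
	 * all online nodes). No locking is required; entries that are still
	 * NULL are simply skipped, so the iterator is safe to use once the
	 * node array has been initialized.
	 */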


* Re: [PATCH 2/3] slub: Use new node functions
  2014-06-13 16:02     ` Christoph Lameter
@ 2014-06-17 21:47       ` David Rientjes
  0 siblings, 0 replies; 14+ messages in thread
From: David Rientjes @ 2014-06-17 21:47 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Pekka Enberg, Joonsoo Kim, linux-mm, Andrew Morton

On Fri, 13 Jun 2014, Christoph Lameter wrote:

> On Wed, 11 Jun 2014, David Rientjes wrote:
> 
> > > +	for_each_kmem_cache_node(s, node, n) {
> > >
> > >  		free_partial(s, n);
> > >  		if (n->nr_partial || slabs_node(s, node))
> >
> > Newline not removed?
> 
> Ok got through the file and removed all the lines after
> for_each_kmem_cache_node.
> 
> >
> > > @@ -3407,11 +3401,7 @@ int __kmem_cache_shrink(struct kmem_cach
> > >  		return -ENOMEM;
> > >
> > >  	flush_all(s);
> > > -	for_each_node_state(node, N_NORMAL_MEMORY) {
> > > -		n = get_node(s, node);
> > > -
> > > -		if (!n->nr_partial)
> > > -			continue;
> > > +	for_each_kmem_cache_node(s, node, n) {
> > >
> > >  		for (i = 0; i < objects; i++)
> > >  			INIT_LIST_HEAD(slabs_by_inuse + i);
> >
> > Is there any reason not to keep the !n->nr_partial check to avoid taking
> > n->list_lock unnecessarily?
> 
> No this was simply a mistake the check needs to be preserved.
> 
> 
> Subject: slub: Fix up earlier patch
> 
> Signed-off-by: Christoph Lameter <cl@linux.com>

Thanks!

Acked-by: David Rientjes <rientjes@google.com>

as merged in -mm.

