linux-mm.kvack.org archive mirror
* [PATCH v3 0/5] slab: implement byte sized indexes for the freelist of a slab
@ 2013-12-02  8:49 Joonsoo Kim
  2013-12-02  8:49 ` [PATCH v3 1/5] slab: factor out calculate nr objects in cache_estimate Joonsoo Kim
                   ` (4 more replies)
  0 siblings, 5 replies; 18+ messages in thread
From: Joonsoo Kim @ 2013-12-02  8:49 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Christoph Lameter, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim, Joonsoo Kim

This patchset implements byte sized indexes for the freelist of a slab.

Currently, the freelist of a slab consists of unsigned int sized indexes.
Most slabs have fewer than 256 objects, so most of that index space is
wasted. To reduce this overhead, this patchset implements byte sized
indexes for the freelist of a slab. With it, we can save 3 bytes per object.

Below are some numbers from 'cat /proc/slabinfo'.

* Before *
kmalloc-512          525    640    512    8    1 : tunables   54   27    0 : slabdata     80     80      0
kmalloc-256          210    210    256   15    1 : tunables  120   60    0 : slabdata     14     14      0
kmalloc-192         1016   1040    192   20    1 : tunables  120   60    0 : slabdata     52     52      0
kmalloc-96           560    620    128   31    1 : tunables  120   60    0 : slabdata     20     20      0
kmalloc-64          2148   2280     64   60    1 : tunables  120   60    0 : slabdata     38     38      0
kmalloc-128          647    682    128   31    1 : tunables  120   60    0 : slabdata     22     22      0
kmalloc-32         11360  11413     32  113    1 : tunables  120   60    0 : slabdata    101    101      0
kmem_cache           197    200    192   20    1 : tunables  120   60    0 : slabdata     10     10      0

* After *
kmalloc-512          521    648    512    8    1 : tunables   54   27    0 : slabdata     81     81      0
kmalloc-256          208    208    256   16    1 : tunables  120   60    0 : slabdata     13     13      0
kmalloc-192         1029   1029    192   21    1 : tunables  120   60    0 : slabdata     49     49      0
kmalloc-96           529    589    128   31    1 : tunables  120   60    0 : slabdata     19     19      0
kmalloc-64          2142   2142     64   63    1 : tunables  120   60    0 : slabdata     34     34      0
kmalloc-128          660    682    128   31    1 : tunables  120   60    0 : slabdata     22     22      0
kmalloc-32         11716  11780     32  124    1 : tunables  120   60    0 : slabdata     95     95      0
kmem_cache           197    210    192   21    1 : tunables  120   60    0 : slabdata     10     10      0

kmem_caches consisting of objects less than or equal to 256 bytes now hold
one or more additional objects per slab. In the case of kmalloc-32, we get
11 more objects per slab, so 352 bytes (11 * 32) are saved, roughly a 9%
memory saving. Of course, this percentage decreases as the number of
objects in a slab decreases.
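
As a quick cross-check of the kmalloc-32 numbers, the objects-per-slab
count can be estimated in userspace. This is a minimal sketch that assumes
a 4096-byte page and ignores the freelist alignment step the kernel
performs:

#include <stdio.h>

/*
 * Each object costs its own size plus one freelist index entry:
 * 4 bytes (unsigned int) before this patchset, 1 byte after.
 */
static int objs_per_slab(int slab_size, int obj_size, int idx_size)
{
	return slab_size / (obj_size + idx_size);
}

int main(void)
{
	printf("before: %d\n", objs_per_slab(4096, 32, 4));	/* 113 */
	printf("after:  %d\n", objs_per_slab(4096, 32, 1));	/* 124 */
	return 0;
}

This reproduces the 113 -> 124 change visible in the kmalloc-32 rows above.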

Here are the performance results on my 4-CPU machine.

* Before *

 Performance counter stats for 'perf bench sched messaging -g 50 -l 1000' (10 runs):

       229,945,138 cache-misses                                                  ( +-  0.23% )

      11.627897174 seconds time elapsed                                          ( +-  0.14% )

* After *

 Performance counter stats for 'perf bench sched messaging -g 50 -l 1000' (10 runs):

       218,640,472 cache-misses                                                  ( +-  0.42% )

      11.504999837 seconds time elapsed                                          ( +-  0.21% )

Cache misses are reduced by this patchset by roughly 5%,
and elapsed time is improved by about 1%.

This patchset is based on an idea from Christoph:
https://lkml.org/lkml/2013/8/23/315

Patches are on top of v3.13-rc1.

Thanks.

Joonsoo Kim (5):
  slab: factor out calculate nr objects in cache_estimate
  slab: introduce helper functions to get/set free object
  slab: restrict the number of objects in a slab
  slab: introduce byte sized index for the freelist of a slab
  slab: make more slab management structure off the slab

 include/linux/slab.h |   11 ++++++
 mm/slab.c            |   97 +++++++++++++++++++++++++++++++++-----------------
 2 files changed, 76 insertions(+), 32 deletions(-)

-- 
1.7.9.5


* [PATCH v3 1/5] slab: factor out calculate nr objects in cache_estimate
  2013-12-02  8:49 [PATCH v3 0/5] slab: implement byte sized indexes for the freelist of a slab Joonsoo Kim
@ 2013-12-02  8:49 ` Joonsoo Kim
  2014-01-15  4:54   ` David Rientjes
  2013-12-02  8:49 ` [PATCH v3 2/5] slab: introduce helper functions to get/set free object Joonsoo Kim
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 18+ messages in thread
From: Joonsoo Kim @ 2013-12-02  8:49 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Christoph Lameter, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim, Joonsoo Kim

This logic is not simple to understand, so factor it out into a separate
function to help readability. Additionally, the following patch, which
makes the freelist use a differently sized index depending on the number
of objects, can reuse this change.

Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/mm/slab.c b/mm/slab.c
index eb043bf..e749f75 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -565,9 +565,31 @@ static inline struct array_cache *cpu_cache_get(struct kmem_cache *cachep)
 	return cachep->array[smp_processor_id()];
 }
 
-static size_t slab_mgmt_size(size_t nr_objs, size_t align)
+static int calculate_nr_objs(size_t slab_size, size_t buffer_size,
+				size_t idx_size, size_t align)
 {
-	return ALIGN(nr_objs * sizeof(unsigned int), align);
+	int nr_objs;
+	size_t freelist_size;
+
+	/*
+	 * Ignore padding for the initial guess. The padding
+	 * is at most @align-1 bytes, and @buffer_size is at
+	 * least @align. In the worst case, this result will
+	 * be one greater than the number of objects that fit
+	 * into the memory allocation when taking the padding
+	 * into account.
+	 */
+	nr_objs = slab_size / (buffer_size + idx_size);
+
+	/*
+	 * This calculated number will be either the right
+	 * amount, or one greater than what we want.
+	 */
+	freelist_size = slab_size - nr_objs * buffer_size;
+	if (freelist_size < ALIGN(nr_objs * idx_size, align))
+		nr_objs--;
+
+	return nr_objs;
 }
 
 /*
@@ -600,25 +622,9 @@ static void cache_estimate(unsigned long gfporder, size_t buffer_size,
 		nr_objs = slab_size / buffer_size;
 
 	} else {
-		/*
-		 * Ignore padding for the initial guess. The padding
-		 * is at most @align-1 bytes, and @buffer_size is at
-		 * least @align. In the worst case, this result will
-		 * be one greater than the number of objects that fit
-		 * into the memory allocation when taking the padding
-		 * into account.
-		 */
-		nr_objs = (slab_size) / (buffer_size + sizeof(unsigned int));
-
-		/*
-		 * This calculated number will be either the right
-		 * amount, or one greater than what we want.
-		 */
-		if (slab_mgmt_size(nr_objs, align) + nr_objs*buffer_size
-		       > slab_size)
-			nr_objs--;
-
-		mgmt_size = slab_mgmt_size(nr_objs, align);
+		nr_objs = calculate_nr_objs(slab_size, buffer_size,
+					sizeof(unsigned int), align);
+		mgmt_size = ALIGN(nr_objs * sizeof(unsigned int), align);
 	}
 	*num = nr_objs;
 	*left_over = slab_size - nr_objs*buffer_size - mgmt_size;
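
For a concrete feel for the correction step, the same arithmetic can be run
standalone. This sketch plugs in kmalloc-256 values (slab_size = 4096,
buffer_size = 256, idx_size = 4) with an assumed cache-line align of 64:

#include <stdio.h>

#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	int nr_objs = 4096 / (256 + 4);			/* initial guess: 15 */
	int freelist_size = 4096 - nr_objs * 256;	/* 256 bytes remain */

	/* ALIGN(15 * 4, 64) = 64 <= 256, so the guess of 15 stands */
	if (freelist_size < ALIGN(nr_objs * 4, 64))
		nr_objs--;

	printf("nr_objs = %d\n", nr_objs);	/* 15, matching kmalloc-256 */
	return 0;
}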
-- 
1.7.9.5


* [PATCH v3 2/5] slab: introduce helper functions to get/set free object
  2013-12-02  8:49 [PATCH v3 0/5] slab: implement byte sized indexes for the freelist of a slab Joonsoo Kim
  2013-12-02  8:49 ` [PATCH v3 1/5] slab: factor out calculate nr objects in cache_estimate Joonsoo Kim
@ 2013-12-02  8:49 ` Joonsoo Kim
  2014-01-15  4:57   ` David Rientjes
  2013-12-02  8:49 ` [PATCH v3 3/5] slab: restrict the number of objects in a slab Joonsoo Kim
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 18+ messages in thread
From: Joonsoo Kim @ 2013-12-02  8:49 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Christoph Lameter, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim, Joonsoo Kim

In the following patches, the way free objects are get and set in the
freelist changes so that a simple cast no longer works. Therefore,
introduce helper functions.

Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/mm/slab.c b/mm/slab.c
index e749f75..77f9eae 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2548,9 +2548,15 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
 	return freelist;
 }
 
-static inline unsigned int *slab_freelist(struct page *page)
+static inline unsigned int get_free_obj(struct page *page, unsigned int idx)
 {
-	return (unsigned int *)(page->freelist);
+	return ((unsigned int *)page->freelist)[idx];
+}
+
+static inline void set_free_obj(struct page *page,
+					unsigned int idx, unsigned int val)
+{
+	((unsigned int *)(page->freelist))[idx] = val;
 }
 
 static void cache_init_objs(struct kmem_cache *cachep,
@@ -2595,7 +2601,7 @@ static void cache_init_objs(struct kmem_cache *cachep,
 		if (cachep->ctor)
 			cachep->ctor(objp);
 #endif
-		slab_freelist(page)[i] = i;
+		set_free_obj(page, i, i);
 	}
 }
 
@@ -2614,7 +2620,7 @@ static void *slab_get_obj(struct kmem_cache *cachep, struct page *page,
 {
 	void *objp;
 
-	objp = index_to_obj(cachep, page, slab_freelist(page)[page->active]);
+	objp = index_to_obj(cachep, page, get_free_obj(page, page->active));
 	page->active++;
 #if DEBUG
 	WARN_ON(page_to_nid(virt_to_page(objp)) != nodeid);
@@ -2635,7 +2641,7 @@ static void slab_put_obj(struct kmem_cache *cachep, struct page *page,
 
 	/* Verify double free bug */
 	for (i = page->active; i < cachep->num; i++) {
-		if (slab_freelist(page)[i] == objnr) {
+		if (get_free_obj(page, i) == objnr) {
 			printk(KERN_ERR "slab: double free detected in cache "
 					"'%s', objp %p\n", cachep->name, objp);
 			BUG();
@@ -2643,7 +2649,7 @@ static void slab_put_obj(struct kmem_cache *cachep, struct page *page,
 	}
 #endif
 	page->active--;
-	slab_freelist(page)[page->active] = objnr;
+	set_free_obj(page, page->active, objnr);
 }
 
 /*
@@ -4216,7 +4222,7 @@ static void handle_slab(unsigned long *n, struct kmem_cache *c,
 
 		for (j = page->active; j < c->num; j++) {
 			/* Skip freed item */
-			if (slab_freelist(page)[j] == i) {
+			if (get_free_obj(page, j) == i) {
 				active = false;
 				break;
 			}
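
To show the access pattern these helpers encapsulate, here is a minimal
userspace model (a sketch, not kernel code) of the freelist as an array of
object indexes, where entries from position active onward name the free
objects:

#include <stdio.h>

int main(void)
{
	unsigned int freelist[4] = { 0, 1, 2, 3 };
	unsigned int active = 0;		/* all four objects free */

	/* allocate, as slab_get_obj() does: consume freelist[active] */
	unsigned int obj = freelist[active];
	active++;

	/* free, as slab_put_obj() does: put the index back at --active */
	active--;
	freelist[active] = obj;

	printf("active = %u\n", active);	/* 0: everything free again */
	return 0;
}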
-- 
1.7.9.5


* [PATCH v3 3/5] slab: restrict the number of objects in a slab
  2013-12-02  8:49 [PATCH v3 0/5] slab: implement byte sized indexes for the freelist of a slab Joonsoo Kim
  2013-12-02  8:49 ` [PATCH v3 1/5] slab: factor out calculate nr objects in cache_estimate Joonsoo Kim
  2013-12-02  8:49 ` [PATCH v3 2/5] slab: introduce helper functions to get/set free object Joonsoo Kim
@ 2013-12-02  8:49 ` Joonsoo Kim
  2013-12-02 19:45   ` Christoph Lameter
  2014-01-15  5:05   ` David Rientjes
  2013-12-02  8:49 ` [PATCH v3 4/5] slab: introduce byte sized index for the freelist of " Joonsoo Kim
  2013-12-02  8:49 ` [PATCH v3 5/5] slab: make more slab management structure off the slab Joonsoo Kim
  4 siblings, 2 replies; 18+ messages in thread
From: Joonsoo Kim @ 2013-12-02  8:49 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Christoph Lameter, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim, Joonsoo Kim

To prepare for implementing a byte sized index for managing the freelist
of a slab, we should restrict the number of objects in a slab to 256 or
fewer, since a byte can only represent 256 different values.
Setting the object size to a value equal to or greater than the newly
introduced SLAB_OBJ_MIN_SIZE ensures that the number of objects in a slab
is less than or equal to 256 for a slab with 1 page.

If the page size is larger than 4096, the above assumption would be wrong;
in this case, we fall back to a 2 byte sized index.

If the minimum kmalloc size is less than 16, we use it as the minimum
object size and give up this optimization.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/include/linux/slab.h b/include/linux/slab.h
index c2bba24..23e1fa1 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -201,6 +201,17 @@ struct kmem_cache {
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW	5
 #endif
+
+/*
+ * This restriction comes from the byte sized index implementation.
+ * The page size is normally 2^12 bytes and, in this case, if we want to
+ * use a byte sized index which can represent 2^8 entries, the size of the
+ * object should be equal to or greater than 2^12 / 2^8 = 2^4 = 16.
+ * If the minimum size of kmalloc is less than 16, we use it as the minimum
+ * object size and give up using the byte sized index.
+ */
+#define SLAB_OBJ_MIN_SIZE	(KMALLOC_SHIFT_LOW < 4 ? \
+				(1 << KMALLOC_SHIFT_LOW) : 16)
 #endif
 
 #ifdef CONFIG_SLUB
diff --git a/mm/slab.c b/mm/slab.c
index 77f9eae..7c3c132 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -157,6 +157,17 @@
 #define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN
 #endif
 
+#define FREELIST_BYTE_INDEX (((PAGE_SIZE >> BITS_PER_BYTE) \
+				<= SLAB_OBJ_MIN_SIZE) ? 1 : 0)
+
+#if FREELIST_BYTE_INDEX
+typedef unsigned char freelist_idx_t;
+#else
+typedef unsigned short freelist_idx_t;
+#endif
+
+#define SLAB_OBJ_MAX_NUM (1 << sizeof(freelist_idx_t) * BITS_PER_BYTE)
+
 /*
  * true if a page was allocated from pfmemalloc reserves for network-based
  * swap
@@ -2016,6 +2027,10 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
 		if (!num)
 			continue;
 
+		/* Can't handle number of objects more than SLAB_OBJ_MAX_NUM */
+		if (num > SLAB_OBJ_MAX_NUM)
+			break;
+
 		if (flags & CFLGS_OFF_SLAB) {
 			/*
 			 * Max number of objs-per-slab for caches which
@@ -2258,6 +2273,12 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
 		flags |= CFLGS_OFF_SLAB;
 
 	size = ALIGN(size, cachep->align);
+	/*
+	 * We should restrict the number of objects in a slab to implement
+	 * byte sized index. Refer comment on SLAB_OBJ_MIN_SIZE definition.
+	 */
+	if (FREELIST_BYTE_INDEX && size < SLAB_OBJ_MIN_SIZE)
+		size = ALIGN(SLAB_OBJ_MIN_SIZE, cachep->align);
 
 	left_over = calculate_slab_order(cachep, size, cachep->align, flags);
 
-- 
1.7.9.5


* [PATCH v3 4/5] slab: introduce byte sized index for the freelist of a slab
  2013-12-02  8:49 [PATCH v3 0/5] slab: implement byte sized indexes for the freelist of a slab Joonsoo Kim
                   ` (2 preceding siblings ...)
  2013-12-02  8:49 ` [PATCH v3 3/5] slab: restrict the number of objects in a slab Joonsoo Kim
@ 2013-12-02  8:49 ` Joonsoo Kim
  2013-12-03  2:25   ` Joonsoo Kim
  2014-01-15  5:08   ` David Rientjes
  2013-12-02  8:49 ` [PATCH v3 5/5] slab: make more slab management structure off the slab Joonsoo Kim
  4 siblings, 2 replies; 18+ messages in thread
From: Joonsoo Kim @ 2013-12-02  8:49 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Christoph Lameter, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim, Joonsoo Kim

Currently, the freelist of a slab consists of unsigned int sized indexes.
Since most slabs have fewer than 256 objects, such large indexes are
needless. For example, consider the minimum kmalloc slab: its object
size is 32 bytes and it consists of one page, so the 256 values of a
byte sized index are enough to cover all possible object indexes.

There can be some slabs whose object size is 8 bytes. We cannot handle
this case with a byte sized index, so we need to restrict the minimum
object size. Since such slabs are not common, the memory wasted in them
would be negligible.

Some architectures have a page size larger than 4096 bytes (one example
is the 64KB page size on PPC or IA64), so a byte sized index does not
fit them. In this case, we fall back to a two byte sized index.

Below are some numbers for this patch.

* Before *
kmalloc-512          525    640    512    8    1 : tunables   54   27    0 : slabdata     80     80      0
kmalloc-256          210    210    256   15    1 : tunables  120   60    0 : slabdata     14     14      0
kmalloc-192         1016   1040    192   20    1 : tunables  120   60    0 : slabdata     52     52      0
kmalloc-96           560    620    128   31    1 : tunables  120   60    0 : slabdata     20     20      0
kmalloc-64          2148   2280     64   60    1 : tunables  120   60    0 : slabdata     38     38      0
kmalloc-128          647    682    128   31    1 : tunables  120   60    0 : slabdata     22     22      0
kmalloc-32         11360  11413     32  113    1 : tunables  120   60    0 : slabdata    101    101      0
kmem_cache           197    200    192   20    1 : tunables  120   60    0 : slabdata     10     10      0

* After *
kmalloc-512          521    648    512    8    1 : tunables   54   27    0 : slabdata     81     81      0
kmalloc-256          208    208    256   16    1 : tunables  120   60    0 : slabdata     13     13      0
kmalloc-192         1029   1029    192   21    1 : tunables  120   60    0 : slabdata     49     49      0
kmalloc-96           529    589    128   31    1 : tunables  120   60    0 : slabdata     19     19      0
kmalloc-64          2142   2142     64   63    1 : tunables  120   60    0 : slabdata     34     34      0
kmalloc-128          660    682    128   31    1 : tunables  120   60    0 : slabdata     22     22      0
kmalloc-32         11716  11780     32  124    1 : tunables  120   60    0 : slabdata     95     95      0
kmem_cache           197    210    192   21    1 : tunables  120   60    0 : slabdata     10     10      0

kmem_caches consisting of objects less than or equal to 256 bytes now hold
one or more additional objects per slab. In the case of kmalloc-32, we get
11 more objects per slab, so 352 bytes (11 * 32) are saved, roughly a 9%
memory saving. Of course, this percentage decreases as the number of
objects in a slab decreases.

Here are the performance results on my 4-CPU machine.

* Before *

 Performance counter stats for 'perf bench sched messaging -g 50 -l 1000' (10 runs):

       229,945,138 cache-misses                                                  ( +-  0.23% )

      11.627897174 seconds time elapsed                                          ( +-  0.14% )

* After *

 Performance counter stats for 'perf bench sched messaging -g 50 -l 1000' (10 runs):

       218,640,472 cache-misses                                                  ( +-  0.42% )

      11.504999837 seconds time elapsed                                          ( +-  0.21% )

Cache misses are reduced by this patchset by roughly 5%,
and elapsed time is improved by about 1%.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/mm/slab.c b/mm/slab.c
index 7c3c132..7fab788 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -634,8 +634,8 @@ static void cache_estimate(unsigned long gfporder, size_t buffer_size,
 
 	} else {
 		nr_objs = calculate_nr_objs(slab_size, buffer_size,
-					sizeof(unsigned int), align);
-		mgmt_size = ALIGN(nr_objs * sizeof(unsigned int), align);
+					sizeof(freelist_idx_t), align);
+		mgmt_size = ALIGN(nr_objs * sizeof(freelist_idx_t), align);
 	}
 	*num = nr_objs;
 	*left_over = slab_size - nr_objs*buffer_size - mgmt_size;
@@ -2038,7 +2038,7 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
 			 * looping condition in cache_grow().
 			 */
 			offslab_limit = size;
-			offslab_limit /= sizeof(unsigned int);
+			offslab_limit /= sizeof(freelist_idx_t);
 
  			if (num > offslab_limit)
 				break;
@@ -2286,7 +2286,7 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
 		return -E2BIG;
 
 	freelist_size =
-		ALIGN(cachep->num * sizeof(unsigned int), cachep->align);
+		ALIGN(cachep->num * sizeof(freelist_idx_t), cachep->align);
 
 	/*
 	 * If the slab has been placed off-slab, and we have enough space then
@@ -2299,7 +2299,7 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
 
 	if (flags & CFLGS_OFF_SLAB) {
 		/* really off slab. No need for manual alignment */
-		freelist_size = cachep->num * sizeof(unsigned int);
+		freelist_size = cachep->num * sizeof(freelist_idx_t);
 
 #ifdef CONFIG_PAGE_POISONING
 		/* If we're going to use the generic kernel_map_pages()
@@ -2569,15 +2569,15 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
 	return freelist;
 }
 
-static inline unsigned int get_free_obj(struct page *page, unsigned int idx)
+static inline freelist_idx_t get_free_obj(struct page *page, unsigned char idx)
 {
-	return ((unsigned int *)page->freelist)[idx];
+	return ((freelist_idx_t *)page->freelist)[idx];
 }
 
 static inline void set_free_obj(struct page *page,
-					unsigned int idx, unsigned int val)
+					unsigned char idx, freelist_idx_t val)
 {
-	((unsigned int *)(page->freelist))[idx] = val;
+	((freelist_idx_t *)(page->freelist))[idx] = val;
 }
 
 static void cache_init_objs(struct kmem_cache *cachep,
-- 
1.7.9.5


* [PATCH v3 5/5] slab: make more slab management structure off the slab
  2013-12-02  8:49 [PATCH v3 0/5] slab: implement byte sized indexes for the freelist of a slab Joonsoo Kim
                   ` (3 preceding siblings ...)
  2013-12-02  8:49 ` [PATCH v3 4/5] slab: introduce byte sized index for the freelist of " Joonsoo Kim
@ 2013-12-02  8:49 ` Joonsoo Kim
  2013-12-02 14:58   ` Christoph Lameter
  2014-01-15  5:09   ` David Rientjes
  4 siblings, 2 replies; 18+ messages in thread
From: Joonsoo Kim @ 2013-12-02  8:49 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Christoph Lameter, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim, Joonsoo Kim

Now, the size of the freelist for the slab management diminish,
so that the on-slab management structure can waste large space
if the object of the slab is large.

Consider a 128 byte sized slab. If on-slab is used, 31 objects can fit
in the slab. The size of the freelist for this case would be 31 bytes,
so 97 bytes, that is, more than 75% of the object size, are wasted.

In a 64 byte sized slab case, no space is wasted if we use on-slab.
So set the off-slab determining constraint to 128 bytes.

Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/mm/slab.c b/mm/slab.c
index 7fab788..1a7f19d 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2264,7 +2264,7 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
 	 * it too early on. Always use on-slab management when
 	 * SLAB_NOLEAKTRACE to avoid recursive calls into kmemleak)
 	 */
-	if ((size >= (PAGE_SIZE >> 3)) && !slab_early_init &&
+	if ((size >= (PAGE_SIZE >> 5)) && !slab_early_init &&
 	    !(flags & SLAB_NOLEAKTRACE))
 		/*
 		 * Size is large, assume best to place the slab management obj
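
The waste figures from the changelog can be checked directly; a sketch
assuming a 4096-byte page and 1-byte freelist indexes:

#include <stdio.h>

int main(void)
{
	/* 128-byte objects: 31 fit alongside a 31-byte freelist */
	int nr = 4096 / (128 + 1);		/* 31 */
	int leftover = 4096 - nr * 128;		/* 128 bytes */
	int wasted = leftover - nr;		/* 97 bytes unusable */

	printf("128B case: nr = %d, wasted = %d\n", nr, wasted);

	/* 64-byte objects: 63 objects plus a 63-byte freelist use 4095 bytes */
	printf("64B case: leftover = %d\n", 4096 - 63 * 64 - 63);
	return 0;
}

With 128-byte objects nearly an object's worth of space goes unused
on-slab, which is why the threshold moves from PAGE_SIZE >> 3 (512) down
to PAGE_SIZE >> 5 (128).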
-- 
1.7.9.5


* Re: [PATCH v3 5/5] slab: make more slab management structure off the slab
  2013-12-02  8:49 ` [PATCH v3 5/5] slab: make more slab management structure off the slab Joonsoo Kim
@ 2013-12-02 14:58   ` Christoph Lameter
  2013-12-03  2:13     ` Joonsoo Kim
  2014-01-15  5:09   ` David Rientjes
  1 sibling, 1 reply; 18+ messages in thread
From: Christoph Lameter @ 2013-12-02 14:58 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Pekka Enberg, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim

On Mon, 2 Dec 2013, Joonsoo Kim wrote:

> Now, the size of the freelist for the slab management diminish,
> so that the on-slab management structure can waste large space
> if the object of the slab is large.

Hmmm.. That is confusing to me. "Since the size of the freelist has shrunk
significantly we have to adjust the heuristic for making the on/off slab
placement decision"?

Make this clearer.

Acked-by: Christoph Lameter <cl@linux.com>

> Consider a 128 byte sized slab. If on-slab is used, 31 objects can fit
> in the slab. The size of the freelist for this case would be 31 bytes,
> so 97 bytes, that is, more than 75% of the object size, are wasted.
>
> In a 64 byte sized slab case, no space is wasted if we use on-slab.
> So set the off-slab determining constraint to 128 bytes.
>
> Acked-by: Christoph Lameter <cl@linux.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> diff --git a/mm/slab.c b/mm/slab.c
> index 7fab788..1a7f19d 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2264,7 +2264,7 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
>  	 * it too early on. Always use on-slab management when
>  	 * SLAB_NOLEAKTRACE to avoid recursive calls into kmemleak)
>  	 */
> -	if ((size >= (PAGE_SIZE >> 3)) && !slab_early_init &&
> +	if ((size >= (PAGE_SIZE >> 5)) && !slab_early_init &&
>  	    !(flags & SLAB_NOLEAKTRACE))
>  		/*
>  		 * Size is large, assume best to place the slab management obj
>


* Re: [PATCH v3 3/5] slab: restrict the number of objects in a slab
  2013-12-02  8:49 ` [PATCH v3 3/5] slab: restrict the number of objects in a slab Joonsoo Kim
@ 2013-12-02 19:45   ` Christoph Lameter
  2014-01-15  5:05   ` David Rientjes
  1 sibling, 0 replies; 18+ messages in thread
From: Christoph Lameter @ 2013-12-02 19:45 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Pekka Enberg, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim

On Mon, 2 Dec 2013, Joonsoo Kim wrote:

> If the page size is larger than 4096, the above assumption would be wrong;
> in this case, we fall back to a 2 byte sized index.

Acked-by: Christoph Lameter <cl@linux.com>


* Re: [PATCH v3 5/5] slab: make more slab management structure off the slab
  2013-12-02 14:58   ` Christoph Lameter
@ 2013-12-03  2:13     ` Joonsoo Kim
  2013-12-13  7:03       ` Joonsoo Kim
  0 siblings, 1 reply; 18+ messages in thread
From: Joonsoo Kim @ 2013-12-03  2:13 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Pekka Enberg, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, linux-kernel

On Mon, Dec 02, 2013 at 02:58:41PM +0000, Christoph Lameter wrote:
> On Mon, 2 Dec 2013, Joonsoo Kim wrote:
> 
> > Now, the size of the freelist for the slab management diminish,
> > so that the on-slab management structure can waste large space
> > if the object of the slab is large.
> 
> Hmmm.. That is confusing to me. "Since the size of the freelist has shrunk
> significantly we have to adjust the heuristic for making the on/off slab
> placement decision"?
> 
> Make this clearer.

Yes, your understanding is right.
I will replace the above line with yours.

Thanks.

> 
> Acked-by: Christoph Lameter <cl@linux.com>


* Re: [PATCH v3 4/5] slab: introduce byte sized index for the freelist of a slab
  2013-12-02  8:49 ` [PATCH v3 4/5] slab: introduce byte sized index for the freelist of " Joonsoo Kim
@ 2013-12-03  2:25   ` Joonsoo Kim
  2013-12-03 15:24     ` Christoph Lameter
  2014-01-15  5:08   ` David Rientjes
  1 sibling, 1 reply; 18+ messages in thread
From: Joonsoo Kim @ 2013-12-03  2:25 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Christoph Lameter, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, linux-kernel

On Mon, Dec 02, 2013 at 05:49:42PM +0900, Joonsoo Kim wrote:
> Currently, the freelist of a slab consists of unsigned int sized indexes.
> Since most slabs have fewer than 256 objects, such large indexes are
> needless. For example, consider the minimum kmalloc slab: its object
> size is 32 bytes and it consists of one page, so the 256 values of a
> byte sized index are enough to cover all possible object indexes.

[ ... ]

> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Hello, Christoph.

Can I get your ACK for this patch?

Thanks.


* Re: [PATCH v3 4/5] slab: introduce byte sized index for the freelist of a slab
  2013-12-03  2:25   ` Joonsoo Kim
@ 2013-12-03 15:24     ` Christoph Lameter
  0 siblings, 0 replies; 18+ messages in thread
From: Christoph Lameter @ 2013-12-03 15:24 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Pekka Enberg, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, linux-kernel

> Can I get your ACK for this patch?

Sure.

Acked-by: Christoph Lameter <cl@linux.com>


* Re: [PATCH v3 5/5] slab: make more slab management structure off the slab
  2013-12-03  2:13     ` Joonsoo Kim
@ 2013-12-13  7:03       ` Joonsoo Kim
  2014-02-08 10:17         ` Pekka Enberg
  0 siblings, 1 reply; 18+ messages in thread
From: Joonsoo Kim @ 2013-12-13  7:03 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Pekka Enberg, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, linux-kernel

On Tue, Dec 03, 2013 at 11:13:08AM +0900, Joonsoo Kim wrote:
> On Mon, Dec 02, 2013 at 02:58:41PM +0000, Christoph Lameter wrote:
> > On Mon, 2 Dec 2013, Joonsoo Kim wrote:
> > 
> > > Now, the size of the freelist for the slab management diminish,
> > > so that the on-slab management structure can waste large space
> > > if the object of the slab is large.
> > 
> > Hmmm.. That is confusing to me. "Since the size of the freelist has shrunk
> > significantly we have to adjust the heuristic for making the on/off slab
> > placement decision"?
> > 
> > Make this clearer.
> 
> Yes. your understanding is right.
> I will replace above line with yours.
> 
> Thanks.
> 
> > 
> > Acked-by: Christoph Lameter <cl@linux.com>

Hello, Pekka.

Below is the updated patch for 5/5 in this series.
Now I have acks from Christoph on all patches in this series.
So, could you merge this patchset? :)
If you want me to resend the whole set with the proper acks, I will do it
with pleasure.

Thanks.

--------8<---------------


* Re: [PATCH v3 1/5] slab: factor out calculate nr objects in cache_estimate
  2013-12-02  8:49 ` [PATCH v3 1/5] slab: factor out calculate nr objects in cache_estimate Joonsoo Kim
@ 2014-01-15  4:54   ` David Rientjes
  0 siblings, 0 replies; 18+ messages in thread
From: David Rientjes @ 2014-01-15  4:54 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Pekka Enberg, Christoph Lameter, Andrew Morton, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim

On Mon, 2 Dec 2013, Joonsoo Kim wrote:

> This logic is not simple to understand, so factor it out into a separate
> function to help readability. Additionally, the following patch, which
> makes the freelist use a differently sized index depending on the number
> of objects, can reuse this change.
> 
> Acked-by: Christoph Lameter <cl@linux.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: David Rientjes <rientjes@google.com>


* Re: [PATCH v3 2/5] slab: introduce helper functions to get/set free object
  2013-12-02  8:49 ` [PATCH v3 2/5] slab: introduce helper functions to get/set free object Joonsoo Kim
@ 2014-01-15  4:57   ` David Rientjes
  0 siblings, 0 replies; 18+ messages in thread
From: David Rientjes @ 2014-01-15  4:57 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Pekka Enberg, Christoph Lameter, Andrew Morton, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim

On Mon, 2 Dec 2013, Joonsoo Kim wrote:

> In the following patches, the way free objects are get and set in the
> freelist changes so that a simple cast no longer works. Therefore,
> introduce helper functions.
> 
> Acked-by: Christoph Lameter <cl@linux.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: David Rientjes <rientjes@google.com>


* Re: [PATCH v3 3/5] slab: restrict the number of objects in a slab
  2013-12-02  8:49 ` [PATCH v3 3/5] slab: restrict the number of objects in a slab Joonsoo Kim
  2013-12-02 19:45   ` Christoph Lameter
@ 2014-01-15  5:05   ` David Rientjes
  1 sibling, 0 replies; 18+ messages in thread
From: David Rientjes @ 2014-01-15  5:05 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Pekka Enberg, Christoph Lameter, Andrew Morton, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim

On Mon, 2 Dec 2013, Joonsoo Kim wrote:

> To prepare for implementing a byte sized index for managing the freelist
> of a slab, we should restrict the number of objects in a slab to 256 or
> fewer, since a byte can only represent 256 different values.
> Setting the object size to a value equal to or greater than the newly
> introduced SLAB_OBJ_MIN_SIZE ensures that the number of objects in a slab
> is less than or equal to 256 for a slab with 1 page.
>
> If the page size is larger than 4096, the above assumption would be wrong;
> in this case, we fall back to a 2 byte sized index.
>
> If the minimum kmalloc size is less than 16, we use it as the minimum
> object size and give up this optimization.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: David Rientjes <rientjes@google.com>


* Re: [PATCH v3 4/5] slab: introduce byte sized index for the freelist of a slab
  2013-12-02  8:49 ` [PATCH v3 4/5] slab: introduce byte sized index for the freelist of " Joonsoo Kim
  2013-12-03  2:25   ` Joonsoo Kim
@ 2014-01-15  5:08   ` David Rientjes
  1 sibling, 0 replies; 18+ messages in thread
From: David Rientjes @ 2014-01-15  5:08 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Pekka Enberg, Christoph Lameter, Andrew Morton, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim

On Mon, 2 Dec 2013, Joonsoo Kim wrote:

> Currently, the freelist of a slab consists of unsigned int sized indexes.
> Since most slabs have fewer than 256 objects, such large indexes are
> needless. For example, consider the minimum kmalloc slab: its object
> size is 32 bytes and it consists of one page, so the 256 values of a
> byte sized index are enough to cover all possible object indexes.

[ ... ]

> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: David Rientjes <rientjes@google.com>


* Re: [PATCH v3 5/5] slab: make more slab management structure off the slab
  2013-12-02  8:49 ` [PATCH v3 5/5] slab: make more slab management structure off the slab Joonsoo Kim
  2013-12-02 14:58   ` Christoph Lameter
@ 2014-01-15  5:09   ` David Rientjes
  1 sibling, 0 replies; 18+ messages in thread
From: David Rientjes @ 2014-01-15  5:09 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Pekka Enberg, Christoph Lameter, Andrew Morton, Wanpeng Li,
	linux-mm, linux-kernel, Joonsoo Kim

On Mon, 2 Dec 2013, Joonsoo Kim wrote:

> Now, the size of the freelist for the slab management diminish,
> so that the on-slab management structure can waste large space
> if the object of the slab is large.
> 
> Consider a 128 byte sized slab. If on-slab is used, 31 objects can fit
> in the slab. The size of the freelist for this case would be 31 bytes,
> so 97 bytes, that is, more than 75% of the object size, are wasted.
>
> In a 64 byte sized slab case, no space is wasted if we use on-slab.
> So set the off-slab determining constraint to 128 bytes.
> 
> Acked-by: Christoph Lameter <cl@linux.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: David Rientjes <rientjes@google.com>


* Re: [PATCH v3 5/5] slab: make more slab management structure off the slab
  2013-12-13  7:03       ` Joonsoo Kim
@ 2014-02-08 10:17         ` Pekka Enberg
  0 siblings, 0 replies; 18+ messages in thread
From: Pekka Enberg @ 2014-02-08 10:17 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Christoph Lameter, Andrew Morton, David Rientjes, Wanpeng Li,
	linux-mm, LKML

On Fri, Dec 13, 2013 at 9:03 AM, Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:
> Hello, Pekka.
>
> Below is the updated patch for 5/5 in this series.
> Now I have acks from Christoph on all patches in this series.
> So, could you merge this patchset? :)
> If you want me to resend the whole set with the proper acks, I will do it
> with pleasure.

Applied, thanks!


Thread overview: 18+ messages
2013-12-02  8:49 [PATCH v3 0/5] slab: implement byte sized indexes for the freelist of a slab Joonsoo Kim
2013-12-02  8:49 ` [PATCH v3 1/5] slab: factor out calculate nr objects in cache_estimate Joonsoo Kim
2014-01-15  4:54   ` David Rientjes
2013-12-02  8:49 ` [PATCH v3 2/5] slab: introduce helper functions to get/set free object Joonsoo Kim
2014-01-15  4:57   ` David Rientjes
2013-12-02  8:49 ` [PATCH v3 3/5] slab: restrict the number of objects in a slab Joonsoo Kim
2013-12-02 19:45   ` Christoph Lameter
2014-01-15  5:05   ` David Rientjes
2013-12-02  8:49 ` [PATCH v3 4/5] slab: introduce byte sized index for the freelist of " Joonsoo Kim
2013-12-03  2:25   ` Joonsoo Kim
2013-12-03 15:24     ` Christoph Lameter
2014-01-15  5:08   ` David Rientjes
2013-12-02  8:49 ` [PATCH v3 5/5] slab: make more slab management structure off the slab Joonsoo Kim
2013-12-02 14:58   ` Christoph Lameter
2013-12-03  2:13     ` Joonsoo Kim
2013-12-13  7:03       ` Joonsoo Kim
2014-02-08 10:17         ` Pekka Enberg
2014-01-15  5:09   ` David Rientjes
