* [PATCH 01/25] slab: fixup calculate_alignment() argument type
@ 2018-03-05 20:07 Alexey Dobriyan
  2018-03-05 20:07 ` [PATCH 02/25] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (25 more replies)
  0 siblings, 26 replies; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index a1237d38a27e..7626a64b8f14 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -280,7 +280,7 @@ static inline void memcg_unlink_cache(struct kmem_cache *s)
  * Figure out what the alignment of the objects will be given a set of
  * flags, a user specified alignment and the size of the objects.
  */
-static unsigned long calculate_alignment(unsigned long flags,
+static unsigned long calculate_alignment(slab_flags_t flags,
 		unsigned long align, unsigned long size)
 {
 	/*
-- 
2.16.1


* [PATCH 02/25] slab: make kmalloc_index() return "unsigned int"
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:24   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 03/25] slab: make kmalloc_size() " Alexey Dobriyan
                   ` (24 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

kmalloc_index() returns an index into the array of kmalloc kmem caches,
therefore it should be unsigned.

Space savings with SLUB on trimmed down .config:

	add/remove: 0/1 grow/shrink: 6/56 up/down: 85/-557 (-472)
	Function                                     old     new   delta
	calculate_sizes                              924     983     +59
	on_freelist                                  589     604     +15
	init_cache_random_seq                        122     127      +5
	ext4_mb_init                                1206    1210      +4
	slab_pad_check.part                          270     271      +1
	cpu_partial_store                            112     113      +1
	usersize_show                                 28      27      -1
		...
	new_slab                                    1871    1837     -34
	slab_order                                   204       -    -204

This patch starts a series converting SLUB (mostly) to "unsigned int".

1) Most integers in the code are in fact unsigned entities: array indexes,
   lengths, buffer sizes, allocation orders. It is therefore better to use
   unsigned variables.

2) Some integers in the code are either "size_t" or "unsigned long"
   for no reason.

   size_t usually comes from people trying to maintain type correctness,
   figuring that since the "sizeof" operator returns size_t and
   memset()/memcpy() take size_t, everything passed to them should be
   size_t as well.

   However, the number of 4GB+ objects in the kernel is very small.
   Most, if not all, objects dynamically allocated with kmalloc() or
   kmem_cache_create() aren't actually big. Maintaining wide types
   doesn't buy anything.

   64-bit ops have bigger encodings than 32-bit ones on our beloved x86_64,
   so try not to use 64-bit where it isn't necessary
   (read: everywhere integers are integers, not pointers).

3) In the case of the slab allocators there are additional limitations:
   *) page->inuse and page->objects are only 16-/15-bit,
   *) cache size was always 32-bit,
   *) slab orders are small; order 20 is needed before (PAGE_SIZE << order)
      stops fitting in 32 bits on x86_64.

Basically everything is 32-bit, except kmalloc(1ULL<<32), which takes the
page allocator shortcut anyway.
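
As a rough user-space illustration (an approximate model, not part of the
patch; the exact handling of the 96/192 byte caches depends on
KMALLOC_MIN_SIZE), the values kmalloc_index() deals in are tiny:

	#include <stdio.h>

	/* Model of the mapping documented above kmalloc_index():
	 * index 1/2 are the odd kmalloc-96/kmalloc-192 caches, index 3 is
	 * the smallest (8 byte) cache, everything else is the power-of-two
	 * cache with 2^n >= size.
	 */
	static unsigned int kmalloc_index_model(unsigned long size)
	{
		if (size == 0)
			return 0;
		if (size <= 8)
			return 3;
		if (size > 64 && size <= 96)
			return 1;
		if (size > 128 && size <= 192)
			return 2;
		return 8 * sizeof(long) - __builtin_clzl(size - 1);
	}

	int main(void)
	{
		/* prints "3 1 7 8": kmalloc-8, kmalloc-96, kmalloc-128, kmalloc-256 */
		printf("%u %u %u %u\n", kmalloc_index_model(8),
		       kmalloc_index_model(96), kmalloc_index_model(100),
		       kmalloc_index_model(200));
		return 0;
	}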

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slab.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 231abc8976c5..296f33a512eb 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -308,7 +308,7 @@ extern struct kmem_cache *kmalloc_dma_caches[KMALLOC_SHIFT_HIGH + 1];
  * 2 = 129 .. 192 bytes
  * n = 2^(n-1)+1 .. 2^n
  */
-static __always_inline int kmalloc_index(size_t size)
+static __always_inline unsigned int kmalloc_index(size_t size)
 {
 	if (!size)
 		return 0;
@@ -504,7 +504,7 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
 			return kmalloc_large(size, flags);
 #ifndef CONFIG_SLOB
 		if (!(flags & GFP_DMA)) {
-			int index = kmalloc_index(size);
+			unsigned int index = kmalloc_index(size);
 
 			if (!index)
 				return ZERO_SIZE_PTR;
@@ -542,7 +542,7 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
 #ifndef CONFIG_SLOB
 	if (__builtin_constant_p(size) &&
 		size <= KMALLOC_MAX_CACHE_SIZE && !(flags & GFP_DMA)) {
-		int i = kmalloc_index(size);
+		unsigned int i = kmalloc_index(size);
 
 		if (!i)
 			return ZERO_SIZE_PTR;
-- 
2.16.1


* [PATCH 03/25] slab: make kmalloc_size() return "unsigned int"
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
  2018-03-05 20:07 ` [PATCH 02/25] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:24   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 04/25] slab: make create_kmalloc_cache() work with 32-bit sizes Alexey Dobriyan
                   ` (23 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

kmalloc_size() derives the size of a kmalloc cache from an internal index,
which can't be negative.

Propagate unsignedness a bit.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slab.h | 4 ++--
 mm/slab_common.c     | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 296f33a512eb..ad157fbf3886 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -522,11 +522,11 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
  * return size or 0 if a kmalloc cache for that
  * size does not exist
  */
-static __always_inline int kmalloc_size(int n)
+static __always_inline unsigned int kmalloc_size(unsigned int n)
 {
 #ifndef CONFIG_SLOB
 	if (n > 2)
-		return 1 << n;
+		return 1U << n;
 
 	if (n == 1 && KMALLOC_MIN_SIZE <= 32)
 		return 96;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 7626a64b8f14..d3f4209c297d 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1138,9 +1138,9 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 		struct kmem_cache *s = kmalloc_caches[i];
 
 		if (s) {
-			int size = kmalloc_size(i);
+			unsigned int size = kmalloc_size(i);
 			char *n = kasprintf(GFP_NOWAIT,
-				 "dma-kmalloc-%d", size);
+				 "dma-kmalloc-%u", size);
 
 			BUG_ON(!n);
 			kmalloc_dma_caches[i] = create_kmalloc_cache(n,
-- 
2.16.1


* [PATCH 04/25] slab: make create_kmalloc_cache() work with 32-bit sizes
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
  2018-03-05 20:07 ` [PATCH 02/25] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
  2018-03-05 20:07 ` [PATCH 03/25] slab: make kmalloc_size() " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:32   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 05/25] slab: make create_boot_cache() " Alexey Dobriyan
                   ` (22 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

KMALLOC_MAX_CACHE_SIZE is 32-bit, and so is the largest kmalloc cache size.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab.h        | 8 ++++----
 mm/slab_common.c | 6 +++---
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 51813236e773..c8887965491b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -77,7 +77,7 @@ extern struct kmem_cache *kmem_cache;
 /* A table of kmalloc cache names and sizes */
 extern const struct kmalloc_info_struct {
 	const char *name;
-	unsigned long size;
+	unsigned int size;
 } kmalloc_info[];
 
 #ifndef CONFIG_SLOB
@@ -93,9 +93,9 @@ struct kmem_cache *kmalloc_slab(size_t, gfp_t);
 /* Functions provided by the slab allocators */
 int __kmem_cache_create(struct kmem_cache *, slab_flags_t flags);
 
-extern struct kmem_cache *create_kmalloc_cache(const char *name, size_t size,
-			slab_flags_t flags, size_t useroffset,
-			size_t usersize);
+struct kmem_cache *create_kmalloc_cache(const char *name, unsigned int size,
+			slab_flags_t flags, unsigned int useroffset,
+			unsigned int usersize);
 extern void create_boot_cache(struct kmem_cache *, const char *name,
 			size_t size, slab_flags_t flags, size_t useroffset,
 			size_t usersize);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index d3f4209c297d..f9afca292858 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -939,9 +939,9 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
 	s->refcount = -1;	/* Exempt from merging for now */
 }
 
-struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,
-				slab_flags_t flags, size_t useroffset,
-				size_t usersize)
+struct kmem_cache *__init create_kmalloc_cache(const char *name,
+		unsigned int size, slab_flags_t flags,
+		unsigned int useroffset, unsigned int usersize)
 {
 	struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
 
-- 
2.16.1


* [PATCH 05/25] slab: make create_boot_cache() work with 32-bit sizes
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (2 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 04/25] slab: make create_kmalloc_cache() work with 32-bit sizes Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:34   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 06/25] slab: make kmem_cache_create() " Alexey Dobriyan
                   ` (21 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

struct kmem_cache::size has always been "int"; all those "size_t size"
parameters are fake.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab.h        | 4 ++--
 mm/slab_common.c | 7 ++++---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index c8887965491b..2a6d88044a56 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -97,8 +97,8 @@ struct kmem_cache *create_kmalloc_cache(const char *name, unsigned int size,
 			slab_flags_t flags, unsigned int useroffset,
 			unsigned int usersize);
 extern void create_boot_cache(struct kmem_cache *, const char *name,
-			size_t size, slab_flags_t flags, size_t useroffset,
-			size_t usersize);
+			unsigned int size, slab_flags_t flags,
+			unsigned int useroffset, unsigned int usersize);
 
 int slab_unmergeable(struct kmem_cache *s);
 struct kmem_cache *find_mergeable(size_t size, size_t align,
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f9afca292858..2a7f09ce7c84 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -917,8 +917,9 @@ bool slab_is_available(void)
 
 #ifndef CONFIG_SLOB
 /* Create a cache during boot when no slab services are available yet */
-void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t size,
-		slab_flags_t flags, size_t useroffset, size_t usersize)
+void __init create_boot_cache(struct kmem_cache *s, const char *name,
+		unsigned int size, slab_flags_t flags,
+		unsigned int useroffset, unsigned int usersize)
 {
 	int err;
 
@@ -933,7 +934,7 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
 	err = __kmem_cache_create(s, flags);
 
 	if (err)
-		panic("Creation of kmalloc slab %s size=%zu failed. Reason %d\n",
+		panic("Creation of kmalloc slab %s size=%u failed. Reason %d\n",
 					name, size, err);
 
 	s->refcount = -1;	/* Exempt from merging for now */
-- 
2.16.1


* [PATCH 06/25] slab: make kmem_cache_create() work with 32-bit sizes
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (3 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 05/25] slab: make create_boot_cache() " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:37   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 07/25] slab: make size_index[] array u8 Alexey Dobriyan
                   ` (20 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

struct kmem_cache::size and ::align were always 32-bit.

Out of curiosity I created a 4GB kmem_cache; it oopsed with division by 0.
kmem_cache_create((1UL<<32)+1) created a 1-byte cache as expected.

size_t doesn't work and never did.
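
A stand-alone demonstration of the truncation involved (assuming the size
ends up in a 32-bit field, which is what this series makes explicit;
64-bit userspace):

	#include <stdio.h>

	int main(void)
	{
		size_t requested = (1UL << 32) + 1;	/* "4GB plus one byte" */
		unsigned int stored = requested;	/* what a 32-bit ::size keeps */

		/* prints: requested 4294967297, stored 1 */
		printf("requested %zu, stored %u\n", requested, stored);
		return 0;
	}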

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slab.h |  7 ++++---
 mm/slab.c            |  2 +-
 mm/slab.h            |  6 +++---
 mm/slab_common.c     | 19 ++++++++++---------
 mm/slub.c            |  2 +-
 5 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index ad157fbf3886..d36e8f03730e 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -137,11 +137,12 @@ bool slab_is_available(void);
 
 extern bool usercopy_fallback;
 
-struct kmem_cache *kmem_cache_create(const char *name, size_t size,
-			size_t align, slab_flags_t flags,
+struct kmem_cache *kmem_cache_create(const char *name, unsigned int size,
+			unsigned int align, slab_flags_t flags,
 			void (*ctor)(void *));
 struct kmem_cache *kmem_cache_create_usercopy(const char *name,
-			size_t size, size_t align, slab_flags_t flags,
+			unsigned int size, unsigned int align,
+			slab_flags_t flags,
 			size_t useroffset, size_t usersize,
 			void (*ctor)(void *));
 void kmem_cache_destroy(struct kmem_cache *);
diff --git a/mm/slab.c b/mm/slab.c
index 324446621b3e..cc136fcedfb9 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1876,7 +1876,7 @@ slab_flags_t kmem_cache_flags(unsigned long object_size,
 }
 
 struct kmem_cache *
-__kmem_cache_alias(const char *name, size_t size, size_t align,
+__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *))
 {
 	struct kmem_cache *cachep;
diff --git a/mm/slab.h b/mm/slab.h
index 2a6d88044a56..0809580428fe 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -101,11 +101,11 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
 			unsigned int useroffset, unsigned int usersize);
 
 int slab_unmergeable(struct kmem_cache *s);
-struct kmem_cache *find_mergeable(size_t size, size_t align,
+struct kmem_cache *find_mergeable(unsigned size, unsigned align,
 		slab_flags_t flags, const char *name, void (*ctor)(void *));
 #ifndef CONFIG_SLOB
 struct kmem_cache *
-__kmem_cache_alias(const char *name, size_t size, size_t align,
+__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *));
 
 slab_flags_t kmem_cache_flags(unsigned long object_size,
@@ -113,7 +113,7 @@ slab_flags_t kmem_cache_flags(unsigned long object_size,
 	void (*ctor)(void *));
 #else
 static inline struct kmem_cache *
-__kmem_cache_alias(const char *name, size_t size, size_t align,
+__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *))
 { return NULL; }
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 2a7f09ce7c84..a4545a61a7c8 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -82,7 +82,7 @@ unsigned int kmem_cache_size(struct kmem_cache *s)
 EXPORT_SYMBOL(kmem_cache_size);
 
 #ifdef CONFIG_DEBUG_VM
-static int kmem_cache_sanity_check(const char *name, size_t size)
+static int kmem_cache_sanity_check(const char *name, unsigned int size)
 {
 	struct kmem_cache *s = NULL;
 
@@ -113,7 +113,7 @@ static int kmem_cache_sanity_check(const char *name, size_t size)
 	return 0;
 }
 #else
-static inline int kmem_cache_sanity_check(const char *name, size_t size)
+static inline int kmem_cache_sanity_check(const char *name, unsigned int size)
 {
 	return 0;
 }
@@ -280,8 +280,8 @@ static inline void memcg_unlink_cache(struct kmem_cache *s)
  * Figure out what the alignment of the objects will be given a set of
  * flags, a user specified alignment and the size of the objects.
  */
-static unsigned long calculate_alignment(slab_flags_t flags,
-		unsigned long align, unsigned long size)
+static unsigned int calculate_alignment(slab_flags_t flags,
+		unsigned int align, unsigned int size)
 {
 	/*
 	 * If the user wants hardware cache aligned objects then follow that
@@ -291,7 +291,7 @@ static unsigned long calculate_alignment(slab_flags_t flags,
 	 * alignment though. If that is greater then use it.
 	 */
 	if (flags & SLAB_HWCACHE_ALIGN) {
-		unsigned long ralign;
+		unsigned int ralign;
 
 		ralign = cache_line_size();
 		while (size <= ralign / 2)
@@ -331,7 +331,7 @@ int slab_unmergeable(struct kmem_cache *s)
 	return 0;
 }
 
-struct kmem_cache *find_mergeable(size_t size, size_t align,
+struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 		slab_flags_t flags, const char *name, void (*ctor)(void *))
 {
 	struct kmem_cache *s;
@@ -379,7 +379,7 @@ struct kmem_cache *find_mergeable(size_t size, size_t align,
 }
 
 static struct kmem_cache *create_cache(const char *name,
-		size_t object_size, size_t size, size_t align,
+		unsigned int object_size, unsigned int size, unsigned int align,
 		slab_flags_t flags, size_t useroffset,
 		size_t usersize, void (*ctor)(void *),
 		struct mem_cgroup *memcg, struct kmem_cache *root_cache)
@@ -452,7 +452,8 @@ static struct kmem_cache *create_cache(const char *name,
  * as davem.
  */
 struct kmem_cache *
-kmem_cache_create_usercopy(const char *name, size_t size, size_t align,
+kmem_cache_create_usercopy(const char *name,
+		  unsigned int size, unsigned int align,
 		  slab_flags_t flags, size_t useroffset, size_t usersize,
 		  void (*ctor)(void *))
 {
@@ -532,7 +533,7 @@ kmem_cache_create_usercopy(const char *name, size_t size, size_t align,
 EXPORT_SYMBOL(kmem_cache_create_usercopy);
 
 struct kmem_cache *
-kmem_cache_create(const char *name, size_t size, size_t align,
+kmem_cache_create(const char *name, unsigned int size, unsigned int align,
 		slab_flags_t flags, void (*ctor)(void *))
 {
 	return kmem_cache_create_usercopy(name, size, align, flags, 0, 0,
diff --git a/mm/slub.c b/mm/slub.c
index e381728a3751..b2f529a33400 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4241,7 +4241,7 @@ void __init kmem_cache_init_late(void)
 }
 
 struct kmem_cache *
-__kmem_cache_alias(const char *name, size_t size, size_t align,
+__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *))
 {
 	struct kmem_cache *s, *c;
-- 
2.16.1


* [PATCH 07/25] slab: make size_index[] array u8
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (4 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 06/25] slab: make kmem_cache_create() " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:38   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 08/25] slab: make size_index_elem() unsigned int Alexey Dobriyan
                   ` (19 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

All those small numbers are reverse indexes into the kmalloc caches array
and can't be negative.

On x86_64, "unsigned int = fls()" can drop the CDQE instruction:

	add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-2 (-2)
	Function                                     old     new   delta
	kmalloc_slab                                 101      99      -2
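
The CDQE comes from sign-extending a signed 32-bit index before it can be
used in 64-bit address arithmetic.  A minimal stand-alone sketch of the
effect (assumes x86_64 and gcc; compile with -O2 -S and compare the two
lookups; fls_asm() is a hypothetical stand-in for the kernel's fls() so
the compiler can't prove the result non-negative):

	/* bsr-based stand-in for fls(); result undefined for x == 0 */
	static inline int fls_asm(unsigned int x)
	{
		int r;

		asm("bsrl %1, %0" : "=r" (r) : "rm" (x));
		return r + 1;
	}

	static const char *const caches[32] = { "kmalloc-8", "kmalloc-16" };

	const char *lookup_signed(unsigned int size)
	{
		int index = fls_asm(size - 1);

		return caches[index];	/* cdqe/movslq emitted before the load */
	}

	const char *lookup_unsigned(unsigned int size)
	{
		unsigned int index = fls_asm(size - 1);

		return caches[index];	/* upper register half already zero */
	}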

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab_common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index a4545a61a7c8..dda966e6bc58 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -971,7 +971,7 @@ EXPORT_SYMBOL(kmalloc_dma_caches);
  * of two cache sizes there. The size of larger slabs can be determined using
  * fls.
  */
-static s8 size_index[24] __ro_after_init = {
+static u8 size_index[24] __ro_after_init = {
 	3,	/* 8 */
 	4,	/* 16 */
 	5,	/* 24 */
@@ -1009,7 +1009,7 @@ static inline int size_index_elem(size_t bytes)
  */
 struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
 {
-	int index;
+	unsigned int index;
 
 	if (unlikely(size > KMALLOC_MAX_SIZE)) {
 		WARN_ON_ONCE(!(flags & __GFP_NOWARN));
-- 
2.16.1


* [PATCH 08/25] slab: make size_index_elem() unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (5 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 07/25] slab: make size_index[] array u8 Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:39   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 09/25] slub: make ->remote_node_defrag_ratio " Alexey Dobriyan
                   ` (18 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

size_index_elem() always works with small sizes (kmalloc caches are 32-bit)
and returns small indexes.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab_common.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index dda966e6bc58..8abb2a46ae85 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -998,7 +998,7 @@ static u8 size_index[24] __ro_after_init = {
 	2	/* 192 */
 };
 
-static inline int size_index_elem(size_t bytes)
+static inline unsigned int size_index_elem(unsigned int bytes)
 {
 	return (bytes - 1) / 8;
 }
@@ -1067,13 +1067,13 @@ const struct kmalloc_info_struct kmalloc_info[] __initconst = {
  */
 void __init setup_kmalloc_cache_index_table(void)
 {
-	int i;
+	unsigned int i;
 
 	BUILD_BUG_ON(KMALLOC_MIN_SIZE > 256 ||
 		(KMALLOC_MIN_SIZE & (KMALLOC_MIN_SIZE - 1)));
 
 	for (i = 8; i < KMALLOC_MIN_SIZE; i += 8) {
-		int elem = size_index_elem(i);
+		unsigned int elem = size_index_elem(i);
 
 		if (elem >= ARRAY_SIZE(size_index))
 			break;
-- 
2.16.1


* [PATCH 09/25] slub: make ->remote_node_defrag_ratio unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (6 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 08/25] slab: make size_index_elem() unsigned int Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:41   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 10/25] slub: make ->max_attr_size " Alexey Dobriyan
                   ` (17 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

->remote_node_defrag_ratio is in range 0..1000.
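
The sysfs file works in percent: a write is range-checked against 100 and
stored multiplied by 10, and the read divides it back.  A tiny stand-alone
model of that round trip (illustrative values only):

	#include <stdio.h>

	int main(void)
	{
		unsigned int written = 30;	/* echo 30 > .../remote_node_defrag_ratio */
		unsigned int stored, shown;

		if (written > 100)		/* now rejected with -ERANGE */
			return 1;
		stored = written * 10;		/* internal 0..1000 scale */
		shown = stored / 10;		/* what the _show() side prints */

		/* prints: stored 300, shown 30 */
		printf("stored %u, shown %u\n", stored, shown);
		return 0;
	}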

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h |  2 +-
 mm/slub.c                | 11 ++++++-----
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 8ad99c47b19c..f6548083fe0f 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -124,7 +124,7 @@ struct kmem_cache {
 	/*
 	 * Defragmentation by allocating from a remote node.
 	 */
-	int remote_node_defrag_ratio;
+	unsigned int remote_node_defrag_ratio;
 #endif
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
diff --git a/mm/slub.c b/mm/slub.c
index b2f529a33400..d9db1d184549 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5288,21 +5288,22 @@ SLAB_ATTR(shrink);
 #ifdef CONFIG_NUMA
 static ssize_t remote_node_defrag_ratio_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", s->remote_node_defrag_ratio / 10);
+	return sprintf(buf, "%u\n", s->remote_node_defrag_ratio / 10);
 }
 
 static ssize_t remote_node_defrag_ratio_store(struct kmem_cache *s,
 				const char *buf, size_t length)
 {
-	unsigned long ratio;
+	unsigned int ratio;
 	int err;
 
-	err = kstrtoul(buf, 10, &ratio);
+	err = kstrtouint(buf, 10, &ratio);
 	if (err)
 		return err;
+	if (ratio > 100)
+		return -ERANGE;
 
-	if (ratio <= 100)
-		s->remote_node_defrag_ratio = ratio * 10;
+	s->remote_node_defrag_ratio = ratio * 10;
 
 	return length;
 }
-- 
2.16.1


* [PATCH 10/25] slub: make ->max_attr_size unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (7 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 09/25] slub: make ->remote_node_defrag_ratio " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:42   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 11/25] slub: make ->red_left_pad " Alexey Dobriyan
                   ` (16 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

->max_attr_size is the maximum length of any SLAB memcg attribute ever
written. VFS limits those to INT_MAX.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index f6548083fe0f..9bb761324a9c 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -110,7 +110,8 @@ struct kmem_cache {
 #endif
 #ifdef CONFIG_MEMCG
 	struct memcg_cache_params memcg_params;
-	int max_attr_size; /* for propagation, maximum size of a stored attr */
+	/* for propagation, maximum size of a stored attr */
+	unsigned int max_attr_size;
 #ifdef CONFIG_SYSFS
 	struct kset *memcg_kset;
 #endif
-- 
2.16.1


* [PATCH 11/25] slub: make ->red_left_pad unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (8 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 10/25] slub: make ->max_attr_size " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:42   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 12/25] slub: make ->reserved " Alexey Dobriyan
                   ` (15 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

Padding length can't be negative.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 9bb761324a9c..9f59fc16444b 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -101,7 +101,7 @@ struct kmem_cache {
 	int inuse;		/* Offset to metadata */
 	int align;		/* Alignment */
 	int reserved;		/* Reserved bytes at the end of slabs */
-	int red_left_pad;	/* Left redzone padding size */
+	unsigned int red_left_pad;	/* Left redzone padding size */
 	const char *name;	/* Name (only for display!) */
 	struct list_head list;	/* List of slab caches */
 #ifdef CONFIG_SYSFS
-- 
2.16.1


* [PATCH 12/25] slub: make ->reserved unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (9 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 11/25] slub: make ->red_left_pad " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:43   ` Christopher Lameter
  2018-03-06 18:45   ` Matthew Wilcox
  2018-03-05 20:07 ` [PATCH 13/25] slub: make ->align " Alexey Dobriyan
                   ` (14 subsequent siblings)
  25 siblings, 2 replies; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

->reserved is either 0 or sizeof(struct rcu_head); it can't be negative.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 2 +-
 mm/slub.c                | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 9f59fc16444b..2b4417aa15d8 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -100,7 +100,7 @@ struct kmem_cache {
 	void (*ctor)(void *);
 	int inuse;		/* Offset to metadata */
 	int align;		/* Alignment */
-	int reserved;		/* Reserved bytes at the end of slabs */
+	unsigned int reserved;		/* Reserved bytes at the end of slabs */
 	unsigned int red_left_pad;	/* Left redzone padding size */
 	const char *name;	/* Name (only for display!) */
 	struct list_head list;	/* List of slab caches */
diff --git a/mm/slub.c b/mm/slub.c
index d9db1d184549..72623f210892 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5093,7 +5093,7 @@ SLAB_ATTR_RO(destroy_by_rcu);
 
 static ssize_t reserved_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", s->reserved);
+	return sprintf(buf, "%u\n", s->reserved);
 }
 SLAB_ATTR_RO(reserved);
 
-- 
2.16.1


* [PATCH 13/25] slub: make ->align unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (10 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 12/25] slub: make ->reserved " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:43   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 14/25] slub: make ->inuse " Alexey Dobriyan
                   ` (13 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

Kmem cache alignment can't be negative.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 2 +-
 mm/slub.c                | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 2b4417aa15d8..2a0eabeff78f 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -99,7 +99,7 @@ struct kmem_cache {
 	int refcount;		/* Refcount for slab cache destroy */
 	void (*ctor)(void *);
 	int inuse;		/* Offset to metadata */
-	int align;		/* Alignment */
+	unsigned int align;		/* Alignment */
 	unsigned int reserved;		/* Reserved bytes at the end of slabs */
 	unsigned int red_left_pad;	/* Left redzone padding size */
 	const char *name;	/* Name (only for display!) */
diff --git a/mm/slub.c b/mm/slub.c
index 72623f210892..246f0132d308 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4895,7 +4895,7 @@ SLAB_ATTR_RO(slab_size);
 
 static ssize_t align_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", s->align);
+	return sprintf(buf, "%u\n", s->align);
 }
 SLAB_ATTR_RO(align);
 
-- 
2.16.1


* [PATCH 14/25] slub: make ->inuse unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (11 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 13/25] slub: make ->align " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:44   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 15/25] slub: make ->cpu_partial " Alexey Dobriyan
                   ` (12 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

->inuse is "the number of bytes in actual use by the object"; it can't be
negative.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 2 +-
 mm/slub.c                | 5 ++---
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 2a0eabeff78f..2287b800474f 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -98,7 +98,7 @@ struct kmem_cache {
 	gfp_t allocflags;	/* gfp flags to use on each alloc */
 	int refcount;		/* Refcount for slab cache destroy */
 	void (*ctor)(void *);
-	int inuse;		/* Offset to metadata */
+	unsigned int inuse;		/* Offset to metadata */
 	unsigned int align;		/* Alignment */
 	unsigned int reserved;		/* Reserved bytes at the end of slabs */
 	unsigned int red_left_pad;	/* Left redzone padding size */
diff --git a/mm/slub.c b/mm/slub.c
index 246f0132d308..b4c07dcab0e1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4255,12 +4255,11 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		 * the complete object on kzalloc.
 		 */
 		s->object_size = max(s->object_size, (int)size);
-		s->inuse = max_t(int, s->inuse, ALIGN(size, sizeof(void *)));
+		s->inuse = max(s->inuse, ALIGN(size, sizeof(void *)));
 
 		for_each_memcg_cache(c, s) {
 			c->object_size = s->object_size;
-			c->inuse = max_t(int, c->inuse,
-					 ALIGN(size, sizeof(void *)));
+			c->inuse = max(c->inuse, ALIGN(size, sizeof(void *)));
 		}
 
 		if (sysfs_slab_alias(s, name)) {
-- 
2.16.1


* [PATCH 15/25] slub: make ->cpu_partial unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (12 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 14/25] slub: make ->inuse " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:44   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 16/25] slub: make ->offset " Alexey Dobriyan
                   ` (11 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

	/*
	 * cpu_partial determined the maximum number of objects
	 * kept in the per cpu partial lists of a processor.
	 */

Can't be negative.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 3 ++-
 mm/slub.c                | 6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 2287b800474f..d2cc1391f17a 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -88,7 +88,8 @@ struct kmem_cache {
 	int object_size;	/* The size of an object without meta data */
 	int offset;		/* Free pointer offset. */
 #ifdef CONFIG_SLUB_CPU_PARTIAL
-	int cpu_partial;	/* Number of per cpu partial objects to keep around */
+	/* Number of per cpu partial objects to keep around */
+	unsigned int cpu_partial;
 #endif
 	struct kmem_cache_order_objects oo;
 
diff --git a/mm/slub.c b/mm/slub.c
index b4c07dcab0e1..2fbf5a16e453 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1811,7 +1811,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 {
 	struct page *page, *page2;
 	void *object = NULL;
-	int available = 0;
+	unsigned int available = 0;
 	int objects;
 
 	/*
@@ -4961,10 +4961,10 @@ static ssize_t cpu_partial_show(struct kmem_cache *s, char *buf)
 static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
 				 size_t length)
 {
-	unsigned long objects;
+	unsigned int objects;
 	int err;
 
-	err = kstrtoul(buf, 10, &objects);
+	err = kstrtouint(buf, 10, &objects);
 	if (err)
 		return err;
 	if (objects && !kmem_cache_has_cpu_partial(s))
-- 
2.16.1


* [PATCH 16/25] slub: make ->offset unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (13 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 15/25] slub: make ->cpu_partial " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:45   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 17/25] slub: make ->object_size " Alexey Dobriyan
                   ` (10 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

->offset is the free pointer offset from the start of the object; it can't
be negative.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d2cc1391f17a..db00dbd7e89f 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -86,7 +86,7 @@ struct kmem_cache {
 	unsigned long min_partial;
 	int size;		/* The size of an object including meta data */
 	int object_size;	/* The size of an object without meta data */
-	int offset;		/* Free pointer offset. */
+	unsigned int offset;	/* Free pointer offset. */
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 	/* Number of per cpu partial objects to keep around */
 	unsigned int cpu_partial;
-- 
2.16.1


* [PATCH 17/25] slub: make ->object_size unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (14 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 16/25] slub: make ->offset " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:45   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 18/25] slub: make ->size " Alexey Dobriyan
                   ` (9 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

Linux doesn't support negative length objects.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 2 +-
 mm/slab_common.c         | 2 +-
 mm/slub.c                | 8 ++++----
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index db00dbd7e89f..7d74f121ef4e 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -85,7 +85,7 @@ struct kmem_cache {
 	slab_flags_t flags;
 	unsigned long min_partial;
 	int size;		/* The size of an object including meta data */
-	int object_size;	/* The size of an object without meta data */
+	unsigned int object_size;/* The size of an object without meta data */
 	unsigned int offset;	/* Free pointer offset. */
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 	/* Number of per cpu partial objects to keep around */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8abb2a46ae85..3e07b1fb22bd 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -103,7 +103,7 @@ static int kmem_cache_sanity_check(const char *name, unsigned int size)
 		 */
 		res = probe_kernel_address(s->name, tmp);
 		if (res) {
-			pr_err("Slab cache with size %d has lost its name\n",
+			pr_err("Slab cache with size %u has lost its name\n",
 			       s->object_size);
 			continue;
 		}
diff --git a/mm/slub.c b/mm/slub.c
index 2fbf5a16e453..153340cbe48e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -680,7 +680,7 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 		print_section(KERN_ERR, "Bytes b4 ", p - 16, 16);
 
 	print_section(KERN_ERR, "Object ", p,
-		      min_t(unsigned long, s->object_size, PAGE_SIZE));
+		      min_t(unsigned int, s->object_size, PAGE_SIZE));
 	if (s->flags & SLAB_RED_ZONE)
 		print_section(KERN_ERR, "Redzone ", p + s->object_size,
 			s->inuse - s->object_size);
@@ -2398,7 +2398,7 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 
 	pr_warn("SLUB: Unable to allocate memory on node %d, gfp=%#x(%pGg)\n",
 		nid, gfpflags, &gfpflags);
-	pr_warn("  cache: %s, object size: %d, buffer size: %d, default order: %d, min order: %d\n",
+	pr_warn("  cache: %s, object size: %u, buffer size: %d, default order: %d, min order: %d\n",
 		s->name, s->object_size, s->size, oo_order(s->oo),
 		oo_order(s->min));
 
@@ -4254,7 +4254,7 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		 * Adjust the object sizes so that we clear
 		 * the complete object on kzalloc.
 		 */
-		s->object_size = max(s->object_size, (int)size);
+		s->object_size = max(s->object_size, size);
 		s->inuse = max(s->inuse, ALIGN(size, sizeof(void *)));
 
 		for_each_memcg_cache(c, s) {
@@ -4900,7 +4900,7 @@ SLAB_ATTR_RO(align);
 
 static ssize_t object_size_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", s->object_size);
+	return sprintf(buf, "%u\n", s->object_size);
 }
 SLAB_ATTR_RO(object_size);
 
-- 
2.16.1


* [PATCH 18/25] slub: make ->size unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (15 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 17/25] slub: make ->object_size " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:46   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 19/25] slab: make kmem_cache_flags accept 32-bit object size Alexey Dobriyan
                   ` (8 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

Linux doesn't support negative length objects (including meta data).

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h |  2 +-
 mm/slub.c                | 12 ++++++------
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 7d74f121ef4e..bc02fd3a8ccf 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -84,7 +84,7 @@ struct kmem_cache {
 	/* Used for retriving partial slabs etc */
 	slab_flags_t flags;
 	unsigned long min_partial;
-	int size;		/* The size of an object including meta data */
+	unsigned int size;	/* The size of an object including meta data */
 	unsigned int object_size;/* The size of an object without meta data */
 	unsigned int offset;	/* Free pointer offset. */
 #ifdef CONFIG_SLUB_CPU_PARTIAL
diff --git a/mm/slub.c b/mm/slub.c
index 153340cbe48e..424cb7693a5c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2398,7 +2398,7 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 
 	pr_warn("SLUB: Unable to allocate memory on node %d, gfp=%#x(%pGg)\n",
 		nid, gfpflags, &gfpflags);
-	pr_warn("  cache: %s, object size: %u, buffer size: %d, default order: %d, min order: %d\n",
+	pr_warn("  cache: %s, object size: %u, buffer size: %u, default order: %d, min order: %d\n",
 		s->name, s->object_size, s->size, oo_order(s->oo),
 		oo_order(s->min));
 
@@ -3632,8 +3632,8 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 	free_kmem_cache_nodes(s);
 error:
 	if (flags & SLAB_PANIC)
-		panic("Cannot create slab %s size=%lu realsize=%u order=%u offset=%u flags=%lx\n",
-		      s->name, (unsigned long)s->size, s->size,
+		panic("Cannot create slab %s size=%u realsize=%u order=%u offset=%u flags=%lx\n",
+		      s->name, s->size, s->size,
 		      oo_order(s->oo), s->offset, (unsigned long)flags);
 	return -EINVAL;
 }
@@ -3824,7 +3824,7 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 			 bool to_user)
 {
 	struct kmem_cache *s;
-	unsigned long offset;
+	unsigned int offset;
 	size_t object_size;
 
 	/* Find object and usable object size. */
@@ -4888,7 +4888,7 @@ struct slab_attribute {
 
 static ssize_t slab_size_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", s->size);
+	return sprintf(buf, "%u\n", s->size);
 }
 SLAB_ATTR_RO(slab_size);
 
@@ -5663,7 +5663,7 @@ static char *create_unique_id(struct kmem_cache *s)
 		*p++ = 'A';
 	if (p != name + 1)
 		*p++ = '-';
-	p += sprintf(p, "%07d", s->size);
+	p += sprintf(p, "%07u", s->size);
 
 	BUG_ON(p > name + ID_STR_LENGTH - 1);
 	return name;
-- 
2.16.1


* [PATCH 19/25] slab: make kmem_cache_flags accept 32-bit object size
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (16 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 18/25] slub: make ->size " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:47   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 20/25] kasan: make kasan_cache_create() work with 32-bit slab cache sizes Alexey Dobriyan
                   ` (7 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

Now that all sizes are properly typed, propagate "unsigned int" down
the callgraph.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab.c | 2 +-
 mm/slab.h | 4 ++--
 mm/slub.c | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index cc136fcedfb9..7d17206dd574 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1868,7 +1868,7 @@ static int __ref setup_cpu_cache(struct kmem_cache *cachep, gfp_t gfp)
 	return 0;
 }
 
-slab_flags_t kmem_cache_flags(unsigned long object_size,
+slab_flags_t kmem_cache_flags(unsigned int object_size,
 	slab_flags_t flags, const char *name,
 	void (*ctor)(void *))
 {
diff --git a/mm/slab.h b/mm/slab.h
index 0809580428fe..8f1072f49285 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -108,7 +108,7 @@ struct kmem_cache *
 __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *));
 
-slab_flags_t kmem_cache_flags(unsigned long object_size,
+slab_flags_t kmem_cache_flags(unsigned int object_size,
 	slab_flags_t flags, const char *name,
 	void (*ctor)(void *));
 #else
@@ -117,7 +117,7 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *))
 { return NULL; }
 
-static inline slab_flags_t kmem_cache_flags(unsigned long object_size,
+static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 	slab_flags_t flags, const char *name,
 	void (*ctor)(void *))
 {
diff --git a/mm/slub.c b/mm/slub.c
index 424cb7693a5c..e82a6b50b3ef 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1292,7 +1292,7 @@ static int __init setup_slub_debug(char *str)
 
 __setup("slub_debug", setup_slub_debug);
 
-slab_flags_t kmem_cache_flags(unsigned long object_size,
+slab_flags_t kmem_cache_flags(unsigned int object_size,
 	slab_flags_t flags, const char *name,
 	void (*ctor)(void *))
 {
@@ -1325,7 +1325,7 @@ static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n,
 					struct page *page) {}
 static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n,
 					struct page *page) {}
-slab_flags_t kmem_cache_flags(unsigned long object_size,
+slab_flags_t kmem_cache_flags(unsigned int object_size,
 	slab_flags_t flags, const char *name,
 	void (*ctor)(void *))
 {
-- 
2.16.1


* [PATCH 20/25] kasan: make kasan_cache_create() work with 32-bit slab cache sizes
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (17 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 19/25] slab: make kmem_cache_flags accept 32-bit object size Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-05 20:07 ` [PATCH 21/25] slab: make usercopy region 32-bit Alexey Dobriyan
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

If SLAB doesn't support 4GB+ kmem caches (it never did), KASAN should not
do so either.
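
For reference, a user-space copy of the redzone ladder (the 4096-byte step
is taken from the surrounding kernel code, it isn't visible in the hunk
below); every value involved fits comfortably in unsigned int:

	#include <stdio.h>

	static unsigned int optimal_redzone(unsigned int object_size)
	{
		return
			object_size <= 64        - 16   ? 16 :
			object_size <= 128       - 32   ? 32 :
			object_size <= 512       - 64   ? 64 :
			object_size <= 4096      - 128  ? 128 :
			object_size <= (1 << 14) - 256  ? 256 :
			object_size <= (1 << 15) - 512  ? 512 :
			object_size <= (1 << 16) - 1024 ? 1024 : 2048;
	}

	int main(void)
	{
		/* prints "16 64 256 2048" */
		printf("%u %u %u %u\n", optimal_redzone(32),
		       optimal_redzone(100), optimal_redzone(4096),
		       optimal_redzone(1 << 20));
		return 0;
	}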

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/kasan.h |  4 ++--
 mm/kasan/kasan.c      | 12 ++++++------
 mm/slab.c             |  2 +-
 mm/slub.c             |  2 +-
 4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index adc13474a53b..024d4219b953 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -43,7 +43,7 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark);
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
 
-void kasan_cache_create(struct kmem_cache *cache, size_t *size,
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 			slab_flags_t *flags);
 void kasan_cache_shrink(struct kmem_cache *cache);
 void kasan_cache_shutdown(struct kmem_cache *cache);
@@ -92,7 +92,7 @@ static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 
 static inline void kasan_cache_create(struct kmem_cache *cache,
-				      size_t *size,
+				      unsigned int *size,
 				      slab_flags_t *flags) {}
 static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
 static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index e13d911251e7..f7a5e1d1ba87 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -323,9 +323,9 @@ void kasan_free_pages(struct page *page, unsigned int order)
  * Adaptive redzone policy taken from the userspace AddressSanitizer runtime.
  * For larger allocations larger redzones are used.
  */
-static size_t optimal_redzone(size_t object_size)
+static unsigned int optimal_redzone(unsigned int object_size)
 {
-	int rz =
+	return
 		object_size <= 64        - 16   ? 16 :
 		object_size <= 128       - 32   ? 32 :
 		object_size <= 512       - 64   ? 64 :
@@ -333,14 +333,13 @@ static size_t optimal_redzone(size_t object_size)
 		object_size <= (1 << 14) - 256  ? 256 :
 		object_size <= (1 << 15) - 512  ? 512 :
 		object_size <= (1 << 16) - 1024 ? 1024 : 2048;
-	return rz;
 }
 
-void kasan_cache_create(struct kmem_cache *cache, size_t *size,
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 			slab_flags_t *flags)
 {
+	unsigned int orig_size = *size;
 	int redzone_adjust;
-	int orig_size = *size;
 
 	/* Add alloc meta. */
 	cache->kasan_info.alloc_meta_offset = *size;
@@ -358,7 +357,8 @@ void kasan_cache_create(struct kmem_cache *cache, size_t *size,
 	if (redzone_adjust > 0)
 		*size += redzone_adjust;
 
-	*size = min(KMALLOC_MAX_SIZE, max(*size, cache->object_size +
+	*size = min_t(unsigned int, KMALLOC_MAX_SIZE,
+			max(*size, cache->object_size +
 					optimal_redzone(cache->object_size)));
 
 	/*
diff --git a/mm/slab.c b/mm/slab.c
index 7d17206dd574..5988f9a1cca8 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1993,7 +1993,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
 	size_t ralign = BYTES_PER_WORD;
 	gfp_t gfp;
 	int err;
-	size_t size = cachep->size;
+	unsigned int size = cachep->size;
 
 #if DEBUG
 #if FORCED_DEBUG
diff --git a/mm/slub.c b/mm/slub.c
index e82a6b50b3ef..87a7a947f2c9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3457,7 +3457,7 @@ static void set_cpu_partial(struct kmem_cache *s)
 static int calculate_sizes(struct kmem_cache *s, int forced_order)
 {
 	slab_flags_t flags = s->flags;
-	size_t size = s->object_size;
+	unsigned int size = s->object_size;
 	int order;
 
 	/*
-- 
2.16.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 21/25] slab: make usercopy region 32-bit
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (18 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 20/25] kasan: make kasan_cache_create() work with 32-bit slab cache sizes Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-05 20:07 ` [PATCH 22/25] slub: make slab_index() return unsigned int Alexey Dobriyan
                   ` (5 subsequent siblings)
  25 siblings, 0 replies; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan, netdev

If kmem cache sizes are 32-bit, then the usercopy region should be too.
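
useroffset/usersize describe a sub-range of a single object, so both are
bounded by the (32-bit) object size anyway. A hypothetical usage sketch, with
made-up struct, field and cache names, just to show where the two values come
from:

	struct conn {
		unsigned long flags;	/* kernel-private part */
		char name[64];		/* the only part exposed to copy_{to,from}_user() */
	};

	cache = kmem_cache_create_usercopy("conn_cache",
			sizeof(struct conn), 0, SLAB_HWCACHE_ALIGN,
			offsetof(struct conn, name),		/* useroffset */
			sizeof(((struct conn *)0)->name),	/* usersize */
			NULL);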

Cc: netdev@vger.kernel.org
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slab.h     | 2 +-
 include/linux/slab_def.h | 4 ++--
 include/linux/slub_def.h | 4 ++--
 include/net/sock.h       | 4 ++--
 mm/slab.h                | 4 ++--
 mm/slab_common.c         | 7 ++++---
 mm/slub.c                | 2 +-
 7 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index d36e8f03730e..04402c637171 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -143,7 +143,7 @@ struct kmem_cache *kmem_cache_create(const char *name, unsigned int size,
 struct kmem_cache *kmem_cache_create_usercopy(const char *name,
 			unsigned int size, unsigned int align,
 			slab_flags_t flags,
-			size_t useroffset, size_t usersize,
+			unsigned int useroffset, unsigned int usersize,
 			void (*ctor)(void *));
 void kmem_cache_destroy(struct kmem_cache *);
 int kmem_cache_shrink(struct kmem_cache *);
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 7385547c04b1..d9228e4d0320 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -85,8 +85,8 @@ struct kmem_cache {
 	unsigned int *random_seq;
 #endif
 
-	size_t useroffset;		/* Usercopy region offset */
-	size_t usersize;		/* Usercopy region size */
+	unsigned int useroffset;	/* Usercopy region offset */
+	unsigned int usersize;		/* Usercopy region size */
 
 	struct kmem_cache_node *node[MAX_NUMNODES];
 };
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index bc02fd3a8ccf..623d6ba92036 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -137,8 +137,8 @@ struct kmem_cache {
 	struct kasan_cache kasan_info;
 #endif
 
-	size_t useroffset;		/* Usercopy region offset */
-	size_t usersize;		/* Usercopy region size */
+	unsigned int useroffset;	/* Usercopy region offset */
+	unsigned int usersize;		/* Usercopy region size */
 
 	struct kmem_cache_node *node[MAX_NUMNODES];
 };
diff --git a/include/net/sock.h b/include/net/sock.h
index 169c92afcafa..c86b1ebaae7a 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1109,8 +1109,8 @@ struct proto {
 	struct kmem_cache	*slab;
 	unsigned int		obj_size;
 	slab_flags_t		slab_flags;
-	size_t			useroffset;	/* Usercopy region offset */
-	size_t			usersize;	/* Usercopy region size */
+	unsigned int		useroffset;	/* Usercopy region offset */
+	unsigned int		usersize;	/* Usercopy region size */
 
 	struct percpu_counter	*orphan_count;
 
diff --git a/mm/slab.h b/mm/slab.h
index 8f1072f49285..e8981e811c45 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -22,8 +22,8 @@ struct kmem_cache {
 	unsigned int size;	/* The aligned/padded/added on size  */
 	unsigned int align;	/* Alignment as calculated */
 	slab_flags_t flags;	/* Active flags on the slab */
-	size_t useroffset;	/* Usercopy region offset */
-	size_t usersize;	/* Usercopy region size */
+	unsigned int useroffset;/* Usercopy region offset */
+	unsigned int usersize;	/* Usercopy region size */
 	const char *name;	/* Slab name for sysfs */
 	int refcount;		/* Use counter */
 	void (*ctor)(void *);	/* Called on object slot creation */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 3e07b1fb22bd..01224cb90080 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -380,8 +380,8 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 
 static struct kmem_cache *create_cache(const char *name,
 		unsigned int object_size, unsigned int size, unsigned int align,
-		slab_flags_t flags, size_t useroffset,
-		size_t usersize, void (*ctor)(void *),
+		slab_flags_t flags, unsigned int useroffset,
+		unsigned int usersize, void (*ctor)(void *),
 		struct mem_cgroup *memcg, struct kmem_cache *root_cache)
 {
 	struct kmem_cache *s;
@@ -454,7 +454,8 @@ static struct kmem_cache *create_cache(const char *name,
 struct kmem_cache *
 kmem_cache_create_usercopy(const char *name,
 		  unsigned int size, unsigned int align,
-		  slab_flags_t flags, size_t useroffset, size_t usersize,
+		  slab_flags_t flags,
+		  unsigned int useroffset, unsigned int usersize,
 		  void (*ctor)(void *))
 {
 	struct kmem_cache *s = NULL;
diff --git a/mm/slub.c b/mm/slub.c
index 87a7a947f2c9..865d964f4c93 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5080,7 +5080,7 @@ SLAB_ATTR_RO(cache_dma);
 
 static ssize_t usersize_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%zu\n", s->usersize);
+	return sprintf(buf, "%u\n", s->usersize);
 }
 SLAB_ATTR_RO(usersize);
 
-- 
2.16.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 22/25] slub: make slab_index() return unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (19 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 21/25] slab: make usercopy region 32-bit Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:48   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 23/25] slub: make struct kmem_cache_order_objects::x " Alexey Dobriyan
                   ` (4 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

slab_index() returns the index of an object within a slab, which is at most
a u15 (or u16?).

Iterators additionally guarantee that "p >= addr".
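
SLUB caps the number of objects per slab at MAX_OBJS_PER_PAGE (32767, since
page->objects is a 15-bit field), so the result comfortably fits "unsigned int".
A tiny worked example with made-up numbers:

	/* with s->size == 256 and p == addr + 512 */
	slab_index(p, s, addr) == 512 / 256 == 2	/* 0 <= index < objects */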

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 865d964f4c93..5d367e0a64ca 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -311,7 +311,7 @@ static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 		__p += (__s)->size, __idx++)
 
 /* Determine object index from a given position */
-static inline int slab_index(void *p, struct kmem_cache *s, void *addr)
+static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
 {
 	return (p - addr) / s->size;
 }
-- 
2.16.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 23/25] slub: make struct kmem_cache_order_objects::x unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (20 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 22/25] slub: make slab_index() return unsigned int Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:51   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 24/25] slub: make size_from_object() return " Alexey Dobriyan
                   ` (3 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

struct kmem_cache_order_objects is for packing the order and the number of
objects together, and orders aren't big enough to warrant 64-bit width.

Propagate unsignedness down so that everything fits.

!!! Patch assumes that "PAGE_SIZE << order" doesn't overflow. !!!
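
A back-of-the-envelope check of that assumption (illustrative arithmetic only):

	4K pages:	(1 << 12) << order	fits in u32 for order <= 19
	64K pages:	(1 << 16) << order	fits for order <= 15
	256K pages:	(1 << 18) << order	fits for order <= 13
						(only <= 12 if a signed int sneaks in)

Orders actually used stay far below that: slub_max_order defaults to
PAGE_ALLOC_COSTLY_ORDER.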

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h |  2 +-
 mm/slub.c                | 74 +++++++++++++++++++++++++-----------------------
 2 files changed, 40 insertions(+), 36 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 623d6ba92036..3773e26c08c1 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -73,7 +73,7 @@ struct kmem_cache_cpu {
  * given order would contain.
  */
 struct kmem_cache_order_objects {
-	unsigned long x;
+	unsigned int x;
 };
 
 /*
diff --git a/mm/slub.c b/mm/slub.c
index 5d367e0a64ca..9df658ee83fe 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -316,13 +316,13 @@ static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
 	return (p - addr) / s->size;
 }
 
-static inline int order_objects(int order, unsigned long size, int reserved)
+static inline unsigned int order_objects(unsigned int order, unsigned int size, unsigned int reserved)
 {
-	return ((PAGE_SIZE << order) - reserved) / size;
+	return (((unsigned int)PAGE_SIZE << order) - reserved) / size;
 }
 
-static inline struct kmem_cache_order_objects oo_make(int order,
-		unsigned long size, int reserved)
+static inline struct kmem_cache_order_objects oo_make(unsigned int order,
+		unsigned int size, unsigned int reserved)
 {
 	struct kmem_cache_order_objects x = {
 		(order << OO_SHIFT) + order_objects(order, size, reserved)
@@ -331,12 +331,12 @@ static inline struct kmem_cache_order_objects oo_make(int order,
 	return x;
 }
 
-static inline int oo_order(struct kmem_cache_order_objects x)
+static inline unsigned int oo_order(struct kmem_cache_order_objects x)
 {
 	return x.x >> OO_SHIFT;
 }
 
-static inline int oo_objects(struct kmem_cache_order_objects x)
+static inline unsigned int oo_objects(struct kmem_cache_order_objects x)
 {
 	return x.x & OO_MASK;
 }
@@ -1435,7 +1435,7 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
 		gfp_t flags, int node, struct kmem_cache_order_objects oo)
 {
 	struct page *page;
-	int order = oo_order(oo);
+	unsigned int order = oo_order(oo);
 
 	if (node == NUMA_NO_NODE)
 		page = alloc_pages(flags, order);
@@ -1454,8 +1454,8 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
 /* Pre-initialize the random sequence cache */
 static int init_cache_random_seq(struct kmem_cache *s)
 {
+	unsigned int count = oo_objects(s->oo);
 	int err;
-	unsigned long i, count = oo_objects(s->oo);
 
 	/* Bailout if already initialised */
 	if (s->random_seq)
@@ -1470,6 +1470,8 @@ static int init_cache_random_seq(struct kmem_cache *s)
 
 	/* Transform to an offset on the set of pages */
 	if (s->random_seq) {
+		unsigned int i;
+
 		for (i = 0; i < count; i++)
 			s->random_seq[i] *= s->size;
 	}
@@ -2398,7 +2400,7 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 
 	pr_warn("SLUB: Unable to allocate memory on node %d, gfp=%#x(%pGg)\n",
 		nid, gfpflags, &gfpflags);
-	pr_warn("  cache: %s, object size: %u, buffer size: %u, default order: %d, min order: %d\n",
+	pr_warn("  cache: %s, object size: %u, buffer size: %u, default order: %u, min order: %u\n",
 		s->name, s->object_size, s->size, oo_order(s->oo),
 		oo_order(s->min));
 
@@ -3181,9 +3183,9 @@ EXPORT_SYMBOL(kmem_cache_alloc_bulk);
  * and increases the number of allocations possible without having to
  * take the list_lock.
  */
-static int slub_min_order;
-static int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
-static int slub_min_objects;
+static unsigned int slub_min_order;
+static unsigned int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
+static unsigned int slub_min_objects;
 
 /*
  * Calculate the order of allocation given an slab object size.
@@ -3210,20 +3212,21 @@ static int slub_min_objects;
  * requested a higher mininum order then we start with that one instead of
  * the smallest order which will fit the object.
  */
-static inline int slab_order(int size, int min_objects,
-				int max_order, int fract_leftover, int reserved)
+static inline unsigned int slab_order(unsigned int size,
+		unsigned int min_objects, unsigned int max_order,
+		unsigned int fract_leftover, unsigned int reserved)
 {
-	int order;
-	int rem;
-	int min_order = slub_min_order;
+	unsigned int min_order = slub_min_order;
+	unsigned int order;
 
 	if (order_objects(min_order, size, reserved) > MAX_OBJS_PER_PAGE)
 		return get_order(size * MAX_OBJS_PER_PAGE) - 1;
 
-	for (order = max(min_order, get_order(min_objects * size + reserved));
+	for (order = max(min_order, (unsigned int)get_order(min_objects * size + reserved));
 			order <= max_order; order++) {
 
-		unsigned long slab_size = PAGE_SIZE << order;
+		unsigned int slab_size = (unsigned int)PAGE_SIZE << order;
+		unsigned int rem;
 
 		rem = (slab_size - reserved) % size;
 
@@ -3234,12 +3237,11 @@ static inline int slab_order(int size, int min_objects,
 	return order;
 }
 
-static inline int calculate_order(int size, int reserved)
+static inline int calculate_order(unsigned int size, unsigned int reserved)
 {
-	int order;
-	int min_objects;
-	int fraction;
-	int max_objects;
+	unsigned int order;
+	unsigned int min_objects;
+	unsigned int max_objects;
 
 	/*
 	 * Attempt to find best configuration for a slab. This
@@ -3256,6 +3258,8 @@ static inline int calculate_order(int size, int reserved)
 	min_objects = min(min_objects, max_objects);
 
 	while (min_objects > 1) {
+		unsigned int fraction;
+
 		fraction = 16;
 		while (fraction >= 4) {
 			order = slab_order(size, min_objects,
@@ -3458,7 +3462,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 {
 	slab_flags_t flags = s->flags;
 	unsigned int size = s->object_size;
-	int order;
+	unsigned int order;
 
 	/*
 	 * Round up object size to the next word boundary. We can only
@@ -3548,7 +3552,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	else
 		order = calculate_order(size, s->reserved);
 
-	if (order < 0)
+	if ((int)order < 0)
 		return 0;
 
 	s->allocflags = 0;
@@ -3716,7 +3720,7 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
 
 static int __init setup_slub_min_order(char *str)
 {
-	get_option(&str, &slub_min_order);
+	get_option(&str, (int *)&slub_min_order);
 
 	return 1;
 }
@@ -3725,8 +3729,8 @@ __setup("slub_min_order=", setup_slub_min_order);
 
 static int __init setup_slub_max_order(char *str)
 {
-	get_option(&str, &slub_max_order);
-	slub_max_order = min(slub_max_order, MAX_ORDER - 1);
+	get_option(&str, (int *)&slub_max_order);
+	slub_max_order = min(slub_max_order, (unsigned int)MAX_ORDER - 1);
 
 	return 1;
 }
@@ -3735,7 +3739,7 @@ __setup("slub_max_order=", setup_slub_max_order);
 
 static int __init setup_slub_min_objects(char *str)
 {
-	get_option(&str, &slub_min_objects);
+	get_option(&str, (int *)&slub_min_objects);
 
 	return 1;
 }
@@ -4230,7 +4234,7 @@ void __init kmem_cache_init(void)
 	cpuhp_setup_state_nocalls(CPUHP_SLUB_DEAD, "slub:dead", NULL,
 				  slub_cpu_dead);
 
-	pr_info("SLUB: HWalign=%d, Order=%d-%d, MinObjects=%d, CPUs=%u, Nodes=%d\n",
+	pr_info("SLUB: HWalign=%d, Order=%u-%u, MinObjects=%u, CPUs=%u, Nodes=%d\n",
 		cache_line_size(),
 		slub_min_order, slub_max_order, slub_min_objects,
 		nr_cpu_ids, nr_node_ids);
@@ -4906,17 +4910,17 @@ SLAB_ATTR_RO(object_size);
 
 static ssize_t objs_per_slab_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", oo_objects(s->oo));
+	return sprintf(buf, "%u\n", oo_objects(s->oo));
 }
 SLAB_ATTR_RO(objs_per_slab);
 
 static ssize_t order_store(struct kmem_cache *s,
 				const char *buf, size_t length)
 {
-	unsigned long order;
+	unsigned int order;
 	int err;
 
-	err = kstrtoul(buf, 10, &order);
+	err = kstrtouint(buf, 10, &order);
 	if (err)
 		return err;
 
@@ -4929,7 +4933,7 @@ static ssize_t order_store(struct kmem_cache *s,
 
 static ssize_t order_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", oo_order(s->oo));
+	return sprintf(buf, "%u\n", oo_order(s->oo));
 }
 SLAB_ATTR(order);
 
-- 
2.16.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 24/25] slub: make size_from_object() return unsigned int
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (21 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 23/25] slub: make struct kmem_cache_order_objects::x " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:52   ` Christopher Lameter
  2018-03-05 20:07 ` [PATCH 25/25] slab: use 32-bit arithmetic in freelist_randomize() Alexey Dobriyan
                   ` (2 subsequent siblings)
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

The function returns the size of the object without the red zone, which can't
be negative.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 9df658ee83fe..7f27fb3b13b7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -466,7 +466,7 @@ static void get_map(struct kmem_cache *s, struct page *page, unsigned long *map)
 		set_bit(slab_index(p, s, addr), map);
 }
 
-static inline int size_from_object(struct kmem_cache *s)
+static inline unsigned int size_from_object(struct kmem_cache *s)
 {
 	if (s->flags & SLAB_RED_ZONE)
 		return s->size - s->red_left_pad;
-- 
2.16.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* [PATCH 25/25] slab: use 32-bit arithmetic in freelist_randomize()
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (22 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 24/25] slub: make size_from_object() return " Alexey Dobriyan
@ 2018-03-05 20:07 ` Alexey Dobriyan
  2018-03-06 18:52   ` Christopher Lameter
  2018-03-06 18:21 ` [PATCH 01/25] slab: fixup calculate_alignment() argument type Christopher Lameter
  2018-04-10 20:25 ` Matthew Wilcox
  25 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-05 20:07 UTC (permalink / raw)
  To: akpm; +Cc: cl, penberg, rientjes, iamjoonsoo.kim, linux-mm, adobriyan

SLAB doesn't support 4GB+ of objects per slab, therefore randomization
doesn't need size_t.
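
Here "count" is the number of objects in one slab and list[] holds per-slab
object indices, so 32-bit arithmetic is ample. For reference, the overall
shape of the function is a Fisher-Yates shuffle over those indices; a rough
sketch (not the exact kernel code, the shuffle half is outside this hunk):

	for (i = 0; i < count; i++)
		list[i] = i;
	for (i = count - 1; i > 0; i--) {
		rand = prandom_u32_state(state) % (i + 1);
		swap(list[i], list[rand]);
	}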

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab_common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 01224cb90080..e2e2485b3496 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1186,10 +1186,10 @@ EXPORT_SYMBOL(kmalloc_order_trace);
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
 static void freelist_randomize(struct rnd_state *state, unsigned int *list,
-			size_t count)
+			       unsigned int count)
 {
-	size_t i;
 	unsigned int rand;
+	unsigned int i;
 
 	for (i = 0; i < count; i++)
 		list[i] = i;
-- 
2.16.1


^ permalink raw reply related	[flat|nested] 61+ messages in thread

* Re: [PATCH 01/25] slab: fixup calculate_alignment() argument type
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (23 preceding siblings ...)
  2018-03-05 20:07 ` [PATCH 25/25] slab: use 32-bit arithmetic in freelist_randomize() Alexey Dobriyan
@ 2018-03-06 18:21 ` Christopher Lameter
  2018-04-10 20:25 ` Matthew Wilcox
  25 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:21 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 02/25] slab: make kmalloc_index() return "unsigned int"
  2018-03-05 20:07 ` [PATCH 02/25] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
@ 2018-03-06 18:24   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:24 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Mon, 5 Mar 2018, Alexey Dobriyan wrote:

> 3) in case of SLAB allocators, there are additional limitations
>    *) page->inuse, page->objects are only 16-/15-bit,
>    *) cache size was always 32-bit
>    *) slab orders are small, order 20 is needed to go 64-bit on x86_64
>       (PAGE_SIZE << order)

That changes with large base page sizes on power and ARM64, for example, but
then we do not want to encourage larger allocations through slab anyway.

Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 03/25] slab: make kmalloc_size() return "unsigned int"
  2018-03-05 20:07 ` [PATCH 03/25] slab: make kmalloc_size() " Alexey Dobriyan
@ 2018-03-06 18:24   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:24 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 04/25] slab: make create_kmalloc_cache() work with 32-bit sizes
  2018-03-05 20:07 ` [PATCH 04/25] slab: make create_kmalloc_cache() work with 32-bit sizes Alexey Dobriyan
@ 2018-03-06 18:32   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:32 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Mon, 5 Mar 2018, Alexey Dobriyan wrote:

> KMALLOC_MAX_CACHE_SIZE is 32-bit so is the largest kmalloc cache size.

OK, SLAB's maximum allocation size is limited to 32M (see
include/linux/slab.h):

#define KMALLOC_SHIFT_HIGH      ((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
                                (MAX_ORDER + PAGE_SHIFT - 1) : 25)

And SLUB/SLOB pass all larger requests to the page allocator anyways.

Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 05/25] slab: make create_boot_cache() work with 32-bit sizes
  2018-03-05 20:07 ` [PATCH 05/25] slab: make create_boot_cache() " Alexey Dobriyan
@ 2018-03-06 18:34   ` Christopher Lameter
  2018-03-06 19:14     ` Matthew Wilcox
  0 siblings, 1 reply; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:34 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Mon, 5 Mar 2018, Alexey Dobriyan wrote:

> struct kmem_cache::size has always been "int", all those
> "size_t size" are fake.

They are useful since you typically pass sizeof( < whatever > ) as a
parameter to kmem_cache_create(). The functions internal to slab that those
values are passed on to could use int.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 06/25] slab: make kmem_cache_create() work with 32-bit sizes
  2018-03-05 20:07 ` [PATCH 06/25] slab: make kmem_cache_create() " Alexey Dobriyan
@ 2018-03-06 18:37   ` Christopher Lameter
  2018-04-05 21:48     ` Andrew Morton
  0 siblings, 1 reply; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:37 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Mon, 5 Mar 2018, Alexey Dobriyan wrote:

> struct kmem_cache::size and ::align were always 32-bit.
>
> Out of curiosity I created 4GB kmem_cache, it oopsed with division by 0.
> kmem_cache_create(1UL<<32+1) created 1-byte cache as expected.

Could you add a check to avoid that in the future?

> size_t doesn't work and never did.

It's not so simple. Please verify that the edge cases of all object size /
alignment etc calculations are doable with 32 bit entities first.

And size_t makes sense as a parameter.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 07/25] slab: make size_index[] array u8
  2018-03-05 20:07 ` [PATCH 07/25] slab: make size_index[] array u8 Alexey Dobriyan
@ 2018-03-06 18:38   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:38 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 08/25] slab: make size_index_elem() unsigned int
  2018-03-05 20:07 ` [PATCH 08/25] slab: make size_index_elem() unsigned int Alexey Dobriyan
@ 2018-03-06 18:39   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:39 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 09/25] slub: make ->remote_node_defrag_ratio unsigned int
  2018-03-05 20:07 ` [PATCH 09/25] slub: make ->remote_node_defrag_ratio " Alexey Dobriyan
@ 2018-03-06 18:41   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:41 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Mon, 5 Mar 2018, Alexey Dobriyan wrote:

> ->remote_node_defrag_ratio is in range 0..1000.

This also adds a check and modifies the behavior to return an error code.
Before this patch invalid values were ignored.

Acked-by: Christoph Lameter <cl@linux.com>

> -	err = kstrtoul(buf, 10, &ratio);
> +	err = kstrtouint(buf, 10, &ratio);
>  	if (err)
>  		return err;
> +	if (ratio > 100)
> +		return -ERANGE;
>
> -	if (ratio <= 100)
> -		s->remote_node_defrag_ratio = ratio * 10;
> +	s->remote_node_defrag_ratio = ratio * 10;
>
>  	return length;
>  }
>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 10/25] slub: make ->max_attr_size unsigned int
  2018-03-05 20:07 ` [PATCH 10/25] slub: make ->max_attr_size " Alexey Dobriyan
@ 2018-03-06 18:42   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:42 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 11/25] slub: make ->red_left_pad unsigned int
  2018-03-05 20:07 ` [PATCH 11/25] slub: make ->red_left_pad " Alexey Dobriyan
@ 2018-03-06 18:42   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:42 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 12/25] slub: make ->reserved unsigned int
  2018-03-05 20:07 ` [PATCH 12/25] slub: make ->reserved " Alexey Dobriyan
@ 2018-03-06 18:43   ` Christopher Lameter
  2018-03-09 15:51     ` Alexey Dobriyan
  2018-03-06 18:45   ` Matthew Wilcox
  1 sibling, 1 reply; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:43 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Mon, 5 Mar 2018, Alexey Dobriyan wrote:

> ->reserved is either 0 or sizeof(struct rcu_head), can't be negative.

Thus it should be size_t? ;-)

Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 13/25] slub: make ->align unsigned int
  2018-03-05 20:07 ` [PATCH 13/25] slub: make ->align " Alexey Dobriyan
@ 2018-03-06 18:43   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:43 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 14/25] slub: make ->inuse unsigned int
  2018-03-05 20:07 ` [PATCH 14/25] slub: make ->inuse " Alexey Dobriyan
@ 2018-03-06 18:44   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:44 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 15/25] slub: make ->cpu_partial unsigned int
  2018-03-05 20:07 ` [PATCH 15/25] slub: make ->cpu_partial " Alexey Dobriyan
@ 2018-03-06 18:44   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:44 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 12/25] slub: make ->reserved unsigned int
  2018-03-05 20:07 ` [PATCH 12/25] slub: make ->reserved " Alexey Dobriyan
  2018-03-06 18:43   ` Christopher Lameter
@ 2018-03-06 18:45   ` Matthew Wilcox
  2018-03-09 22:42     ` Alexey Dobriyan
  1 sibling, 1 reply; 61+ messages in thread
From: Matthew Wilcox @ 2018-03-06 18:45 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, cl, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Mon, Mar 05, 2018 at 11:07:17PM +0300, Alexey Dobriyan wrote:
> ->reserved is either 0 or sizeof(struct rcu_head), can't be negative.

Maybe make it unsigned char instead of unsigned int in case there's
anything else that could use the space?


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 16/25] slub: make ->offset unsigned int
  2018-03-05 20:07 ` [PATCH 16/25] slub: make ->offset " Alexey Dobriyan
@ 2018-03-06 18:45   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:45 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 17/25] slub: make ->object_size unsigned int
  2018-03-05 20:07 ` [PATCH 17/25] slub: make ->object_size " Alexey Dobriyan
@ 2018-03-06 18:45   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:45 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 18/25] slub: make ->size unsigned int
  2018-03-05 20:07 ` [PATCH 18/25] slub: make ->size " Alexey Dobriyan
@ 2018-03-06 18:46   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:46 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 19/25] slab: make kmem_cache_flags accept 32-bit object size
  2018-03-05 20:07 ` [PATCH 19/25] slab: make kmem_cache_flags accept 32-bit object size Alexey Dobriyan
@ 2018-03-06 18:47   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:47 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>



^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 22/25] slub: make slab_index() return unsigned int
  2018-03-05 20:07 ` [PATCH 22/25] slub: make slab_index() return unsigned int Alexey Dobriyan
@ 2018-03-06 18:48   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:48 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 23/25] slub: make struct kmem_cache_order_objects::x unsigned int
  2018-03-05 20:07 ` [PATCH 23/25] slub: make struct kmem_cache_order_objects::x " Alexey Dobriyan
@ 2018-03-06 18:51   ` Christopher Lameter
  2018-04-05 21:51     ` Andrew Morton
  0 siblings, 1 reply; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:51 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Mon, 5 Mar 2018, Alexey Dobriyan wrote:

> struct kmem_cache_order_objects is for mixing order and number of objects,
> and orders aren't bit enough to warrant 64-bit width.
>
> Propagate unsignedness down so that everything fits.
>
> !!! Patch assumes that "PAGE_SIZE << order" doesn't overflow. !!!

PAGE_SIZE could be a couple of megs on some platforms (256 or so on
Itanium/PowerPC???) . So what are the worst case scenarios here?

I think both order and # object should fit in a 32 bit number.

A page with 256M size and 4 byte objects would have 64M objects.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 24/25] slub: make size_from_object() return unsigned int
  2018-03-05 20:07 ` [PATCH 24/25] slub: make size_from_object() return " Alexey Dobriyan
@ 2018-03-06 18:52   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:52 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 25/25] slab: use 32-bit arithmetic in freelist_randomize()
  2018-03-05 20:07 ` [PATCH 25/25] slab: use 32-bit arithmetic in freelist_randomize() Alexey Dobriyan
@ 2018-03-06 18:52   ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-03-06 18:52 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm


Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 05/25] slab: make create_boot_cache() work with 32-bit sizes
  2018-03-06 18:34   ` Christopher Lameter
@ 2018-03-06 19:14     ` Matthew Wilcox
  0 siblings, 0 replies; 61+ messages in thread
From: Matthew Wilcox @ 2018-03-06 19:14 UTC (permalink / raw)
  To: Christopher Lameter
  Cc: Alexey Dobriyan, akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Tue, Mar 06, 2018 at 12:34:05PM -0600, Christopher Lameter wrote:
> On Mon, 5 Mar 2018, Alexey Dobriyan wrote:
> 
> > struct kmem_cache::size has always been "int", all those
> > "size_t size" are fake.
> 
> They are useful since you typically pass sizeof( < whatever > ) as a
> parameter to kmem_cache_create(). Passing those values onto other
> functions internal to slab could use int.

Sure, but:

struct foo {
	int n;
	char *p;
};
int f(unsigned int x);

int g(void)
{
	return f(sizeof(struct foo));
}

gives:

   0:   bf 10 00 00 00          mov    $0x10,%edi
   5:   e9 00 00 00 00          jmpq   a <g+0xa>

Changing the prototype to "int f(unsigned long x)" produces _exactly the
same assembly_.  Why?  Because mov to %edi will zero out the upper 32-bits
of %rdi.  I consider it one of the flaws in the x86 instruction set that
a mov to %di doesn't zero out the upper 16 bits of %edi (and correspondingly
the upper 48 bits of %rdi), as it'd save an awful lot of bytes in the
instruction stream by replacing 32-bit constants with 16-bit constants.

There's just no difference between these two.  Unless you want to talk
about a structure exceeding 4GB in size, and then I'm afraid we have
bigger problems.


^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 12/25] slub: make ->reserved unsigned int
  2018-03-06 18:43   ` Christopher Lameter
@ 2018-03-09 15:51     ` Alexey Dobriyan
  0 siblings, 0 replies; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-09 15:51 UTC (permalink / raw)
  To: Christopher Lameter; +Cc: akpm, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Tue, Mar 06, 2018 at 12:43:26PM -0600, Christopher Lameter wrote:
> On Mon, 5 Mar 2018, Alexey Dobriyan wrote:
> 
> > ->reserved is either 0 or sizeof(struct rcu_head), can't be negative.
> 
> Thus it should be size_t? ;-)

:-)

Christoph, using "unsigned int" should really be the default for the kernel.

As was noted earlier, it doesn't matter for constants because a 32-bit move on
x86_64 clears the upper half of the register. But it matters for sizes which
aren't known at compile time.

I've looked at a lot of places where size_t is used.
There is a certain degree of "type correctness" where people try to preserve
the type as much as possible. It works until the first multiplication.

	int n;
	size_t len = sizeof(struct foo0) + n * sizeof(struct foo);

Most likely MOVSX or CDQE will be generated which is not the case
if everything is "unsigned int".
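
For comparison, a sketch of the all-unsigned variant of the same (hypothetical)
snippet, which the compiler can keep entirely in 32-bit registers:

	unsigned int n;
	unsigned int len = sizeof(struct foo0) + n * sizeof(struct foo);
	/* 32-bit multiply and add, no MOVSX/CDQE; zero extension to 64-bit
	   only happens (for free) when the value is actually used as a
	   size_t or a pointer offset */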

Generally, on x86_64,

	uint32_t, uint8_t > uint64_t > uint16_t

uint64_t adds REX prefix.
uint16_t additionally adds 66 prefix

uint8_t doesn't add anything but it is suboptimal on embedded archs
which emit "& 0xff" and thus should be used only for trimming memory
usage.

Additionally,

	unsigned int > int

as it is easy for the compiler to lose track of the value range and generate
sign extensions.

There is only one exception, namely, when pointers are mixed with
integers:

	int n;
	void *p = p0 + n;

Quite often gcc generates bigger code in individual places when types are made
unsigned. I don't quite understand why it decides that, but overall the code
becomes smaller if every signed type is made unsigned.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 12/25] slub: make ->reserved unsigned int
  2018-03-06 18:45   ` Matthew Wilcox
@ 2018-03-09 22:42     ` Alexey Dobriyan
  0 siblings, 0 replies; 61+ messages in thread
From: Alexey Dobriyan @ 2018-03-09 22:42 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: akpm, cl, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Tue, Mar 06, 2018 at 10:45:08AM -0800, Matthew Wilcox wrote:
> On Mon, Mar 05, 2018 at 11:07:17PM +0300, Alexey Dobriyan wrote:
> > ->reserved is either 0 or sizeof(struct rcu_head), can't be negative.
> 
> Maybe make it unsigned char instead of unsigned int in case there's
> anything else that could use the space?

Looks like nothing except ->red_left_pad qualifies for uint8_t.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 06/25] slab: make kmem_cache_create() work with 32-bit sizes
  2018-03-06 18:37   ` Christopher Lameter
@ 2018-04-05 21:48     ` Andrew Morton
  2018-04-06  8:40       ` Alexey Dobriyan
  0 siblings, 1 reply; 61+ messages in thread
From: Andrew Morton @ 2018-04-05 21:48 UTC (permalink / raw)
  To: Christopher Lameter
  Cc: Alexey Dobriyan, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Tue, 6 Mar 2018 12:37:49 -0600 (CST) Christopher Lameter <cl@linux.com> wrote:

> On Mon, 5 Mar 2018, Alexey Dobriyan wrote:
> 
> > struct kmem_cache::size and ::align were always 32-bit.
> >
> > Out of curiosity I created 4GB kmem_cache, it oopsed with division by 0.
> > kmem_cache_create(1UL<<32+1) created 1-byte cache as expected.
> 
> Could you add a check to avoid that in the future?
> 
> > size_t doesn't work and never did.
> 
> Its not so simple. Please verify that the edge cases of all object size /
> alignment etc calculations are doable with 32 bit entities first.
> 
> And size_t makes sense as a parameter.

Alexey, please don't let this stuff dangle on.

I think I'll merge this as-is but some fixups might be needed as a
result of Christoph's suggestion?

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 23/25] slub: make struct kmem_cache_order_objects::x unsigned int
  2018-03-06 18:51   ` Christopher Lameter
@ 2018-04-05 21:51     ` Andrew Morton
  2018-04-06 18:02       ` Alexey Dobriyan
  0 siblings, 1 reply; 61+ messages in thread
From: Andrew Morton @ 2018-04-05 21:51 UTC (permalink / raw)
  To: Christopher Lameter
  Cc: Alexey Dobriyan, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Tue, 6 Mar 2018 12:51:47 -0600 (CST) Christopher Lameter <cl@linux.com> wrote:

> On Mon, 5 Mar 2018, Alexey Dobriyan wrote:
> 
> > struct kmem_cache_order_objects is for mixing order and number of objects,
> > and orders aren't bit enough to warrant 64-bit width.
> >
> > Propagate unsignedness down so that everything fits.
> >
> > !!! Patch assumes that "PAGE_SIZE << order" doesn't overflow. !!!
> 
> PAGE_SIZE could be a couple of megs on some platforms (256 or so on
> Itanium/PowerPC???) . So what are the worst case scenarios here?
> 
> I think both order and # object should fit in a 32 bit number.
> 
> A page with 256M size and 4 byte objects would have 64M objects.

Another dangling review comment.  Alexey, please respond?

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 06/25] slab: make kmem_cache_create() work with 32-bit sizes
  2018-04-05 21:48     ` Andrew Morton
@ 2018-04-06  8:40       ` Alexey Dobriyan
  2018-04-07 15:13         ` Christopher Lameter
  0 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-04-06  8:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Christopher Lameter, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Thu, Apr 05, 2018 at 02:48:33PM -0700, Andrew Morton wrote:
> On Tue, 6 Mar 2018 12:37:49 -0600 (CST) Christopher Lameter <cl@linux.com> wrote:
> 
> > On Mon, 5 Mar 2018, Alexey Dobriyan wrote:
> > 
> > > struct kmem_cache::size and ::align were always 32-bit.
> > >
> > > Out of curiosity I created 4GB kmem_cache, it oopsed with division by 0.
> > > kmem_cache_create(1UL<<32+1) created 1-byte cache as expected.
> > 
> > Could you add a check to avoid that in the future?
> > 
> > > size_t doesn't work and never did.
> > 
> > Its not so simple. Please verify that the edge cases of all object size /
> > alignment etc calculations are doable with 32 bit entities first.
> > 
> > And size_t makes sense as a parameter.
> 
> Alexey, please don't let this stuff dangle on.
> 
> I think I'll merge this as-is but some fixups might be needed as a
> result of Christoph's suggestion?

I see this email in public archives, but not in my mailbox :-\

Anyway,

I think the answer is in fact simple.

1)
"int size" proves that 4GB+ caches were always broken both on SLUB
and SLAB. I could audit calculate_sizes() and friends but why bother
if create_cache() already truncated everything.

You're writing:

	that the edge cases of all object size ...
	... are doable with 32 bit entities

AS IF they were doable with 64-bit. They weren't.

2)
Dynamically allocated kernel data structures are in fact small.
The biggest one I know of is "struct kvm_vcpu", which is 20KB on my machine.

kmalloc() is limited to 64MB; after that it falls back to the page allocator.
Which means a cache for some huge structure either has to be created with
kmem_cache_create() or is not affected by the conversion at all, as it still
falls back to the page allocator.

3)
->size and ->align were signed ints, making them unsigned makes
overflows twice as unlikely :^)

> And size_t makes sense as a parameter.

size_t doesn't make sense for the kernel as 4GB+ objects are few and far
between.

I remember such patches could shrink SLUB by ~1KB, and SLUB is ~30KB total.
So it is a 2-3% reduction simply from not using "unsigned long" and "size_t"
and doing the arithmetic in 32 bits.

Userspace shifted to size_t, people copy that style, and it bloats the kernel
for no reason.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 23/25] slub: make struct kmem_cache_order_objects::x unsigned int
  2018-04-05 21:51     ` Andrew Morton
@ 2018-04-06 18:02       ` Alexey Dobriyan
  2018-04-07 15:18         ` Christopher Lameter
  0 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-04-06 18:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Christopher Lameter, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Thu, Apr 05, 2018 at 02:51:08PM -0700, Andrew Morton wrote:
> On Tue, 6 Mar 2018 12:51:47 -0600 (CST) Christopher Lameter <cl@linux.com> wrote:
> 
> > On Mon, 5 Mar 2018, Alexey Dobriyan wrote:
> > 
> > > struct kmem_cache_order_objects is for mixing order and number of objects,
> > > and orders aren't bit enough to warrant 64-bit width.
> > >
> > > Propagate unsignedness down so that everything fits.
> > >
> > > !!! Patch assumes that "PAGE_SIZE << order" doesn't overflow. !!!
> > 
> > PAGE_SIZE could be a couple of megs on some platforms (256 or so on
> > Itanium/PowerPC???) . So what are the worst case scenarios here?
> > 
> > I think both order and # object should fit in a 32 bit number.
> > 
> > A page with 256M size and 4 byte objects would have 64M objects.
> 
> Another dangling review comment.  Alexey, please respond?

PowerPC is 256KB, IA64 is 64KB.

So "PAGE_SIZE << order" overflows if order is 14 (or 13 if signed int
slips in somewhere. Highest safe order is 12, which should be enough.

When was the last time you saw 2GB slab?
It never happenes as costly order is 3(?).

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 06/25] slab: make kmem_cache_create() work with 32-bit sizes
  2018-04-06  8:40       ` Alexey Dobriyan
@ 2018-04-07 15:13         ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-04-07 15:13 UTC (permalink / raw)
  To: Alexey Dobriyan
  Cc: Andrew Morton, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Fri, 6 Apr 2018, Alexey Dobriyan wrote:

> > > Its not so simple. Please verify that the edge cases of all object size /
> > > alignment etc calculations are doable with 32 bit entities first.
> > >
> > > And size_t makes sense as a parameter.
> >
> > Alexey, please don't let this stuff dangle on.
> >
> > I think I'll merge this as-is but some fixups might be needed as a
> > result of Christoph's suggestion?
>
> I see this email in public archives, but not in my mailbox :-\

Oh gosh. More email trouble with routing via comcast.

> 1)
> "int size" proves that 4GB+ caches were always broken both on SLUB
> and SLAB. I could audit calculate_sizes() and friends but why bother
> if create_cache() already truncated everything.

The problem is that intermediate results in calculations may exceed the
int range. Please look at that.

> You're writing:
>
> 	that the edge cases of all object size ...
> 	... are doable with 32 bit entities
>
> AS IF they were doable with 64-bit. They weren't.

That was not the issue. No one ever claimed that slabs of more than 4GB
were supported.

> kmalloc is limited to 64MB, after that it fallbacks to page allocator.
> Which means that some huge structure cache must be created by cache or
> not affected by conversion as it still falls back to page allocator.

That is not accurate: kmalloc falls back to the page allocator for anything
above PAGE_SIZE << 1.

> > And size_t makes sense as a parameter.
>
> size_t doesn't make sense for kernel as 4GB+ objects are few and far
> between.

Again not the issue. Please stop fighting straw men and issues that you
come up with in your imagination. size_t makes sense because the type is
designed to represent the size of an object.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 23/25] slub: make struct kmem_cache_order_objects::x unsigned int
  2018-04-06 18:02       ` Alexey Dobriyan
@ 2018-04-07 15:18         ` Christopher Lameter
  0 siblings, 0 replies; 61+ messages in thread
From: Christopher Lameter @ 2018-04-07 15:18 UTC (permalink / raw)
  To: Alexey Dobriyan
  Cc: Andrew Morton, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Fri, 6 Apr 2018, Alexey Dobriyan wrote:

> > > I think both order and # object should fit in a 32 bit number.
> > >
> > > A page with 256M size and 4 byte objects would have 64M objects.
> >
> > Another dangling review comment.  Alexey, please respond?
>
> PowerPC is 256KB, IA64 is 64KB.

The page sizes on both platforms are configurable and there have been
experiments in the past with far larger page sizes. If this is what is
currently supported then it's OK.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 01/25] slab: fixup calculate_alignment() argument type
  2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
                   ` (24 preceding siblings ...)
  2018-03-06 18:21 ` [PATCH 01/25] slab: fixup calculate_alignment() argument type Christopher Lameter
@ 2018-04-10 20:25 ` Matthew Wilcox
  2018-04-10 20:47   ` Alexey Dobriyan
  25 siblings, 1 reply; 61+ messages in thread
From: Matthew Wilcox @ 2018-04-10 20:25 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, cl, penberg, rientjes, iamjoonsoo.kim, linux-mm


Hi Alexey,

I came across this:

        for (order = max(min_order, (unsigned int)get_order(min_objects * size + reserved));

Do you want to work on making get_order() return an unsigned int?

Also, I think get_order(0) should probably be 0, but you might develop
a different feeling for it as you work your way around the kernel looking
at how it's used.

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 01/25] slab: fixup calculate_alignment() argument type
  2018-04-10 20:25 ` Matthew Wilcox
@ 2018-04-10 20:47   ` Alexey Dobriyan
  2018-04-10 21:02     ` Matthew Wilcox
  0 siblings, 1 reply; 61+ messages in thread
From: Alexey Dobriyan @ 2018-04-10 20:47 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: akpm, cl, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Tue, Apr 10, 2018 at 01:25:46PM -0700, Matthew Wilcox wrote:
> I came across this:
> 
>         for (order = max(min_order, (unsigned int)get_order(min_objects * size + reserved));
> 
> Do you want to work on making get_order() return an unsigned int?
> 
> Also, I think get_order(0) should probably be 0, but you might develop
> a different feeling for it as you work your way around the kernel looking
> at how it's used.

IIRC total size increased when I made it return "unsigned int".

Another thing is that there should be 3 get_order's corresponding
to 32-bit, 64-bit and unsigned long versions of fls() which correspond
to REX and non-REX versions of BSR.
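
A sketch of what that split could look like (hypothetical helpers, nothing
of the sort exists in the tree): one get_order() per operand width, so
fls()/fls64() -- and hence the narrow vs. REX-prefixed BSR on x86 -- is
picked to match the argument. These also happen to return 0 for size == 0.

	static inline unsigned int get_order32(u32 size)
	{
		return size > PAGE_SIZE ? fls(size - 1) - PAGE_SHIFT : 0;
	}

	static inline unsigned int get_order64(u64 size)
	{
		return size > PAGE_SIZE ? fls64(size - 1) - PAGE_SHIFT : 0;
	}

	static inline unsigned int get_order_ulong(unsigned long size)
	{
		return BITS_PER_LONG == 64 ? get_order64(size) : get_order32(size);
	}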

^ permalink raw reply	[flat|nested] 61+ messages in thread

* Re: [PATCH 01/25] slab: fixup calculate_alignment() argument type
  2018-04-10 20:47   ` Alexey Dobriyan
@ 2018-04-10 21:02     ` Matthew Wilcox
  0 siblings, 0 replies; 61+ messages in thread
From: Matthew Wilcox @ 2018-04-10 21:02 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, cl, penberg, rientjes, iamjoonsoo.kim, linux-mm

On Tue, Apr 10, 2018 at 11:47:32PM +0300, Alexey Dobriyan wrote:
> On Tue, Apr 10, 2018 at 01:25:46PM -0700, Matthew Wilcox wrote:
> > I came across this:
> > 
> >         for (order = max(min_order, (unsigned int)get_order(min_objects * size + reserved));
> > 
> > Do you want to work on making get_order() return an unsigned int?
> > 
> > Also, I think get_order(0) should probably be 0, but you might develop
> > a different feeling for it as you work your way around the kernel looking
> > at how it's used.
> 
> IIRC total size increased when I made it return "unsigned int".

Huh, weird.  Did you go so far as to try having it return unsigned char?
We know it's not going to return anything outside the range of 0-63.

^ permalink raw reply	[flat|nested] 61+ messages in thread

end of thread, other threads:[~2018-04-10 21:02 UTC | newest]

Thread overview: 61+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-03-05 20:07 [PATCH 01/25] slab: fixup calculate_alignment() argument type Alexey Dobriyan
2018-03-05 20:07 ` [PATCH 02/25] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
2018-03-06 18:24   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 03/25] slab: make kmalloc_size() " Alexey Dobriyan
2018-03-06 18:24   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 04/25] slab: make create_kmalloc_cache() work with 32-bit sizes Alexey Dobriyan
2018-03-06 18:32   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 05/25] slab: make create_boot_cache() " Alexey Dobriyan
2018-03-06 18:34   ` Christopher Lameter
2018-03-06 19:14     ` Matthew Wilcox
2018-03-05 20:07 ` [PATCH 06/25] slab: make kmem_cache_create() " Alexey Dobriyan
2018-03-06 18:37   ` Christopher Lameter
2018-04-05 21:48     ` Andrew Morton
2018-04-06  8:40       ` Alexey Dobriyan
2018-04-07 15:13         ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 07/25] slab: make size_index[] array u8 Alexey Dobriyan
2018-03-06 18:38   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 08/25] slab: make size_index_elem() unsigned int Alexey Dobriyan
2018-03-06 18:39   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 09/25] slub: make ->remote_node_defrag_ratio " Alexey Dobriyan
2018-03-06 18:41   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 10/25] slub: make ->max_attr_size " Alexey Dobriyan
2018-03-06 18:42   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 11/25] slub: make ->red_left_pad " Alexey Dobriyan
2018-03-06 18:42   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 12/25] slub: make ->reserved " Alexey Dobriyan
2018-03-06 18:43   ` Christopher Lameter
2018-03-09 15:51     ` Alexey Dobriyan
2018-03-06 18:45   ` Matthew Wilcox
2018-03-09 22:42     ` Alexey Dobriyan
2018-03-05 20:07 ` [PATCH 13/25] slub: make ->align " Alexey Dobriyan
2018-03-06 18:43   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 14/25] slub: make ->inuse " Alexey Dobriyan
2018-03-06 18:44   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 15/25] slub: make ->cpu_partial " Alexey Dobriyan
2018-03-06 18:44   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 16/25] slub: make ->offset " Alexey Dobriyan
2018-03-06 18:45   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 17/25] slub: make ->object_size " Alexey Dobriyan
2018-03-06 18:45   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 18/25] slub: make ->size " Alexey Dobriyan
2018-03-06 18:46   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 19/25] slab: make kmem_cache_flags accept 32-bit object size Alexey Dobriyan
2018-03-06 18:47   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 20/25] kasan: make kasan_cache_create() work with 32-bit slab cache sizes Alexey Dobriyan
2018-03-05 20:07 ` [PATCH 21/25] slab: make usercopy region 32-bit Alexey Dobriyan
2018-03-05 20:07 ` [PATCH 22/25] slub: make slab_index() return unsigned int Alexey Dobriyan
2018-03-06 18:48   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 23/25] slub: make struct kmem_cache_order_objects::x " Alexey Dobriyan
2018-03-06 18:51   ` Christopher Lameter
2018-04-05 21:51     ` Andrew Morton
2018-04-06 18:02       ` Alexey Dobriyan
2018-04-07 15:18         ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 24/25] slub: make size_from_object() return " Alexey Dobriyan
2018-03-06 18:52   ` Christopher Lameter
2018-03-05 20:07 ` [PATCH 25/25] slab: use 32-bit arithmetic in freelist_randomize() Alexey Dobriyan
2018-03-06 18:52   ` Christopher Lameter
2018-03-06 18:21 ` [PATCH 01/25] slab: fixup calculate_alignment() argument type Christopher Lameter
2018-04-10 20:25 ` Matthew Wilcox
2018-04-10 20:47   ` Alexey Dobriyan
2018-04-10 21:02     ` Matthew Wilcox
