* [PATCH 01/23] slab: make kmalloc_index() return "unsigned int"
@ 2017-11-23 22:16 Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 02/23] slab: make kmalloc_size() " Alexey Dobriyan
                   ` (22 more replies)
  0 siblings, 23 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

kmalloc_index() returns an index into the array of kmalloc kmem caches,
therefore it should be unsigned.

Space savings:

	add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-6 (-6)
	Function                                     old     new   delta
	rtsx_scsi_handler                           9116    9114      -2
	vnic_rq_alloc                                424     420      -4

This patch starts a series converting SLUB (mostly) to "unsigned int".

1) Most integers in the code are in fact unsigned entities: array indexes,
   lengths, buffer sizes, allocation orders. It is therefore better to use
   unsigned variables.

2) Some integers in the code are either "size_t" or "unsigned long" for no
   reason.

   size_t usually comes from people trying to "maintain" type correctness
   after noticing that the "sizeof" operator returns size_t, or that
   memset/memcpy take size_t, so everything passed to them should too.

   However, the number of 4GB+ objects in the kernel is very small.
   Most, if not all, objects dynamically allocated with kmalloc() or
   kmem_cache_create() aren't actually big. Maintaining wide types
   doesn't buy anything.

   64-bit ops are bigger than 32-bit ones on our beloved x86_64,
   so try not to use 64-bit types where they aren't necessary
   (read: everywhere integers are integers, not pointers).

3) In the case of the slab allocators, there are additional limitations:
   *) page->inuse and page->objects are only 16-/15-bit,
   *) cache size was always 32-bit,
   *) slab orders are small; order 20 is needed to go 64-bit on x86_64
      (PAGE_SIZE << order).

Basically everything is 32-bit, except kmalloc(1ULL << 32), which gets
shortcut through the page allocator.
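
A rough illustration (a sketch, not code from this series) of the cost:
indexing an array with a signed "int" forces a movslq/CDQE-style sign
extension on x86_64, while "unsigned int" needs only a zero extension,
which is shorter or free:

	struct kmem_cache *caches[32];

	struct kmem_cache *get_signed(int i)
	{
		return caches[i];	/* movslq %edi, ... -- sign extend */
	}

	struct kmem_cache *get_unsigned(unsigned int i)
	{
		return caches[i];	/* mov %edi, %edi at worst */
	}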

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slab.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 50697a1d6621..e765800d7c9b 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -295,7 +295,7 @@ extern struct kmem_cache *kmalloc_dma_caches[KMALLOC_SHIFT_HIGH + 1];
  * 2 = 129 .. 192 bytes
  * n = 2^(n-1)+1 .. 2^n
  */
-static __always_inline int kmalloc_index(size_t size)
+static __always_inline unsigned int kmalloc_index(size_t size)
 {
 	if (!size)
 		return 0;
@@ -491,7 +491,7 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
 			return kmalloc_large(size, flags);
 #ifndef CONFIG_SLOB
 		if (!(flags & GFP_DMA)) {
-			int index = kmalloc_index(size);
+			unsigned int index = kmalloc_index(size);
 
 			if (!index)
 				return ZERO_SIZE_PTR;
@@ -529,7 +529,7 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
 #ifndef CONFIG_SLOB
 	if (__builtin_constant_p(size) &&
 		size <= KMALLOC_MAX_CACHE_SIZE && !(flags & GFP_DMA)) {
-		int i = kmalloc_index(size);
+		unsigned int i = kmalloc_index(size);
 
 		if (!i)
 			return ZERO_SIZE_PTR;
-- 
2.13.6


* [PATCH 02/23] slab: make kmalloc_size() return "unsigned int"
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 03/23] slab: create_kmalloc_cache() works with 32-bit sizes Alexey Dobriyan
                   ` (21 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

kmalloc_size() derives the size of a kmalloc cache from its internal
index, which of course can't be negative.

Propagate unsignedness a bit.
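
For reference, the mapping kmalloc_size() implements (the n == 1 and
n == 2 cases depend on KMALLOC_MIN_SIZE, see the diff below) only ever
produces small positive values:

	kmalloc_size(1) ==  96
	kmalloc_size(2) == 192
	kmalloc_size(n) == 1U << n	/* n > 2 */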

Space savings:

	add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-2 (-2)
	Function                                     old     new   delta
	new_kmalloc_cache                             42      41      -1
	create_kmalloc_caches                        238     237      -1

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slab.h | 4 ++--
 mm/slab_common.c     | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index e765800d7c9b..f3e4aca74406 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -509,11 +509,11 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
  * return size or 0 if a kmalloc cache for that
  * size does not exist
  */
-static __always_inline int kmalloc_size(int n)
+static __always_inline unsigned int kmalloc_size(unsigned int n)
 {
 #ifndef CONFIG_SLOB
 	if (n > 2)
-		return 1 << n;
+		return 1U << n;
 
 	if (n == 1 && KMALLOC_MIN_SIZE <= 32)
 		return 96;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index c8cb36774ba1..8ba0ffb31279 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1057,7 +1057,7 @@ void __init setup_kmalloc_cache_index_table(void)
 	}
 }
 
-static void __init new_kmalloc_cache(int idx, slab_flags_t flags)
+static void __init new_kmalloc_cache(unsigned int idx, slab_flags_t flags)
 {
 	kmalloc_caches[idx] = create_kmalloc_cache(kmalloc_info[idx].name,
 					kmalloc_info[idx].size, flags);
@@ -1070,7 +1070,7 @@ static void __init new_kmalloc_cache(int idx, slab_flags_t flags)
  */
 void __init create_kmalloc_caches(slab_flags_t flags)
 {
-	int i;
+	unsigned int i;
 
 	for (i = KMALLOC_SHIFT_LOW; i <= KMALLOC_SHIFT_HIGH; i++) {
 		if (!kmalloc_caches[i])
@@ -1095,9 +1095,9 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 		struct kmem_cache *s = kmalloc_caches[i];
 
 		if (s) {
-			int size = kmalloc_size(i);
+			unsigned int size = kmalloc_size(i);
 			char *n = kasprintf(GFP_NOWAIT,
-				 "dma-kmalloc-%d", size);
+				 "dma-kmalloc-%u", size);
 
 			BUG_ON(!n);
 			kmalloc_dma_caches[i] = create_kmalloc_cache(n,
-- 
2.13.6


* [PATCH 03/23] slab: create_kmalloc_cache() works with 32-bit sizes
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 02/23] slab: make kmalloc_size() " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-24  1:06   ` Matthew Wilcox
  2017-11-23 22:16 ` [PATCH 04/23] slab: create_boot_cache() only " Alexey Dobriyan
                   ` (20 subsequent siblings)
  22 siblings, 1 reply; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

KMALLOC_MAX_CACHE_SIZE is 32-bit, and so is the largest kmalloc cache size.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab.h        | 4 ++--
 mm/slab_common.c | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index ad657ffa44e5..08f43ed41b75 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -75,7 +75,7 @@ extern struct kmem_cache *kmem_cache;
 /* A table of kmalloc cache names and sizes */
 extern const struct kmalloc_info_struct {
 	const char *name;
-	unsigned long size;
+	unsigned int size;
 } kmalloc_info[];
 
 unsigned long calculate_alignment(slab_flags_t flags,
@@ -94,7 +94,7 @@ struct kmem_cache *kmalloc_slab(size_t, gfp_t);
 /* Functions provided by the slab allocators */
 int __kmem_cache_create(struct kmem_cache *, slab_flags_t flags);
 
-extern struct kmem_cache *create_kmalloc_cache(const char *name, size_t size,
+struct kmem_cache *create_kmalloc_cache(const char *name, unsigned int size,
 			slab_flags_t flags);
 extern void create_boot_cache(struct kmem_cache *, const char *name,
 			size_t size, slab_flags_t flags);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 8ba0ffb31279..fa27e0492f89 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -898,7 +898,7 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
 	s->refcount = -1;	/* Exempt from merging for now */
 }
 
-struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,
+struct kmem_cache *__init create_kmalloc_cache(const char *name, unsigned int size,
 				slab_flags_t flags)
 {
 	struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
-- 
2.13.6


* [PATCH 04/23] slab: create_boot_cache() only works with 32-bit sizes
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 02/23] slab: make kmalloc_size() " Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 03/23] slab: create_kmalloc_cache() works with 32-bit sizes Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 05/23] slab: kmem_cache_create() " Alexey Dobriyan
                   ` (19 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

struct kmem_cache::size has always been "int", so all those
"size_t size" arguments are fake.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab.h        | 2 +-
 mm/slab_common.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 08f43ed41b75..6bbb7b5d1706 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -97,7 +97,7 @@ int __kmem_cache_create(struct kmem_cache *, slab_flags_t flags);
 struct kmem_cache *create_kmalloc_cache(const char *name, unsigned int size,
 			slab_flags_t flags);
 extern void create_boot_cache(struct kmem_cache *, const char *name,
-			size_t size, slab_flags_t flags);
+			unsigned int size, slab_flags_t flags);
 
 int slab_unmergeable(struct kmem_cache *s);
 struct kmem_cache *find_mergeable(size_t size, size_t align,
diff --git a/mm/slab_common.c b/mm/slab_common.c
index fa27e0492f89..9c8c55e1e0e3 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -878,7 +878,7 @@ bool slab_is_available(void)
 
 #ifndef CONFIG_SLOB
 /* Create a cache during boot when no slab services are available yet */
-void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t size,
+void __init create_boot_cache(struct kmem_cache *s, const char *name, unsigned int size,
 		slab_flags_t flags)
 {
 	int err;
@@ -892,7 +892,7 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
 	err = __kmem_cache_create(s, flags);
 
 	if (err)
-		panic("Creation of kmalloc slab %s size=%zu failed. Reason %d\n",
+		panic("Creation of kmalloc slab %s size=%u failed. Reason %d\n",
 					name, size, err);
 
 	s->refcount = -1;	/* Exempt from merging for now */
-- 
2.13.6


* [PATCH 05/23] slab: kmem_cache_create() only works with 32-bit sizes
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (2 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 04/23] slab: create_boot_cache() only " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 06/23] slab: make size_index[] array u8 Alexey Dobriyan
                   ` (18 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

struct kmem_cache::size and ::align were always 32-bit.

Out of curiosity I created a 4GB kmem_cache; it oopsed with a division
by 0. kmem_cache_create() with size (1UL << 32) + 1 created a 1-byte
cache, as expected.

size_t doesn't work and never did.
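
A minimal sketch of the truncation (illustrative, not from the patch),
given that the value ends up in the "int"-sized struct kmem_cache::size:

	size_t size = (1UL << 32) + 1;	/* what the caller passes */
	int cache_size = size;		/* silently becomes 1 */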

Space savings (all cases where cache size is not known at compile time):

	add/remove: 0/0 grow/shrink: 3/21 up/down: 7/-61 (-54)
	Function                                     old     new   delta
	ext4_groupinfo_create_slab                   193     197      +4
	find_mergeable                               281     283      +2
	kmem_cache_create                            638     639      +1
	tipc_server_start                            771     770      -1
	skd_construct                               2616    2615      -1
	ovs_flow_init                                122     121      -1
	init_cifs                                   1271    1270      -1
	fork_init                                    284     283      -1
	ecryptfs_init                                405     404      -1
	dm_bufio_client_create                      1009    1008      -1
	kvm_init                                     692     690      -2
	elv_register                                 398     396      -2
	calculate_alignment                           60      58      -2
	verity_fec_ctr                               875     872      -3
	sg_pool_init                                 192     189      -3
	init_bio                                     203     200      -3
	early_amd_iommu_init                        2492    2489      -3
	__kmem_cache_alias                           164     161      -3
	jbd2_journal_load                            842     838      -4
	ccid_kmem_cache_create                       106     102      -4
	setup_conf                                  5027    5022      -5
	resize_stripes                              1607    1602      -5
	copy_pid_ns                                  825     819      -6
	create_boot_cache                            169     160      -9

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slab.h |  2 +-
 mm/slab.c            |  2 +-
 mm/slab.h            | 10 +++++-----
 mm/slab_common.c     | 16 ++++++++--------
 mm/slub.c            |  2 +-
 5 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index f3e4aca74406..00a2b48d9bae 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -135,7 +135,7 @@ struct mem_cgroup;
 void __init kmem_cache_init(void);
 bool slab_is_available(void);
 
-struct kmem_cache *kmem_cache_create(const char *, size_t, size_t,
+struct kmem_cache *kmem_cache_create(const char *, unsigned int, unsigned int,
 			slab_flags_t,
 			void (*)(void *));
 void kmem_cache_destroy(struct kmem_cache *);
diff --git a/mm/slab.c b/mm/slab.c
index 183e996dde5f..78fd096362da 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1882,7 +1882,7 @@ slab_flags_t kmem_cache_flags(unsigned long object_size,
 }
 
 struct kmem_cache *
-__kmem_cache_alias(const char *name, size_t size, size_t align,
+__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *))
 {
 	struct kmem_cache *cachep;
diff --git a/mm/slab.h b/mm/slab.h
index 6bbb7b5d1706..facaf949f727 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -78,8 +78,8 @@ extern const struct kmalloc_info_struct {
 	unsigned int size;
 } kmalloc_info[];
 
-unsigned long calculate_alignment(slab_flags_t flags,
-		unsigned long align, unsigned long size);
+unsigned int calculate_alignment(slab_flags_t flags,
+		unsigned int align, unsigned int size);
 
 #ifndef CONFIG_SLOB
 /* Kmalloc array related functions */
@@ -100,11 +100,11 @@ extern void create_boot_cache(struct kmem_cache *, const char *name,
 			unsigned int size, slab_flags_t flags);
 
 int slab_unmergeable(struct kmem_cache *s);
-struct kmem_cache *find_mergeable(size_t size, size_t align,
+struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 		slab_flags_t flags, const char *name, void (*ctor)(void *));
 #ifndef CONFIG_SLOB
 struct kmem_cache *
-__kmem_cache_alias(const char *name, size_t size, size_t align,
+__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *));
 
 slab_flags_t kmem_cache_flags(unsigned long object_size,
@@ -112,7 +112,7 @@ slab_flags_t kmem_cache_flags(unsigned long object_size,
 	void (*ctor)(void *));
 #else
 static inline struct kmem_cache *
-__kmem_cache_alias(const char *name, size_t size, size_t align,
+__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *))
 { return NULL; }
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 9c8c55e1e0e3..1d46602c881e 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -73,7 +73,7 @@ unsigned int kmem_cache_size(struct kmem_cache *s)
 EXPORT_SYMBOL(kmem_cache_size);
 
 #ifdef CONFIG_DEBUG_VM
-static int kmem_cache_sanity_check(const char *name, size_t size)
+static int kmem_cache_sanity_check(const char *name, unsigned int size)
 {
 	struct kmem_cache *s = NULL;
 
@@ -104,7 +104,7 @@ static int kmem_cache_sanity_check(const char *name, size_t size)
 	return 0;
 }
 #else
-static inline int kmem_cache_sanity_check(const char *name, size_t size)
+static inline int kmem_cache_sanity_check(const char *name, unsigned int size)
 {
 	return 0;
 }
@@ -290,7 +290,7 @@ int slab_unmergeable(struct kmem_cache *s)
 	return 0;
 }
 
-struct kmem_cache *find_mergeable(size_t size, size_t align,
+struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 		slab_flags_t flags, const char *name, void (*ctor)(void *))
 {
 	struct kmem_cache *s;
@@ -341,8 +341,8 @@ struct kmem_cache *find_mergeable(size_t size, size_t align,
  * Figure out what the alignment of the objects will be given a set of
  * flags, a user specified alignment and the size of the objects.
  */
-unsigned long calculate_alignment(slab_flags_t flags,
-		unsigned long align, unsigned long size)
+unsigned int calculate_alignment(slab_flags_t flags,
+		unsigned int align, unsigned int size)
 {
 	/*
 	 * If the user wants hardware cache aligned objects then follow that
@@ -352,7 +352,7 @@ unsigned long calculate_alignment(slab_flags_t flags,
 	 * alignment though. If that is greater then use it.
 	 */
 	if (flags & SLAB_HWCACHE_ALIGN) {
-		unsigned long ralign = cache_line_size();
+		unsigned int ralign = cache_line_size();
 		while (size <= ralign / 2)
 			ralign /= 2;
 		align = max(align, ralign);
@@ -365,7 +365,7 @@ unsigned long calculate_alignment(slab_flags_t flags,
 }
 
 static struct kmem_cache *create_cache(const char *name,
-		size_t object_size, size_t size, size_t align,
+		unsigned int object_size, unsigned int size, unsigned int align,
 		slab_flags_t flags, void (*ctor)(void *),
 		struct mem_cgroup *memcg, struct kmem_cache *root_cache)
 {
@@ -430,7 +430,7 @@ static struct kmem_cache *create_cache(const char *name,
  * as davem.
  */
 struct kmem_cache *
-kmem_cache_create(const char *name, size_t size, size_t align,
+kmem_cache_create(const char *name, unsigned int size, unsigned int align,
 		  slab_flags_t flags, void (*ctor)(void *))
 {
 	struct kmem_cache *s = NULL;
diff --git a/mm/slub.c b/mm/slub.c
index cfd56e5a35fb..e653c4b51403 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4223,7 +4223,7 @@ void __init kmem_cache_init_late(void)
 }
 
 struct kmem_cache *
-__kmem_cache_alias(const char *name, size_t size, size_t align,
+__kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *))
 {
 	struct kmem_cache *s, *c;
-- 
2.13.6


* [PATCH 06/23] slab: make size_index[] array u8
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (3 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 05/23] slab: kmem_cache_create() " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 07/23] slab: make size_index_elem() unsigned int Alexey Dobriyan
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

All those small numbers are reverse indexes into the kmalloc caches
array and can't be negative.

On x86_64, "unsigned int = fls()" can drop the CDQE instruction:

	add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-2 (-2)
	Function                                     old     new   delta
	kmalloc_slab                                 101      99      -2
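
For context, a simplified sketch of how kmalloc_slab() consumes the
table -- small sizes go through size_index[], larger ones through fls():

	unsigned int index;

	if (size <= 192)
		index = size_index[size_index_elem(size)];
	else
		index = fls(size - 1);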

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab_common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1d46602c881e..4405af3ee8eb 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -927,7 +927,7 @@ EXPORT_SYMBOL(kmalloc_dma_caches);
  * of two cache sizes there. The size of larger slabs can be determined using
  * fls.
  */
-static s8 size_index[24] = {
+static u8 size_index[24] = {
 	3,	/* 8 */
 	4,	/* 16 */
 	5,	/* 24 */
@@ -965,7 +965,7 @@ static inline int size_index_elem(size_t bytes)
  */
 struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags)
 {
-	int index;
+	unsigned int index;
 
 	if (unlikely(size > KMALLOC_MAX_SIZE)) {
 		WARN_ON_ONCE(!(flags & __GFP_NOWARN));
-- 
2.13.6


* [PATCH 07/23] slab: make size_index_elem() unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (4 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 06/23] slab: make size_index[] array u8 Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 08/23] slub: make ->remote_node_defrag_ratio " Alexey Dobriyan
                   ` (16 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

size_index_elem() always works with small sizes (kmalloc caches are
32-bit) and returns small indexes.
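
Worked examples of the (bytes - 1) / 8 mapping:

	size_index_elem(8)   ==  0
	size_index_elem(24)  ==  2
	size_index_elem(192) == 23	/* last slot of size_index[24] */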

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab_common.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 4405af3ee8eb..1cec6225fc4c 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -954,7 +954,7 @@ static u8 size_index[24] = {
 	2	/* 192 */
 };
 
-static inline int size_index_elem(size_t bytes)
+static inline unsigned int size_index_elem(unsigned int bytes)
 {
 	return (bytes - 1) / 8;
 }
@@ -1023,13 +1023,13 @@ const struct kmalloc_info_struct kmalloc_info[] __initconst = {
  */
 void __init setup_kmalloc_cache_index_table(void)
 {
-	int i;
+	unsigned int i;
 
 	BUILD_BUG_ON(KMALLOC_MIN_SIZE > 256 ||
 		(KMALLOC_MIN_SIZE & (KMALLOC_MIN_SIZE - 1)));
 
 	for (i = 8; i < KMALLOC_MIN_SIZE; i += 8) {
-		int elem = size_index_elem(i);
+		unsigned int elem = size_index_elem(i);
 
 		if (elem >= ARRAY_SIZE(size_index))
 			break;
-- 
2.13.6


* [PATCH 08/23] slub: make ->remote_node_defrag_ratio unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (5 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 07/23] slab: make size_index_elem() unsigned int Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 09/23] slub: make ->max_attr_size " Alexey Dobriyan
                   ` (15 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

->remote_node_defrag_ratio is in range 0..1000.
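
The sysfs knob accepts 0..100 and stores ratio * 10, so the stored
value stays in 0..1000. Note that with this patch an out-of-range write
fails with -ERANGE instead of being silently ignored. A usage sketch
(hypothetical cache name):

	/*
	 * echo 30 > /sys/kernel/slab/<cache>/remote_node_defrag_ratio
	 *	=> s->remote_node_defrag_ratio == 300
	 */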

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h |  2 +-
 mm/slub.c                | 11 ++++++-----
 2 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 0adae162dc8f..571ff513ed97 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -124,7 +124,7 @@ struct kmem_cache {
 	/*
 	 * Defragmentation by allocating from a remote node.
 	 */
-	int remote_node_defrag_ratio;
+	unsigned int remote_node_defrag_ratio;
 #endif
 
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
diff --git a/mm/slub.c b/mm/slub.c
index e653c4b51403..45d8f0cbfb28 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5264,21 +5264,22 @@ SLAB_ATTR(shrink);
 #ifdef CONFIG_NUMA
 static ssize_t remote_node_defrag_ratio_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", s->remote_node_defrag_ratio / 10);
+	return sprintf(buf, "%u\n", s->remote_node_defrag_ratio / 10);
 }
 
 static ssize_t remote_node_defrag_ratio_store(struct kmem_cache *s,
 				const char *buf, size_t length)
 {
-	unsigned long ratio;
+	unsigned int ratio;
 	int err;
 
-	err = kstrtoul(buf, 10, &ratio);
+	err = kstrtouint(buf, 10, &ratio);
 	if (err)
 		return err;
+	if (ratio > 100)
+		return -ERANGE;
 
-	if (ratio <= 100)
-		s->remote_node_defrag_ratio = ratio * 10;
+	s->remote_node_defrag_ratio = ratio * 10;
 
 	return length;
 }
-- 
2.13.6


* [PATCH 09/23] slub: make ->max_attr_size unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (6 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 08/23] slub: make ->remote_node_defrag_ratio " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 10/23] slub: make ->red_left_pad " Alexey Dobriyan
                   ` (14 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

->max_attr_size is the maximum length of any SLAB memcg attribute
ever written. The VFS limits those writes to INT_MAX.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 571ff513ed97..5e98817e18a3 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -110,7 +110,8 @@ struct kmem_cache {
 #endif
 #ifdef CONFIG_MEMCG
 	struct memcg_cache_params memcg_params;
-	int max_attr_size; /* for propagation, maximum size of a stored attr */
+	/* for propagation, maximum size of a stored attr */
+	unsigned int max_attr_size;
 #ifdef CONFIG_SYSFS
 	struct kset *memcg_kset;
 #endif
-- 
2.13.6


* [PATCH 10/23] slub: make ->red_left_pad unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (7 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 09/23] slub: make ->max_attr_size " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 11/23] slub: make ->reserved " Alexey Dobriyan
                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

Padding length can't be negative.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 5e98817e18a3..a7019a4c713d 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -101,7 +101,7 @@ struct kmem_cache {
 	int inuse;		/* Offset to metadata */
 	int align;		/* Alignment */
 	int reserved;		/* Reserved bytes at the end of slabs */
-	int red_left_pad;	/* Left redzone padding size */
+	unsigned int red_left_pad;	/* Left redzone padding size */
 	const char *name;	/* Name (only for display!) */
 	struct list_head list;	/* List of slab caches */
 #ifdef CONFIG_SYSFS
-- 
2.13.6


* [PATCH 11/23] slub: make ->reserved unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (8 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 10/23] slub: make ->red_left_pad " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 12/23] slub: make ->align " Alexey Dobriyan
                   ` (12 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

->reserved is either 0 or sizeof(struct rcu_head); it can't be negative.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 2 +-
 mm/slub.c                | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index a7019a4c713d..09ca236ce102 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -100,7 +100,7 @@ struct kmem_cache {
 	void (*ctor)(void *);
 	int inuse;		/* Offset to metadata */
 	int align;		/* Alignment */
-	int reserved;		/* Reserved bytes at the end of slabs */
+	unsigned int reserved;	/* Reserved bytes at the end of slabs */
 	unsigned int red_left_pad;	/* Left redzone padding size */
 	const char *name;	/* Name (only for display!) */
 	struct list_head list;	/* List of slab caches */
diff --git a/mm/slub.c b/mm/slub.c
index 45d8f0cbfb28..2ca7463c72c2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5069,7 +5069,7 @@ SLAB_ATTR_RO(destroy_by_rcu);
 
 static ssize_t reserved_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", s->reserved);
+	return sprintf(buf, "%u\n", s->reserved);
 }
 SLAB_ATTR_RO(reserved);
 
-- 
2.13.6


* [PATCH 12/23] slub: make ->align unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (9 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 11/23] slub: make ->reserved " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 13/23] slub: make ->inuse " Alexey Dobriyan
                   ` (11 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

Kmem cache alignment can't be negative.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 2 +-
 mm/slub.c                | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 09ca236ce102..ff2d3f513d15 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -99,7 +99,7 @@ struct kmem_cache {
 	int refcount;		/* Refcount for slab cache destroy */
 	void (*ctor)(void *);
 	int inuse;		/* Offset to metadata */
-	int align;		/* Alignment */
+	unsigned int align;	/* Alignment */
 	unsigned int reserved;	/* Reserved bytes at the end of slabs */
 	unsigned int red_left_pad;	/* Left redzone padding size */
 	const char *name;	/* Name (only for display!) */
diff --git a/mm/slub.c b/mm/slub.c
index 2ca7463c72c2..ddfeb1d5c512 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4877,7 +4877,7 @@ SLAB_ATTR_RO(slab_size);
 
 static ssize_t align_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", s->align);
+	return sprintf(buf, "%u\n", s->align);
 }
 SLAB_ATTR_RO(align);
 
-- 
2.13.6


* [PATCH 13/23] slub: make ->inuse unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (10 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 12/23] slub: make ->align " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 14/23] slub: make ->cpu_partial " Alexey Dobriyan
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

->inuse is "the number of bytes in actual use by the object",
which can't be negative.
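
Since both s->inuse and ALIGN(size, sizeof(void *)) are unsigned int
after this change, the max_t(int, ...) workarounds in
__kmem_cache_alias() collapse into plain, type-checked max() (see the
diff below).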

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 2 +-
 mm/slub.c                | 5 ++---
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index ff2d3f513d15..2383c46c88ce 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -98,7 +98,7 @@ struct kmem_cache {
 	gfp_t allocflags;	/* gfp flags to use on each alloc */
 	int refcount;		/* Refcount for slab cache destroy */
 	void (*ctor)(void *);
-	int inuse;		/* Offset to metadata */
+	unsigned int inuse;	/* Offset to metadata */
 	unsigned int align;	/* Alignment */
 	unsigned int reserved;	/* Reserved bytes at the end of slabs */
 	unsigned int red_left_pad;	/* Left redzone padding size */
diff --git a/mm/slub.c b/mm/slub.c
index ddfeb1d5c512..f5b86d86be9a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4237,12 +4237,11 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		 * the complete object on kzalloc.
 		 */
 		s->object_size = max(s->object_size, (int)size);
-		s->inuse = max_t(int, s->inuse, ALIGN(size, sizeof(void *)));
+		s->inuse = max(s->inuse, ALIGN(size, sizeof(void *)));
 
 		for_each_memcg_cache(c, s) {
 			c->object_size = s->object_size;
-			c->inuse = max_t(int, c->inuse,
-					 ALIGN(size, sizeof(void *)));
+			c->inuse = max(c->inuse, ALIGN(size, sizeof(void *)));
 		}
 
 		if (sysfs_slab_alias(s, name)) {
-- 
2.13.6


* [PATCH 14/23] slub: make ->cpu_partial unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (11 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 13/23] slub: make ->inuse " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 15/23] slub: make ->offset " Alexey Dobriyan
                   ` (9 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

->cpu_partial is a count of per-cpu partial objects and can't be negative.
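
As a side note, kstrtouint() in cpu_partial_store() (see the diff
below) rejects negative input outright ("-1" fails with -EINVAL), so no
explicit sign check is needed.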

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 3 ++-
 mm/slub.c                | 6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 2383c46c88ce..d8b40e53e8f6 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -88,7 +88,8 @@ struct kmem_cache {
 	int object_size;	/* The size of an object without meta data */
 	int offset;		/* Free pointer offset. */
 #ifdef CONFIG_SLUB_CPU_PARTIAL
-	int cpu_partial;	/* Number of per cpu partial objects to keep around */
+	/* Number of per cpu partial objects to keep around */
+	unsigned int cpu_partial;
 #endif
 	struct kmem_cache_order_objects oo;
 
diff --git a/mm/slub.c b/mm/slub.c
index f5b86d86be9a..61218ecc0ea7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1809,7 +1809,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 {
 	struct page *page, *page2;
 	void *object = NULL;
-	int available = 0;
+	unsigned int available = 0;
 	int objects;
 
 	/*
@@ -4943,10 +4943,10 @@ static ssize_t cpu_partial_show(struct kmem_cache *s, char *buf)
 static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
 				 size_t length)
 {
-	unsigned long objects;
+	unsigned int objects;
 	int err;
 
-	err = kstrtoul(buf, 10, &objects);
+	err = kstrtouint(buf, 10, &objects);
 	if (err)
 		return err;
 	if (objects && !kmem_cache_has_cpu_partial(s))
-- 
2.13.6


* [PATCH 15/23] slub: make ->offset unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (12 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 14/23] slub: make ->cpu_partial " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 16/23] slub: make ->object_size " Alexey Dobriyan
                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

->offset is the free pointer offset from the start of the object,
so it can't be negative.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index d8b40e53e8f6..94f1228f2f41 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -86,7 +86,7 @@ struct kmem_cache {
 	unsigned long min_partial;
 	int size;		/* The size of an object including meta data */
 	int object_size;	/* The size of an object without meta data */
-	int offset;		/* Free pointer offset. */
+	unsigned int offset;	/* Free pointer offset. */
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 	/* Number of per cpu partial objects to keep around */
 	unsigned int cpu_partial;
-- 
2.13.6


* [PATCH 16/23] slub: make ->object_size unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (13 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 15/23] slub: make ->offset " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 17/23] slub: make ->size " Alexey Dobriyan
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

Linux doesn't support negative-length objects in kmem caches.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h | 2 +-
 mm/slab_common.c         | 2 +-
 mm/slub.c                | 8 ++++----
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 94f1228f2f41..b9d1f0ef1335 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -85,7 +85,7 @@ struct kmem_cache {
 	slab_flags_t flags;
 	unsigned long min_partial;
 	int size;		/* The size of an object including meta data */
-	int object_size;	/* The size of an object without meta data */
+	unsigned int object_size;/* The size of an object without meta data */
 	unsigned int offset;	/* Free pointer offset. */
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 	/* Number of per cpu partial objects to keep around */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1cec6225fc4c..2b5435e1e619 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -94,7 +94,7 @@ static int kmem_cache_sanity_check(const char *name, unsigned int size)
 		 */
 		res = probe_kernel_address(s->name, tmp);
 		if (res) {
-			pr_err("Slab cache with size %d has lost its name\n",
+			pr_err("Slab cache with size %u has lost its name\n",
 			       s->object_size);
 			continue;
 		}
diff --git a/mm/slub.c b/mm/slub.c
index 61218ecc0ea7..4e09dabb89da 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -680,7 +680,7 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 		print_section(KERN_ERR, "Bytes b4 ", p - 16, 16);
 
 	print_section(KERN_ERR, "Object ", p,
-		      min_t(unsigned long, s->object_size, PAGE_SIZE));
+		      min_t(unsigned int, s->object_size, PAGE_SIZE));
 	if (s->flags & SLAB_RED_ZONE)
 		print_section(KERN_ERR, "Redzone ", p + s->object_size,
 			s->inuse - s->object_size);
@@ -2398,7 +2398,7 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 
 	pr_warn("SLUB: Unable to allocate memory on node %d, gfp=%#x(%pGg)\n",
 		nid, gfpflags, &gfpflags);
-	pr_warn("  cache: %s, object size: %d, buffer size: %d, default order: %d, min order: %d\n",
+	pr_warn("  cache: %s, object size: %u, buffer size: %d, default order: %d, min order: %d\n",
 		s->name, s->object_size, s->size, oo_order(s->oo),
 		oo_order(s->min));
 
@@ -4236,7 +4236,7 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		 * Adjust the object sizes so that we clear
 		 * the complete object on kzalloc.
 		 */
-		s->object_size = max(s->object_size, (int)size);
+		s->object_size = max(s->object_size, size);
 		s->inuse = max(s->inuse, ALIGN(size, sizeof(void *)));
 
 		for_each_memcg_cache(c, s) {
@@ -4882,7 +4882,7 @@ SLAB_ATTR_RO(align);
 
 static ssize_t object_size_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", s->object_size);
+	return sprintf(buf, "%u\n", s->object_size);
 }
 SLAB_ATTR_RO(object_size);
 
-- 
2.13.6


* [PATCH 17/23] slub: make ->size unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (14 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 16/23] slub: make ->object_size " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 18/23] slab: make kmem_cache_flags accept 32-bit object size Alexey Dobriyan
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

Linux doesn't support negative-length objects in kmem caches
(including metadata).

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h |  2 +-
 mm/slub.c                | 12 ++++++------
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index b9d1f0ef1335..e768ac49f0b4 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -84,7 +84,7 @@ struct kmem_cache {
 	/* Used for retriving partial slabs etc */
 	slab_flags_t flags;
 	unsigned long min_partial;
-	int size;		/* The size of an object including meta data */
+	unsigned int size;	/* The size of an object including meta data */
 	unsigned int object_size;/* The size of an object without meta data */
 	unsigned int offset;	/* Free pointer offset. */
 #ifdef CONFIG_SLUB_CPU_PARTIAL
diff --git a/mm/slub.c b/mm/slub.c
index 4e09dabb89da..042421584ef8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2398,7 +2398,7 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 
 	pr_warn("SLUB: Unable to allocate memory on node %d, gfp=%#x(%pGg)\n",
 		nid, gfpflags, &gfpflags);
-	pr_warn("  cache: %s, object size: %u, buffer size: %d, default order: %d, min order: %d\n",
+	pr_warn("  cache: %s, object size: %u, buffer size: %u, default order: %d, min order: %d\n",
 		s->name, s->object_size, s->size, oo_order(s->oo),
 		oo_order(s->min));
 
@@ -3632,8 +3632,8 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 	free_kmem_cache_nodes(s);
 error:
 	if (flags & SLAB_PANIC)
-		panic("Cannot create slab %s size=%lu realsize=%u order=%u offset=%u flags=%lx\n",
-		      s->name, (unsigned long)s->size, s->size,
+		panic("Cannot create slab %s size=%u realsize=%u order=%u offset=%u flags=%lx\n",
+		      s->name, s->size, s->size,
 		      oo_order(s->oo), s->offset, (unsigned long)flags);
 	return -EINVAL;
 }
@@ -3822,7 +3822,7 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
 				struct page *page)
 {
 	struct kmem_cache *s;
-	unsigned long offset;
+	unsigned int offset;
 	size_t object_size;
 
 	/* Find object and usable object size. */
@@ -4870,7 +4870,7 @@ struct slab_attribute {
 
 static ssize_t slab_size_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", s->size);
+	return sprintf(buf, "%u\n", s->size);
 }
 SLAB_ATTR_RO(slab_size);
 
@@ -5638,7 +5638,7 @@ static char *create_unique_id(struct kmem_cache *s)
 		*p++ = 'A';
 	if (p != name + 1)
 		*p++ = '-';
-	p += sprintf(p, "%07d", s->size);
+	p += sprintf(p, "%07u", s->size);
 
 	BUG_ON(p > name + ID_STR_LENGTH - 1);
 	return name;
-- 
2.13.6


* [PATCH 18/23] slab: make kmem_cache_flags accept 32-bit object size
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (15 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 17/23] slub: make ->size " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 19/23] kasan: make kasan_cache_create() work with 32-bit slab caches Alexey Dobriyan
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

Now that all sizes are properly typed, propagate "unsigned int" down
the callgraph.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab.c | 2 +-
 mm/slab.h | 4 ++--
 mm/slub.c | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 78fd096362da..15da0e177d7b 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1874,7 +1874,7 @@ static int __ref setup_cpu_cache(struct kmem_cache *cachep, gfp_t gfp)
 	return 0;
 }
 
-slab_flags_t kmem_cache_flags(unsigned long object_size,
+slab_flags_t kmem_cache_flags(unsigned int object_size,
 	slab_flags_t flags, const char *name,
 	void (*ctor)(void *))
 {
diff --git a/mm/slab.h b/mm/slab.h
index facaf949f727..2993ba92c89e 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -107,7 +107,7 @@ struct kmem_cache *
 __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *));
 
-slab_flags_t kmem_cache_flags(unsigned long object_size,
+slab_flags_t kmem_cache_flags(unsigned int object_size,
 	slab_flags_t flags, const char *name,
 	void (*ctor)(void *));
 #else
@@ -116,7 +116,7 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 		   slab_flags_t flags, void (*ctor)(void *))
 { return NULL; }
 
-static inline slab_flags_t kmem_cache_flags(unsigned long object_size,
+static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 	slab_flags_t flags, const char *name,
 	void (*ctor)(void *))
 {
diff --git a/mm/slub.c b/mm/slub.c
index 042421584ef8..cd09ae2c48e8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1290,7 +1290,7 @@ static int __init setup_slub_debug(char *str)
 
 __setup("slub_debug", setup_slub_debug);
 
-slab_flags_t kmem_cache_flags(unsigned long object_size,
+slab_flags_t kmem_cache_flags(unsigned int object_size,
 	slab_flags_t flags, const char *name,
 	void (*ctor)(void *))
 {
@@ -1323,7 +1323,7 @@ static inline void add_full(struct kmem_cache *s, struct kmem_cache_node *n,
 					struct page *page) {}
 static inline void remove_full(struct kmem_cache *s, struct kmem_cache_node *n,
 					struct page *page) {}
-slab_flags_t kmem_cache_flags(unsigned long object_size,
+slab_flags_t kmem_cache_flags(unsigned int object_size,
 	slab_flags_t flags, const char *name,
 	void (*ctor)(void *))
 {
-- 
2.13.6


* [PATCH 19/23] kasan: make kasan_cache_create() work with 32-bit slab caches
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (16 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 18/23] slab: make kmem_cache_flags accept 32-bit object size Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-24 22:29   ` kbuild test robot
  2017-11-23 22:16 ` [PATCH 20/23] slub: make slab_index() return unsigned int Alexey Dobriyan
                   ` (4 subsequent siblings)
  22 siblings, 1 reply; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

If SLAB doesn't support 4GB+ kmem caches (it never did), KASAN should
not either.
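
Note that kasan_cache_create() grows *size in place to make room for
the alloc/free metadata and the redzone, so the adjusted size is
subject to the same 32-bit bound as the original one.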

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/kasan.h | 4 ++--
 mm/kasan/kasan.c      | 4 ++--
 mm/slab.c             | 2 +-
 mm/slub.c             | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index e3eb834c9a35..d0a05e0f9e8e 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -45,7 +45,7 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark);
 void kasan_alloc_pages(struct page *page, unsigned int order);
 void kasan_free_pages(struct page *page, unsigned int order);
 
-void kasan_cache_create(struct kmem_cache *cache, size_t *size,
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 			slab_flags_t *flags);
 void kasan_cache_shrink(struct kmem_cache *cache);
 void kasan_cache_shutdown(struct kmem_cache *cache);
@@ -94,7 +94,7 @@ static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
 static inline void kasan_free_pages(struct page *page, unsigned int order) {}
 
 static inline void kasan_cache_create(struct kmem_cache *cache,
-				      size_t *size,
+				      unsigned int *size,
 				      slab_flags_t *flags) {}
 static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
 static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 405bba487df5..0bb95f6a1b7b 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -336,11 +336,11 @@ static size_t optimal_redzone(size_t object_size)
 	return rz;
 }
 
-void kasan_cache_create(struct kmem_cache *cache, size_t *size,
+void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 			slab_flags_t *flags)
 {
+	unsigned int orig_size = *size;
 	int redzone_adjust;
-	int orig_size = *size;
 
 	/* Add alloc meta. */
 	cache->kasan_info.alloc_meta_offset = *size;
diff --git a/mm/slab.c b/mm/slab.c
index 15da0e177d7b..328b9b705981 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1999,7 +1999,7 @@ int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
 	size_t ralign = BYTES_PER_WORD;
 	gfp_t gfp;
 	int err;
-	size_t size = cachep->size;
+	unsigned int size = cachep->size;
 
 #if DEBUG
 #if FORCED_DEBUG
diff --git a/mm/slub.c b/mm/slub.c
index cd09ae2c48e8..ce71665e266c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3457,7 +3457,7 @@ static void set_cpu_partial(struct kmem_cache *s)
 static int calculate_sizes(struct kmem_cache *s, int forced_order)
 {
 	slab_flags_t flags = s->flags;
-	size_t size = s->object_size;
+	unsigned int size = s->object_size;
 	int order;
 
 	/*
-- 
2.13.6


* [PATCH 20/23] slub: make slab_index() return unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (17 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 19/23] kasan: make kasan_cache_create() work with 32-bit slab caches Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 21/23] slub: make struct kmem_cache_order_objects::x " Alexey Dobriyan
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

slab_index() returns the index of an object within a slab,
which is at most u15 (or u16?).

Iterators additionally guarantee that "p >= addr".
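
A worked example with hypothetical numbers: for s->size == 256 on an
order-0 (4 KB) slab, "p - addr" lies in [0, 4096), so
(p - addr) / 256 lies in [0, 16) -- comfortably unsigned and well
within 15/16 bits.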

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index ce71665e266c..04c8348b8ce9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -311,7 +311,7 @@ static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 		__p += (__s)->size, __idx++)
 
 /* Determine object index from a given position */
-static inline int slab_index(void *p, struct kmem_cache *s, void *addr)
+static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
 {
 	return (p - addr) / s->size;
 }
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 21/23] slub: make struct kmem_cache_order_objects::x unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (18 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 20/23] slub: make slab_index() return unsigned int Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-24 17:31   ` Christopher Lameter
  2017-11-23 22:16 ` [PATCH 22/23] slub: make size_from_object() return " Alexey Dobriyan
                   ` (2 subsequent siblings)
  22 siblings, 1 reply; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

struct kmem_cache_order_objects is for mixing the order and the number of
objects, and orders aren't big enough to warrant 64-bit width.

Propagate unsignedness down so that everything fits.

!!! Patch assumes that "PAGE_SIZE << order" doesn't overflow. !!!
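
For illustration only (not from the patch), the packing is plain bit
arithmetic; OO_SHIFT/OO_MASK below match what SLUB uses at the time of
writing, but treat them as assumptions of this sketch:

	#include <stdio.h>

	#define OO_SHIFT 16
	#define OO_MASK  ((1U << OO_SHIFT) - 1)

	/* Pack order and object count into one unsigned int,
	   mirroring oo_make()/oo_order()/oo_objects(). */
	static unsigned int oo_pack(unsigned int order, unsigned int objects)
	{
		return (order << OO_SHIFT) + objects;
	}

	int main(void)
	{
		unsigned int x = oo_pack(3, 512);	/* order 3, 512 objects */

		printf("order=%u objects=%u\n", x >> OO_SHIFT, x & OO_MASK);
		return 0;
	}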

Space savings on x86_64:

	add/remove: 1/0 grow/shrink: 1/14 up/down: 26/-163 (-137)
	Function                                     old     new   delta
	__bit_spin_unlock.constprop                    -      21     +21
	init_cache_random_seq                        123     128      +5
	kmem_cache_open                             1189    1188      -1
	on_freelist                                  585     582      -3
	get_slabinfo                                 155     152      -3
	check_slab                                   180     177      -3
	order_show                                    29      25      -4
	boot_kmem_cache_node                        8864    8856      -8
	boot_kmem_cache                             8864    8856      -8
	slab_out_of_memory                           260     250     -10
	__cmpxchg_double_slab.isra                   380     368     -12
	calculate_sizes                              625     612     -13
	order_store                                  103      88     -15
	slab_order                                   202     177     -25
	new_slab                                    1830    1805     -25
	cmpxchg_double_slab.isra                     569     536     -33

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 include/linux/slub_def.h |  2 +-
 mm/slub.c                | 74 +++++++++++++++++++++++++-----------------------
 2 files changed, 40 insertions(+), 36 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index e768ac49f0b4..b9daf0f88b4f 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -73,7 +73,7 @@ struct kmem_cache_cpu {
  * given order would contain.
  */
 struct kmem_cache_order_objects {
-	unsigned long x;
+	unsigned int x;
 };
 
 /*
diff --git a/mm/slub.c b/mm/slub.c
index 04c8348b8ce9..27e619a38a02 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -316,13 +316,13 @@ static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr)
 	return (p - addr) / s->size;
 }
 
-static inline int order_objects(int order, unsigned long size, int reserved)
+static inline unsigned int order_objects(unsigned int order, unsigned int size, unsigned int reserved)
 {
-	return ((PAGE_SIZE << order) - reserved) / size;
+	return (((unsigned int)PAGE_SIZE << order) - reserved) / size;
 }
 
-static inline struct kmem_cache_order_objects oo_make(int order,
-		unsigned long size, int reserved)
+static inline struct kmem_cache_order_objects oo_make(unsigned int order,
+		unsigned int size, unsigned int reserved)
 {
 	struct kmem_cache_order_objects x = {
 		(order << OO_SHIFT) + order_objects(order, size, reserved)
@@ -331,12 +331,12 @@ static inline struct kmem_cache_order_objects oo_make(int order,
 	return x;
 }
 
-static inline int oo_order(struct kmem_cache_order_objects x)
+static inline unsigned int oo_order(struct kmem_cache_order_objects x)
 {
 	return x.x >> OO_SHIFT;
 }
 
-static inline int oo_objects(struct kmem_cache_order_objects x)
+static inline unsigned int oo_objects(struct kmem_cache_order_objects x)
 {
 	return x.x & OO_MASK;
 }
@@ -1433,7 +1433,7 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
 		gfp_t flags, int node, struct kmem_cache_order_objects oo)
 {
 	struct page *page;
-	int order = oo_order(oo);
+	unsigned int order = oo_order(oo);
 
 	if (node == NUMA_NO_NODE)
 		page = alloc_pages(flags, order);
@@ -1452,8 +1452,8 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
 /* Pre-initialize the random sequence cache */
 static int init_cache_random_seq(struct kmem_cache *s)
 {
+	unsigned int count = oo_objects(s->oo);
 	int err;
-	unsigned long i, count = oo_objects(s->oo);
 
 	/* Bailout if already initialised */
 	if (s->random_seq)
@@ -1468,6 +1468,8 @@ static int init_cache_random_seq(struct kmem_cache *s)
 
 	/* Transform to an offset on the set of pages */
 	if (s->random_seq) {
+		unsigned int i;
+
 		for (i = 0; i < count; i++)
 			s->random_seq[i] *= s->size;
 	}
@@ -2398,7 +2400,7 @@ slab_out_of_memory(struct kmem_cache *s, gfp_t gfpflags, int nid)
 
 	pr_warn("SLUB: Unable to allocate memory on node %d, gfp=%#x(%pGg)\n",
 		nid, gfpflags, &gfpflags);
-	pr_warn("  cache: %s, object size: %u, buffer size: %u, default order: %d, min order: %d\n",
+	pr_warn("  cache: %s, object size: %u, buffer size: %u, default order: %u, min order: %u\n",
 		s->name, s->object_size, s->size, oo_order(s->oo),
 		oo_order(s->min));
 
@@ -3181,9 +3183,9 @@ EXPORT_SYMBOL(kmem_cache_alloc_bulk);
  * and increases the number of allocations possible without having to
  * take the list_lock.
  */
-static int slub_min_order;
-static int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
-static int slub_min_objects;
+static unsigned int slub_min_order;
+static unsigned int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
+static unsigned int slub_min_objects;
 
 /*
  * Calculate the order of allocation given an slab object size.
@@ -3210,20 +3212,21 @@ static int slub_min_objects;
  * requested a higher mininum order then we start with that one instead of
  * the smallest order which will fit the object.
  */
-static inline int slab_order(int size, int min_objects,
-				int max_order, int fract_leftover, int reserved)
+static inline unsigned int slab_order(unsigned int size,
+		unsigned int min_objects, unsigned int max_order,
+		unsigned int fract_leftover, unsigned int reserved)
 {
-	int order;
-	int rem;
-	int min_order = slub_min_order;
+	unsigned int min_order = slub_min_order;
+	unsigned int order;
 
 	if (order_objects(min_order, size, reserved) > MAX_OBJS_PER_PAGE)
 		return get_order(size * MAX_OBJS_PER_PAGE) - 1;
 
-	for (order = max(min_order, get_order(min_objects * size + reserved));
+	for (order = max(min_order, (unsigned int)get_order(min_objects * size + reserved));
 			order <= max_order; order++) {
 
-		unsigned long slab_size = PAGE_SIZE << order;
+		unsigned int slab_size = (unsigned int)PAGE_SIZE << order;
+		unsigned int rem;
 
 		rem = (slab_size - reserved) % size;
 
@@ -3234,12 +3237,11 @@ static inline int slab_order(int size, int min_objects,
 	return order;
 }
 
-static inline int calculate_order(int size, int reserved)
+static inline int calculate_order(unsigned int size, unsigned int reserved)
 {
-	int order;
-	int min_objects;
-	int fraction;
-	int max_objects;
+	unsigned int order;
+	unsigned int min_objects;
+	unsigned int max_objects;
 
 	/*
 	 * Attempt to find best configuration for a slab. This
@@ -3256,6 +3258,8 @@ static inline int calculate_order(int size, int reserved)
 	min_objects = min(min_objects, max_objects);
 
 	while (min_objects > 1) {
+		unsigned int fraction;
+
 		fraction = 16;
 		while (fraction >= 4) {
 			order = slab_order(size, min_objects,
@@ -3458,7 +3462,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 {
 	slab_flags_t flags = s->flags;
 	unsigned int size = s->object_size;
-	int order;
+	unsigned int order;
 
 	/*
 	 * Round up object size to the next word boundary. We can only
@@ -3548,7 +3552,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	else
 		order = calculate_order(size, s->reserved);
 
-	if (order < 0)
+	if ((int)order < 0)
 		return 0;
 
 	s->allocflags = 0;
@@ -3716,7 +3720,7 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
 
 static int __init setup_slub_min_order(char *str)
 {
-	get_option(&str, &slub_min_order);
+	get_option(&str, (int *)&slub_min_order);
 
 	return 1;
 }
@@ -3725,8 +3729,8 @@ __setup("slub_min_order=", setup_slub_min_order);
 
 static int __init setup_slub_max_order(char *str)
 {
-	get_option(&str, &slub_max_order);
-	slub_max_order = min(slub_max_order, MAX_ORDER - 1);
+	get_option(&str, (int *)&slub_max_order);
+	slub_max_order = min(slub_max_order, (unsigned int)MAX_ORDER - 1);
 
 	return 1;
 }
@@ -3735,7 +3739,7 @@ __setup("slub_max_order=", setup_slub_max_order);
 
 static int __init setup_slub_min_objects(char *str)
 {
-	get_option(&str, &slub_min_objects);
+	get_option(&str, (int *)&slub_min_objects);
 
 	return 1;
 }
@@ -4212,7 +4216,7 @@ void __init kmem_cache_init(void)
 	cpuhp_setup_state_nocalls(CPUHP_SLUB_DEAD, "slub:dead", NULL,
 				  slub_cpu_dead);
 
-	pr_info("SLUB: HWalign=%d, Order=%d-%d, MinObjects=%d, CPUs=%u, Nodes=%d\n",
+	pr_info("SLUB: HWalign=%d, Order=%u-%u, MinObjects=%u, CPUs=%u, Nodes=%d\n",
 		cache_line_size(),
 		slub_min_order, slub_max_order, slub_min_objects,
 		nr_cpu_ids, nr_node_ids);
@@ -4888,17 +4892,17 @@ SLAB_ATTR_RO(object_size);
 
 static ssize_t objs_per_slab_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", oo_objects(s->oo));
+	return sprintf(buf, "%u\n", oo_objects(s->oo));
 }
 SLAB_ATTR_RO(objs_per_slab);
 
 static ssize_t order_store(struct kmem_cache *s,
 				const char *buf, size_t length)
 {
-	unsigned long order;
+	unsigned int order;
 	int err;
 
-	err = kstrtoul(buf, 10, &order);
+	err = kstrtouint(buf, 10, &order);
 	if (err)
 		return err;
 
@@ -4911,7 +4915,7 @@ static ssize_t order_store(struct kmem_cache *s,
 
 static ssize_t order_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", oo_order(s->oo));
+	return sprintf(buf, "%u\n", oo_order(s->oo));
 }
 SLAB_ATTR(order);
 
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 22/23] slub: make size_from_object() return unsigned int
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (19 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 21/23] slub: make struct kmem_cache_order_objects::x " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-23 22:16 ` [PATCH 23/23] slab: use 32-bit arithmetic in freelist_randomize() Alexey Dobriyan
  2017-11-28  0:36 ` [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Andrew Morton
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

The function returns the size of the object without the red zone, so the
result can't be negative.
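
Restated as standalone C (a sketch with assumed parameters; the real
function reads these fields from struct kmem_cache):

	/* With red zoning, s->size includes a left red zone of
	   s->red_left_pad bytes; red_left_pad <= size by construction,
	   so the subtraction cannot go negative. */
	static unsigned int size_from_object_sketch(unsigned int size,
						    unsigned int red_left_pad,
						    int red_zone)
	{
		return red_zone ? size - red_left_pad : size;
	}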

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 27e619a38a02..5badbac9d650 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -466,7 +466,7 @@ static void get_map(struct kmem_cache *s, struct page *page, unsigned long *map)
 		set_bit(slab_index(p, s, addr), map);
 }
 
-static inline int size_from_object(struct kmem_cache *s)
+static inline unsigned int size_from_object(struct kmem_cache *s)
 {
 	if (s->flags & SLAB_RED_ZONE)
 		return s->size - s->red_left_pad;
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH 23/23] slab: use 32-bit arithmetic in freelist_randomize()
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (20 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 22/23] slub: make size_from_object() return " Alexey Dobriyan
@ 2017-11-23 22:16 ` Alexey Dobriyan
  2017-11-28  0:36 ` [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Andrew Morton
  22 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-23 22:16 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim, Alexey Dobriyan

SLAB doesn't support 4G+ objects per slab, so the randomization code
doesn't need size_t.
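
For context, freelist_randomize() is a Fisher-Yates shuffle; below is an
equivalent userspace sketch (illustration only, with rand() standing in
for the kernel's seeded PRNG) using 32-bit indices throughout:

	#include <stdlib.h>

	static void freelist_randomize_sketch(unsigned int *list,
					      unsigned int count)
	{
		unsigned int i;

		for (i = 0; i < count; i++)
			list[i] = i;

		/* Swap each slot with a randomly chosen earlier one */
		for (i = count - 1; i > 0; i--) {
			unsigned int j = (unsigned int)rand() % (i + 1);
			unsigned int tmp = list[i];

			list[i] = list[j];
			list[j] = tmp;
		}
	}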

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
---
 mm/slab_common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 2b5435e1e619..012f0af5cd81 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1140,10 +1140,10 @@ EXPORT_SYMBOL(kmalloc_order_trace);
 #ifdef CONFIG_SLAB_FREELIST_RANDOM
 /* Randomize a generic freelist */
 static void freelist_randomize(struct rnd_state *state, unsigned int *list,
-			size_t count)
+			       unsigned int count)
 {
-	size_t i;
 	unsigned int rand;
+	unsigned int i;
 
 	for (i = 0; i < count; i++)
 		list[i] = i;
-- 
2.13.6

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH 03/23] slab: create_kmalloc_cache() works with 32-bit sizes
  2017-11-23 22:16 ` [PATCH 03/23] slab: create_kmalloc_cache() works with 32-bit sizes Alexey Dobriyan
@ 2017-11-24  1:06   ` Matthew Wilcox
  2017-11-27 10:21     ` Alexey Dobriyan
  0 siblings, 1 reply; 31+ messages in thread
From: Matthew Wilcox @ 2017-11-24  1:06 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, linux-mm, cl, penberg, rientjes, iamjoonsoo.kim

On Fri, Nov 24, 2017 at 01:16:08AM +0300, Alexey Dobriyan wrote:
> -struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,
> +struct kmem_cache *__init create_kmalloc_cache(const char *name, unsigned int size,
>  				slab_flags_t flags)

Could you reflow this one?  Surprised checkpatch didn't whinge.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 21/23] slub: make struct kmem_cache_order_objects::x unsigned int
  2017-11-23 22:16 ` [PATCH 21/23] slub: make struct kmem_cache_order_objects::x " Alexey Dobriyan
@ 2017-11-24 17:31   ` Christopher Lameter
  2017-11-27 10:17     ` Alexey Dobriyan
  0 siblings, 1 reply; 31+ messages in thread
From: Christopher Lameter @ 2017-11-24 17:31 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, linux-mm, penberg, rientjes, iamjoonsoo.kim

On Fri, 24 Nov 2017, Alexey Dobriyan wrote:

> !!! Patch assumes that "PAGE_SIZE << order" doesn't overflow. !!!

Check for that condition and do not allow creation of such caches?

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 19/23] kasan: make kasan_cache_create() work with 32-bit slab caches
  2017-11-23 22:16 ` [PATCH 19/23] kasan: make kasan_cache_create() work with 32-bit slab caches Alexey Dobriyan
@ 2017-11-24 22:29   ` kbuild test robot
  0 siblings, 0 replies; 31+ messages in thread
From: kbuild test robot @ 2017-11-24 22:29 UTC (permalink / raw)
  To: Alexey Dobriyan
  Cc: kbuild-all, akpm, linux-mm, cl, penberg, rientjes, iamjoonsoo.kim

Hi Alexey,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[cannot apply to mmotm/master v4.14 next-20171124]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Alexey-Dobriyan/slab-make-kmalloc_index-return-unsigned-int/20171125-035138
reproduce:
        # apt-get install sparse
        make ARCH=x86_64 allmodconfig
        make C=1 CF=-D__CHECK_ENDIAN__


sparse warnings: (new ones prefixed by >>)


vim +361 mm/kasan/kasan.c

7ed2f9e6 Alexander Potapenko 2016-03-25  338  
5d094e12 Alexey Dobriyan     2017-11-24  339  void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
d50112ed Alexey Dobriyan     2017-11-15  340  			slab_flags_t *flags)
7ed2f9e6 Alexander Potapenko 2016-03-25  341  {
5d094e12 Alexey Dobriyan     2017-11-24  342  	unsigned int orig_size = *size;
7ed2f9e6 Alexander Potapenko 2016-03-25  343  	int redzone_adjust;
80a9201a Alexander Potapenko 2016-07-28  344  
7ed2f9e6 Alexander Potapenko 2016-03-25  345  	/* Add alloc meta. */
7ed2f9e6 Alexander Potapenko 2016-03-25  346  	cache->kasan_info.alloc_meta_offset = *size;
7ed2f9e6 Alexander Potapenko 2016-03-25  347  	*size += sizeof(struct kasan_alloc_meta);
7ed2f9e6 Alexander Potapenko 2016-03-25  348  
7ed2f9e6 Alexander Potapenko 2016-03-25  349  	/* Add free meta. */
5f0d5a3a Paul E. McKenney    2017-01-18  350  	if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
7ed2f9e6 Alexander Potapenko 2016-03-25  351  	    cache->object_size < sizeof(struct kasan_free_meta)) {
7ed2f9e6 Alexander Potapenko 2016-03-25  352  		cache->kasan_info.free_meta_offset = *size;
7ed2f9e6 Alexander Potapenko 2016-03-25  353  		*size += sizeof(struct kasan_free_meta);
7ed2f9e6 Alexander Potapenko 2016-03-25  354  	}
7ed2f9e6 Alexander Potapenko 2016-03-25  355  	redzone_adjust = optimal_redzone(cache->object_size) -
7ed2f9e6 Alexander Potapenko 2016-03-25  356  		(*size - cache->object_size);
80a9201a Alexander Potapenko 2016-07-28  357  
7ed2f9e6 Alexander Potapenko 2016-03-25  358  	if (redzone_adjust > 0)
7ed2f9e6 Alexander Potapenko 2016-03-25  359  		*size += redzone_adjust;
80a9201a Alexander Potapenko 2016-07-28  360  
80a9201a Alexander Potapenko 2016-07-28 @361  	*size = min(KMALLOC_MAX_SIZE, max(*size, cache->object_size +
7ed2f9e6 Alexander Potapenko 2016-03-25  362  					optimal_redzone(cache->object_size)));
80a9201a Alexander Potapenko 2016-07-28  363  
80a9201a Alexander Potapenko 2016-07-28  364  	/*
80a9201a Alexander Potapenko 2016-07-28  365  	 * If the metadata doesn't fit, don't enable KASAN at all.
80a9201a Alexander Potapenko 2016-07-28  366  	 */
80a9201a Alexander Potapenko 2016-07-28  367  	if (*size <= cache->kasan_info.alloc_meta_offset ||
80a9201a Alexander Potapenko 2016-07-28  368  			*size <= cache->kasan_info.free_meta_offset) {
80a9201a Alexander Potapenko 2016-07-28  369  		cache->kasan_info.alloc_meta_offset = 0;
80a9201a Alexander Potapenko 2016-07-28  370  		cache->kasan_info.free_meta_offset = 0;
80a9201a Alexander Potapenko 2016-07-28  371  		*size = orig_size;
80a9201a Alexander Potapenko 2016-07-28  372  		return;
80a9201a Alexander Potapenko 2016-07-28  373  	}
80a9201a Alexander Potapenko 2016-07-28  374  
80a9201a Alexander Potapenko 2016-07-28  375  	*flags |= SLAB_KASAN;
7ed2f9e6 Alexander Potapenko 2016-03-25  376  }
7ed2f9e6 Alexander Potapenko 2016-03-25  377  

:::::: The code at line 361 was first introduced by commit
:::::: 80a9201a5965f4715d5c09790862e0df84ce0614 mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB

:::::: TO: Alexander Potapenko <glider@google.com>
:::::: CC: Linus Torvalds <torvalds@linux-foundation.org>
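
A plausible reading of the elided sparse output: with *size now unsigned
int, the min()/max() expression at the flagged line mixes it with
KMALLOC_MAX_SIZE, which is unsigned long, and the kernel's type-checking
min() complains. Assuming that diagnosis, the usual fix is the explicitly
typed variant:

	*size = min_t(unsigned int, KMALLOC_MAX_SIZE,
		      max_t(unsigned int, *size, cache->object_size +
			    optimal_redzone(cache->object_size)));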

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 21/23] slub: make struct kmem_cache_order_objects::x unsigned int
  2017-11-24 17:31   ` Christopher Lameter
@ 2017-11-27 10:17     ` Alexey Dobriyan
  0 siblings, 0 replies; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-27 10:17 UTC (permalink / raw)
  To: Christopher Lameter; +Cc: akpm, linux-mm, penberg, rientjes, iamjoonsoo.kim

On 11/24/17, Christopher Lameter <cl@linux.com> wrote:
> On Fri, 24 Nov 2017, Alexey Dobriyan wrote:
>
>> !!! Patch assumes that "PAGE_SIZE << order" doesn't overflow. !!!
>
> Check for that condition and do not allow creation of such caches?

It should be enforced by MAX_ORDER in slab_order() and
setup_slub_max_order().

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 03/23] slab: create_kmalloc_cache() works with 32-bit sizes
  2017-11-24  1:06   ` Matthew Wilcox
@ 2017-11-27 10:21     ` Alexey Dobriyan
  2017-11-27 13:56       ` Matthew Wilcox
  0 siblings, 1 reply; 31+ messages in thread
From: Alexey Dobriyan @ 2017-11-27 10:21 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: akpm, linux-mm, cl, penberg, rientjes, iamjoonsoo.kim

On 11/24/17, Matthew Wilcox <willy@infradead.org> wrote:
> On Fri, Nov 24, 2017 at 01:16:08AM +0300, Alexey Dobriyan wrote:
>> -struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t
>> size,
>> +struct kmem_cache *__init create_kmalloc_cache(const char *name, unsigned
>> int size,
>>  				slab_flags_t flags)
>
> Could you reflow this one?  Surprised checkpatch didn't whinge.

If it doesn't run, it doesn't whinge. :-)

I think that in the era of 16:9 monitors line length should be ignored
altogether.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 03/23] slab: create_kmalloc_cache() works with 32-bit sizes
  2017-11-27 10:21     ` Alexey Dobriyan
@ 2017-11-27 13:56       ` Matthew Wilcox
  0 siblings, 0 replies; 31+ messages in thread
From: Matthew Wilcox @ 2017-11-27 13:56 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: akpm, linux-mm, cl, penberg, rientjes, iamjoonsoo.kim

On Mon, Nov 27, 2017 at 12:21:23PM +0200, Alexey Dobriyan wrote:
> On 11/24/17, Matthew Wilcox <willy@infradead.org> wrote:
> > On Fri, Nov 24, 2017 at 01:16:08AM +0300, Alexey Dobriyan wrote:
> >> -struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t
> >> size,
> >> +struct kmem_cache *__init create_kmalloc_cache(const char *name, unsigned
> >> int size,
> >>  				slab_flags_t flags)
> >
> > Could you reflow this one?  Surprised checkpatch didn't whinge.
> 
> If it doesn't run, it doesn't whinge. :-)
> 
> I think that in the era of 16:9 monitors line length should be ignored
> altogether.

16:9 monitors let me get more 80x24 xterms on one virtual desktop.  Please
stick to the line lengths.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 01/23] slab: make kmalloc_index() return "unsigned int"
  2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
                   ` (21 preceding siblings ...)
  2017-11-23 22:16 ` [PATCH 23/23] slab: use 32-bit arithmetic in freelist_randomize() Alexey Dobriyan
@ 2017-11-28  0:36 ` Andrew Morton
  2017-11-28 18:46   ` Christopher Lameter
  22 siblings, 1 reply; 31+ messages in thread
From: Andrew Morton @ 2017-11-28  0:36 UTC (permalink / raw)
  To: Alexey Dobriyan; +Cc: linux-mm, cl, penberg, rientjes, iamjoonsoo.kim

On Fri, 24 Nov 2017 01:16:06 +0300 Alexey Dobriyan <adobriyan@gmail.com> wrote:

> kmalloc_index() return index into an array of kmalloc kmem caches,
> therefore should unsigned.
> 
> Space savings:
> 
> 	add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-6 (-6)
> 	Function                                     old     new   delta
> 	rtsx_scsi_handler                           9116    9114      -2
> 	vnic_rq_alloc                                424     420      -4

While I applaud the use of accurate and appropriate types, that's one
heck of a big patch series.  What do the slab maintainers think?

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 01/23] slab: make kmalloc_index() return "unsigned int"
  2017-11-28  0:36 ` [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Andrew Morton
@ 2017-11-28 18:46   ` Christopher Lameter
  0 siblings, 0 replies; 31+ messages in thread
From: Christopher Lameter @ 2017-11-28 18:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexey Dobriyan, linux-mm, penberg, rientjes, iamjoonsoo.kim

On Mon, 27 Nov 2017, Andrew Morton wrote:

> > 	add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-6 (-6)
> > 	Function                                     old     new   delta
> > 	rtsx_scsi_handler                           9116    9114      -2
> > 	vnic_rq_alloc                                424     420      -4
>
> While I applaud the use of accurate and appropriate types, that's one
> heck of a big patch series.  What do the slab maintainers think?

Run some regression tests and make sure that we did not get some false
aliasing?

^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread

Thread overview: 31+ messages
2017-11-23 22:16 [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 02/23] slab: make kmalloc_size() " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 03/23] slab: create_kmalloc_cache() works with 32-bit sizes Alexey Dobriyan
2017-11-24  1:06   ` Matthew Wilcox
2017-11-27 10:21     ` Alexey Dobriyan
2017-11-27 13:56       ` Matthew Wilcox
2017-11-23 22:16 ` [PATCH 04/23] slab: create_boot_cache() only " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 05/23] slab: kmem_cache_create() " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 06/23] slab: make size_index[] array u8 Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 07/23] slab: make size_index_elem() unsigned int Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 08/23] slub: make ->remote_node_defrag_ratio " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 09/23] slub: make ->max_attr_size " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 10/23] slub: make ->red_left_pad " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 11/23] slub: make ->reserved " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 12/23] slub: make ->align " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 13/23] slub: make ->inuse " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 14/23] slub: make ->cpu_partial " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 15/23] slub: make ->offset " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 16/23] slub: make ->object_size " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 17/23] slub: make ->size " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 18/23] slab: make kmem_cache_flags accept 32-bit object size Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 19/23] kasan: make kasan_cache_create() work with 32-bit slab caches Alexey Dobriyan
2017-11-24 22:29   ` kbuild test robot
2017-11-23 22:16 ` [PATCH 20/23] slub: make slab_index() return unsigned int Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 21/23] slub: make struct kmem_cache_order_objects::x " Alexey Dobriyan
2017-11-24 17:31   ` Christopher Lameter
2017-11-27 10:17     ` Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 22/23] slub: make size_from_object() return " Alexey Dobriyan
2017-11-23 22:16 ` [PATCH 23/23] slab: use 32-bit arithmetic in freelist_randomize() Alexey Dobriyan
2017-11-28  0:36 ` [PATCH 01/23] slab: make kmalloc_index() return "unsigned int" Andrew Morton
2017-11-28 18:46   ` Christopher Lameter
