* [PATCHv2 0/4] Improve performance for SLAB_POISON
From: Laura Abbott @ 2016-02-15 18:44 UTC
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening, Kees Cook

Hi,

This is a follow-up to my previous series
(http://lkml.kernel.org/g/<1453770913-32287-1-git-send-email-labbott@fedoraproject.org>).
This series takes Christoph Lameter's suggestion and focuses only on
optimizing the slow path where the debug processing runs. The two main
optimizations in this series are letting the consistency checks be skipped and
relaxing the cmpxchg restrictions when we are not doing consistency checks.
With hackbench -g 20 -l 1000 averaged over 100 runs:

Before slub_debug=P
mean 15.607
variance .086
stdev .294

After slub_debug=P
mean 10.836
variance .155
stdev .394
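
For reference, a minimal userspace sketch of how summary statistics like
these can be computed (illustrative only; the use of population variance
and the run times below are assumptions, not taken from the series):

#include <math.h>
#include <stdio.h>

static void summarize(const double *t, int n)
{
	double sum = 0.0, var = 0.0, mean;
	int i;

	for (i = 0; i < n; i++)
		sum += t[i];
	mean = sum / n;
	for (i = 0; i < n; i++)
		var += (t[i] - mean) * (t[i] - mean);
	var /= n;			/* population variance */
	printf("mean %.3f\nvariance %.3f\nstdev %.3f\n",
	       mean, var, sqrt(var));
}

int main(void)
{
	double times[] = { 10.5, 11.2, 10.8 };	/* hypothetical run times */

	summarize(times, sizeof(times) / sizeof(times[0]));
	return 0;
}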

Unfortunately, this still isn't as fast as what is in grsecurity, so there's
still work to be done. Profiling ___slab_alloc shows that 25-50% of the time
is spent in deactivate_slab. I haven't looked closely enough to see whether
that can be optimized. My plan for now is to focus on getting all of this
merged (if appropriate) before digging into another task.

As always, feedback is appreciated.

Laura Abbott (4):
  slub: Drop lock at the end of free_debug_processing
  slub: Fix/clean free_debug_processing return paths
  sl[aob]: Convert SLAB_DEBUG_FREE to SLAB_CONSISTENCY_CHECKS
  slub: Relax CMPXCHG consistency restrictions

 Documentation/vm/slub.txt |   4 +-
 include/linux/slab.h      |   2 +-
 mm/slab.h                 |   5 +-
 mm/slub.c                 | 126 ++++++++++++++++++++++++++++------------------
 tools/vm/slabinfo.c       |   2 +-
 5 files changed, 83 insertions(+), 56 deletions(-)

-- 
2.5.0

* [PATCHv2 1/4] slub: Drop lock at the end of free_debug_processing
From: Laura Abbott @ 2016-02-15 18:44 UTC
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening, Kees Cook


Currently, free_debug_processing has a comment "Keep node_lock to preserve
integrity until the object is actually freed". In actuality, the lock is
dropped immediately in __slab_free. Rather than wait until __slab_free and
potentially throw off the unlikely marking, just drop the lock at the end
of free_debug_processing itself. This also lets free_debug_processing take
its own copy of the spinlock flags rather than sharing the ones from
__slab_free. Since the caller has no use for the node afterwards, change
the return type of free_debug_processing to an int, like
alloc_debug_processing.
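
To illustrate the locking shape after this change (a sketch only, not the
patched function; it assumes the usual slub internals and elides the actual
checks):

static int free_debug_shape(struct kmem_cache *s, struct page *page)
{
	struct kmem_cache_node *n = get_node(s, page_to_nid(page));
	unsigned long flags;	/* local copy, no longer passed in by the caller */
	int ret = 1;

	spin_lock_irqsave(&n->list_lock, flags);
	slab_lock(page);
	/* ... consistency checks would run here ... */
	slab_unlock(page);
	spin_unlock_irqrestore(&n->list_lock, flags);	/* dropped here, not in __slab_free */
	return ret;	/* an int result, like alloc_debug_processing */
}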

Credit to Mathias Krause for the original work which inspired this series

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
---
I didn't add Christoph's ack from the last time due to some
rebasing.
---
 mm/slub.c | 25 +++++++++++--------------
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 2e1355a..2d5a774 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1068,16 +1068,17 @@ bad:
 }
 
 /* Supports checking bulk free of a constructed freelist */
-static noinline struct kmem_cache_node *free_debug_processing(
+static noinline int free_debug_processing(
 	struct kmem_cache *s, struct page *page,
 	void *head, void *tail, int bulk_cnt,
-	unsigned long addr, unsigned long *flags)
+	unsigned long addr)
 {
 	struct kmem_cache_node *n = get_node(s, page_to_nid(page));
 	void *object = head;
 	int cnt = 0;
+	unsigned long uninitialized_var(flags);
 
-	spin_lock_irqsave(&n->list_lock, *flags);
+	spin_lock_irqsave(&n->list_lock, flags);
 	slab_lock(page);
 
 	if (!check_slab(s, page))
@@ -1130,17 +1131,14 @@ out:
 			 bulk_cnt, cnt);
 
 	slab_unlock(page);
-	/*
-	 * Keep node_lock to preserve integrity
-	 * until the object is actually freed
-	 */
-	return n;
+	spin_unlock_irqrestore(&n->list_lock, flags);
+	return 1;
 
 fail:
 	slab_unlock(page);
-	spin_unlock_irqrestore(&n->list_lock, *flags);
+	spin_unlock_irqrestore(&n->list_lock, flags);
 	slab_fix(s, "Object at 0x%p not freed", object);
-	return NULL;
+	return 0;
 }
 
 static int __init setup_slub_debug(char *str)
@@ -1231,7 +1229,7 @@ static inline void setup_object_debug(struct kmem_cache *s,
 static inline int alloc_debug_processing(struct kmem_cache *s,
 	struct page *page, void *object, unsigned long addr) { return 0; }
 
-static inline struct kmem_cache_node *free_debug_processing(
+static inline int free_debug_processing(
 	struct kmem_cache *s, struct page *page,
 	void *head, void *tail, int bulk_cnt,
-	unsigned long addr, unsigned long *flags) { return NULL; }
+	unsigned long addr) { return 0; }
@@ -2648,8 +2646,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 	stat(s, FREE_SLOWPATH);
 
 	if (kmem_cache_debug(s) &&
-	    !(n = free_debug_processing(s, page, head, tail, cnt,
-					addr, &flags)))
+	    !free_debug_processing(s, page, head, tail, cnt, addr))
 		return;
 
 	do {
-- 
2.5.0

* [PATCHv2 2/4] slub: Fix/clean free_debug_processing return paths
From: Laura Abbott @ 2016-02-15 18:44 UTC
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening, Kees Cook


Since 19c7ff9ecd89 ("slub: Take node lock during object free checks"),
a failed check_object has incorrectly resulted in success, because its
failure path falls through to the out label, which just returns the node.
Thanks to the preceding refactoring, the out and fail paths are now
basically the same. Combine the two into one and use a single label.
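
To illustrate the resulting control flow (a self-contained sketch with
hypothetical check and cleanup helpers, not the patched function):

#include <stdio.h>

static int check_a(void) { return 1; }	/* hypothetical check */
static int check_b(void) { return 1; }	/* hypothetical check */
static void release_locks(void) { }	/* hypothetical common cleanup */

static int checked_free(void)
{
	int ret = 0;		/* pessimistic default: failure */

	if (!check_a())
		goto out;
	if (!check_b())
		goto out;
	ret = 1;		/* set only after every check passes */
out:
	release_locks();	/* common cleanup runs on every path */
	if (!ret)
		printf("object not freed\n");
	return ret;
}

int main(void)
{
	return !checked_free();
}

With ret defaulting to 0, a failed check can no longer fall through the
common exit label and report success.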

Credit to Mathias Krause for the original work which inspired this series

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
---
If there is interest, I can split this off as a separate patch for stable
---
 mm/slub.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 2d5a774..189c330 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1077,24 +1077,25 @@ static noinline int free_debug_processing(
 	void *object = head;
 	int cnt = 0;
 	unsigned long uninitialized_var(flags);
+	int ret = 0;
 
 	spin_lock_irqsave(&n->list_lock, flags);
 	slab_lock(page);
 
 	if (!check_slab(s, page))
-		goto fail;
+		goto out;
 
 next_object:
 	cnt++;
 
 	if (!check_valid_pointer(s, page, object)) {
 		slab_err(s, page, "Invalid object pointer 0x%p", object);
-		goto fail;
+		goto out;
 	}
 
 	if (on_freelist(s, page, object)) {
 		object_err(s, page, object, "Object already free");
-		goto fail;
+		goto out;
 	}
 
 	if (!check_object(s, page, object, SLUB_RED_ACTIVE))
@@ -1111,7 +1112,7 @@ next_object:
 		} else
 			object_err(s, page, object,
 					"page slab pointer corrupt.");
-		goto fail;
+		goto out;
 	}
 
 	if (s->flags & SLAB_STORE_USER)
@@ -1125,6 +1126,8 @@ next_object:
 		object = get_freepointer(s, object);
 		goto next_object;
 	}
+	ret = 1;
+
 out:
 	if (cnt != bulk_cnt)
 		slab_err(s, page, "Bulk freelist count(%d) invalid(%d)\n",
@@ -1132,13 +1135,9 @@ out:
 
 	slab_unlock(page);
 	spin_unlock_irqrestore(&n->list_lock, flags);
-	return 1;
-
-fail:
-	slab_unlock(page);
-	spin_unlock_irqrestore(&n->list_lock, flags);
-	slab_fix(s, "Object at 0x%p not freed", object);
-	return 0;
+	if (!ret)
+		slab_fix(s, "Object at 0x%p not freed", object);
+	return ret;
 }
 
 static int __init setup_slub_debug(char *str)
-- 
2.5.0

* [PATCHv2 3/4] slub: Convert SLAB_DEBUG_FREE to SLAB_CONSISTENCY_CHECKS
From: Laura Abbott @ 2016-02-15 18:44 UTC
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening, Kees Cook


SLAB_DEBUG_FREE allows the expensive consistency checks at free to be
turned on or off. Expand its use so that all consistency checks, on both
allocation and free, can be turned off. This gives a nice speedup if you
only want features such as poisoning or tracing.
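
A simplified sketch of the gating this introduces in mm/slub.c (not the
actual function; free_consistency_checks is the helper added by this
patch, and everything else is elided):

static int free_debug_shape(struct kmem_cache *s, struct page *page,
			    void *object, unsigned long addr)
{
	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
		if (!free_consistency_checks(s, page, object, addr))
			return 0;	/* expensive checks failed */
	}
	/* cheaper debug work (poisoning, user tracking, tracing) still runs */
	return 1;
}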

Credit to Mathias Krause for the original work which inspired this series

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
---
 Documentation/vm/slub.txt |  4 +-
 include/linux/slab.h      |  2 +-
 mm/slab.h                 |  5 ++-
 mm/slub.c                 | 94 +++++++++++++++++++++++++++++------------------
 tools/vm/slabinfo.c       |  2 +-
 5 files changed, 66 insertions(+), 41 deletions(-)

diff --git a/Documentation/vm/slub.txt b/Documentation/vm/slub.txt
index f0d3409..8465241 100644
--- a/Documentation/vm/slub.txt
+++ b/Documentation/vm/slub.txt
@@ -35,8 +35,8 @@ slub_debug=<Debug-Options>,<slab name>
 				Enable options only for select slabs
 
 Possible debug options are
-	F		Sanity checks on (enables SLAB_DEBUG_FREE. Sorry
-			SLAB legacy issues)
+	F		Sanity checks on (enables SLAB_CONSISTENCY_CHECKS.
+			Sorry SLAB legacy issues)
 	Z		Red zoning
 	P		Poisoning (object and padding)
 	U		User tracking (free and alloc)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 3627d5c..1070daa 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -20,7 +20,7 @@
  * Flags to pass to kmem_cache_create().
  * The ones marked DEBUG are only valid if CONFIG_DEBUG_SLAB is set.
  */
-#define SLAB_DEBUG_FREE		0x00000100UL	/* DEBUG: Perform (expensive) checks on free */
+#define SLAB_CONSISTENCY_CHECKS	0x00000100UL	/* DEBUG: Perform (expensive) checks on alloc/free */
 #define SLAB_RED_ZONE		0x00000400UL	/* DEBUG: Red zone objs in a cache */
 #define SLAB_POISON		0x00000800UL	/* DEBUG: Poison objects */
 #define SLAB_HWCACHE_ALIGN	0x00002000UL	/* Align objs on cache lines */
diff --git a/mm/slab.h b/mm/slab.h
index 834ad24..fca99be 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -121,7 +121,7 @@ static inline unsigned long kmem_cache_flags(unsigned long object_size,
 #define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
 #elif defined(CONFIG_SLUB_DEBUG)
 #define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
-			  SLAB_TRACE | SLAB_DEBUG_FREE)
+			  SLAB_TRACE | SLAB_CONSISTENCY_CHECKS)
 #else
 #define SLAB_DEBUG_FLAGS (0)
 #endif
@@ -306,7 +306,8 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 	 * to not do even the assignment. In that case, slab_equal_or_root
 	 * will also be a constant.
 	 */
-	if (!memcg_kmem_enabled() && !unlikely(s->flags & SLAB_DEBUG_FREE))
+	if (!memcg_kmem_enabled() &&
+	    !unlikely(s->flags & SLAB_CONSISTENCY_CHECKS))
 		return s;
 
 	page = virt_to_head_page(x);
diff --git a/mm/slub.c b/mm/slub.c
index 189c330..01606ff 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -160,7 +160,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
  */
 #define MAX_PARTIAL 10
 
-#define DEBUG_DEFAULT_FLAGS (SLAB_DEBUG_FREE | SLAB_RED_ZONE | \
+#define DEBUG_DEFAULT_FLAGS (SLAB_CONSISTENCY_CHECKS | SLAB_RED_ZONE | \
 				SLAB_POISON | SLAB_STORE_USER)
 
 /*
@@ -1031,20 +1031,32 @@ static void setup_object_debug(struct kmem_cache *s, struct page *page,
 	init_tracking(s, object);
 }
 
-static noinline int alloc_debug_processing(struct kmem_cache *s,
+static inline int alloc_consistency_checks(struct kmem_cache *s,
 					struct page *page,
 					void *object, unsigned long addr)
 {
 	if (!check_slab(s, page))
-		goto bad;
+		return 0;
 
 	if (!check_valid_pointer(s, page, object)) {
 		object_err(s, page, object, "Freelist Pointer check fails");
-		goto bad;
+		return 0;
 	}
 
 	if (!check_object(s, page, object, SLUB_RED_INACTIVE))
-		goto bad;
+		return 0;
+
+	return 1;
+}
+
+static noinline int alloc_debug_processing(struct kmem_cache *s,
+					struct page *page,
+					void *object, unsigned long addr)
+{
+	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
+		if (!alloc_consistency_checks(s, page, object, addr))
+			goto bad;
+	}
 
 	/* Success perform special debug activities for allocs */
 	if (s->flags & SLAB_STORE_USER)
@@ -1067,39 +1079,21 @@ bad:
 	return 0;
 }
 
-/* Supports checking bulk free of a constructed freelist */
-static noinline int free_debug_processing(
-	struct kmem_cache *s, struct page *page,
-	void *head, void *tail, int bulk_cnt,
-	unsigned long addr)
+static inline int free_consistency_checks(struct kmem_cache *s,
+		struct page *page, void *object, unsigned long addr)
 {
-	struct kmem_cache_node *n = get_node(s, page_to_nid(page));
-	void *object = head;
-	int cnt = 0;
-	unsigned long uninitialized_var(flags);
-	int ret = 0;
-
-	spin_lock_irqsave(&n->list_lock, flags);
-	slab_lock(page);
-
-	if (!check_slab(s, page))
-		goto out;
-
-next_object:
-	cnt++;
-
 	if (!check_valid_pointer(s, page, object)) {
 		slab_err(s, page, "Invalid object pointer 0x%p", object);
-		goto out;
+		return 0;
 	}
 
 	if (on_freelist(s, page, object)) {
 		object_err(s, page, object, "Object already free");
-		goto out;
+		return 0;
 	}
 
 	if (!check_object(s, page, object, SLUB_RED_ACTIVE))
-		goto out;
+		return 0;
 
 	if (unlikely(s != page->slab_cache)) {
 		if (!PageSlab(page)) {
@@ -1112,7 +1106,37 @@ next_object:
 		} else
 			object_err(s, page, object,
 					"page slab pointer corrupt.");
-		goto out;
+		return 0;
+	}
+	return 1;
+}
+
+/* Supports checking bulk free of a constructed freelist */
+static noinline int free_debug_processing(
+	struct kmem_cache *s, struct page *page,
+	void *head, void *tail, int bulk_cnt,
+	unsigned long addr)
+{
+	struct kmem_cache_node *n = get_node(s, page_to_nid(page));
+	void *object = head;
+	int cnt = 0;
+	unsigned long uninitialized_var(flags);
+	int ret = 0;
+
+	spin_lock_irqsave(&n->list_lock, flags);
+	slab_lock(page);
+
+	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
+		if (!check_slab(s, page))
+			goto out;
+	}
+
+next_object:
+	cnt++;
+
+	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
+		if (!free_consistency_checks(s, page, object, addr))
+			goto out;
 	}
 
 	if (s->flags & SLAB_STORE_USER)
@@ -1169,7 +1193,7 @@ static int __init setup_slub_debug(char *str)
 	for (; *str && *str != ','; str++) {
 		switch (tolower(*str)) {
 		case 'f':
-			slub_debug |= SLAB_DEBUG_FREE;
+			slub_debug |= SLAB_CONSISTENCY_CHECKS;
 			break;
 		case 'z':
 			slub_debug |= SLAB_RED_ZONE;
@@ -1503,7 +1527,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 	int order = compound_order(page);
 	int pages = 1 << order;
 
-	if (kmem_cache_debug(s)) {
+	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
 		void *p;
 
 		slab_pad_check(s, page);
@@ -4812,16 +4836,16 @@ SLAB_ATTR_RO(total_objects);
 
 static ssize_t sanity_checks_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", !!(s->flags & SLAB_DEBUG_FREE));
+	return sprintf(buf, "%d\n", !!(s->flags & SLAB_CONSISTENCY_CHECKS));
 }
 
 static ssize_t sanity_checks_store(struct kmem_cache *s,
 				const char *buf, size_t length)
 {
-	s->flags &= ~SLAB_DEBUG_FREE;
+	s->flags &= ~SLAB_CONSISTENCY_CHECKS;
 	if (buf[0] == '1') {
 		s->flags &= ~__CMPXCHG_DOUBLE;
-		s->flags |= SLAB_DEBUG_FREE;
+		s->flags |= SLAB_CONSISTENCY_CHECKS;
 	}
 	return length;
 }
@@ -5356,7 +5380,7 @@ static char *create_unique_id(struct kmem_cache *s)
 		*p++ = 'd';
 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
 		*p++ = 'a';
-	if (s->flags & SLAB_DEBUG_FREE)
+	if (s->flags & SLAB_CONSISTENCY_CHECKS)
 		*p++ = 'F';
 	if (!(s->flags & SLAB_NOTRACK))
 		*p++ = 't';
diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c
index 86e698d..1889163 100644
--- a/tools/vm/slabinfo.c
+++ b/tools/vm/slabinfo.c
@@ -135,7 +135,7 @@ static void usage(void)
 		"\nValid debug options (FZPUT may be combined)\n"
 		"a / A          Switch on all debug options (=FZUP)\n"
 		"-              Switch off all debug options\n"
-		"f / F          Sanity Checks (SLAB_DEBUG_FREE)\n"
+		"f / F          Sanity Checks (SLAB_CONSISTENCY_CHECKS)\n"
 		"z / Z          Redzoning\n"
 		"p / P          Poisoning\n"
 		"u / U          Tracking\n"
-- 
2.5.0

* [PATCHv2 4/4] slub: Relax CMPXCHG consistency restrictions
From: Laura Abbott @ 2016-02-15 18:44 UTC
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening, Kees Cook


When debug options are enabled, the lockless cmpxchg on the page is
disabled, because the page must be locked to ensure there are no false
positives when performing consistency checks. Some debug options, such as
poisoning and red zoning, act only on the object itself, so there is no
need to protect other CPUs against modifications that touch nothing but
the object. Allow cmpxchg to happen when poisoning and red zoning are set
on a slab.
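
The decision being relaxed, as a sketch (the helper name is hypothetical;
the SLAB_NO_CMPXCHG mask is the one added by this patch):

static bool can_enable_cmpxchg_double(struct kmem_cache *s)
{
	/*
	 * Only flags whose checks need the slab locked disable the
	 * lockless fast path; poisoning and red zoning no longer do.
	 */
	return system_has_cmpxchg_double() &&
	       !(s->flags & SLAB_NO_CMPXCHG);
}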

Credit to Mathias Krause for the original work which inspired this series

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
---
 mm/slub.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 01606ff..0323e53 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -164,6 +164,14 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
 				SLAB_POISON | SLAB_STORE_USER)
 
 /*
+ * These debug flags cannot use CMPXCHG because there might be consistency
+ * issues when checking or reading debug information
+ */
+#define SLAB_NO_CMPXCHG (SLAB_CONSISTENCY_CHECKS | SLAB_STORE_USER | \
+				SLAB_TRACE)
+
+
+/*
  * Debugging flags that require metadata to be stored in the slab.  These get
  * disabled when slub_debug=O is used and a cache's min order increases with
  * metadata.
@@ -3377,7 +3385,7 @@ static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
 
 #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
     defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
-	if (system_has_cmpxchg_double() && (s->flags & SLAB_DEBUG_FLAGS) == 0)
+	if (system_has_cmpxchg_double() && (s->flags & SLAB_NO_CMPXCHG) == 0)
 		/* Enable fast mode */
 		s->flags |= __CMPXCHG_DOUBLE;
 #endif
@@ -4889,7 +4897,6 @@ static ssize_t red_zone_store(struct kmem_cache *s,
 
 	s->flags &= ~SLAB_RED_ZONE;
 	if (buf[0] == '1') {
-		s->flags &= ~__CMPXCHG_DOUBLE;
 		s->flags |= SLAB_RED_ZONE;
 	}
 	calculate_sizes(s, -1);
@@ -4910,7 +4917,6 @@ static ssize_t poison_store(struct kmem_cache *s,
 
 	s->flags &= ~SLAB_POISON;
 	if (buf[0] == '1') {
-		s->flags &= ~__CMPXCHG_DOUBLE;
 		s->flags |= SLAB_POISON;
 	}
 	calculate_sizes(s, -1);
-- 
2.5.0

* Re: [PATCHv2 1/4] slub: Drop lock at the end of free_debug_processing
  2016-02-15 18:44   ` Laura Abbott
  (?)
@ 2016-02-16 16:28     ` Christoph Lameter
  -1 siblings, 0 replies; 36+ messages in thread
From: Christoph Lameter @ 2016-02-16 16:28 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-mm, linux-kernel, kernel-hardening, Kees Cook

On Mon, 15 Feb 2016, Laura Abbott wrote:

> Credit to Mathias Krause for the original work which inspired this series.

Acked-by: Christoph Lameter <cl@linux.com>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCHv2 2/4] slub: Fix/clean free_debug_processing return paths
  2016-02-15 18:44   ` Laura Abbott
  (?)
@ 2016-02-16 16:30     ` Christoph Lameter
  -1 siblings, 0 replies; 36+ messages in thread
From: Christoph Lameter @ 2016-02-16 16:30 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-mm, linux-kernel, kernel-hardening, Kees Cook

On Mon, 15 Feb 2016, Laura Abbott wrote:

> Since 19c7ff9ecd89 ("slub: Take node lock during object free checks"),
> check_object has been incorrectly returning success, as it falls through
> to the out label, which just returns the node. Thanks to refactoring,
> the out and fail paths are now basically the same. Combine the two
> into one and use a single label.

Acked-by: Christoph Lameter <cl@linux.com>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCHv2 3/4] slub: Convert SLAB_DEBUG_FREE to SLAB_CONSISTENCY_CHECKS
  2016-02-15 18:44   ` Laura Abbott
  (?)
@ 2016-02-16 16:32     ` Christoph Lameter
  -1 siblings, 0 replies; 36+ messages in thread
From: Christoph Lameter @ 2016-02-16 16:32 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-mm, linux-kernel, kernel-hardening, Kees Cook

On Mon, 15 Feb 2016, Laura Abbott wrote:

> SLAB_DEBUG_FREE allows expensive consistency checks at free
> to be turned on or off. Expand its use to be able to turn
> off all consistency checks. This gives a nice speed up if
> you only want features such as poisoning or tracing.

Acked-by: Christoph Lameter <cl@linux.com>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCHv2 4/4] slub: Relax CMPXCHG consistency restrictions
  2016-02-15 18:44   ` Laura Abbott
  (?)
@ 2016-02-16 16:33     ` Christoph Lameter
  -1 siblings, 0 replies; 36+ messages in thread
From: Christoph Lameter @ 2016-02-16 16:33 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-mm, linux-kernel, kernel-hardening, Kees Cook

On Mon, 15 Feb 2016, Laura Abbott wrote:

> When debug options are enabled, cmpxchg on the page is disabled. This is
> because the page must be locked to ensure there are no false positives
> when performing consistency checks. Some debug options, such as poisoning
> and red zoning, only act on the object itself, so there is no need to
> protect other CPUs from modifications that touch only the object. Allow
> cmpxchg to be used when poisoning and red zoning are set on a slab.

Acked-by: Christoph Lameter <cl@linux.com>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCHv2 0/4] Improve performance for SLAB_POISON
  2016-02-15 18:44 ` Laura Abbott
  (?)
@ 2016-02-18  8:39   ` Joonsoo Kim
  -1 siblings, 0 replies; 36+ messages in thread
From: Joonsoo Kim @ 2016-02-18  8:39 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Andrew Morton,
	Linux Memory Management List, LKML, kernel-hardening, Kees Cook

2016-02-16 3:44 GMT+09:00 Laura Abbott <labbott@fedoraproject.org>:
> Hi,
>
> This is a follow up to my previous series
> (http://lkml.kernel.org/g/<1453770913-32287-1-git-send-email-labbott@fedoraproject.org>)
> This series takes the suggestion of Christoph Lameter and only focuses on
> optimizing the slow path where the debug processing runs. The two main
> optimizations in this series are letting the consistency checks be skipped and
> relaxing the cmpxchg restrictions when we are not doing consistency checks.
> With hackbench -g 20 -l 1000 averaged over 100 runs:
>
> Before slub_debug=P
> mean 15.607
> variance .086
> stdev .294
>
> After slub_debug=P
> mean 10.836
> variance .155
> stdev .394
>
> This still isn't as fast as what is in grsecurity unfortunately so there's still
> work to be done. Profiling ___slab_alloc shows that 25-50% of time is spent in
> deactivate_slab. I haven't looked too closely to see if this is something that
> can be optimized.

There is something to optimize. deactivate_slab() deactivates the
freelist's objects one by one, which is wasteful. It also deactivates the
freelist in two phases: it frees every object except the last one, and
then frees the last object while holding the node lock. That could be
optimized as well, although I haven't thought about it deeply.
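
Roughly, phase 1 looks like this (a simplified sketch of the current
deactivate_slab() loop with some bookkeeping trimmed; the helpers are
the real mm/slub.c ones, and 'new' is the local struct page holding the
updated counters):

	while (freelist && (nextfree = get_freepointer(s, freelist))) {
		void *prior;
		unsigned long counters;

		/* One cmpxchg_double per object just to push it back. */
		do {
			prior = page->freelist;
			counters = page->counters;
			set_freepointer(s, freelist, prior);
			new.counters = counters;
			new.inuse--;
		} while (!__cmpxchg_double_slab(s, page,
			prior, counters,
			freelist, new.counters,
			"drain percpu freelist"));

		freelist = nextfree;
	}
	/* Phase 2: the last object is then freed under the node lock. */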

Thanks.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCHv2 1/4] slub: Drop lock at the end of free_debug_processing
  2016-02-15 18:44   ` Laura Abbott
  (?)
@ 2016-02-24 14:22     ` Paolo Bonzini
  -1 siblings, 0 replies; 36+ messages in thread
From: Paolo Bonzini @ 2016-02-24 14:22 UTC (permalink / raw)
  To: Laura Abbott, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton
  Cc: linux-mm, linux-kernel, kernel-hardening, Kees Cook



On 15/02/2016 19:44, Laura Abbott wrote:
> -static inline struct kmem_cache_node *free_debug_processing(
> +static inline int free_debug_processing(
>  	struct kmem_cache *s, struct page *page,
>  	void *head, void *tail, int bulk_cnt,
>  	unsigned long addr, unsigned long *flags) { return NULL; }

I think this has a leftover flags argument.
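
Presumably the !CONFIG_SLUB_DEBUG stub wants to drop that argument (and
return 0 rather than NULL, now that the return type is int), along the
lines of:

	static inline int free_debug_processing(
		struct kmem_cache *s, struct page *page,
		void *head, void *tail, int bulk_cnt,
		unsigned long addr) { return 0; }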

Paolo

> @@ -2648,8 +2646,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>  	stat(s, FREE_SLOWPATH);
>  
>  	if (kmem_cache_debug(s) &&
> -	    !(n = free_debug_processing(s, page, head, tail, cnt,
> -					addr, &flags)))
> +	    !free_debug_processing(s, page, head, tail, cnt, addr))

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCHv2 1/4] slub: Drop lock at the end of free_debug_processing
  2016-02-24 14:22     ` Paolo Bonzini
  (?)
@ 2016-02-24 18:09       ` Laura Abbott
  -1 siblings, 0 replies; 36+ messages in thread
From: Laura Abbott @ 2016-02-24 18:09 UTC (permalink / raw)
  To: Paolo Bonzini, Laura Abbott, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton
  Cc: linux-mm, linux-kernel, kernel-hardening, Kees Cook

On 02/24/2016 06:22 AM, Paolo Bonzini wrote:
>
>
> On 15/02/2016 19:44, Laura Abbott wrote:
>> -static inline struct kmem_cache_node *free_debug_processing(
>> +static inline int free_debug_processing(
>>   	struct kmem_cache *s, struct page *page,
>>   	void *head, void *tail, int bulk_cnt,
>>   	unsigned long addr, unsigned long *flags) { return NULL; }
>
> I think this has a leftover flags argument.
>
> Paolo
>

Yes, I believe Andrew has folded a fix into the mm tree.

Thanks,
Laura

>> @@ -2648,8 +2646,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>>   	stat(s, FREE_SLOWPATH);
>>
>>   	if (kmem_cache_debug(s) &&
>> -	    !(n = free_debug_processing(s, page, head, tail, cnt,
>> -					addr, &flags)))
>> +	    !free_debug_processing(s, page, head, tail, cnt, addr))

^ permalink raw reply	[flat|nested] 36+ messages in thread
