* [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats
From: Aruna Ramakrishna @ 2016-08-04 19:01 UTC (permalink / raw)
To: linux-mm, linux-kernel
Cc: Mike Kravetz, Christoph Lameter, Pekka Enberg, David Rientjes,
Joonsoo Kim, Andrew Morton
On large systems, when some slab caches grow to millions of objects (and
many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
During this time, interrupts are disabled while walking the slab lists
(slabs_full, slabs_partial, and slabs_free) for each node, and this
sometimes causes timeouts in other drivers (for instance, Infiniband).
This patch optimizes 'cat /proc/slabinfo' by maintaining a counter for
total number of allocated slabs per node, per cache. This counter is
updated when a slab is created or destroyed. This enables us to skip
traversing the slabs_full list while gathering slabinfo statistics, and
since slabs_full tends to be the biggest list when the cache is large, it
results in a dramatic performance improvement. Getting slabinfo statistics
now only requires walking the slabs_free and slabs_partial lists, and
those lists are usually much smaller than slabs_full. We tested this after
growing the dentry cache to 70GB, and the performance improved from 2s to
5ms.
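To illustrate the arithmetic (the figures below are made up, purely for
illustration; the variable names are the ones used in get_slabinfo()):

	/*
	 * Example (made-up figures): num_slabs = 1,000,000 comes straight
	 * from the per-node counter, num_slabs_partial = 40 and
	 * num_slabs_free = 10 come from walking the two short lists,
	 * and cachep->num = 16 objects per slab.
	 */
	num_slabs_full = num_slabs - (num_slabs_partial + num_slabs_free); /* 999,950 */
	active_slabs = num_slabs - num_slabs_free;                          /* 999,990 */
	active_objs += num_slabs_full * cachep->num; /* all objects in full slabs are live */
	num_objs = num_slabs * cachep->num;

Only the 50 slabs on slabs_partial and slabs_free are ever touched.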
Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
Note: this has been tested only on x86_64.
mm/slab.c | 25 ++++++++++++++++---------
mm/slab.h | 15 ++++++++++++++-
mm/slub.c | 19 +------------------
3 files changed, 31 insertions(+), 28 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 261147b..d683840 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -233,6 +233,7 @@ static void kmem_cache_node_init(struct kmem_cache_node *parent)
spin_lock_init(&parent->list_lock);
parent->free_objects = 0;
parent->free_touched = 0;
+ atomic_long_set(&parent->nr_slabs, 0);
}
#define MAKE_LIST(cachep, listp, slab, nodeid) \
@@ -2333,6 +2334,7 @@ static int drain_freelist(struct kmem_cache *cache,
n->free_objects -= cache->num;
spin_unlock_irq(&n->list_lock);
slab_destroy(cache, page);
+ atomic_long_dec(&n->nr_slabs);
nr_freed++;
}
out:
@@ -2736,6 +2738,8 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep,
if (gfpflags_allow_blocking(local_flags))
local_irq_disable();
+ atomic_long_inc(&n->nr_slabs);
+
return page;
opps1:
@@ -3455,6 +3459,7 @@ static void free_block(struct kmem_cache *cachep, void **objpp,
page = list_last_entry(&n->slabs_free, struct page, lru);
list_move(&page->lru, list);
+ atomic_long_dec(&n->nr_slabs);
}
}
@@ -4111,6 +4116,8 @@ void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo)
unsigned long num_objs;
unsigned long active_slabs = 0;
unsigned long num_slabs, free_objects = 0, shared_avail = 0;
+ unsigned long num_slabs_partial = 0, num_slabs_free = 0;
+ unsigned long num_slabs_full = 0;
const char *name;
char *error = NULL;
int node;
@@ -4120,36 +4127,36 @@ void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo)
num_slabs = 0;
for_each_kmem_cache_node(cachep, node, n) {
+ num_slabs += node_nr_slabs(n);
check_irq_on();
spin_lock_irq(&n->list_lock);
- list_for_each_entry(page, &n->slabs_full, lru) {
- if (page->active != cachep->num && !error)
- error = "slabs_full accounting error";
- active_objs += cachep->num;
- active_slabs++;
- }
list_for_each_entry(page, &n->slabs_partial, lru) {
if (page->active == cachep->num && !error)
error = "slabs_partial accounting error";
if (!page->active && !error)
error = "slabs_partial accounting error";
active_objs += page->active;
- active_slabs++;
+ num_slabs_partial++;
}
+
list_for_each_entry(page, &n->slabs_free, lru) {
if (page->active && !error)
error = "slabs_free accounting error";
- num_slabs++;
+ num_slabs_free++;
}
+
free_objects += n->free_objects;
if (n->shared)
shared_avail += n->shared->avail;
spin_unlock_irq(&n->list_lock);
}
- num_slabs += active_slabs;
num_objs = num_slabs * cachep->num;
+ active_slabs = num_slabs - num_slabs_free;
+ num_slabs_full = num_slabs - (num_slabs_partial + num_slabs_free);
+ active_objs += (num_slabs_full * cachep->num);
+
if (num_objs - active_objs != free_objects && !error)
error = "free_objects accounting error";
diff --git a/mm/slab.h b/mm/slab.h
index 9653f2e..5740cec 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -427,6 +427,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
*/
struct kmem_cache_node {
spinlock_t list_lock;
+ atomic_long_t nr_slabs;
#ifdef CONFIG_SLAB
struct list_head slabs_partial; /* partial list first, better asm code */
@@ -445,7 +446,6 @@ struct kmem_cache_node {
unsigned long nr_partial;
struct list_head partial;
#ifdef CONFIG_SLUB_DEBUG
- atomic_long_t nr_slabs;
atomic_long_t total_objects;
struct list_head full;
#endif
@@ -458,6 +458,19 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
return s->node[node];
}
+/* Tracking of the number of slabs for /proc/slabinfo and debugging purposes */
+static inline unsigned long slabs_node(struct kmem_cache *s, int node)
+{
+ struct kmem_cache_node *n = get_node(s, node);
+
+ return atomic_long_read(&n->nr_slabs);
+}
+
+static inline unsigned long node_nr_slabs(struct kmem_cache_node *n)
+{
+ return atomic_long_read(&n->nr_slabs);
+}
+
/*
* Iterator over all nodes. The body will be executed for each node that has
* a kmem_cache_node structure allocated (which is true for all online nodes)
diff --git a/mm/slub.c b/mm/slub.c
index 26eb6a99..b9f2607 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1006,19 +1006,6 @@ static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct
list_del(&page->lru);
}
-/* Tracking of the number of slabs for debugging purposes */
-static inline unsigned long slabs_node(struct kmem_cache *s, int node)
-{
- struct kmem_cache_node *n = get_node(s, node);
-
- return atomic_long_read(&n->nr_slabs);
-}
-
-static inline unsigned long node_nr_slabs(struct kmem_cache_node *n)
-{
- return atomic_long_read(&n->nr_slabs);
-}
-
static inline void inc_slabs_node(struct kmem_cache *s, int node, int objects)
{
struct kmem_cache_node *n = get_node(s, node);
@@ -1297,10 +1284,6 @@ unsigned long kmem_cache_flags(unsigned long object_size,
#define disable_higher_order_debug 0
-static inline unsigned long slabs_node(struct kmem_cache *s, int node)
- { return 0; }
-static inline unsigned long node_nr_slabs(struct kmem_cache_node *n)
- { return 0; }
static inline void inc_slabs_node(struct kmem_cache *s, int node,
int objects) {}
static inline void dec_slabs_node(struct kmem_cache *s, int node,
@@ -3258,8 +3241,8 @@ init_kmem_cache_node(struct kmem_cache_node *n)
n->nr_partial = 0;
spin_lock_init(&n->list_lock);
INIT_LIST_HEAD(&n->partial);
-#ifdef CONFIG_SLUB_DEBUG
atomic_long_set(&n->nr_slabs, 0);
+#ifdef CONFIG_SLUB_DEBUG
atomic_long_set(&n->total_objects, 0);
INIT_LIST_HEAD(&n->full);
#endif
--
1.8.3.1
* Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats
From: Andrew Morton @ 2016-08-04 21:06 UTC (permalink / raw)
To: Aruna Ramakrishna
Cc: linux-mm, linux-kernel, Mike Kravetz, Christoph Lameter,
Pekka Enberg, David Rientjes, Joonsoo Kim
On Thu, 4 Aug 2016 12:01:13 -0700 Aruna Ramakrishna <aruna.ramakrishna@oracle.com> wrote:
> On large systems, when some slab caches grow to millions of objects (and
> many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
> During this time, interrupts are disabled while walking the slab lists
> (slabs_full, slabs_partial, and slabs_free) for each node, and this
> sometimes causes timeouts in other drivers (for instance, Infiniband).
>
> This patch optimizes 'cat /proc/slabinfo' by maintaining a counter for
> total number of allocated slabs per node, per cache. This counter is
> updated when a slab is created or destroyed. This enables us to skip
> traversing the slabs_full list while gathering slabinfo statistics, and
> since slabs_full tends to be the biggest list when the cache is large, it
> results in a dramatic performance improvement. Getting slabinfo statistics
> now only requires walking the slabs_free and slabs_partial lists, and
> those lists are usually much smaller than slabs_full. We tested this after
> growing the dentry cache to 70GB, and the performance improved from 2s to
> 5ms.
I assume this is tested on both slab and slub?
It isn't the smallest of patches but given the seriousness of the
problem I think I'll tag it for -stable backporting.
* Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats
From: Aruna Ramakrishna @ 2016-08-04 21:49 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Mike Kravetz, Christoph Lameter,
Pekka Enberg, David Rientjes, Joonsoo Kim
On 08/04/2016 02:06 PM, Andrew Morton wrote:
> On Thu, 4 Aug 2016 12:01:13 -0700 Aruna Ramakrishna <aruna.ramakrishna@oracle.com> wrote:
>
>> On large systems, when some slab caches grow to millions of objects (and
>> many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
>> During this time, interrupts are disabled while walking the slab lists
>> (slabs_full, slabs_partial, and slabs_free) for each node, and this
>> sometimes causes timeouts in other drivers (for instance, Infiniband).
>>
>> This patch optimizes 'cat /proc/slabinfo' by maintaining a counter for
>> total number of allocated slabs per node, per cache. This counter is
>> updated when a slab is created or destroyed. This enables us to skip
>> traversing the slabs_full list while gathering slabinfo statistics, and
>> since slabs_full tends to be the biggest list when the cache is large, it
>> results in a dramatic performance improvement. Getting slabinfo statistics
>> now only requires walking the slabs_free and slabs_partial lists, and
>> those lists are usually much smaller than slabs_full. We tested this after
>> growing the dentry cache to 70GB, and the performance improved from 2s to
>> 5ms.
>
> I assume this is tested on both slab and slub?
>
> It isn't the smallest of patches but given the seriousness of the
> problem I think I'll tag it for -stable backporting.
>
This was only sanity-checked on slub. The performance tests were only
run on slab.
Thanks,
Aruna
* Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats
From: Joonsoo Kim @ 2016-08-05 0:35 UTC (permalink / raw)
To: Aruna Ramakrishna
Cc: Linux Memory Management List, LKML, Mike Kravetz,
Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton
2016-08-05 4:01 GMT+09:00 Aruna Ramakrishna <aruna.ramakrishna@oracle.com>:
> On large systems, when some slab caches grow to millions of objects (and
> many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
> During this time, interrupts are disabled while walking the slab lists
> (slabs_full, slabs_partial, and slabs_free) for each node, and this
> sometimes causes timeouts in other drivers (for instance, Infiniband).
>
> This patch optimizes 'cat /proc/slabinfo' by maintaining a counter for
> total number of allocated slabs per node, per cache. This counter is
> updated when a slab is created or destroyed. This enables us to skip
> traversing the slabs_full list while gathering slabinfo statistics, and
> since slabs_full tends to be the biggest list when the cache is large, it
> results in a dramatic performance improvement. Getting slabinfo statistics
> now only requires walking the slabs_free and slabs_partial lists, and
> those lists are usually much smaller than slabs_full. We tested this after
> growing the dentry cache to 70GB, and the performance improved from 2s to
> 5ms.
>
> Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
> Cc: Mike Kravetz <mike.kravetz@oracle.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> ---
> Note: this has been tested only on x86_64.
>
> mm/slab.c | 25 ++++++++++++++++---------
> mm/slab.h | 15 ++++++++++++++-
> mm/slub.c | 19 +------------------
> 3 files changed, 31 insertions(+), 28 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index 261147b..d683840 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -233,6 +233,7 @@ static void kmem_cache_node_init(struct kmem_cache_node *parent)
> spin_lock_init(&parent->list_lock);
> parent->free_objects = 0;
> parent->free_touched = 0;
> + atomic_long_set(&parent->nr_slabs, 0);
> }
>
> #define MAKE_LIST(cachep, listp, slab, nodeid) \
> @@ -2333,6 +2334,7 @@ static int drain_freelist(struct kmem_cache *cache,
> n->free_objects -= cache->num;
> spin_unlock_irq(&n->list_lock);
> slab_destroy(cache, page);
> + atomic_long_dec(&n->nr_slabs);
> nr_freed++;
> }
Please decrease the counter when a slab is detached from the list.
Otherwise, the counter and the number of slabs actually attached to
the lists can become inconsistent.
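Something along these lines in drain_freelist() (just a sketch of the
intended ordering, based on the hunk above):

	n->free_objects -= cache->num;
	atomic_long_dec(&n->nr_slabs);	/* decremented while the slab is being
					   unlinked, still under n->list_lock */
	spin_unlock_irq(&n->list_lock);
	slab_destroy(cache, page);
	nr_freed++;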
> out:
> @@ -2736,6 +2738,8 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep,
> if (gfpflags_allow_blocking(local_flags))
> local_irq_disable();
>
> + atomic_long_inc(&n->nr_slabs);
> +
> return page;
Please increase the counter when the slab is attached to the list,
i.e. in cache_grow_end().
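Roughly like this (a sketch only; the surrounding cache_grow_end() body is
paraphrased from memory and may not match the tree exactly). The point is
that the increment happens where the page is linked onto a list, under
n->list_lock:

	n = get_node(cachep, page_to_nid(page));
	spin_lock(&n->list_lock);
	atomic_long_inc(&n->nr_slabs);	/* the slab becomes visible on a list here */
	if (!page->active)
		list_add_tail(&page->lru, &n->slabs_free);
	else
		fixup_slab_list(cachep, n, page, &list);
	n->free_objects += cachep->num - page->active;
	spin_unlock(&n->list_lock);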
> opps1:
> @@ -3455,6 +3459,7 @@ static void free_block(struct kmem_cache *cachep, void **objpp,
>
> page = list_last_entry(&n->slabs_free, struct page, lru);
> list_move(&page->lru, list);
> + atomic_long_dec(&n->nr_slabs);
> }
> }
>
> @@ -4111,6 +4116,8 @@ void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo)
> unsigned long num_objs;
> unsigned long active_slabs = 0;
> unsigned long num_slabs, free_objects = 0, shared_avail = 0;
> + unsigned long num_slabs_partial = 0, num_slabs_free = 0;
> + unsigned long num_slabs_full = 0;
> const char *name;
> char *error = NULL;
> int node;
> @@ -4120,36 +4127,36 @@ void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo)
> num_slabs = 0;
> for_each_kmem_cache_node(cachep, node, n) {
>
> + num_slabs += node_nr_slabs(n);
> check_irq_on();
> spin_lock_irq(&n->list_lock);
>
> - list_for_each_entry(page, &n->slabs_full, lru) {
> - if (page->active != cachep->num && !error)
> - error = "slabs_full accounting error";
> - active_objs += cachep->num;
> - active_slabs++;
> - }
> list_for_each_entry(page, &n->slabs_partial, lru) {
> if (page->active == cachep->num && !error)
> error = "slabs_partial accounting error";
> if (!page->active && !error)
> error = "slabs_partial accounting error";
> active_objs += page->active;
> - active_slabs++;
> + num_slabs_partial++;
> }
> +
> list_for_each_entry(page, &n->slabs_free, lru) {
> if (page->active && !error)
> error = "slabs_free accounting error";
> - num_slabs++;
> + num_slabs_free++;
> }
> +
> free_objects += n->free_objects;
> if (n->shared)
> shared_avail += n->shared->avail;
>
> spin_unlock_irq(&n->list_lock);
> }
> - num_slabs += active_slabs;
> num_objs = num_slabs * cachep->num;
> + active_slabs = num_slabs - num_slabs_free;
> + num_slabs_full = num_slabs - (num_slabs_partial + num_slabs_free);
> + active_objs += (num_slabs_full * cachep->num);
> +
> if (num_objs - active_objs != free_objects && !error)
> error = "free_objects accounting error";
>
> diff --git a/mm/slab.h b/mm/slab.h
> index 9653f2e..5740cec 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -427,6 +427,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
> */
> struct kmem_cache_node {
> spinlock_t list_lock;
> + atomic_long_t nr_slabs;
If my comments above are addressed, all counting will be done while
holding the lock, so an atomic definition isn't needed for SLAB.
I think it's better not to commonize this counting.
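For illustration only (a sketch, not a concrete proposal), the field could
stay allocator-specific in mm/slab.h:

	struct kmem_cache_node {
		spinlock_t list_lock;

	#ifdef CONFIG_SLAB
		struct list_head slabs_partial;	/* partial list first, better asm code */
		struct list_head slabs_full;
		struct list_head slabs_free;
		unsigned long nr_slabs;		/* SLAB: only updated under
						   list_lock, plain long is enough */
		/* ... other SLAB-only fields ... */
	#endif

	#ifdef CONFIG_SLUB
		unsigned long nr_partial;
		struct list_head partial;
	#ifdef CONFIG_SLUB_DEBUG
		atomic_long_t nr_slabs;		/* SLUB: updated without the lock,
						   must stay atomic */
		atomic_long_t total_objects;
		struct list_head full;
	#endif
	#endif
	};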
Thanks.
* Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats
From: Christoph Lameter @ 2016-08-05 14:17 UTC (permalink / raw)
To: Aruna Ramakrishna
Cc: linux-mm, linux-kernel, Mike Kravetz, Pekka Enberg,
David Rientjes, Joonsoo Kim, Andrew Morton
On Thu, 4 Aug 2016, Aruna Ramakrishna wrote:
> On large systems, when some slab caches grow to millions of objects (and
> many gigabytes), running 'cat /proc/slabinfo' can take up to 1-2 seconds.
> During this time, interrupts are disabled while walking the slab lists
> (slabs_full, slabs_partial, and slabs_free) for each node, and this
> sometimes causes timeouts in other drivers (for instance, Infiniband).
Acked-by: Christoph Lameter <cl@linux.com>
* Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats
From: Christoph Lameter @ 2016-08-05 14:21 UTC (permalink / raw)
To: Joonsoo Kim
Cc: Aruna Ramakrishna, Linux Memory Management List, LKML,
Mike Kravetz, Pekka Enberg, David Rientjes, Joonsoo Kim,
Andrew Morton
On Fri, 5 Aug 2016, Joonsoo Kim wrote:
> If my comments above are addressed, all counting will be done while
> holding the lock, so an atomic definition isn't needed for SLAB.
Ditto for slub. struct kmem_cache_node is already defined in mm/slab.h.
Thus it is a common definition already and can be used by both.
Making nr_slabs and total_objects unsigned long would be great.
* Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats
From: Joonsoo Kim @ 2016-08-16 3:03 UTC (permalink / raw)
To: Christoph Lameter
Cc: Aruna Ramakrishna, Linux Memory Management List, LKML,
Mike Kravetz, Pekka Enberg, David Rientjes, Andrew Morton
On Fri, Aug 05, 2016 at 09:21:56AM -0500, Christoph Lameter wrote:
> On Fri, 5 Aug 2016, Joonsoo Kim wrote:
>
> > If my comments above are addressed, all counting will be done while
> > holding the lock, so an atomic definition isn't needed for SLAB.
>
> Ditto for slub. struct kmem_cache_node is already defined in mm/slab.h.
> Thus it is a common definition already and can be used by both.
>
> Making nr_slabs and total_objects unsigned long would be great.
In SLUB, nr_slabs is manipulated without holding a lock, so an atomic
operation should be used.
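For reference, this is roughly what the SLUB-side update looks like in
mm/slub.c (simplified; it runs in the slab allocation path with no
n->list_lock held):

	static inline void inc_slabs_node(struct kmem_cache *s, int node, int objects)
	{
		struct kmem_cache_node *n = get_node(s, node);

		/* n can be NULL while the kmem_cache_node cache itself is
		 * being bootstrapped */
		if (likely(n)) {
			atomic_long_inc(&n->nr_slabs);
			atomic_long_add(objects, &n->total_objects);
		}
	}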
Anyway, Aruna, could you handle my comments?
Thanks.
* Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats
From: Christoph Lameter @ 2016-08-16 15:52 UTC (permalink / raw)
To: Joonsoo Kim
Cc: Aruna Ramakrishna, Linux Memory Management List, LKML,
Mike Kravetz, Pekka Enberg, David Rientjes, Andrew Morton
On Tue, 16 Aug 2016, Joonsoo Kim wrote:
> In SLUB, nr_slabs is manipulated without holding a lock, so an atomic
> operation should be used.
It could be moved under the node lock.
* Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats
From: aruna.ramakrishna @ 2016-08-17 7:13 UTC (permalink / raw)
To: Christoph Lameter, Joonsoo Kim
Cc: Linux Memory Management List, LKML, Mike Kravetz, Pekka Enberg,
David Rientjes, Andrew Morton
On 08/16/2016 08:52 AM, Christoph Lameter wrote:
>
> On Tue, 16 Aug 2016, Joonsoo Kim wrote:
>
>> In SLUB, nr_slabs is manipulated without holding a lock, so an atomic
>> operation should be used.
>
> It could be moved under the node lock.
>
Christoph, Joonsoo,
I agree that nr_slabs could be common between SLAB and SLUB, but I think
that should be a separate patch, since converting nr_slabs to unsigned
long for SLUB will cause quite a bit of change in mm/slub.c that is not
related to adding counters to SLAB.
I'll send out an updated slab counters patch with Joonsoo's suggested
fix tomorrow (nr_slabs will be unsigned long for SLAB only, and there
will be a separate definition for SLUB), and once that's in, I'll create
a new patch that makes nr_slabs common for SLAB and SLUB, and also
converts total_objects to unsigned long. Maybe it can include some more
cleanup too. Does that sound acceptable?
Thanks,
Aruna
* Re: [PATCH v2] mm/slab: Improve performance of gathering slabinfo stats
From: Christoph Lameter @ 2016-08-17 14:36 UTC (permalink / raw)
To: aruna.ramakrishna
Cc: Joonsoo Kim, Linux Memory Management List, LKML, Mike Kravetz,
Pekka Enberg, David Rientjes, Andrew Morton
On Wed, 17 Aug 2016, aruna.ramakrishna@oracle.com wrote:
> I'll send out an updated slab counters patch with Joonsoo's suggested fix
> tomorrow (nr_slabs will be unsigned long for SLAB only, and there will be a
> separate definition for SLUB), and once that's in, I'll create a new patch
> that makes nr_slabs common for SLAB and SLUB, and also converts total_objects
> to unsigned long. Maybe it can include some more cleanup too. Does that sound
> acceptable?
That's fine.