* [PATCH] reduce fragmentation due to kmem_cache_alloc_node
@ 2004-10-09 20:37 Manfred Spraul
2004-10-11 17:12 ` Badari Pulavarty
2004-10-15 18:08 ` Badari Pulavarty
0 siblings, 2 replies; 5+ messages in thread
From: Manfred Spraul @ 2004-10-09 20:37 UTC (permalink / raw)
To: Andrew Morton; +Cc: linux-mm, Linux Kernel Mailing List, pbadari
Hi Andrew,
attached is a patch that fixes the fragmentation that Badari noticed with
kmem_cache_alloc_node. Could you add it to the mm tree? The patch is
against 2.6.9-rc3-mm3.
Description:
kmem_cache_alloc_node tries to allocate memory from a given node. The
current implementation contains two bugs:
- The node-aware code was used even on !CONFIG_NUMA systems. Fix: an
inline function that redefines kmem_cache_alloc_node as kmem_cache_alloc
for !CONFIG_NUMA.
- The code always allocated a new slab for each allocation. This
caused severe fragmentation. Fix: walk the slabp lists and search for a
matching page instead of allocating a new page.
- The patch also adds a new statistics field for node-local allocs. They
should be rare - the codepath is quite slow, especially compared to the
normal kmem_cache_alloc.
Badari: Could you test it?
Andrew, could you add the patch to the next -mm kernel? I'm running it
right now, no obvious problems.
Signed-Off-By: Manfred Spraul <manfred@colorfullife.com>
[-- Attachment #2: patch-nodealloc --]
[-- Type: text/plain, Size: 7240 bytes --]
// $Header$
// Kernel Version:
// VERSION = 2
// PATCHLEVEL = 6
// SUBLEVEL = 9
// EXTRAVERSION = -rc3-mm3
--- 2.6/include/linux/slab.h 2004-10-09 21:49:45.000000000 +0200
+++ build-2.6/include/linux/slab.h 2004-10-09 21:55:19.660780713 +0200
@@ -61,7 +61,14 @@
extern int kmem_cache_destroy(kmem_cache_t *);
extern int kmem_cache_shrink(kmem_cache_t *);
extern void *kmem_cache_alloc(kmem_cache_t *, int);
+#ifdef CONFIG_NUMA
extern void *kmem_cache_alloc_node(kmem_cache_t *, int);
+#else
+static inline void *kmem_cache_alloc_node(kmem_cache_t *cachep, int node)
+{
+ return kmem_cache_alloc(cachep, GFP_KERNEL);
+}
+#endif
extern void kmem_cache_free(kmem_cache_t *, void *);
extern unsigned int kmem_cache_size(kmem_cache_t *);
--- 2.6/mm/slab.c 2004-10-09 21:49:59.000000000 +0200
+++ build-2.6/mm/slab.c 2004-10-09 21:57:58.705171317 +0200
@@ -327,6 +327,7 @@
unsigned long reaped;
unsigned long errors;
unsigned long max_freeable;
+ unsigned long node_allocs;
atomic_t allochit;
atomic_t allocmiss;
atomic_t freehit;
@@ -361,6 +362,7 @@
(x)->high_mark = (x)->num_active; \
} while (0)
#define STATS_INC_ERR(x) ((x)->errors++)
+#define STATS_INC_NODEALLOCS(x) ((x)->node_allocs++)
#define STATS_SET_FREEABLE(x, i) \
do { if ((x)->max_freeable < i) \
(x)->max_freeable = i; \
@@ -378,6 +380,7 @@
#define STATS_INC_REAPED(x) do { } while (0)
#define STATS_SET_HIGH(x) do { } while (0)
#define STATS_INC_ERR(x) do { } while (0)
+#define STATS_INC_NODEALLOCS(x) do { } while (0)
#define STATS_SET_FREEABLE(x, i) \
do { } while (0)
@@ -1747,7 +1750,7 @@
* Grow (by 1) the number of slabs within a cache. This is called by
* kmem_cache_alloc() when there are no active objs left in a cache.
*/
-static int cache_grow (kmem_cache_t * cachep, int flags)
+static int cache_grow (kmem_cache_t * cachep, int flags, int nodeid)
{
struct slab *slabp;
void *objp;
@@ -1798,7 +1801,7 @@
/* Get mem for the objs. */
- if (!(objp = kmem_getpages(cachep, flags, -1)))
+ if (!(objp = kmem_getpages(cachep, flags, nodeid)))
goto failed;
/* Get slab management. */
@@ -2032,7 +2035,7 @@
if (unlikely(!ac->avail)) {
int x;
- x = cache_grow(cachep, flags);
+ x = cache_grow(cachep, flags, -1);
// cache_grow can reenable interrupts, then ac could change.
ac = ac_data(cachep);
@@ -2313,6 +2316,7 @@
return 0;
}
+#ifdef CONFIG_NUMA
/**
* kmem_cache_alloc_node - Allocate an object on the specified node
* @cachep: The cache to allocate from.
@@ -2325,69 +2329,80 @@
*/
void *kmem_cache_alloc_node(kmem_cache_t *cachep, int nodeid)
{
- size_t offset;
+ int loop;
void *objp;
struct slab *slabp;
kmem_bufctl_t next;
- /* The main algorithms are not node aware, thus we have to cheat:
- * We bypass all caches and allocate a new slab.
- * The following code is a streamlined copy of cache_grow().
- */
+ for (loop = 0;;loop++) {
+ struct list_head *q;
- /* Get colour for the slab, and update the next value. */
- spin_lock_irq(&cachep->spinlock);
- offset = cachep->colour_next;
- cachep->colour_next++;
- if (cachep->colour_next >= cachep->colour)
- cachep->colour_next = 0;
- offset *= cachep->colour_off;
- spin_unlock_irq(&cachep->spinlock);
-
- /* Get mem for the objs. */
- if (!(objp = kmem_getpages(cachep, GFP_KERNEL, nodeid)))
- goto failed;
+ objp = NULL;
+ check_irq_on();
+ spin_lock_irq(&cachep->spinlock);
+ /* Walk through all partial and free slabs and find one
+ * from the right node. */
+ list_for_each(q,&cachep->lists.slabs_partial) {
+ slabp = list_entry(q, struct slab, list);
+
+ if (page_to_nid(virt_to_page(slabp->s_mem)) == nodeid ||
+ loop > 2)
+ goto got_slabp;
+ }
+ list_for_each(q, &cachep->lists.slabs_free) {
+ slabp = list_entry(q, struct slab, list);
+
+ if (page_to_nid(virt_to_page(slabp->s_mem)) == nodeid ||
+ loop > 2)
+ goto got_slabp;
+ }
+ spin_unlock_irq(&cachep->spinlock);
- /* Get slab management. */
- if (!(slabp = alloc_slabmgmt(cachep, objp, offset, GFP_KERNEL)))
- goto opps1;
+ local_irq_disable();
+ if (!cache_grow(cachep, GFP_KERNEL, nodeid)) {
+ local_irq_enable();
+ return NULL;
+ }
+ local_irq_enable();
+ }
+got_slabp:
+ /* found one: allocate object */
+ check_slabp(cachep, slabp);
+ check_spinlock_acquired(cachep);
- set_slab_attr(cachep, slabp, objp);
- cache_init_objs(cachep, slabp, SLAB_CTOR_CONSTRUCTOR);
+ STATS_INC_ALLOCED(cachep);
+ STATS_INC_ACTIVE(cachep);
+ STATS_SET_HIGH(cachep);
+ STATS_INC_NODEALLOCS(cachep);
- /* The first object is ours: */
objp = slabp->s_mem + slabp->free*cachep->objsize;
+
slabp->inuse++;
next = slab_bufctl(slabp)[slabp->free];
#if DEBUG
slab_bufctl(slabp)[slabp->free] = BUFCTL_FREE;
#endif
slabp->free = next;
-
- /* add the remaining objects into the cache */
- spin_lock_irq(&cachep->spinlock);
check_slabp(cachep, slabp);
- STATS_INC_GROWN(cachep);
- /* Make slab active. */
- if (slabp->free == BUFCTL_END) {
- list_add_tail(&slabp->list, &(list3_data(cachep)->slabs_full));
- } else {
- list_add_tail(&slabp->list,
- &(list3_data(cachep)->slabs_partial));
- list3_data(cachep)->free_objects += cachep->num-1;
- }
+
+ /* move slabp to correct slabp list: */
+ list_del(&slabp->list);
+ if (slabp->free == BUFCTL_END)
+ list_add(&slabp->list, &cachep->lists.slabs_full);
+ else
+ list_add(&slabp->list, &cachep->lists.slabs_partial);
+
+ list3_data(cachep)->free_objects--;
spin_unlock_irq(&cachep->spinlock);
+
objp = cache_alloc_debugcheck_after(cachep, GFP_KERNEL, objp,
__builtin_return_address(0));
return objp;
-opps1:
- kmem_freepages(cachep, objp);
-failed:
- return NULL;
-
}
EXPORT_SYMBOL(kmem_cache_alloc_node);
+#endif
+
/**
* kmalloc - allocate memory
* @size: how many bytes of memory are required.
@@ -2812,15 +2827,16 @@
* without _too_ many complaints.
*/
#if STATS
- seq_puts(m, "slabinfo - version: 2.0 (statistics)\n");
+ seq_puts(m, "slabinfo - version: 2.1 (statistics)\n");
#else
- seq_puts(m, "slabinfo - version: 2.0\n");
+ seq_puts(m, "slabinfo - version: 2.1\n");
#endif
seq_puts(m, "# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>");
seq_puts(m, " : tunables <batchcount> <limit> <sharedfactor>");
seq_puts(m, " : slabdata <active_slabs> <num_slabs> <sharedavail>");
#if STATS
- seq_puts(m, " : globalstat <listallocs> <maxobjs> <grown> <reaped> <error> <maxfreeable> <freelimit>");
+ seq_puts(m, " : globalstat <listallocs> <maxobjs> <grown> <reaped>"
+ " <error> <maxfreeable> <freelimit> <nodeallocs>");
seq_puts(m, " : cpustat <allochit> <allocmiss> <freehit> <freemiss>");
#endif
seq_putc(m, '\n');
@@ -2911,10 +2927,11 @@
unsigned long errors = cachep->errors;
unsigned long max_freeable = cachep->max_freeable;
unsigned long free_limit = cachep->free_limit;
+ unsigned long node_allocs = cachep->node_allocs;
- seq_printf(m, " : globalstat %7lu %6lu %5lu %4lu %4lu %4lu %4lu",
+ seq_printf(m, " : globalstat %7lu %6lu %5lu %4lu %4lu %4lu %4lu %4lu",
allocs, high, grown, reaped, errors,
- max_freeable, free_limit);
+ max_freeable, free_limit, node_allocs);
}
/* cpu stats */
{
* Re: [PATCH] reduce fragmentation due to kmem_cache_alloc_node
From: Badari Pulavarty @ 2004-10-11 17:12 UTC (permalink / raw)
To: Manfred Spraul; +Cc: Andrew Morton, linux-mm, Linux Kernel Mailing List
Manfred,
This patch seems to work fine on my AMD machine.
I tested your patch on 2.6.9-rc2-mm3.
It seemed to have fixed the fragmentation problem I was
observing, but I don't think it fixed the problem
completely. I still see some fragmentation with
repeated tests of scsi-debug, but it could be due
to the test. I will collect more numbers.
Thanks,
Badari
On Sat, 2004-10-09 at 13:37, Manfred Spraul wrote:
> Hi Andrew,
>
> attached is a patch that fixes the fragmentation that Badari noticed with
> kmem_cache_alloc_node. Could you add it to the mm tree? The patch is
> against 2.6.9-rc3-mm3.
>
> Description:
> kmem_cache_alloc_node tries to allocate memory from a given node. The
> current implementation contains two bugs:
> - the node aware code was used even for !CONFIG_NUMA systems. Fix:
> inline function that redefines kmem_cache_alloc_node as kmem_cache_alloc
> for !CONFIG_NUMA.
> - the code always allocated a new slab for each new allocation. This
> caused severe fragmentation. Fix: walk the slabp lists and search for a
> matching page instead of allocating a new page.
> - the patch also adds a new statistics field for node-local allocs. They
> should be rare - the codepath is quite slow, especially compared to the
> normal kmem_cache_alloc.
>
> Badari: Could you test it?
> Andrew, could you add the patch to the next -mm kernel? I'm running it
> right now, no obvious problems.
>
> Signed-Off-By: Manfred Spraul <manfred@colorfullife.com>
* Re: [PATCH] reduce fragmentation due to kmem_cache_alloc_node
From: Manfred Spraul @ 2004-10-11 18:07 UTC (permalink / raw)
To: Badari Pulavarty; +Cc: Andrew Morton, linux-mm, Linux Kernel Mailing List
Badari Pulavarty wrote:
>Manfred,
>
>This patch seems to work fine on my AMD machine.
>I tested your patch on 2.6.9-rc2-mm3.
>
>It seemed to have fixed fragmentation problem I was
>observing, but I don't think it fixed the problem
>completely. I still see some fragmentation, with
>repeated tests of scsi-debug, but it could be due
>to the test. I will collect more numbers..
>
>
>
Did you disable the !CONFIG_NUMA block from <linux/slab.h> or leave it
enabled? If the CONFIG_NUMA test is in the header file, then my patch
is identical to your proposal, except that I've changed the global
declaration instead of just the call from alloc_percpu.
--
Manfred
* Re: [PATCH] reduce fragmentation due to kmem_cache_alloc_node
From: Badari Pulavarty @ 2004-10-15 18:08 UTC (permalink / raw)
To: Manfred Spraul; +Cc: Andrew Morton, linux-mm, Linux Kernel Mailing List
Manfred & Andrew,
I am happy with the patch. This seems to have fixed the slab
fragmentation problem with the scsi_debug tests.
But I still have an issue with my scsi-debug tests. I don't think it
is related to this patch. When I do

    while :
    do
        simulate 1000 scsi-debug disks
        freeup 1000 scsi-debug disks
    done
I see the number of size-64 "inuse" objects increasing. Eventually it
fills up the entire low-mem. I guess freeing up the scsi-debug disks
is not cleaning up all the allocations :(

But one question I have: is it possible for a size-64 slab to be held
because it contains a management allocation (a slabp, i.e. a 40-byte
allocation) from alloc_slabmgmt()? I remember seeing this earlier. Is
it worth moving all management allocations to their own slab? Should I
try it?
Thanks,
Badari
On Sat, 2004-10-09 at 13:37, Manfred Spraul wrote:
> Hi Andrew,
>
> attached is a patch that fixes the fragmentation that Badari noticed with
> kmem_cache_alloc_node. Could you add it to the mm tree? The patch is
> against 2.6.9-rc3-mm3.
>
> Description:
> kmem_cache_alloc_node tries to allocate memory from a given node. The
> current implementation contains two bugs:
> - the node aware code was used even for !CONFIG_NUMA systems. Fix:
> inline function that redefines kmem_cache_alloc_node as kmem_cache_alloc
> for !CONFIG_NUMA.
> - the code always allocated a new slab for each new allocation. This
> caused severe fragmentation. Fix: walk the slabp lists and search for a
> matching page instead of allocating a new page.
> - the patch also adds a new statistics field for node-local allocs. They
> should be rare - the codepath is quite slow, especially compared to the
> normal kmem_cache_alloc.
>
> Badari: Could you test it?
> Andrew, could you add the patch to the next -mm kernel? I'm running it
> right now, no obvious problems.
>
> Signed-Off-By: Manfred Spraul <manfred@colorfullife.com>
* Re: [PATCH] reduce fragmentation due to kmem_cache_alloc_node
From: Badari Pulavarty @ 2004-10-15 22:33 UTC (permalink / raw)
To: Manfred Spraul; +Cc: Andrew Morton, linux-mm, Linux Kernel Mailing List
On Fri, 2004-10-15 at 11:08, Badari Pulavarty wrote:
>
> I see size-64 "inuse" objects increasing. Eventually, it fills
> up entire low-mem. I guess while freeing up scsi-debug disks,
> is not cleaning up all the allocations :(
>
> But one question I have is - Is it possible to hold size-64 slab,
> because it has a management allocation (slabp - 40 byte allocations)
> from alloc_slabmgmt() ? I remember seeing this earlier. Is it worth
> moving all management allocations to its own slab ? should I try it ?
Nope. Moving the "slabp" allocations to their own slab didn't fix
anything. I guess scsi-debug is not cleaning up properly :(
Thanks,
Badari