* kmalloc() fix^2
@ 2003-04-10 21:50 David Mosberger
2003-04-11 0:04 ` Brian Gerst
0 siblings, 1 reply; 2+ messages in thread
From: David Mosberger @ 2003-04-10 21:50 UTC (permalink / raw)
To: akpm; +Cc: linux-kernel
It's all very embarrassing, but my previous patch was horribly broken:
it added the NULL terminator to the wrong array... Of course, adding
it to the correct array uncovered another bug... ;-(

Patch below fixes both problems. Really, I mean it.

Andrew, you may want to double-check---I didn't look through all of
slab.c for other potential problems (there was no other mention of
ARRAY_SIZE(malloc_sizes), though).
Thanks,
--david
===== mm/slab.c 1.74 vs edited =====
--- 1.74/mm/slab.c	Wed Apr  9 13:28:18 2003
+++ edited/mm/slab.c	Thu Apr 10 14:43:44 2003
@@ -383,6 +383,7 @@
 } malloc_sizes[] = {
 #define CACHE(x) { .cs_size = (x) },
 #include <linux/kmalloc_sizes.h>
+	{0, }
 #undef CACHE
 };
@@ -393,7 +394,6 @@
 } cache_names[] = {
 #define CACHE(x) { .name = "size-" #x, .name_dma = "size-" #x "(DMA)" },
 #include <linux/kmalloc_sizes.h>
-	{ 0, }
 #undef CACHE
 };
@@ -604,7 +604,7 @@
 	if (num_physpages > (32 << 20) >> PAGE_SHIFT)
 		slab_break_gfp_order = BREAK_GFP_ORDER_HI;
-	for (i = 0; i < ARRAY_SIZE(malloc_sizes); i++) {
+	for (i = 0; i < ARRAY_SIZE(malloc_sizes) - 1; i++) {
 		struct cache_sizes *sizes = malloc_sizes + i;
 		/* For performance, all the general caches are L1 aligned.
 		 * This should be particularly beneficial on SMP boxes, as it
* Re: kmalloc() fix^2
2003-04-10 21:50 kmalloc() fix^2 David Mosberger
@ 2003-04-11 0:04 ` Brian Gerst
0 siblings, 0 replies; 2+ messages in thread
From: Brian Gerst @ 2003-04-11 0:04 UTC (permalink / raw)
To: davidm; +Cc: akpm, linux-kernel
[-- Attachment #1: Type: text/plain, Size: 653 bytes --]
David Mosberger wrote:
> It's all very embarrassing, but my previous patch was horribly broken:
> it added the NULL terminator to the wrong array... Of course, adding
> it to the correct array uncovered another bug... ;-(
>
> Patch below fixes both problems. Really, I mean it.
>
> Andrew, you may want to double-check---I didn't look through all of
> slab.c for other potential problems (there was no other mention of
> ARRAY_SIZE(malloc_sizes), though).
>
> Thanks,
>
> --david
>
My fault, I got a bit overzealous while cleaning that code up. Here's a
better patch that gets rid of ARRAY_SIZE entirely, since the NULL
terminator is re-added.
--
Brian Gerst
[-- Attachment #2: kmalloc_sizes-3 --]
[-- Type: text/plain, Size: 1801 bytes --]
--- linux-2.5.67-bk/mm/slab.c	2003-04-10 08:36:50.000000000 -0400
+++ linux/mm/slab.c	2003-04-10 19:18:08.000000000 -0400
@@ -383,11 +383,12 @@
 } malloc_sizes[] = {
 #define CACHE(x) { .cs_size = (x) },
 #include <linux/kmalloc_sizes.h>
+	{ 0, }
 #undef CACHE
 };
 /* Must match cache_sizes above. Out of line to keep cache footprint low. */
-static struct {
+static struct cache_names {
 	char *name;
 	char *name_dma;
 } cache_names[] = {
@@ -596,7 +597,9 @@
  */
 void __init kmem_cache_sizes_init(void)
 {
-	int i;
+	struct cache_sizes *sizes = malloc_sizes;
+	struct cache_names *names = cache_names;
+
 	/*
 	 * Fragmentation resistance on low memory - only use bigger
 	 * page orders on machines with more than 32MB of memory.
@@ -604,15 +607,14 @@
 	if (num_physpages > (32 << 20) >> PAGE_SHIFT)
 		slab_break_gfp_order = BREAK_GFP_ORDER_HI;
-	for (i = 0; i < ARRAY_SIZE(malloc_sizes); i++) {
-		struct cache_sizes *sizes = malloc_sizes + i;
+	while (sizes->cs_size) {
 		/* For performance, all the general caches are L1 aligned.
 		 * This should be particularly beneficial on SMP boxes, as it
 		 * eliminates "false sharing".
 		 * Note for systems short on memory removing the alignment will
 		 * allow tighter packing of the smaller caches. */
 		sizes->cs_cachep = kmem_cache_create(
-			cache_names[i].name, sizes->cs_size,
+			names->name, sizes->cs_size,
 			0, SLAB_HWCACHE_ALIGN, NULL, NULL);
 		if (!sizes->cs_cachep)
 			BUG();
@@ -624,10 +626,13 @@
 		}
 		sizes->cs_dmacachep = kmem_cache_create(
-			cache_names[i].name_dma, sizes->cs_size,
+			names->name_dma, sizes->cs_size,
 			0, SLAB_CACHE_DMA|SLAB_HWCACHE_ALIGN, NULL, NULL);
 		if (!sizes->cs_dmacachep)
 			BUG();
+
+		sizes++;
+		names++;
 	}
 	/*
 	 * The generic caches are running - time to kick out the