* [PATCH v1] [mm] Set page->slab_cache for every page allocated for a kmem_cache.
@ 2016-05-27 17:14 Alexander Potapenko
2016-05-27 17:30 ` Christoph Lameter
0 siblings, 1 reply; 3+ messages in thread
From: Alexander Potapenko @ 2016-05-27 17:14 UTC (permalink / raw)
To: adech.fo, cl, dvyukov, akpm, rostedt, iamjoonsoo.kim, js1304,
kcc, aryabinin
Cc: kasan-dev, linux-mm, linux-kernel
It's reasonable to rely on the fact that for every page allocated for a
kmem_cache, the |slab_cache| field points to that cache. Without that, it's
hard to figure out which cache an allocated object belongs to.
Fixes: 55834c59098d0c5a97b0f324 ("mm: kasan: initial memory quarantine implementation")
Signed-off-by: Alexander Potapenko <glider@google.com>
---
mm/slab.c | 7 ++++++-
mm/slub.c | 8 +++++---
2 files changed, 11 insertions(+), 4 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index cc8bbc1..ac6c251 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2703,8 +2703,13 @@ static void slab_put_obj(struct kmem_cache *cachep,
static void slab_map_pages(struct kmem_cache *cache, struct page *page,
void *freelist)
{
- page->slab_cache = cache;
+ int i, nr_pages;
+ char *start = page_address(page);
+
page->freelist = freelist;
+ nr_pages = (1 << cache->gfporder);
+ for (i = 0; i < nr_pages; i++)
+ virt_to_page(start + PAGE_SIZE * i)->slab_cache = cache;
}
/*
diff --git a/mm/slub.c b/mm/slub.c
index 825ff45..fc75ddb 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1411,7 +1411,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
struct kmem_cache_order_objects oo = s->oo;
gfp_t alloc_gfp;
void *start, *p;
- int idx, order;
+ int idx, order, i, pages;
flags &= gfp_allowed_mask;
@@ -1442,9 +1442,9 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
stat(s, ORDER_FALLBACK);
}
+ pages = 1 << oo_order(oo);
if (kmemcheck_enabled &&
!(s->flags & (SLAB_NOTRACK | DEBUG_DEFAULT_FLAGS))) {
- int pages = 1 << oo_order(oo);
kmemcheck_alloc_shadow(page, oo_order(oo), alloc_gfp, node);
@@ -1461,13 +1461,15 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
page->objects = oo_objects(oo);
order = compound_order(page);
- page->slab_cache = s;
__SetPageSlab(page);
if (page_is_pfmemalloc(page))
SetPageSlabPfmemalloc(page);
start = page_address(page);
+ for (i = 0; i < pages; i++)
+ virt_to_page(start + PAGE_SIZE * i)->slab_cache = s;
+
if (unlikely(s->flags & SLAB_POISON))
memset(start, POISON_INUSE, PAGE_SIZE << order);
--
2.8.0.rc3.226.g39d4020
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* Re: [PATCH v1] [mm] Set page->slab_cache for every page allocated for a kmem_cache.
2016-05-27 17:14 [PATCH v1] [mm] Set page->slab_cache for every page allocated for a kmem_cache Alexander Potapenko
@ 2016-05-27 17:30 ` Christoph Lameter
2016-05-27 17:41 ` Alexander Potapenko
0 siblings, 1 reply; 3+ messages in thread
From: Christoph Lameter @ 2016-05-27 17:30 UTC (permalink / raw)
To: Alexander Potapenko
Cc: adech.fo, dvyukov, akpm, rostedt, iamjoonsoo.kim, js1304, kcc,
aryabinin, kasan-dev, linux-mm, linux-kernel
On Fri, 27 May 2016, Alexander Potapenko wrote:
> It's reasonable to rely on the fact that for every page allocated for a
> kmem_cache, the |slab_cache| field points to that cache. Without that, it's
> hard to figure out which cache an allocated object belongs to.
The flags are set only in the head page of a compound page, which is what
SLAB uses. No need to do this. This would just mean unnecessarily dirtying
struct page cachelines on allocation.
* Re: [PATCH v1] [mm] Set page->slab_cache for every page allocated for a kmem_cache.
2016-05-27 17:30 ` Christoph Lameter
@ 2016-05-27 17:41 ` Alexander Potapenko
0 siblings, 0 replies; 3+ messages in thread
From: Alexander Potapenko @ 2016-05-27 17:41 UTC (permalink / raw)
To: Christoph Lameter
Cc: Andrey Konovalov, Dmitriy Vyukov, Andrew Morton, Steven Rostedt,
Joonsoo Kim, Joonsoo Kim, Kostya Serebryany, Andrey Ryabinin,
kasan-dev, Linux Memory Management List, LKML
On Fri, May 27, 2016 at 7:30 PM, Christoph Lameter <cl@linux.com> wrote:
> On Fri, 27 May 2016, Alexander Potapenko wrote:
>
>> It's reasonable to rely on the fact that for every page allocated for a
>> kmem_cache, the |slab_cache| field points to that cache. Without that, it's
>> hard to figure out which cache an allocated object belongs to.
>
> The flags are set only in the head page of a compound page, which is what
> SLAB uses. No need to do this. This would just mean unnecessarily dirtying
> struct page cachelines on allocation.
>
Got it, thank you.
Looks like I just need to make sure my code uses
virt_to_head_page()->slab_cache to get the cache for an object.
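The lookup Alexander describes would look roughly like this (a sketch against the kernel API of the period; the helper name is invented for illustration):

```c
/* Hypothetical helper: resolve an object pointer to its kmem_cache.
 * virt_to_head_page() maps a tail page of a compound slab page to the
 * head page, which is the only page with slab_cache set. */
static struct kmem_cache *obj_to_cache(const void *obj)
{
	struct page *page = virt_to_head_page(obj);

	return page->slab_cache;
}
```

With this, consumers such as KASAN need no per-tail-page slab_cache writes, which is why the patch was dropped.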
--
Alexander Potapenko
Software Engineer
Google Germany GmbH
Erika-Mann-Straße, 33
80636 München
Geschäftsführer: Matthew Scott Sucherman, Paul Terence Manicle
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg