* [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory @ 2019-08-15 0:16 Daniel Axtens 2019-08-15 0:16 ` [PATCH v4 1/3] " Daniel Axtens ` (3 more replies) 0 siblings, 4 replies; 12+ messages in thread From: Daniel Axtens @ 2019-08-15 0:16 UTC (permalink / raw) To: kasan-dev, linux-mm, x86, aryabinin, glider, luto, linux-kernel, mark.rutland, dvyukov Cc: linuxppc-dev, gor, Daniel Axtens Currently, vmalloc space is backed by the early shadow page. This means that kasan is incompatible with VMAP_STACK, and it also provides a hurdle for architectures that do not have a dedicated module space (like powerpc64). This series provides a mechanism to back vmalloc space with real, dynamically allocated memory. I have only wired up x86, because that's the only currently supported arch I can work with easily, but it's very easy to wire up other architectures. This has been discussed before in the context of VMAP_STACK: - https://bugzilla.kernel.org/show_bug.cgi?id=202009 - https://lkml.org/lkml/2018/7/22/198 - https://lkml.org/lkml/2019/7/19/822 In terms of implementation details: Most mappings in vmalloc space are small, requiring less than a full page of shadow space. Allocating a full shadow page per mapping would therefore be wasteful. Furthermore, to ensure that different mappings use different shadow pages, mappings would have to be aligned to KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE. Instead, share backing space across multiple mappings. Allocate a backing page the first time a mapping in vmalloc space uses a particular page of the shadow region. Keep this page around regardless of whether the mapping is later freed - in the mean time the page could have become shared by another vmalloc mapping. This can in theory lead to unbounded memory growth, but the vmalloc allocator is pretty good at reusing addresses, so the practical memory usage appears to grow at first but then stay fairly stable. If we run into practical memory exhaustion issues, I'm happy to consider hooking into the book-keeping that vmap does, but I am not convinced that it will be an issue. v1: https://lore.kernel.org/linux-mm/20190725055503.19507-1-dja@axtens.net/ v2: https://lore.kernel.org/linux-mm/20190729142108.23343-1-dja@axtens.net/ Address review comments: - Patch 1: use kasan_unpoison_shadow's built-in handling of ranges that do not align to a full shadow byte - Patch 3: prepopulate pgds rather than faulting things in v3: https://lore.kernel.org/linux-mm/20190731071550.31814-1-dja@axtens.net/ Address comments from Mark Rutland: - kasan_populate_vmalloc is a better name - handle concurrency correctly - various nits and cleanups - relax module alignment in KASAN_VMALLOC case v4: Changes to patch 1 only: - Integrate Mark's rework, thanks Mark! 
- handle the case where kasan_populate_shadow might fail - poison shadow on free, allowing the alloc path to just unpoison memory that it uses Daniel Axtens (3): kasan: support backing vmalloc space with real shadow memory fork: support VMAP_STACK with KASAN_VMALLOC x86/kasan: support KASAN_VMALLOC Documentation/dev-tools/kasan.rst | 60 +++++++++++++++++++++++++++ arch/Kconfig | 9 +++-- arch/x86/Kconfig | 1 + arch/x86/mm/kasan_init_64.c | 61 ++++++++++++++++++++++++++++ include/linux/kasan.h | 24 +++++++++++ include/linux/moduleloader.h | 2 +- include/linux/vmalloc.h | 12 ++++++ kernel/fork.c | 4 ++ lib/Kconfig.kasan | 16 ++++++++ lib/test_kasan.c | 26 ++++++++++++ mm/kasan/common.c | 67 +++++++++++++++++++++++++++++++ mm/kasan/generic_report.c | 3 ++ mm/kasan/kasan.h | 1 + mm/vmalloc.c | 28 ++++++++++++- 14 files changed, 308 insertions(+), 6 deletions(-) -- 2.20.1 ^ permalink raw reply [flat|nested] 12+ messages in thread
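As background for the discussion below: generic KASAN translates an address to its shadow with a shift plus an offset, one shadow byte per KASAN_SHADOW_SCALE_SIZE (8) bytes of memory. A minimal sketch of the two directions, mirroring the kernel's kasan_mem_to_shadow() and kasan_shadow_to_mem() helpers (the offset shown here is the x86_64 value, for illustration only; it is per-architecture):

#define KASAN_SHADOW_SCALE_SHIFT	3	/* 1 shadow byte : 8 bytes */
#define KASAN_SHADOW_OFFSET	0xdffffc0000000000UL	/* x86_64 value */

/* mem -> shadow: where the instrumentation looks for poison */
static inline void *mem_to_shadow(const void *addr)
{
	return (void *)(((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
			+ KASAN_SHADOW_OFFSET);
}

/* shadow -> mem: the inverse, handy for decoding faults on shadow addresses */
static inline void *shadow_to_mem(const void *shadow_addr)
{
	return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET)
			<< KASAN_SHADOW_SCALE_SHIFT);
}

One shadow page therefore covers PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE bytes (32KiB with 4KiB pages) of vmalloc space, which is why several small mappings will often land on the same shadow page - the motivation for sharing backing pages rather than aligning every mapping.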
* [PATCH v4 1/3] kasan: support backing vmalloc space with real shadow memory 2019-08-15 0:16 [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory Daniel Axtens @ 2019-08-15 0:16 ` Daniel Axtens 2019-08-16 7:47 ` Christophe Leroy 2019-08-15 0:16 ` [PATCH v4 2/3] fork: support VMAP_STACK with KASAN_VMALLOC Daniel Axtens ` (2 subsequent siblings) 3 siblings, 1 reply; 12+ messages in thread From: Daniel Axtens @ 2019-08-15 0:16 UTC (permalink / raw) To: kasan-dev, linux-mm, x86, aryabinin, glider, luto, linux-kernel, mark.rutland, dvyukov Cc: linuxppc-dev, gor, Daniel Axtens Hook into vmalloc and vmap, and dynamically allocate real shadow memory to back the mappings. Most mappings in vmalloc space are small, requiring less than a full page of shadow space. Allocating a full shadow page per mapping would therefore be wasteful. Furthermore, to ensure that different mappings use different shadow pages, mappings would have to be aligned to KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE. Instead, share backing space across multiple mappings. Allocate a backing page the first time a mapping in vmalloc space uses a particular page of the shadow region. Keep this page around regardless of whether the mapping is later freed - in the mean time the page could have become shared by another vmalloc mapping. This can in theory lead to unbounded memory growth, but the vmalloc allocator is pretty good at reusing addresses, so the practical memory usage grows at first but then stays fairly stable. This requires architecture support to actually use: arches must stop mapping the read-only zero page over the portion of the shadow region that covers the vmalloc space and instead leave it unmapped. This allows KASAN with VMAP_STACK, and will be needed for architectures that do not have a separate module space (e.g. powerpc64, which I am currently working on). It also allows relaxing the module alignment back to PAGE_SIZE. Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009 Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Daniel Axtens <dja@axtens.net> [Mark: rework shadow allocation] Signed-off-by: Mark Rutland <mark.rutland@arm.com> -- v2: let kasan_unpoison_shadow deal with ranges that do not use a full shadow byte. v3: relax module alignment rename to kasan_populate_vmalloc which is a much better name deal with concurrency correctly v4: Integrate Mark's rework Poison pages on vfree Handle allocation failures. I've tested this by inserting artificial failures and using test_vmalloc to stress it. I haven't handled the per-cpu case: it looked like it would require a messy hacking-up of the function to deal with an OOM failure case in a debug feature. --- Documentation/dev-tools/kasan.rst | 60 +++++++++++++++++++++++++++ include/linux/kasan.h | 24 +++++++++++ include/linux/moduleloader.h | 2 +- include/linux/vmalloc.h | 12 ++++++ lib/Kconfig.kasan | 16 ++++++++ lib/test_kasan.c | 26 ++++++++++++ mm/kasan/common.c | 67 +++++++++++++++++++++++++++++++ mm/kasan/generic_report.c | 3 ++ mm/kasan/kasan.h | 1 + mm/vmalloc.c | 28 ++++++++++++- 10 files changed, 237 insertions(+), 2 deletions(-) diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst index b72d07d70239..35fda484a672 100644 --- a/Documentation/dev-tools/kasan.rst +++ b/Documentation/dev-tools/kasan.rst @@ -215,3 +215,63 @@ brk handler is used to print bug reports.
A potential expansion of this mode is a hardware tag-based mode, which would use hardware memory tagging support instead of compiler instrumentation and manual shadow memory manipulation. + +What memory accesses are sanitised by KASAN? +-------------------------------------------- + +The kernel maps memory in a number of different parts of the address +space. This poses something of a problem for KASAN, which requires +that all addresses accessed by instrumented code have a valid shadow +region. + +The range of kernel virtual addresses is large: there is not enough +real memory to support a real shadow region for every address that +could be accessed by the kernel. + +By default +~~~~~~~~~~ + +By default, architectures only map real memory over the shadow region +for the linear mapping (and potentially other small areas). For all +other areas - such as vmalloc and vmemmap space - a single read-only +page is mapped over the shadow area. This read-only shadow page +declares all memory accesses as permitted. + +This presents a problem for modules: they do not live in the linear +mapping, but in a dedicated module space. By hooking in to the module +allocator, KASAN can temporarily map real shadow memory to cover +them. This allows detection of invalid accesses to module globals, for +example. + +This also creates an incompatibility with ``VMAP_STACK``: if the stack +lives in vmalloc space, it will be shadowed by the read-only page, and +the kernel will fault when trying to set up the shadow data for stack +variables. + +CONFIG_KASAN_VMALLOC +~~~~~~~~~~~~~~~~~~~~ + +With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the +cost of greater memory usage. Currently this is only supported on x86. + +This works by hooking into vmalloc and vmap, and dynamically +allocating real shadow memory to back the mappings. + +Most mappings in vmalloc space are small, requiring less than a full +page of shadow space. Allocating a full shadow page per mapping would +therefore be wasteful. Furthermore, to ensure that different mappings +use different shadow pages, mappings would have to be aligned to +``KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE``. + +Instead, we share backing space across multiple mappings. We allocate +a backing page the first time a mapping in vmalloc space uses a +particular page of the shadow region. We keep this page around +regardless of whether the mapping is later freed - in the mean time +this page could have become shared by another vmalloc mapping. + +This can in theory lead to unbounded memory growth, but the vmalloc +allocator is pretty good at reusing addresses, so the practical memory +usage grows at first but then stays fairly stable. + +This allows ``VMAP_STACK`` support on x86, and enables support of +architectures that do not have a fixed module region. diff --git a/include/linux/kasan.h b/include/linux/kasan.h index cc8a03cc9674..d666748cd378 100644 --- a/include/linux/kasan.h +++ b/include/linux/kasan.h @@ -70,8 +70,18 @@ struct kasan_cache { int free_meta_offset; }; +/* + * These functions provide a special case to support backing module + * allocations with real shadow memory. With KASAN vmalloc, the special + * case is unnecessary, as the work is handled in the generic case. 
+ */ +#ifndef CONFIG_KASAN_VMALLOC int kasan_module_alloc(void *addr, size_t size); void kasan_free_shadow(const struct vm_struct *vm); +#else +static inline int kasan_module_alloc(void *addr, size_t size) { return 0; } +static inline void kasan_free_shadow(const struct vm_struct *vm) {} +#endif int kasan_add_zero_shadow(void *start, unsigned long size); void kasan_remove_zero_shadow(void *start, unsigned long size); @@ -194,4 +204,18 @@ static inline void *kasan_reset_tag(const void *addr) #endif /* CONFIG_KASAN_SW_TAGS */ +#ifdef CONFIG_KASAN_VMALLOC +int kasan_populate_vmalloc(unsigned long requested_size, + struct vm_struct *area); +void kasan_free_vmalloc(void *start, unsigned long size); +#else +static inline int kasan_populate_vmalloc(unsigned long requested_size, + struct vm_struct *area) +{ + return 0; +} + +static inline void kasan_free_vmalloc(void *start, unsigned long size) {} +#endif + #endif /* LINUX_KASAN_H */ diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h index 5229c18025e9..ca92aea8a6bd 100644 --- a/include/linux/moduleloader.h +++ b/include/linux/moduleloader.h @@ -91,7 +91,7 @@ void module_arch_cleanup(struct module *mod); /* Any cleanup before freeing mod->module_init */ void module_arch_freeing_init(struct module *mod); -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN) && !defined(CONFIG_KASAN_VMALLOC) #include <linux/kasan.h> #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) #else diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 9b21d0047710..cdc7a60f7d81 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -21,6 +21,18 @@ struct notifier_block; /* in notifier.h */ #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ #define VM_NO_GUARD 0x00000040 /* don't add guard page */ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ + +/* + * VM_KASAN is used slightly differently depending on CONFIG_KASAN_VMALLOC. + * + * If IS_ENABLED(CONFIG_KASAN_VMALLOC), VM_KASAN is set on a vm_struct after + * shadow memory has been mapped. It's used to handle allocation errors so that + * we don't try to poison shadow on free if it was never allocated. + * + * Otherwise, VM_KASAN is set for kasan_module_alloc() allocations and used to + * determine which allocations need the module shadow freed. + */ + /* * Memory with VM_FLUSH_RESET_PERMS cannot be freed in an interrupt or with * vfree_atomic(). diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan index 4fafba1a923b..a320dc2e9317 100644 --- a/lib/Kconfig.kasan +++ b/lib/Kconfig.kasan @@ -6,6 +6,9 @@ config HAVE_ARCH_KASAN config HAVE_ARCH_KASAN_SW_TAGS bool +config HAVE_ARCH_KASAN_VMALLOC + bool + config CC_HAS_KASAN_GENERIC def_bool $(cc-option, -fsanitize=kernel-address) @@ -135,6 +138,19 @@ config KASAN_S390_4_LEVEL_PAGING to 3TB of RAM with KASan enabled). This options allows to force 4-level paging instead. +config KASAN_VMALLOC + bool "Back mappings in vmalloc space with real shadow memory" + depends on KASAN && HAVE_ARCH_KASAN_VMALLOC + help + By default, the shadow region for vmalloc space is the read-only + zero page. This means that KASAN cannot detect errors involving + vmalloc space. + + Enabling this option will hook in to vmap/vmalloc and back those + mappings with real shadow memory allocated on demand. This allows + for KASAN to detect more sorts of errors (and to support vmapped + stacks), but at the cost of higher memory usage.
+ config TEST_KASAN tristate "Module for testing KASAN for bug detection" depends on m && KASAN diff --git a/lib/test_kasan.c b/lib/test_kasan.c index b63b367a94e8..d375246f5f96 100644 --- a/lib/test_kasan.c +++ b/lib/test_kasan.c @@ -18,6 +18,7 @@ #include <linux/slab.h> #include <linux/string.h> #include <linux/uaccess.h> +#include <linux/vmalloc.h> /* * Note: test functions are marked noinline so that their names appear in @@ -709,6 +710,30 @@ static noinline void __init kmalloc_double_kzfree(void) kzfree(ptr); } +#ifdef CONFIG_KASAN_VMALLOC +static noinline void __init vmalloc_oob(void) +{ + void *area; + + pr_info("vmalloc out-of-bounds\n"); + + /* + * We have to be careful not to hit the guard page. + * The MMU will catch that and crash us. + */ + area = vmalloc(3000); + if (!area) { + pr_err("Allocation failed\n"); + return; + } + + ((volatile char *)area)[3100]; + vfree(area); +} +#else +static void __init vmalloc_oob(void) {} +#endif + static int __init kmalloc_tests_init(void) { /* @@ -752,6 +777,7 @@ static int __init kmalloc_tests_init(void) kasan_strings(); kasan_bitops(); kmalloc_double_kzfree(); + vmalloc_oob(); kasan_restore_multi_shot(multishot); diff --git a/mm/kasan/common.c b/mm/kasan/common.c index 2277b82902d8..b8374e3773cf 100644 --- a/mm/kasan/common.c +++ b/mm/kasan/common.c @@ -568,6 +568,7 @@ void kasan_kfree_large(void *ptr, unsigned long ip) /* The object will be poisoned by page_alloc. */ } +#ifndef CONFIG_KASAN_VMALLOC int kasan_module_alloc(void *addr, size_t size) { void *ret; @@ -603,6 +604,7 @@ void kasan_free_shadow(const struct vm_struct *vm) if (vm->flags & VM_KASAN) vfree(kasan_mem_to_shadow(vm->addr)); } +#endif extern void __kasan_report(unsigned long addr, size_t size, bool is_write, unsigned long ip); @@ -722,3 +724,68 @@ static int __init kasan_memhotplug_init(void) core_initcall(kasan_memhotplug_init); #endif + +#ifdef CONFIG_KASAN_VMALLOC +static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr, + void *unused) +{ + unsigned long page; + pte_t pte; + + if (likely(!pte_none(*ptep))) + return 0; + + page = __get_free_page(GFP_KERNEL); + if (!page) + return -ENOMEM; + + memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE); + pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL); + + /* + * Ensure poisoning is visible before the shadow is made visible + * to other CPUs. 
+ */ + smp_wmb(); + + spin_lock(&init_mm.page_table_lock); + if (likely(pte_none(*ptep))) { + set_pte_at(&init_mm, addr, ptep, pte); + page = 0; + } + spin_unlock(&init_mm.page_table_lock); + if (page) + free_page(page); + return 0; +} + +int kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area) +{ + unsigned long shadow_start, shadow_end; + int ret; + + shadow_start = (unsigned long)kasan_mem_to_shadow(area->addr); + shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE); + shadow_end = (unsigned long)kasan_mem_to_shadow( + area->addr + area->size); + shadow_end = ALIGN(shadow_end, PAGE_SIZE); + + ret = apply_to_page_range(&init_mm, shadow_start, + shadow_end - shadow_start, + kasan_populate_vmalloc_pte, NULL); + if (ret) + return ret; + + kasan_unpoison_shadow(area->addr, requested_size); + + area->flags |= VM_KASAN; + + return 0; +} + +void kasan_free_vmalloc(void *start, unsigned long size) +{ + size = round_up(size, KASAN_SHADOW_SCALE_SIZE); + kasan_poison_shadow(start, size, KASAN_VMALLOC_INVALID); +} +#endif diff --git a/mm/kasan/generic_report.c b/mm/kasan/generic_report.c index 36c645939bc9..2d97efd4954f 100644 --- a/mm/kasan/generic_report.c +++ b/mm/kasan/generic_report.c @@ -86,6 +86,9 @@ static const char *get_shadow_bug_type(struct kasan_access_info *info) case KASAN_ALLOCA_RIGHT: bug_type = "alloca-out-of-bounds"; break; + case KASAN_VMALLOC_INVALID: + bug_type = "vmalloc-out-of-bounds"; + break; } return bug_type; diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h index 014f19e76247..8b1f2fbc780b 100644 --- a/mm/kasan/kasan.h +++ b/mm/kasan/kasan.h @@ -25,6 +25,7 @@ #endif #define KASAN_GLOBAL_REDZONE 0xFA /* redzone for global variable */ +#define KASAN_VMALLOC_INVALID 0xF9 /* unallocated space in vmapped page */ /* * Stack redzone shadow values diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 4fa8d84599b0..c20a7e663004 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -2056,6 +2056,22 @@ static struct vm_struct *__get_vm_area_node(unsigned long size, setup_vmalloc_vm(area, va, flags, caller); + /* + * For KASAN, if we are in vmalloc space, we need to cover the shadow + * area with real memory. If we come here through VM_ALLOC, this is + * done by a higher level function that has access to the true size, + * which might not be a full page. + * + * We assume module space comes via VM_ALLOC path. + */ + if (is_vmalloc_addr(area->addr) && !(area->flags & VM_ALLOC)) { + if (kasan_populate_vmalloc(area->size, area)) { + unmap_vmap_area(va); + kfree(area); + return NULL; + } + } + return area; } @@ -2233,6 +2249,9 @@ static void __vunmap(const void *addr, int deallocate_pages) debug_check_no_locks_freed(area->addr, get_vm_area_size(area)); debug_check_no_obj_freed(area->addr, get_vm_area_size(area)); + if (area->flags & VM_KASAN) + kasan_free_vmalloc(area->addr, area->size); + vm_remove_mappings(area, deallocate_pages); if (deallocate_pages) { @@ -2483,6 +2502,9 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align, if (!addr) return NULL; + if (kasan_populate_vmalloc(real_size, area)) + return NULL; + /* * In this function, newly allocated vm_struct has VM_UNINITIALIZED * flag. It means that vm_struct is not fully initialized. 
@@ -3324,10 +3346,14 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets, spin_unlock(&vmap_area_lock); /* insert all vm's */ - for (area = 0; area < nr_vms; area++) + for (area = 0; area < nr_vms; area++) { setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC, pcpu_get_vm_areas); + /* assume success here */ + kasan_populate_vmalloc(sizes[area], vms[area]); + } + kfree(vas); return vms; -- 2.20.1 ^ permalink raw reply related [flat|nested] 12+ messages in thread
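One detail of the new test worth spelling out: for the vmalloc(3000) allocation above there are two distinct failure surfaces, and the test deliberately lands in the KASAN one rather than the MMU one. A sketch of the layout, assuming 4KiB pages (offsets are just arithmetic on the test's own values):

void *p = vmalloc(3000);	/* rounds up to one page, plus a guard page */

/*
 * [p + 0,    p + 2999]: requested bytes - shadow unpoisoned, access OK.
 * [p + 3000, p + 4095]: tail of the page - shadow holds
 *                       KASAN_VMALLOC_INVALID (0xF9), so an access here
 *                       is reported as vmalloc-out-of-bounds by KASAN.
 * [p + 4096, ...     ]: guard page - not mapped at all, so the MMU
 *                       faults before KASAN can report anything.
 */
((volatile char *)p)[3100];	/* poisoned tail: caught by KASAN */
vfree(p);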
* Re: [PATCH v4 1/3] kasan: support backing vmalloc space with real shadow memory 2019-08-15 0:16 ` [PATCH v4 1/3] " Daniel Axtens @ 2019-08-16 7:47 ` Christophe Leroy 2019-08-16 17:08 ` Mark Rutland 0 siblings, 1 reply; 12+ messages in thread From: Christophe Leroy @ 2019-08-16 7:47 UTC (permalink / raw) To: Daniel Axtens, kasan-dev, linux-mm, x86, aryabinin, glider, luto, linux-kernel, mark.rutland, dvyukov Cc: linuxppc-dev, gor Le 15/08/2019 à 02:16, Daniel Axtens a écrit : > Hook into vmalloc and vmap, and dynamically allocate real shadow > memory to back the mappings. > > Most mappings in vmalloc space are small, requiring less than a full > page of shadow space. Allocating a full shadow page per mapping would > therefore be wasteful. Furthermore, to ensure that different mappings > use different shadow pages, mappings would have to be aligned to > KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE. > > Instead, share backing space across multiple mappings. Allocate > a backing page the first time a mapping in vmalloc space uses a > particular page of the shadow region. Keep this page around > regardless of whether the mapping is later freed - in the mean time > the page could have become shared by another vmalloc mapping. > > This can in theory lead to unbounded memory growth, but the vmalloc > allocator is pretty good at reusing addresses, so the practical memory > usage grows at first but then stays fairly stable. I guess people having gigabytes of memory don't mind, but I'm concerned about tiny targets with very little amount of memory. I have boards with as little as 32Mbytes of RAM. The shadow region for the linear space already takes one eighth of the RAM. I'd rather avoid keeping unused shadow pages busy. Each page of shadow memory represent 8 pages of real memory. Could we use page_ref to count how many pieces of a shadow page are used so that we can free it when the ref count decreases to 0. > > This requires architecture support to actually use: arches must stop > mapping the read-only zero page over portion of the shadow region that > covers the vmalloc space and instead leave it unmapped. Why 'must' ? Couldn't we switch back and forth from the zero page to real page on demand ? If the zero page is not mapped for unused vmalloc space, bad memory accesses will Oops on the shadow memory access instead of Oopsing on the real bad access, making it more difficult to locate and identify the issue. > > This allows KASAN with VMAP_STACK, and will be needed for architectures > that do not have a separate module space (e.g. powerpc64, which I am > currently working on). It also allows relaxing the module alignment > back to PAGE_SIZE. Why 'needed' ? powerpc32 doesn't have a separate module space and doesn't need that. > > Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009 > Acked-by: Vasily Gorbik <gor@linux.ibm.com> > Signed-off-by: Daniel Axtens <dja@axtens.net> > [Mark: rework shadow allocation] > Signed-off-by: Mark Rutland <mark.rutland@arm.com> > > -- > > v2: let kasan_unpoison_shadow deal with ranges that do not use a > full shadow byte. > > v3: relax module alignment > rename to kasan_populate_vmalloc which is a much better name > deal with concurrency correctly > > v4: Integrate Mark's rework > Poision pages on vfree > Handle allocation failures. I've tested this by inserting artificial > failures and using test_vmalloc to stress it. 
I haven't handled the > per-cpu case: it looked like it would require a messy hacking-up of > the function to deal with an OOM failure case in a debug feature. > > --- > Documentation/dev-tools/kasan.rst | 60 +++++++++++++++++++++++++++ > include/linux/kasan.h | 24 +++++++++++ > include/linux/moduleloader.h | 2 +- > include/linux/vmalloc.h | 12 ++++++ > lib/Kconfig.kasan | 16 ++++++++ > lib/test_kasan.c | 26 ++++++++++++ > mm/kasan/common.c | 67 +++++++++++++++++++++++++++++++ > mm/kasan/generic_report.c | 3 ++ > mm/kasan/kasan.h | 1 + > mm/vmalloc.c | 28 ++++++++++++- > 10 files changed, 237 insertions(+), 2 deletions(-) > > diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst > index b72d07d70239..35fda484a672 100644 > --- a/Documentation/dev-tools/kasan.rst > +++ b/Documentation/dev-tools/kasan.rst > @@ -215,3 +215,63 @@ brk handler is used to print bug reports. > A potential expansion of this mode is a hardware tag-based mode, which would > use hardware memory tagging support instead of compiler instrumentation and > manual shadow memory manipulation. > + > +What memory accesses are sanitised by KASAN? > +-------------------------------------------- > + > +The kernel maps memory in a number of different parts of the address > +space. This poses something of a problem for KASAN, which requires > +that all addresses accessed by instrumented code have a valid shadow > +region. > + > +The range of kernel virtual addresses is large: there is not enough > +real memory to support a real shadow region for every address that > +could be accessed by the kernel. > + > +By default > +~~~~~~~~~~ > + > +By default, architectures only map real memory over the shadow region > +for the linear mapping (and potentially other small areas). For all > +other areas - such as vmalloc and vmemmap space - a single read-only > +page is mapped over the shadow area. This read-only shadow page > +declares all memory accesses as permitted. > + > +This presents a problem for modules: they do not live in the linear > +mapping, but in a dedicated module space. By hooking in to the module > +allocator, KASAN can temporarily map real shadow memory to cover > +them. This allows detection of invalid accesses to module globals, for > +example. > + > +This also creates an incompatibility with ``VMAP_STACK``: if the stack > +lives in vmalloc space, it will be shadowed by the read-only page, and > +the kernel will fault when trying to set up the shadow data for stack > +variables. > + > +CONFIG_KASAN_VMALLOC > +~~~~~~~~~~~~~~~~~~~~ > + > +With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the > +cost of greater memory usage. Currently this is only supported on x86. > + > +This works by hooking into vmalloc and vmap, and dynamically > +allocating real shadow memory to back the mappings. > + > +Most mappings in vmalloc space are small, requiring less than a full > +page of shadow space. Allocating a full shadow page per mapping would > +therefore be wasteful. Furthermore, to ensure that different mappings > +use different shadow pages, mappings would have to be aligned to > +``KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE``. > + > +Instead, we share backing space across multiple mappings. We allocate > +a backing page the first time a mapping in vmalloc space uses a > +particular page of the shadow region. We keep this page around > +regardless of whether the mapping is later freed - in the mean time > +this page could have become shared by another vmalloc mapping. 
> + > +This can in theory lead to unbounded memory growth, but the vmalloc > +allocator is pretty good at reusing addresses, so the practical memory > +usage grows at first but then stays fairly stable. > + > +This allows ``VMAP_STACK`` support on x86, and enables support of > +architectures that do not have a fixed module region. That's wrong, powerpc32 doesn't have a fixed module region and is already supported. > diff --git a/include/linux/kasan.h b/include/linux/kasan.h > index cc8a03cc9674..d666748cd378 100644 > --- a/include/linux/kasan.h > +++ b/include/linux/kasan.h > @@ -70,8 +70,18 @@ struct kasan_cache { > int free_meta_offset; > }; > > +/* > + * These functions provide a special case to support backing module > + * allocations with real shadow memory. With KASAN vmalloc, the special > + * case is unnecessary, as the work is handled in the generic case. > + */ > +#ifndef CONFIG_KASAN_VMALLOC > int kasan_module_alloc(void *addr, size_t size); > void kasan_free_shadow(const struct vm_struct *vm); > +#else > +static inline int kasan_module_alloc(void *addr, size_t size) { return 0; } > +static inline void kasan_free_shadow(const struct vm_struct *vm) {} > +#endif > > int kasan_add_zero_shadow(void *start, unsigned long size); > void kasan_remove_zero_shadow(void *start, unsigned long size); > @@ -194,4 +204,18 @@ static inline void *kasan_reset_tag(const void *addr) > > #endif /* CONFIG_KASAN_SW_TAGS */ > > +#ifdef CONFIG_KASAN_VMALLOC > +int kasan_populate_vmalloc(unsigned long requested_size, > + struct vm_struct *area); > +void kasan_free_vmalloc(void *start, unsigned long size); > +#else > +static inline int kasan_populate_vmalloc(unsigned long requested_size, > + struct vm_struct *area) > +{ > + return 0; > +} > + > +static inline void kasan_free_vmalloc(void *start, unsigned long size) {} > +#endif > + > #endif /* LINUX_KASAN_H */ > diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h > index 5229c18025e9..ca92aea8a6bd 100644 > --- a/include/linux/moduleloader.h > +++ b/include/linux/moduleloader.h > @@ -91,7 +91,7 @@ void module_arch_cleanup(struct module *mod); > /* Any cleanup before freeing mod->module_init */ > void module_arch_freeing_init(struct module *mod); > > -#ifdef CONFIG_KASAN > +#if defined(CONFIG_KASAN) && !defined(CONFIG_KASAN_VMALLOC) > #include <linux/kasan.h> > #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) > #else > diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h > index 9b21d0047710..cdc7a60f7d81 100644 > --- a/include/linux/vmalloc.h > +++ b/include/linux/vmalloc.h > @@ -21,6 +21,18 @@ struct notifier_block; /* in notifier.h */ > #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ > #define VM_NO_GUARD 0x00000040 /* don't add guard page */ > #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ > + > +/* > + * VM_KASAN is used slighly differently depending on CONFIG_KASAN_VMALLOC. > + * > + * If IS_ENABLED(CONFIG_KASAN_VMALLOC), VM_KASAN is set on a vm_struct after > + * shadow memory has been mapped. It's used to handle allocation errors so that > + * we don't try to poision shadow on free if it was never allocated. > + * > + * Otherwise, VM_KASAN is set for kasan_module_alloc() allocations and used to > + * determine which allocations need the module shadow freed. > + */ > + > /* > * Memory with VM_FLUSH_RESET_PERMS cannot be freed in an interrupt or with > * vfree_atomic(). 
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan > index 4fafba1a923b..a320dc2e9317 100644 > --- a/lib/Kconfig.kasan > +++ b/lib/Kconfig.kasan > @@ -6,6 +6,9 @@ config HAVE_ARCH_KASAN > config HAVE_ARCH_KASAN_SW_TAGS > bool > > +config HAVE_ARCH_KASAN_VMALLOC > + bool > + > config CC_HAS_KASAN_GENERIC > def_bool $(cc-option, -fsanitize=kernel-address) > > @@ -135,6 +138,19 @@ config KASAN_S390_4_LEVEL_PAGING > to 3TB of RAM with KASan enabled). This options allows to force > 4-level paging instead. > > +config KASAN_VMALLOC > + bool "Back mappings in vmalloc space with real shadow memory" > + depends on KASAN && HAVE_ARCH_KASAN_VMALLOC > + help > + By default, the shadow region for vmalloc space is the read-only > + zero page. This means that KASAN cannot detect errors involving > + vmalloc space. > + > + Enabling this option will hook in to vmap/vmalloc and back those > + mappings with real shadow memory allocated on demand. This allows > + for KASAN to detect more sorts of errors (and to support vmapped > + stacks), but at the cost of higher memory usage. > + > config TEST_KASAN > tristate "Module for testing KASAN for bug detection" > depends on m && KASAN > diff --git a/lib/test_kasan.c b/lib/test_kasan.c > index b63b367a94e8..d375246f5f96 100644 > --- a/lib/test_kasan.c > +++ b/lib/test_kasan.c Could we put the testing part in a separate patch ? > @@ -18,6 +18,7 @@ > #include <linux/slab.h> > #include <linux/string.h> > #include <linux/uaccess.h> > +#include <linux/vmalloc.h> > > /* > * Note: test functions are marked noinline so that their names appear in > @@ -709,6 +710,30 @@ static noinline void __init kmalloc_double_kzfree(void) > kzfree(ptr); > } > > +#ifdef CONFIG_KASAN_VMALLOC > +static noinline void __init vmalloc_oob(void) > +{ > + void *area; > + > + pr_info("vmalloc out-of-bounds\n"); > + > + /* > + * We have to be careful not to hit the guard page. > + * The MMU will catch that and crash us. > + */ > + area = vmalloc(3000); > + if (!area) { > + pr_err("Allocation failed\n"); > + return; > + } > + > + ((volatile char *)area)[3100]; > + vfree(area); > +} > +#else > +static void __init vmalloc_oob(void) {} > +#endif > + > static int __init kmalloc_tests_init(void) > { > /* > @@ -752,6 +777,7 @@ static int __init kmalloc_tests_init(void) > kasan_strings(); > kasan_bitops(); > kmalloc_double_kzfree(); > + vmalloc_oob(); > > kasan_restore_multi_shot(multishot); > > diff --git a/mm/kasan/common.c b/mm/kasan/common.c > index 2277b82902d8..b8374e3773cf 100644 > --- a/mm/kasan/common.c > +++ b/mm/kasan/common.c > @@ -568,6 +568,7 @@ void kasan_kfree_large(void *ptr, unsigned long ip) > /* The object will be poisoned by page_alloc. 
*/ > } > > +#ifndef CONFIG_KASAN_VMALLOC > int kasan_module_alloc(void *addr, size_t size) > { > void *ret; > @@ -603,6 +604,7 @@ void kasan_free_shadow(const struct vm_struct *vm) > if (vm->flags & VM_KASAN) > vfree(kasan_mem_to_shadow(vm->addr)); > } > +#endif > > extern void __kasan_report(unsigned long addr, size_t size, bool is_write, unsigned long ip); > > @@ -722,3 +724,68 @@ static int __init kasan_memhotplug_init(void) > > core_initcall(kasan_memhotplug_init); > #endif > + > +#ifdef CONFIG_KASAN_VMALLOC > +static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr, > + void *unused) > +{ > + unsigned long page; > + pte_t pte; > + > + if (likely(!pte_none(*ptep))) > + return 0; Prior to this, the zero shadow area should be mapped, and the test should be: if (likely(pte_pfn(*ptep) != PHYS_PFN(__pa(kasan_early_shadow_page)))) return 0; > + > + page = __get_free_page(GFP_KERNEL); > + if (!page) > + return -ENOMEM; > + > + memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE); > + pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL); > + > + /* > + * Ensure poisoning is visible before the shadow is made visible > + * to other CPUs. > + */ > + smp_wmb(); > + > + spin_lock(&init_mm.page_table_lock); > + if (likely(pte_none(*ptep))) { > + set_pte_at(&init_mm, addr, ptep, pte); > + page = 0; > + } > + spin_unlock(&init_mm.page_table_lock); > + if (page) > + free_page(page); > + return 0; > +} > + > +int kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area) > +{ > + unsigned long shadow_start, shadow_end; > + int ret; > + > + shadow_start = (unsigned long)kasan_mem_to_shadow(area->addr); > + shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE); > + shadow_end = (unsigned long)kasan_mem_to_shadow( > + area->addr + area->size); > + shadow_end = ALIGN(shadow_end, PAGE_SIZE); > + > + ret = apply_to_page_range(&init_mm, shadow_start, > + shadow_end - shadow_start, > + kasan_populate_vmalloc_pte, NULL); > + if (ret) > + return ret; > + > + kasan_unpoison_shadow(area->addr, requested_size); > + > + area->flags |= VM_KASAN; > + > + return 0; > +} > + > +void kasan_free_vmalloc(void *start, unsigned long size) > +{ > + size = round_up(size, KASAN_SHADOW_SCALE_SIZE); > + kasan_poison_shadow(start, size, KASAN_VMALLOC_INVALID); > +} > +#endif > diff --git a/mm/kasan/generic_report.c b/mm/kasan/generic_report.c > index 36c645939bc9..2d97efd4954f 100644 > --- a/mm/kasan/generic_report.c > +++ b/mm/kasan/generic_report.c > @@ -86,6 +86,9 @@ static const char *get_shadow_bug_type(struct kasan_access_info *info) > case KASAN_ALLOCA_RIGHT: > bug_type = "alloca-out-of-bounds"; > break; > + case KASAN_VMALLOC_INVALID: > + bug_type = "vmalloc-out-of-bounds"; > + break; > } > > return bug_type; > diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h > index 014f19e76247..8b1f2fbc780b 100644 > --- a/mm/kasan/kasan.h > +++ b/mm/kasan/kasan.h > @@ -25,6 +25,7 @@ > #endif > > #define KASAN_GLOBAL_REDZONE 0xFA /* redzone for global variable */ > +#define KASAN_VMALLOC_INVALID 0xF9 /* unallocated space in vmapped page */ > > /* > * Stack redzone shadow values > diff --git a/mm/vmalloc.c b/mm/vmalloc.c > index 4fa8d84599b0..c20a7e663004 100644 > --- a/mm/vmalloc.c > +++ b/mm/vmalloc.c > @@ -2056,6 +2056,22 @@ static struct vm_struct *__get_vm_area_node(unsigned long size, > > setup_vmalloc_vm(area, va, flags, caller); > > + /* > + * For KASAN, if we are in vmalloc space, we need to cover the shadow > + * area with real memory. 
If we come here through VM_ALLOC, this is > + * done by a higher level function that has access to the true size, > + * which might not be a full page. > + * > + * We assume module space comes via VM_ALLOC path. > + */ > + if (is_vmalloc_addr(area->addr) && !(area->flags & VM_ALLOC)) { > + if (kasan_populate_vmalloc(area->size, area)) { > + unmap_vmap_area(va); > + kfree(area); > + return NULL; > + } > + } > + > return area; > } > > @@ -2233,6 +2249,9 @@ static void __vunmap(const void *addr, int deallocate_pages) > debug_check_no_locks_freed(area->addr, get_vm_area_size(area)); > debug_check_no_obj_freed(area->addr, get_vm_area_size(area)); > > + if (area->flags & VM_KASAN) > + kasan_free_vmalloc(area->addr, area->size); > + > vm_remove_mappings(area, deallocate_pages); > > if (deallocate_pages) { > @@ -2483,6 +2502,9 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align, > if (!addr) > return NULL; > > + if (kasan_populate_vmalloc(real_size, area)) > + return NULL; > + > /* > * In this function, newly allocated vm_struct has VM_UNINITIALIZED > * flag. It means that vm_struct is not fully initialized. > @@ -3324,10 +3346,14 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets, > spin_unlock(&vmap_area_lock); > > /* insert all vm's */ > - for (area = 0; area < nr_vms; area++) > + for (area = 0; area < nr_vms; area++) { > setup_vmalloc_vm(vms[area], vas[area], VM_ALLOC, > pcpu_get_vm_areas); > > + /* assume success here */ > + kasan_populate_vmalloc(sizes[area], vms[area]); > + } > + > kfree(vas); > return vms; > > Christophe ^ permalink raw reply [flat|nested] 12+ messages in thread
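For concreteness, a sketch of the page_ref idea floated in this review - entirely hypothetical, not part of this series, glossing over locking, and noting that the shadow is vmapped, so shadow_page_for() below stands in for a page-table walk rather than virt_to_page():

/* Hypothetical sketch of refcounting shadow pages (not in this series). */
static void shadow_pages_get(const void *start, unsigned long size)
{
	unsigned long s, e;

	s = (unsigned long)kasan_mem_to_shadow(start) & PAGE_MASK;
	e = (unsigned long)kasan_mem_to_shadow(start + size - 1);

	for (; s <= e; s += PAGE_SIZE)
		page_ref_inc(shadow_page_for(s));	/* hypothetical lookup */
}

static void shadow_pages_put(const void *start, unsigned long size)
{
	unsigned long s, e;

	s = (unsigned long)kasan_mem_to_shadow(start) & PAGE_MASK;
	e = (unsigned long)kasan_mem_to_shadow(start + size - 1);

	for (; s <= e; s += PAGE_SIZE) {
		struct page *page = shadow_page_for(s);

		if (page_ref_dec_and_test(page)) {
			/* clear the shadow PTE and flush the TLB first */
			__free_page(page);
		}
	}
}

The devil is in the details: a shadow page straddling the boundary between two mappings is referenced by both, and the populate path would need to take its reference under the same lock that installs the PTE.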
* Re: [PATCH v4 1/3] kasan: support backing vmalloc space with real shadow memory 2019-08-16 7:47 ` Christophe Leroy @ 2019-08-16 17:08 ` Mark Rutland 2019-08-16 17:41 ` Andy Lutomirski 2019-08-19 3:58 ` Daniel Axtens 0 siblings, 2 replies; 12+ messages in thread From: Mark Rutland @ 2019-08-16 17:08 UTC (permalink / raw) To: Christophe Leroy Cc: Daniel Axtens, kasan-dev, linux-mm, x86, aryabinin, glider, luto, linux-kernel, dvyukov, linuxppc-dev, gor Hi Christophe, On Fri, Aug 16, 2019 at 09:47:00AM +0200, Christophe Leroy wrote: > Le 15/08/2019 à 02:16, Daniel Axtens a écrit : > > Hook into vmalloc and vmap, and dynamically allocate real shadow > > memory to back the mappings. > > > > Most mappings in vmalloc space are small, requiring less than a full > > page of shadow space. Allocating a full shadow page per mapping would > > therefore be wasteful. Furthermore, to ensure that different mappings > > use different shadow pages, mappings would have to be aligned to > > KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE. > > > > Instead, share backing space across multiple mappings. Allocate > > a backing page the first time a mapping in vmalloc space uses a > > particular page of the shadow region. Keep this page around > > regardless of whether the mapping is later freed - in the mean time > > the page could have become shared by another vmalloc mapping. > > > > This can in theory lead to unbounded memory growth, but the vmalloc > > allocator is pretty good at reusing addresses, so the practical memory > > usage grows at first but then stays fairly stable. > > I guess people having gigabytes of memory don't mind, but I'm concerned > about tiny targets with very little amount of memory. I have boards with as > little as 32Mbytes of RAM. The shadow region for the linear space already > takes one eighth of the RAM. I'd rather avoid keeping unused shadow pages > busy. I think this depends on how much shadow would be in constant use vs what would get left unused. If the amount in constant use is sufficiently large (or the residue is sufficiently small), then it may not be worthwhile to support KASAN_VMALLOC on such small systems. > Each page of shadow memory represent 8 pages of real memory. Could we use > page_ref to count how many pieces of a shadow page are used so that we can > free it when the ref count decreases to 0. > > > This requires architecture support to actually use: arches must stop > > mapping the read-only zero page over portion of the shadow region that > > covers the vmalloc space and instead leave it unmapped. > > Why 'must' ? Couldn't we switch back and forth from the zero page to real > page on demand ? > > If the zero page is not mapped for unused vmalloc space, bad memory accesses > will Oops on the shadow memory access instead of Oopsing on the real bad > access, making it more difficult to locate and identify the issue. I agree this isn't nice, though FWIW this can already happen today for bad addresses that fall outside of the usual kernel address space. We could make the !KASAN_INLINE checks resilient to this by using probe_kernel_read() to check the shadow, and treating unmapped shadow as poison. It's also worth noting that flipping back and forth isn't generally safe unless going via an invalid table entry, so there'd still be windows where a bad access might not have shadow mapped. 
We'd need to reuse the common p4d/pud/pmd/pte tables for unallocated regions, or the tables alone would consume significant amounts of memory (e.g. ~32GiB for arm64 defconfig), and thus we'd need to be able to switch all levels between pgd and pte, which is much more complicated. I strongly suspect that the additional complexity will outweigh the benefit. [...] > > +#ifdef CONFIG_KASAN_VMALLOC > > +static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr, > > + void *unused) > > +{ > > + unsigned long page; > > + pte_t pte; > > + > > + if (likely(!pte_none(*ptep))) > > + return 0; > > Prior to this, the zero shadow area should be mapped, and the test should > be: > > if (likely(pte_pfn(*ptep) != PHYS_PFN(__pa(kasan_early_shadow_page)))) > return 0; As above, this would need a more comprehensive redesign, so I don't think it's worth going into that level of nit here. :) If we do try to use common shadow for unallocated VA ranges, it probably makes sense to have a common poison page that we can use, so that we can report vmalloc-out-of-bounds. Thanks, Mark. ^ permalink raw reply [flat|nested] 12+ messages in thread
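For reference, the probe_kernel_read() idea mentioned above might look roughly like this for a one-byte access in the outline (!KASAN_INLINE) check - a hypothetical sketch modelled on mm/kasan/generic.c's memory_is_poisoned_1(), not something this series implements:

/* Hypothetical sketch: tolerate unmapped shadow in the outline check. */
static bool memory_is_poisoned_1_probe(unsigned long addr)
{
	s8 shadow_value;
	const void *shadow = kasan_mem_to_shadow((void *)addr);

	/* Unmapped shadow: treat the access as poisoned, don't fault. */
	if (probe_kernel_read(&shadow_value, shadow, 1))
		return true;

	if (likely(!shadow_value))
		return false;

	/* Partially poisoned granule: compare against the last valid byte. */
	return (s8)(addr & KASAN_SHADOW_MASK) >= shadow_value;
}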
* Re: [PATCH v4 1/3] kasan: support backing vmalloc space with real shadow memory 2019-08-16 17:08 ` Mark Rutland @ 2019-08-16 17:41 ` Andy Lutomirski 2019-08-19 10:15 ` Mark Rutland 2019-08-19 3:58 ` Daniel Axtens 1 sibling, 1 reply; 12+ messages in thread From: Andy Lutomirski @ 2019-08-16 17:41 UTC (permalink / raw) To: Mark Rutland Cc: Christophe Leroy, Daniel Axtens, kasan-dev, Linux-MM, X86 ML, Andrey Ryabinin, Alexander Potapenko, Andrew Lutomirski, LKML, Dmitry Vyukov, linuxppc-dev, Vasily Gorbik On Fri, Aug 16, 2019 at 10:08 AM Mark Rutland <mark.rutland@arm.com> wrote: > > Hi Christophe, > > On Fri, Aug 16, 2019 at 09:47:00AM +0200, Christophe Leroy wrote: > > Le 15/08/2019 à 02:16, Daniel Axtens a écrit : > > > Hook into vmalloc and vmap, and dynamically allocate real shadow > > > memory to back the mappings. > > > > > > Most mappings in vmalloc space are small, requiring less than a full > > > page of shadow space. Allocating a full shadow page per mapping would > > > therefore be wasteful. Furthermore, to ensure that different mappings > > > use different shadow pages, mappings would have to be aligned to > > > KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE. > > > > > > Instead, share backing space across multiple mappings. Allocate > > > a backing page the first time a mapping in vmalloc space uses a > > > particular page of the shadow region. Keep this page around > > > regardless of whether the mapping is later freed - in the mean time > > > the page could have become shared by another vmalloc mapping. > > > > > > This can in theory lead to unbounded memory growth, but the vmalloc > > > allocator is pretty good at reusing addresses, so the practical memory > > > usage grows at first but then stays fairly stable. > > > > I guess people having gigabytes of memory don't mind, but I'm concerned > > about tiny targets with very little amount of memory. I have boards with as > > little as 32Mbytes of RAM. The shadow region for the linear space already > > takes one eighth of the RAM. I'd rather avoid keeping unused shadow pages > > busy. > > I think this depends on how much shadow would be in constant use vs what > would get left unused. If the amount in constant use is sufficiently > large (or the residue is sufficiently small), then it may not be > worthwhile to support KASAN_VMALLOC on such small systems. > > > Each page of shadow memory represent 8 pages of real memory. Could we use > > page_ref to count how many pieces of a shadow page are used so that we can > > free it when the ref count decreases to 0. > > > > > This requires architecture support to actually use: arches must stop > > > mapping the read-only zero page over portion of the shadow region that > > > covers the vmalloc space and instead leave it unmapped. > > > > Why 'must' ? Couldn't we switch back and forth from the zero page to real > > page on demand ? > > > > If the zero page is not mapped for unused vmalloc space, bad memory accesses > > will Oops on the shadow memory access instead of Oopsing on the real bad > > access, making it more difficult to locate and identify the issue. > > I agree this isn't nice, though FWIW this can already happen today for > bad addresses that fall outside of the usual kernel address space. We > could make the !KASAN_INLINE checks resilient to this by using > probe_kernel_read() to check the shadow, and treating unmapped shadow as > poison. Could we instead modify the page fault handlers to detect this case and print a useful message? 
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v4 1/3] kasan: support backing vmalloc space with real shadow memory 2019-08-16 17:41 ` Andy Lutomirski @ 2019-08-19 10:15 ` Mark Rutland 0 siblings, 0 replies; 12+ messages in thread From: Mark Rutland @ 2019-08-19 10:15 UTC (permalink / raw) To: Andy Lutomirski Cc: Christophe Leroy, Daniel Axtens, kasan-dev, Linux-MM, X86 ML, Andrey Ryabinin, Alexander Potapenko, LKML, Dmitry Vyukov, linuxppc-dev, Vasily Gorbik On Fri, Aug 16, 2019 at 10:41:00AM -0700, Andy Lutomirski wrote: > On Fri, Aug 16, 2019 at 10:08 AM Mark Rutland <mark.rutland@arm.com> wrote: > > > > Hi Christophe, > > > > On Fri, Aug 16, 2019 at 09:47:00AM +0200, Christophe Leroy wrote: > > > Le 15/08/2019 à 02:16, Daniel Axtens a écrit : > > > > Hook into vmalloc and vmap, and dynamically allocate real shadow > > > > memory to back the mappings. > > > > > > > > Most mappings in vmalloc space are small, requiring less than a full > > > > page of shadow space. Allocating a full shadow page per mapping would > > > > therefore be wasteful. Furthermore, to ensure that different mappings > > > > use different shadow pages, mappings would have to be aligned to > > > > KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE. > > > > > > > > Instead, share backing space across multiple mappings. Allocate > > > > a backing page the first time a mapping in vmalloc space uses a > > > > particular page of the shadow region. Keep this page around > > > > regardless of whether the mapping is later freed - in the mean time > > > > the page could have become shared by another vmalloc mapping. > > > > > > > > This can in theory lead to unbounded memory growth, but the vmalloc > > > > allocator is pretty good at reusing addresses, so the practical memory > > > > usage grows at first but then stays fairly stable. > > > > > > I guess people having gigabytes of memory don't mind, but I'm concerned > > > about tiny targets with very little amount of memory. I have boards with as > > > little as 32Mbytes of RAM. The shadow region for the linear space already > > > takes one eighth of the RAM. I'd rather avoid keeping unused shadow pages > > > busy. > > > > I think this depends on how much shadow would be in constant use vs what > > would get left unused. If the amount in constant use is sufficiently > > large (or the residue is sufficiently small), then it may not be > > worthwhile to support KASAN_VMALLOC on such small systems. > > > > > Each page of shadow memory represent 8 pages of real memory. Could we use > > > page_ref to count how many pieces of a shadow page are used so that we can > > > free it when the ref count decreases to 0. > > > > > > > This requires architecture support to actually use: arches must stop > > > > mapping the read-only zero page over portion of the shadow region that > > > > covers the vmalloc space and instead leave it unmapped. > > > > > > Why 'must' ? Couldn't we switch back and forth from the zero page to real > > > page on demand ? > > > > > > If the zero page is not mapped for unused vmalloc space, bad memory accesses > > > will Oops on the shadow memory access instead of Oopsing on the real bad > > > access, making it more difficult to locate and identify the issue. > > > > I agree this isn't nice, though FWIW this can already happen today for > > bad addresses that fall outside of the usual kernel address space. We > > could make the !KASAN_INLINE checks resilient to this by using > > probe_kernel_read() to check the shadow, and treating unmapped shadow as > > poison. 
> > Could we instead modify the page fault handlers to detect this case > and print a useful message? In general we can't know if a bad access was a KASAN shadow lookup (e.g. since the shadow of NULL falls outside of the shadow region), but we could always print a message using kasan_shadow_to_mem() for any unhandled fault to suggest what the "real" address might have been. Thanks, Mark. ^ permalink raw reply [flat|nested] 12+ messages in thread
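Sketching that suggestion - hypothetical, not in this series: KASAN_SHADOW_START/KASAN_SHADOW_END are the arch-provided shadow bounds, and kasan_shadow_to_mem() currently lives in mm/kasan/kasan.h, so it would need exposing to the arch fault path:

/* Hypothetical: called from the arch's unhandled-kernel-fault path. */
static void shadow_fault_hint(unsigned long addr)
{
	if (addr < KASAN_SHADOW_START || addr >= KASAN_SHADOW_END)
		return;

	pr_alert("KASAN: fault at %px may be a shadow lookup for %px\n",
		 (void *)addr, kasan_shadow_to_mem((void *)addr));
}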
* Re: [PATCH v4 1/3] kasan: support backing vmalloc space with real shadow memory 2019-08-16 17:08 ` Mark Rutland 2019-08-16 17:41 ` Andy Lutomirski @ 2019-08-19 3:58 ` Daniel Axtens 2019-08-19 22:20 ` Andy Lutomirski 1 sibling, 1 reply; 12+ messages in thread From: Daniel Axtens @ 2019-08-19 3:58 UTC (permalink / raw) To: Mark Rutland, Christophe Leroy Cc: kasan-dev, linux-mm, x86, aryabinin, glider, luto, linux-kernel, dvyukov, linuxppc-dev, gor >> > Instead, share backing space across multiple mappings. Allocate >> > a backing page the first time a mapping in vmalloc space uses a >> > particular page of the shadow region. Keep this page around >> > regardless of whether the mapping is later freed - in the mean time >> > the page could have become shared by another vmalloc mapping. >> > >> > This can in theory lead to unbounded memory growth, but the vmalloc >> > allocator is pretty good at reusing addresses, so the practical memory >> > usage grows at first but then stays fairly stable. >> >> I guess people having gigabytes of memory don't mind, but I'm concerned >> about tiny targets with very little amount of memory. I have boards with as >> little as 32Mbytes of RAM. The shadow region for the linear space already >> takes one eighth of the RAM. I'd rather avoid keeping unused shadow pages >> busy. > > I think this depends on how much shadow would be in constant use vs what > would get left unused. If the amount in constant use is sufficiently > large (or the residue is sufficiently small), then it may not be > worthwhile to support KASAN_VMALLOC on such small systems. I'm not unsympathetic to the cause of small-memory systems, but this is useful as-is for x86, especially for VMAP_STACK. arm64 and s390 have already been able to make use of it as well. So unless the design is going to make it difficult to extend to small-memory systems - if it bakes in concepts or APIs that are going to make things harder - I think it might be worth merging as is. (pending the fixes for documentation nits etc that you point out.) >> Each page of shadow memory represent 8 pages of real memory. Could we use >> page_ref to count how many pieces of a shadow page are used so that we can >> free it when the ref count decreases to 0. I'm not sure how much of a difference it will make, but I'll have a look. >> > This requires architecture support to actually use: arches must stop >> > mapping the read-only zero page over portion of the shadow region that >> > covers the vmalloc space and instead leave it unmapped. >> >> Why 'must' ? Couldn't we switch back and forth from the zero page to real >> page on demand ? This code as currently written will not work if the architecture maps the zero page over the portion of the shadow region that covers the vmalloc space. So it's an implementation 'must' rather than a laws of the universe 'must'. We could perhaps map the zero page, but: - you have to be really careful to get it right. If you accidentally map the zero page onto memory where you shouldn't, you may permit memory accesses that you should catch. We could ameliorate this by taking Mark's suggestion and mapping a poison page over the vmalloc space instead. - I'm not sure what benefit is provided by having something mapped vs leaving a hole, other than making the fault addresses more obvious. - This gets complex, especially to do swapping correctly with respect to various architectures' quirks (see e.g.
56eecdb912b5 "mm: Use ptep/pmdp_set_numa() for updating _PAGE_NUMA bit" - ppc64 at least requires that set_pte_at is never called on a valid PTE). >> If the zero page is not mapped for unused vmalloc space, bad memory accesses >> will Oops on the shadow memory access instead of Oopsing on the real bad >> access, making it more difficult to locate and identify the issue. I suppose. It's pretty easy on at least x86 and my draft ppc64 implementation to identify when an access falls into the shadow region and then to reverse engineer the memory access that was being checked based on the offset. As Andy points out, the fault handler could do this automatically. > I agree this isn't nice, though FWIW this can already happen today for > bad addresses that fall outside of the usual kernel address space. We > could make the !KASAN_INLINE checks resilient to this by using > probe_kernel_read() to check the shadow, and treating unmapped shadow as > poison. > > It's also worth noting that flipping back and forth isn't generally safe > unless going via an invalid table entry, so there'd still be windows > where a bad access might not have shadow mapped. > > We'd need to reuse the common p4d/pud/pmd/pte tables for unallocated > regions, or the tables alone would consume significant amounts of memory > (e..g ~32GiB for arm64 defconfig), and thus we'd need to be able to > switch all levels between pgd and pte, which is much more complicated. > > I strongly suspect that the additional complexity will outweigh the > benefit. > I'm not opposed to this in principle but I am also concerned about the complexity involved. Regards, Daniel > [...] > >> > +#ifdef CONFIG_KASAN_VMALLOC >> > +static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr, >> > + void *unused) >> > +{ >> > + unsigned long page; >> > + pte_t pte; >> > + >> > + if (likely(!pte_none(*ptep))) >> > + return 0; >> >> Prior to this, the zero shadow area should be mapped, and the test should >> be: >> >> if (likely(pte_pfn(*ptep) != PHYS_PFN(__pa(kasan_early_shadow_page)))) >> return 0; > > As above, this would need a more comprehensive redesign, so I don't > think it's worth going into that level of nit here. :) > > If we do try to use common shadow for unallocate VA ranges, it probably > makes sense to have a common poison page that we can use, so that we can > report vmalloc-out-of-bounfds. > > Thanks, > Mark. ^ permalink raw reply [flat|nested] 12+ messages in thread
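The ppc64 constraint referenced above is what makes on-demand flipping expensive: replacing a live shadow PTE cannot be a bare set_pte_at(), it has to go through an invalid entry plus a TLB flush. A rough, hypothetical fragment (addr, ptep and new_pte as in kasan_populate_vmalloc_pte()):

/* Hypothetical sketch: swapping a live shadow PTE on arches where
 * set_pte_at() must never be called on a valid PTE (e.g. ppc64).
 */
spin_lock(&init_mm.page_table_lock);
ptep_get_and_clear(&init_mm, addr, ptep);	/* make the entry invalid */
flush_tlb_kernel_range(addr, addr + PAGE_SIZE);	/* kill stale translations */
set_pte_at(&init_mm, addr, ptep, new_pte);	/* install the replacement */
spin_unlock(&init_mm.page_table_lock);

And the window in the middle, where the shadow is unmapped, is exactly when an instrumented access from another CPU could fault.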
* Re: [PATCH v4 1/3] kasan: support backing vmalloc space with real shadow memory 2019-08-19 3:58 ` Daniel Axtens @ 2019-08-19 22:20 ` Andy Lutomirski 0 siblings, 0 replies; 12+ messages in thread From: Andy Lutomirski @ 2019-08-19 22:20 UTC (permalink / raw) To: Daniel Axtens Cc: Mark Rutland, Christophe Leroy, kasan-dev, Linux-MM, X86 ML, Andrey Ryabinin, Alexander Potapenko, Andrew Lutomirski, LKML, Dmitry Vyukov, linuxppc-dev, Vasily Gorbik > On Aug 18, 2019, at 8:58 PM, Daniel Axtens <dja@axtens.net> wrote: > >>> Each page of shadow memory represent 8 pages of real memory. Could we use >>> page_ref to count how many pieces of a shadow page are used so that we can >>> free it when the ref count decreases to 0. > > I'm not sure how much of a difference it will make, but I'll have a look. > There are a grand total of eight possible pages that could require a given shadow page. I would suggest that, instead of reference counting, you just check all eight pages. Or, better yet, look at the actual vm_area_struct and see where prev and next point. That should tell you exactly which range can be freed. ^ permalink raw reply [flat|nested] 12+ messages in thread
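A sketch of that neighbour check - hypothetical, not part of this series: one shadow page covers an aligned KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE window of vmalloc space, and shadow for a freed area is only reclaimable where no neighbouring live area reaches into the same window. Here va_start/va_end bound the freed area, prev_end/next_start bound its live neighbours, and reclaim_shadow_for() is a hypothetical helper:

/* Hypothetical sketch (not in this series): decide how much shadow can
 * be reclaimed when the area [va_start, va_end) is freed.
 */
unsigned long window = KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE; /* 8 pages */
unsigned long start = ALIGN_DOWN(va_start, window);
unsigned long end = ALIGN(va_end, window);

if (prev_end > start)		/* previous area shares the first window */
	start += window;
if (next_start < end)		/* next area shares the last window */
	end -= window;

if (start < end)
	reclaim_shadow_for(start, end);

This is roughly the vmap bookkeeping the cover letter alludes to as a fallback if practical memory exhaustion ever shows up.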
* [PATCH v4 2/3] fork: support VMAP_STACK with KASAN_VMALLOC
  2019-08-15 0:16 [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory Daniel Axtens
  2019-08-15 0:16 ` [PATCH v4 1/3] " Daniel Axtens
@ 2019-08-15 0:16 ` Daniel Axtens
  2019-08-15 0:16 ` [PATCH v4 3/3] x86/kasan: support KASAN_VMALLOC Daniel Axtens
  2019-08-15 11:28 ` [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory Mark Rutland
  3 siblings, 0 replies; 12+ messages in thread

From: Daniel Axtens @ 2019-08-15 0:16 UTC (permalink / raw)
To: kasan-dev, linux-mm, x86, aryabinin, glider, luto, linux-kernel,
    mark.rutland, dvyukov
Cc: linuxppc-dev, gor, Daniel Axtens

Supporting VMAP_STACK with KASAN_VMALLOC is straightforward:

- clear the shadow region of vmapped stacks when swapping them in
- tweak Kconfig to allow VMAP_STACK to be turned on with KASAN

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
---
 arch/Kconfig  | 9 +++++----
 kernel/fork.c | 4 ++++
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index a7b57dd42c26..e791196005e1 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -825,16 +825,17 @@ config HAVE_ARCH_VMAP_STACK
 config VMAP_STACK
 	default y
 	bool "Use a virtually-mapped stack"
-	depends on HAVE_ARCH_VMAP_STACK && !KASAN
+	depends on HAVE_ARCH_VMAP_STACK
+	depends on !KASAN || KASAN_VMALLOC
 	---help---
 	  Enable this if you want the use virtually-mapped kernel stacks
 	  with guard pages. This causes kernel stack overflows to be
 	  caught immediately rather than causing difficult-to-diagnose
 	  corruption.
 
-	  This is presently incompatible with KASAN because KASAN expects
-	  the stack to map directly to the KASAN shadow map using a formula
-	  that is incorrect if the stack is in vmalloc space.
+	  To use this with KASAN, the architecture must support backing
+	  virtual mappings with real shadow memory, and KASAN_VMALLOC must
+	  be enabled.
 
 config ARCH_OPTIONAL_KERNEL_RWX
 	def_bool n
diff --git a/kernel/fork.c b/kernel/fork.c
index d8ae0f1b4148..ce3150fe8ff2 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -94,6 +94,7 @@
 #include <linux/livepatch.h>
 #include <linux/thread_info.h>
 #include <linux/stackleak.h>
+#include <linux/kasan.h>
 
 #include <asm/pgtable.h>
 #include <asm/pgalloc.h>
@@ -215,6 +216,9 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
 		if (!s)
 			continue;
 
+		/* Clear the KASAN shadow of the stack. */
+		kasan_unpoison_shadow(s->addr, THREAD_SIZE);
+
 		/* Clear stale pointers from reused stack. */
 		memset(s->addr, 0, THREAD_SIZE);
 
-- 
2.20.1
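One ordering detail in the fork.c hunk above: under generic KASAN,
memset() is interposed and checks the shadow of the whole target range
before writing, so the shadow must be unpoisoned before the memset that
clears stale pointers. Otherwise, reusing a cached stack whose shadow
still carried poison from its previous life would immediately trip a
report. A simplified sketch of what the interposed memset does (names
and shapes are approximate; the real implementation lives in
mm/kasan/common.c and also handles partially-accessible trailing
bytes):

/*
 * Sketch of the KASAN memset interceptor: scan the shadow for the
 * target range and report before performing the actual write.
 */
void *kasan_memset_sketch(void *addr, int c, size_t len)
{
	u8 *shadow = (u8 *)kasan_mem_to_shadow(addr);
	u8 *shadow_end = (u8 *)kasan_mem_to_shadow(addr + len - 1) + 1;

	for (; shadow < shadow_end; shadow++)
		if (*shadow)	/* poisoned or partially accessible */
			kasan_report((unsigned long)addr, len, true, _RET_IP_);

	return __memset(addr, c, len);
}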
* [PATCH v4 3/3] x86/kasan: support KASAN_VMALLOC
  2019-08-15 0:16 [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory Daniel Axtens
  2019-08-15 0:16 ` [PATCH v4 1/3] " Daniel Axtens
  2019-08-15 0:16 ` [PATCH v4 2/3] fork: support VMAP_STACK with KASAN_VMALLOC Daniel Axtens
@ 2019-08-15 0:16 ` Daniel Axtens
  2019-08-16 8:04 ` Christophe Leroy
  2019-08-15 11:28 ` [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory Mark Rutland
  3 siblings, 1 reply; 12+ messages in thread

From: Daniel Axtens @ 2019-08-15 0:16 UTC (permalink / raw)
To: kasan-dev, linux-mm, x86, aryabinin, glider, luto, linux-kernel,
    mark.rutland, dvyukov
Cc: linuxppc-dev, gor, Daniel Axtens

In the case where KASAN directly allocates memory to back vmalloc
space, don't map the early shadow page over it.

We prepopulate pgds/p4ds for the range that would otherwise be empty.
This is required to get it synced to hardware on boot, allowing the
lower levels of the page tables to be filled dynamically.

Acked-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>

---

v2: move from faulting in shadow pgds to prepopulating
---
 arch/x86/Kconfig            |  1 +
 arch/x86/mm/kasan_init_64.c | 61 +++++++++++++++++++++++++++++++
 2 files changed, 62 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 222855cc0158..40562cc3771f 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -134,6 +134,7 @@ config X86
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN			if X86_64
+	select HAVE_ARCH_KASAN_VMALLOC		if X86_64
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 296da58f3013..2f57c4ddff61 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -245,6 +245,52 @@ static void __init kasan_map_early_shadow(pgd_t *pgd)
 	} while (pgd++, addr = next, addr != end);
 }
 
+static void __init kasan_shallow_populate_p4ds(pgd_t *pgd,
+					       unsigned long addr,
+					       unsigned long end,
+					       int nid)
+{
+	p4d_t *p4d;
+	unsigned long next;
+	void *p;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		next = p4d_addr_end(addr, end);
+
+		if (p4d_none(*p4d)) {
+			p = early_alloc(PAGE_SIZE, nid, true);
+			p4d_populate(&init_mm, p4d, p);
+		}
+	} while (p4d++, addr = next, addr != end);
+}
+
+static void __init kasan_shallow_populate_pgds(void *start, void *end)
+{
+	unsigned long addr, next;
+	pgd_t *pgd;
+	void *p;
+	int nid = early_pfn_to_nid((unsigned long)start);
+
+	addr = (unsigned long)start;
+	pgd = pgd_offset_k(addr);
+	do {
+		next = pgd_addr_end(addr, (unsigned long)end);
+
+		if (pgd_none(*pgd)) {
+			p = early_alloc(PAGE_SIZE, nid, true);
+			pgd_populate(&init_mm, pgd, p);
+		}
+
+		/*
+		 * we need to populate p4ds to be synced when running in
+		 * four level mode - see sync_global_pgds_l4()
+		 */
+		kasan_shallow_populate_p4ds(pgd, addr, next, nid);
+	} while (pgd++, addr = next, addr != (unsigned long)end);
+}
+
+
 #ifdef CONFIG_KASAN_INLINE
 static int kasan_die_handler(struct notifier_block *self,
 			     unsigned long val,
@@ -352,9 +398,24 @@ void __init kasan_init(void)
 	shadow_cpu_entry_end = (void *)round_up(
 			(unsigned long)shadow_cpu_entry_end, PAGE_SIZE);
 
+	/*
+	 * If we're in full vmalloc mode, don't back vmalloc space with early
+	 * shadow pages. Instead, prepopulate pgds/p4ds so they are synced to
+	 * the global table and we can populate the lower levels on demand.
+	 */
+#ifdef CONFIG_KASAN_VMALLOC
+	kasan_shallow_populate_pgds(
+		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
+		kasan_mem_to_shadow((void *)VMALLOC_END));
+
+	kasan_populate_early_shadow(
+		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
+		shadow_cpu_entry_begin);
+#else
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
 		shadow_cpu_entry_begin);
+#endif
 
 	kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
 			      (unsigned long)shadow_cpu_entry_end, 0);
-- 
2.20.1
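Why prepopulation suffices (an illustrative note, not from the patch):
on x86-64 the kernel half of the page tables is shared by copying the
top-level entries from the reference tables when an mm is created, so
a pgd/p4d entry installed after that copy is invisible to existing
tasks unless it is explicitly synced. Prepopulating the top levels for
the whole vmalloc shadow range at boot means later on-demand shadow
population only ever writes to lower-level tables, which all page-table
trees already share. A sketch of the sharing step, assuming the x86
names (see pgd_ctor() and sync_global_pgds() for the real code):

/*
 * Sketch: a new mm inherits the kernel's top-level entries by
 * copying them from the reference page table. Entries added to the
 * kernel pgd after this copy are not seen by this mm.
 */
static void inherit_kernel_mappings(pgd_t *new_pgd)
{
	clone_pgd_range(new_pgd + KERNEL_PGD_BOUNDARY,
			swapper_pg_dir + KERNEL_PGD_BOUNDARY,
			KERNEL_PGD_PTRS);
}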
* Re: [PATCH v4 3/3] x86/kasan: support KASAN_VMALLOC
  2019-08-15 0:16 ` [PATCH v4 3/3] x86/kasan: support KASAN_VMALLOC Daniel Axtens
@ 2019-08-16 8:04 ` Christophe Leroy
  0 siblings, 0 replies; 12+ messages in thread

From: Christophe Leroy @ 2019-08-16 8:04 UTC (permalink / raw)
To: Daniel Axtens, kasan-dev, linux-mm, x86, aryabinin, glider, luto,
    linux-kernel, mark.rutland, dvyukov
Cc: linuxppc-dev, gor

On 15/08/2019 02:16, Daniel Axtens wrote:
> In the case where KASAN directly allocates memory to back vmalloc
> space, don't map the early shadow page over it.

If the early shadow page is not mapped, any bad memory access will Oops
on the shadow access instead of Oopsing on the real bad access.

You should still map the early shadow page, and replace it with a real
page when needed.

Christophe

> [...]
* Re: [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory
  2019-08-15 0:16 [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory Daniel Axtens
  ` (2 preceding siblings ...)
  2019-08-15 0:16 ` [PATCH v4 3/3] x86/kasan: support KASAN_VMALLOC Daniel Axtens
@ 2019-08-15 11:28 ` Mark Rutland
  3 siblings, 0 replies; 12+ messages in thread

From: Mark Rutland @ 2019-08-15 11:28 UTC (permalink / raw)
To: Daniel Axtens
Cc: kasan-dev, linux-mm, x86, aryabinin, glider, luto, linux-kernel,
    dvyukov, linuxppc-dev, gor

On Thu, Aug 15, 2019 at 10:16:33AM +1000, Daniel Axtens wrote:
> Currently, vmalloc space is backed by the early shadow page. This
> means that kasan is incompatible with VMAP_STACK, and it also provides
> a hurdle for architectures that do not have a dedicated module space
> (like powerpc64).
>
> This series provides a mechanism to back vmalloc space with real,
> dynamically allocated memory. I have only wired up x86, because that's
> the only currently supported arch I can work with easily, but it's
> very easy to wire up other architectures.

I'm happy to send patches for arm64 once we've settled some conflicting
rework going on for 52-bit VA support.

> [...]
>
> This can in theory lead to unbounded memory growth, but the vmalloc
> allocator is pretty good at reusing addresses, so the practical memory
> usage appears to grow at first but then stay fairly stable.
>
> If we run into practical memory exhaustion issues, I'm happy to
> consider hooking into the book-keeping that vmap does, but I am not
> convinced that it will be an issue.

FWIW, I haven't spotted such memory exhaustion after a week of
Syzkaller fuzzing with the last patchset, across 3 machines, so that
sounds fine to me.

Otherwise, this looks good to me now! For the x86 and fork patches,
feel free to add:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> [...]
Thread overview: 12+ messages in thread

2019-08-15 0:16 [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory Daniel Axtens
2019-08-15 0:16 ` [PATCH v4 1/3] " Daniel Axtens
2019-08-16 7:47 ` Christophe Leroy
2019-08-16 17:08 ` Mark Rutland
2019-08-16 17:41 ` Andy Lutomirski
2019-08-19 10:15 ` Mark Rutland
2019-08-19 3:58 ` Daniel Axtens
2019-08-19 22:20 ` Andy Lutomirski
2019-08-15 0:16 ` [PATCH v4 2/3] fork: support VMAP_STACK with KASAN_VMALLOC Daniel Axtens
2019-08-15 0:16 ` [PATCH v4 3/3] x86/kasan: support KASAN_VMALLOC Daniel Axtens
2019-08-16 8:04 ` Christophe Leroy
2019-08-15 11:28 ` [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory Mark Rutland