Date: Tue, 17 Dec 2019 20:51:38 -0800
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, aryabinin@virtuozzo.com, cai@lca.pw,
 dja@axtens.net, dvyukov@google.com, glider@google.com,
 linux-mm@kvack.org, mm-commits@vger.kernel.org,
 torvalds@linux-foundation.org, urezki@gmail.com
Subject: [patch 1/6] kasan: fix crashes on access to memory mapped by vm_map_ram()
Message-ID: <20191218045138.X3wO3VBVU%akpm@linux-foundation.org>
In-Reply-To: <20191217205020.6e2eaefc78710ec646e99aa9@linux-foundation.org>

From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Subject: kasan: fix crashes on access to memory mapped by vm_map_ram()
With CONFIG_KASAN_VMALLOC=y any use of memory obtained via vm_map_ram()
will crash because there is no shadow backing that memory.

Instead of sprinkling additional kasan_populate_vmalloc() calls all over
the vmalloc code, move it into alloc_vmap_area().  This will fix
vm_map_ram() and simplify the code a bit.

[aryabinin@virtuozzo.com: v2]
  Link: http://lkml.kernel.org/r/20191205095942.1761-1-aryabinin@virtuozzo.com
Link: http://lkml.kernel.org/r/20191204204534.32202-1-aryabinin@virtuozzo.com
Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Alexander Potapenko <glider@google.com>
Cc: Qian Cai <cai@lca.pw>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/kasan.h |   15 ++++---
 mm/kasan/common.c     |   27 ++++++++----
 mm/vmalloc.c          |   85 ++++++++++++++++++----------------------
 3 files changed, 67 insertions(+), 60 deletions(-)
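For illustration, a minimal reproducer sketch of the failure mode fixed
here (a hypothetical test, not part of this patch; the function name and
error handling are invented):

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

/* Hypothetical reproducer: with CONFIG_KASAN_VMALLOC=y, the memset()
 * below used to fault, because vm_map_ram() goes through neither
 * __vmalloc_node_range() nor __get_vm_area_node() - previously the
 * only callers of kasan_populate_vmalloc() - so no shadow was ever
 * mapped for 'mem'. */
static int vm_map_ram_repro(void)
{
	struct page *page = alloc_page(GFP_KERNEL);
	void *mem;

	if (!page)
		return -ENOMEM;

	mem = vm_map_ram(&page, 1, -1 /* any node */, PAGE_KERNEL);
	if (!mem) {
		__free_page(page);
		return -ENOMEM;
	}

	memset(mem, 0, PAGE_SIZE);	/* first shadow check crashed here */

	vm_unmap_ram(mem, 1);
	__free_page(page);
	return 0;
}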
--- a/include/linux/kasan.h~kasan-fix-crashes-on-access-to-memory-mapped-by-vm_map_ram
+++ a/include/linux/kasan.h
@@ -205,20 +205,23 @@ static inline void *kasan_reset_tag(cons
 #endif /* CONFIG_KASAN_SW_TAGS */
 
 #ifdef CONFIG_KASAN_VMALLOC
-int kasan_populate_vmalloc(unsigned long requested_size,
-			   struct vm_struct *area);
-void kasan_poison_vmalloc(void *start, unsigned long size);
+int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
+void kasan_poison_vmalloc(const void *start, unsigned long size);
+void kasan_unpoison_vmalloc(const void *start, unsigned long size);
 void kasan_release_vmalloc(unsigned long start, unsigned long end,
 			   unsigned long free_region_start,
 			   unsigned long free_region_end);
 #else
-static inline int kasan_populate_vmalloc(unsigned long requested_size,
-					 struct vm_struct *area)
+static inline int kasan_populate_vmalloc(unsigned long start,
+					 unsigned long size)
 {
 	return 0;
 }
 
-static inline void kasan_poison_vmalloc(void *start, unsigned long size) {}
+static inline void kasan_poison_vmalloc(const void *start, unsigned long size)
+{ }
+static inline void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+{ }
 static inline void kasan_release_vmalloc(unsigned long start,
 					 unsigned long end,
 					 unsigned long free_region_start,
--- a/mm/kasan/common.c~kasan-fix-crashes-on-access-to-memory-mapped-by-vm_map_ram
+++ a/mm/kasan/common.c
@@ -778,15 +778,17 @@ static int kasan_populate_vmalloc_pte(pt
 	return 0;
 }
 
-int kasan_populate_vmalloc(unsigned long requested_size, struct vm_struct *area)
+int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
 {
 	unsigned long shadow_start, shadow_end;
 	int ret;
 
-	shadow_start = (unsigned long)kasan_mem_to_shadow(area->addr);
+	if (!is_vmalloc_or_module_addr((void *)addr))
+		return 0;
+
+	shadow_start = (unsigned long)kasan_mem_to_shadow((void *)addr);
 	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
-	shadow_end = (unsigned long)kasan_mem_to_shadow(area->addr +
-							area->size);
+	shadow_end = (unsigned long)kasan_mem_to_shadow((void *)addr + size);
 	shadow_end = ALIGN(shadow_end, PAGE_SIZE);
 
 	ret = apply_to_page_range(&init_mm, shadow_start,
@@ -797,10 +799,6 @@ int kasan_populate_vmalloc(unsigned long
 
 	flush_cache_vmap(shadow_start, shadow_end);
 
-	kasan_unpoison_shadow(area->addr, requested_size);
-
-	area->flags |= VM_KASAN;
-
 	/*
 	 * We need to be careful about inter-cpu effects here. Consider:
 	 *
@@ -843,12 +841,23 @@ int kasan_populate_vmalloc(unsigned long
  * Poison the shadow for a vmalloc region. Called as part of the
  * freeing process at the time the region is freed.
  */
-void kasan_poison_vmalloc(void *start, unsigned long size)
+void kasan_poison_vmalloc(const void *start, unsigned long size)
 {
+	if (!is_vmalloc_or_module_addr(start))
+		return;
+
 	size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
 	kasan_poison_shadow(start, size, KASAN_VMALLOC_INVALID);
 }
 
+void kasan_unpoison_vmalloc(const void *start, unsigned long size)
+{
+	if (!is_vmalloc_or_module_addr(start))
+		return;
+
+	kasan_unpoison_shadow(start, size);
+}
+
 static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 					void *unused)
 {
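For context, the shadow translation these helpers operate on, quoted
from the generic KASAN code of this era (existing code, not part of this
patch): one shadow byte tracks an 8-byte granule, which is why
kasan_poison_vmalloc() rounds the size up to KASAN_SHADOW_SCALE_SIZE
before poisoning.

/* include/linux/kasan.h (existing helper, quoted for reference) */
static inline void *kasan_mem_to_shadow(const void *addr)
{
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}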
--- a/mm/vmalloc.c~kasan-fix-crashes-on-access-to-memory-mapped-by-vm_map_ram
+++ a/mm/vmalloc.c
@@ -1062,6 +1062,26 @@ __alloc_vmap_area(unsigned long size, un
 }
 
 /*
+ * Free a region of KVA allocated by alloc_vmap_area
+ */
+static void free_vmap_area(struct vmap_area *va)
+{
+	/*
+	 * Remove from the busy tree/list.
+	 */
+	spin_lock(&vmap_area_lock);
+	unlink_va(va, &vmap_area_root);
+	spin_unlock(&vmap_area_lock);
+
+	/*
+	 * Insert/Merge it back to the free tree/list.
+	 */
+	spin_lock(&free_vmap_area_lock);
+	merge_or_add_vmap_area(va, &free_vmap_area_root, &free_vmap_area_list);
+	spin_unlock(&free_vmap_area_lock);
+}
+
+/*
  * Allocate a region of KVA of the specified size and alignment, within the
  * vstart and vend.
  */
@@ -1073,6 +1093,7 @@ static struct vmap_area *alloc_vmap_area
 	struct vmap_area *va, *pva;
 	unsigned long addr;
 	int purged = 0;
+	int ret;
 
 	BUG_ON(!size);
 	BUG_ON(offset_in_page(size));
@@ -1139,6 +1160,7 @@ retry:
 	va->va_end = addr + size;
 	va->vm = NULL;
+
 	spin_lock(&vmap_area_lock);
 	insert_vmap_area(va, &vmap_area_root, &vmap_area_list);
 	spin_unlock(&vmap_area_lock);
 
@@ -1147,6 +1169,12 @@ retry:
 	BUG_ON(va->va_start < vstart);
 	BUG_ON(va->va_end > vend);
 
+	ret = kasan_populate_vmalloc(addr, size);
+	if (ret) {
+		free_vmap_area(va);
+		return ERR_PTR(ret);
+	}
+
 	return va;
 
 overflow:
@@ -1186,26 +1214,6 @@ int unregister_vmap_purge_notifier(struc
 EXPORT_SYMBOL_GPL(unregister_vmap_purge_notifier);
 
 /*
- * Free a region of KVA allocated by alloc_vmap_area
- */
-static void free_vmap_area(struct vmap_area *va)
-{
-	/*
-	 * Remove from the busy tree/list.
-	 */
-	spin_lock(&vmap_area_lock);
-	unlink_va(va, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
-
-	/*
-	 * Insert/Merge it back to the free tree/list.
-	 */
-	spin_lock(&free_vmap_area_lock);
-	merge_or_add_vmap_area(va, &free_vmap_area_root, &free_vmap_area_list);
-	spin_unlock(&free_vmap_area_lock);
-}
-
-/*
  * Clear the pagetable entries of a given vmap_area
  */
 static void unmap_vmap_area(struct vmap_area *va)
@@ -1771,6 +1779,8 @@ void vm_unmap_ram(const void *mem, unsig
 	BUG_ON(addr > VMALLOC_END);
 	BUG_ON(!PAGE_ALIGNED(addr));
 
+	kasan_poison_vmalloc(mem, size);
+
 	if (likely(count <= VMAP_MAX_ALLOC)) {
 		debug_check_no_locks_freed(mem, size);
 		vb_free(mem, size);
@@ -1821,6 +1831,9 @@ void *vm_map_ram(struct page **pages, un
 		addr = va->va_start;
 		mem = (void *)addr;
 	}
+
+	kasan_unpoison_vmalloc(mem, size);
+
 	if (vmap_page_range(addr, addr + size, prot, pages) < 0) {
 		vm_unmap_ram(mem, count);
 		return NULL;
@@ -2075,6 +2088,7 @@ static struct vm_struct *__get_vm_area_n
 {
 	struct vmap_area *va;
 	struct vm_struct *area;
+	unsigned long requested_size = size;
 
 	BUG_ON(in_interrupt());
 	size = PAGE_ALIGN(size);
@@ -2098,23 +2112,9 @@ static struct vm_struct *__get_vm_area_n
 		return NULL;
 	}
 
-	setup_vmalloc_vm(area, va, flags, caller);
+	kasan_unpoison_vmalloc((void *)va->va_start, requested_size);
 
-	/*
-	 * For KASAN, if we are in vmalloc space, we need to cover the shadow
-	 * area with real memory. If we come here through VM_ALLOC, this is
-	 * done by a higher level function that has access to the true size,
-	 * which might not be a full page.
-	 *
-	 * We assume module space comes via VM_ALLOC path.
-	 */
-	if (is_vmalloc_addr(area->addr) && !(area->flags & VM_ALLOC)) {
-		if (kasan_populate_vmalloc(area->size, area)) {
-			unmap_vmap_area(va);
-			kfree(area);
-			return NULL;
-		}
-	}
+	setup_vmalloc_vm(area, va, flags, caller);
 
 	return area;
 }
@@ -2293,8 +2293,7 @@ static void __vunmap(const void *addr, i
 	debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
 	debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
 
-	if (area->flags & VM_KASAN)
-		kasan_poison_vmalloc(area->addr, area->size);
+	kasan_poison_vmalloc(area->addr, area->size);
 
 	vm_remove_mappings(area, deallocate_pages);
 
@@ -2539,7 +2538,7 @@ void *__vmalloc_node_range(unsigned long
 	if (!size || (size >> PAGE_SHIFT) > totalram_pages())
 		goto fail;
 
-	area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+	area = __get_vm_area_node(real_size, align, VM_ALLOC | VM_UNINITIALIZED |
 				vm_flags, start, end, node, gfp_mask, caller);
 	if (!area)
 		goto fail;
@@ -2548,11 +2547,6 @@ void *__vmalloc_node_range(unsigned long
 	if (!addr)
 		return NULL;
 
-	if (is_vmalloc_or_module_addr(area->addr)) {
-		if (kasan_populate_vmalloc(real_size, area))
-			return NULL;
-	}
-
 	/*
 	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
 	 * flag. It means that vm_struct is not fully initialized.
@@ -3437,7 +3431,8 @@ retry:
 	/* populate the shadow space outside of the lock */
 	for (area = 0; area < nr_vms; area++) {
 		/* assume success here */
-		kasan_populate_vmalloc(sizes[area], vms[area]);
+		kasan_populate_vmalloc(vas[area]->va_start, sizes[area]);
+		kasan_unpoison_vmalloc((void *)vms[area]->addr, sizes[area]);
 	}
 
 	kfree(vas);
_
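To summarize the resulting hook placement: alloc_vmap_area() now maps
shadow for every KVA region it hands out (vm_map_ram() included), while
callers only unpoison on map and poison on unmap. A hypothetical sketch
of the behavior this buys (names invented for illustration):

/* Hypothetical demo: after this patch, in-bounds accesses to a
 * vm_map_ram() region are clean, while touching past the mapping should
 * hit shadow left as KASAN_VMALLOC_INVALID and produce a KASAN report
 * instead of a hard crash. Assumes 'pages' holds one allocated page. */
static void vm_map_ram_kasan_demo(struct page **pages)
{
	volatile char *mem = vm_map_ram(pages, 1, -1, PAGE_KERNEL);

	if (!mem)
		return;

	mem[0] = 1;		/* valid: unpoisoned by vm_map_ram() */
	(void)mem[PAGE_SIZE];	/* expected: vmalloc-out-of-bounds report */

	vm_unmap_ram((void *)mem, 1);	/* region is poisoned again */
}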