From: andrey.konovalov@linux.dev
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Konovalov <andreyknvl@gmail.com>,
	Marco Elver <elver@google.com>,
	Alexander Potapenko <glider@google.com>,
	Dmitry Vyukov <dvyukov@google.com>,
	Andrey Ryabinin <ryabinin.a.a@gmail.com>,
	kasan-dev@googlegroups.com, linux-mm@kvack.org,
	Vincenzo Frascino <vincenzo.frascino@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	Peter Collingbourne <pcc@google.com>,
	Evgenii Stepanov <eugenis@google.com>,
	linux-kernel@vger.kernel.org,
	Andrey Konovalov <andreyknvl@google.com>
Subject: [PATCH mm v5 26/39] kasan, vmalloc: unpoison VM_ALLOC pages after mapping
Date: Thu, 30 Dec 2021 20:14:51 +0100	[thread overview]
Message-ID: <2aec888039eb8e7f9bd8c1f8bb289081f0136e60.1640891329.git.andreyknvl@google.com> (raw)
In-Reply-To: <cover.1640891329.git.andreyknvl@google.com>

From: Andrey Konovalov <andreyknvl@google.com>

Make KASAN unpoison vmalloc mappings after they have been mapped in,
where possible: for vmalloc() (identified via VM_ALLOC) and
vm_map_ram().

The reasons for this are:

- For vmalloc() and vm_map_ram(): pages no longer get unpoisoned when
  mapping them fails.
- For vmalloc(): HW_TAGS KASAN needs pages to be mapped to set tags via
  kasan_unpoison_vmalloc().

As part of these changes, the return value of __vmalloc_node_range()
is changed to area->addr. This is a non-functional change, as
__vmalloc_area_node() returns area->addr anyway.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>

---

Changes v3->v4:
- Don't forget to save tagged addr to vm_struct->addr for VM_ALLOC
  so that find_vm_area(addr)->addr == addr for vmalloc().
- Reword comments.
- Update patch description.

Changes v2->v3:
- Update patch description.
---
 mm/vmalloc.c | 30 ++++++++++++++++++++++--------
 1 file changed, 22 insertions(+), 8 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 598bb65263c7..bcf973a54737 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2210,14 +2210,15 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
 		mem = (void *)addr;
 	}
 
-	mem = kasan_unpoison_vmalloc(mem, size);
-
 	if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
 				pages, PAGE_SHIFT) < 0) {
 		vm_unmap_ram(mem, count);
 		return NULL;
 	}
 
+	/* Mark the pages as accessible, now that they are mapped. */
+	mem = kasan_unpoison_vmalloc(mem, size);
+
 	return mem;
 }
 EXPORT_SYMBOL(vm_map_ram);
@@ -2445,7 +2446,14 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
 
 	setup_vmalloc_vm(area, va, flags, caller);
 
-	area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
+	/*
+	 * Mark pages for non-VM_ALLOC mappings as accessible. Do it now as a
+	 * best-effort approach, as they can be mapped outside of vmalloc code.
+	 * For VM_ALLOC mappings, the pages are marked as accessible after
+	 * getting mapped in __vmalloc_node_range().
+	 */
+	if (!(flags & VM_ALLOC))
+		area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
 
 	return area;
 }
@@ -3054,7 +3062,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			const void *caller)
 {
 	struct vm_struct *area;
-	void *addr;
+	void *ret;
 	unsigned long real_size = size;
 	unsigned long real_align = align;
 	unsigned int shift = PAGE_SHIFT;
@@ -3116,10 +3124,13 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 		prot = arch_vmap_pgprot_tagged(prot);
 
 	/* Allocate physical pages and map them into vmalloc space. */
-	addr = __vmalloc_area_node(area, gfp_mask, prot, shift, node);
-	if (!addr)
+	ret = __vmalloc_area_node(area, gfp_mask, prot, shift, node);
+	if (!ret)
 		goto fail;
 
+	/* Mark the pages as accessible, now that they are mapped. */
+	area->addr = kasan_unpoison_vmalloc(area->addr, real_size);
+
 	/*
 	 * In this function, newly allocated vm_struct has VM_UNINITIALIZED
 	 * flag. It means that vm_struct is not fully initialized.
@@ -3131,7 +3142,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 	if (!(vm_flags & VM_DEFER_KMEMLEAK))
 		kmemleak_vmalloc(area, size, gfp_mask);
 
-	return addr;
+	return area->addr;
 
 fail:
 	if (shift > PAGE_SHIFT) {
@@ -3823,7 +3834,10 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
 	}
 	spin_unlock(&vmap_area_lock);
 
-	/* mark allocated areas as accessible */
+	/*
+	 * Mark allocated areas as accessible. Do it now as a best-effort
+	 * approach, as they can be mapped outside of vmalloc code.
+	 */
 	for (area = 0; area < nr_vms; area++)
 		vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
 							 vms[area]->size);
-- 
2.25.1

