From: Uladzislau Rezki <urezki@gmail.com>
To: Daniel Axtens <dja@axtens.net>
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
	aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
	linux-kernel@vger.kernel.org, mark.rutland@arm.com,
	dvyukov@google.com, christophe.leroy@c-s.fr,
	linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com
Subject: Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory
Date: Mon, 7 Oct 2019 10:02:09 +0200	[thread overview]
Message-ID: <20191007080209.GA22997@pc636> (raw)
In-Reply-To: <20191001065834.8880-2-dja@axtens.net>

> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index a3c70e275f4e..9fb7a16f42ae 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -690,8 +690,19 @@ merge_or_add_vmap_area(struct vmap_area *va,
>  	struct list_head *next;
>  	struct rb_node **link;
>  	struct rb_node *parent;
> +	unsigned long orig_start, orig_end;
>  	bool merged = false;
>  
> +	/*
> +	 * To manage KASAN vmalloc memory usage, we use this opportunity to
> +	 * clean up the shadow memory allocated to back this allocation.
> +	 * Because a vmalloc shadow page covers several pages, the start or end
> +	 * of an allocation might not align with a shadow page. Use the merging
> +	 * opportunities to try to extend the region we can release.
> +	 */
> +	orig_start = va->va_start;
> +	orig_end = va->va_end;
> +
>  	/*
>  	 * Find a place in the tree where VA potentially will be
>  	 * inserted, unless it is merged with its sibling/siblings.
> @@ -741,6 +752,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
>  		if (sibling->va_end == va->va_start) {
>  			sibling->va_end = va->va_end;
>  
> +			kasan_release_vmalloc(orig_start, orig_end,
> +					      sibling->va_start,
> +					      sibling->va_end);
> +
>  			/* Check and update the tree if needed. */
>  			augment_tree_propagate_from(sibling);
>  
> @@ -754,6 +769,8 @@ merge_or_add_vmap_area(struct vmap_area *va,
>  	}
>  
>  insert:
> +	kasan_release_vmalloc(orig_start, orig_end, va->va_start, va->va_end);
> +
>  	if (!merged) {
>  		link_va(va, root, parent, link, head);
>  		augment_tree_propagate_from(va);
Hello, Daniel.

Looking at it once more, I think the above part of the code is a bit
wrong and should be separated from the merge_or_add_vmap_area() logic.
The reason is to keep that function simple and doing only what it is
supposed to do: merging or adding.

Also, kasan_release_vmalloc() gets called twice there, which looks
like duplication. Apart from that, merge_or_add_vmap_area() can be
called via a recovery path when the vmap area(s) are not even set up.
See the percpu allocator.

I guess your part could be moved directly to __purge_vmap_area_lazy(),
where all vmaps are lazily freed. To do so, we also need to modify
merge_or_add_vmap_area() to return the merged area:

<snip>
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e92ff5f7dd8b..fecde4312d68 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -683,7 +683,7 @@ insert_vmap_area_augment(struct vmap_area *va,
  * free area is inserted. If VA has been merged, it is
  * freed.
  */
-static __always_inline void
+static __always_inline struct vmap_area *
 merge_or_add_vmap_area(struct vmap_area *va,
        struct rb_root *root, struct list_head *head)
 {
@@ -750,7 +750,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
 
                        /* Free vmap_area object. */
                        kmem_cache_free(vmap_area_cachep, va);
-                       return;
+
+                       /* Point to the new merged area. */
+                       va = sibling;
+                       merged = true;
                }
        }
 
@@ -759,6 +762,8 @@ merge_or_add_vmap_area(struct vmap_area *va,
                link_va(va, root, parent, link, head);
                augment_tree_propagate_from(va);
        }
+
+       return va;
 }
 
 static __always_inline bool
@@ -1172,7 +1177,7 @@ static void __free_vmap_area(struct vmap_area *va)
        /*
         * Merge VA with its neighbors, otherwise just add it.
         */
-       merge_or_add_vmap_area(va,
+       (void) merge_or_add_vmap_area(va,
                &free_vmap_area_root, &free_vmap_area_list);
 }
 
@@ -1279,15 +1284,20 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
        spin_lock(&vmap_area_lock);
        llist_for_each_entry_safe(va, n_va, valist, purge_list) {
                unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
+               unsigned long orig_start = va->va_start;
+               unsigned long orig_end = va->va_end;
 
                /*
                 * Finally insert or merge lazily-freed area. It is
                 * detached and there is no need to "unlink" it from
                 * anything.
                 */
-               merge_or_add_vmap_area(va,
+               va = merge_or_add_vmap_area(va,
                        &free_vmap_area_root, &free_vmap_area_list);
 
+               kasan_release_vmalloc(orig_start,
+                       orig_end, va->va_start, va->va_end);
+
                atomic_long_sub(nr, &vmap_lazy_nr);
 
                if (atomic_long_read(&vmap_lazy_nr) < resched_threshold)
<snip>
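
Just to illustrate the granularity point from the comment in your
patch: with generic KASAN one shadow byte covers 8 bytes, so one
4 KiB shadow page covers a 32 KiB span of vmalloc address space, and
a shadow page can only be returned once the whole span it covers lies
inside a free region. That is why both the originally freed range and
the merged region are passed to kasan_release_vmalloc(). A rough
user-space sketch of that arithmetic (not kernel code, the addresses
are made up):

<snip>
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define SHADOW_SCALE	8UL	/* generic KASAN: 1 shadow byte per 8 bytes */
#define SHADOW_SPAN	(PAGE_SIZE * SHADOW_SCALE)	/* 32 KiB per shadow page */

int main(void)
{
	/* Hypothetical merged free region after merge_or_add_vmap_area(). */
	unsigned long free_start = 0x100000UL;
	unsigned long free_end   = 0x11c000UL;

	/*
	 * Round the free region inward to SHADOW_SPAN boundaries:
	 * only shadow pages whose entire 32 KiB span lies inside
	 * the region may be released.
	 */
	unsigned long rel_start = (free_start + SHADOW_SPAN - 1) & ~(SHADOW_SPAN - 1);
	unsigned long rel_end   = free_end & ~(SHADOW_SPAN - 1);

	if (rel_start < rel_end)
		printf("shadow releasable for [%#lx, %#lx): %lu shadow page(s)\n",
		       rel_start, rel_end, (rel_end - rel_start) / SHADOW_SPAN);
	else
		printf("no whole shadow page is releasable\n");

	return 0;
}
<snip>

For the region above this reports three releasable shadow pages; the
trailing 0x4000 bytes stay mapped because their shadow page also
covers memory outside the free region.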

--
Vlad Rezki
