From: Mark Rutland <mark.rutland@arm.com>
To: Daniel Axtens <dja@axtens.net>
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
	aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
	linux-kernel@vger.kernel.org, dvyukov@google.com,
	christophe.leroy@c-s.fr, linuxppc-dev@lists.ozlabs.org,
	gor@linux.ibm.com
Subject: Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory
Date: Mon, 14 Oct 2019 16:43:59 +0100
Message-ID: <20191014154359.GC20438@lakrids.cambridge.arm.com>
In-Reply-To: <20191001065834.8880-2-dja@axtens.net>

On Tue, Oct 01, 2019 at 04:58:30PM +1000, Daniel Axtens wrote:
> Hook into vmalloc and vmap, and dynamically allocate real shadow
> memory to back the mappings.
> 
> Most mappings in vmalloc space are small, requiring less than a full
> page of shadow space. Allocating a full shadow page per mapping would
> therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
> 
> Instead, share backing space across multiple mappings. Allocate a
> backing page when a mapping in vmalloc space uses a particular page of
> the shadow region. This page can be shared by other vmalloc mappings
> later on.
> 
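For context: with generic KASAN's scale of one shadow byte per eight bytes of
memory, a single 4 KiB shadow page covers 32 KiB of address space, which is
where the KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE alignment above comes from. A
rough sketch of what such on-demand population can look like follows; the
helper name, the KASAN_VMALLOC_INVALID poison value, and other details are
illustrative rather than the literal patch contents:

static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
				      void *unused)
{
	unsigned long page;
	pte_t pte;

	if (likely(!pte_none(*ptep)))
		return 0;

	/* Allocate and poison a candidate shadow page outside the lock. */
	page = __get_free_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* Real KASAN code would use an uninstrumented __memset() here. */
	memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

	spin_lock(&init_mm.page_table_lock);
	if (likely(pte_none(*ptep))) {
		set_pte_at(&init_mm, addr, ptep, pte);
		page = 0;
	}
	spin_unlock(&init_mm.page_table_lock);

	/* Another CPU installed a shadow page first; drop ours. */
	if (page)
		free_page(page);

	return 0;
}

The re-check of pte_none() under init_mm.page_table_lock is what lets two
vmalloc mappings race to populate the same shadow page without double-mapping
or leaking it.
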
> We hook into the vmap infrastructure to lazily clean up unused shadow
> memory.
> 
> To avoid the difficulty of swapping mappings around, this code
> expects that the part of the shadow region that covers the vmalloc
> space will not be covered by the early shadow page, but will be left
> unmapped. This will require changes in arch-specific code.
> 
> This allows KASAN to work with VMAP_STACK, and may be helpful for
> architectures that do not have a separate module space (e.g.
> powerpc64, which I am currently working on). It also allows relaxing
> the module alignment back to PAGE_SIZE.
> 
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
> Acked-by: Vasily Gorbik <gor@linux.ibm.com>
> Signed-off-by: Daniel Axtens <dja@axtens.net>
> [Mark: rework shadow allocation]
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>

Sorry to point this out so late, but your S-o-B should come last in the
chain per Documentation/process/submitting-patches.rst. Judging by the
rest of that, I think you want something like:

Co-developed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com> [shadow rework]
Signed-off-by: Daniel Axtens <dja@axtens.net>

... leaving yourself as the Author in the headers.

Sorry to have made that more complicated!

[...]

> +static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
> +					void *unused)
> +{
> +	unsigned long page;
> +
> +	page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT);
> +
> +	spin_lock(&init_mm.page_table_lock);
> +
> +	if (likely(!pte_none(*ptep))) {
> +		pte_clear(&init_mm, addr, ptep);
> +		free_page(page);
> +	}

There should be TLB maintenance between clearing the PTE and freeing the
page here.
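
Something like the below, sketched with a per-PTE flush_tlb_kernel_range()
purely for illustration -- a real version would presumably batch the
maintenance across the whole depopulated range rather than flushing
page-by-page:

	if (likely(!pte_none(*ptep))) {
		pte_clear(&init_mm, addr, ptep);
		/*
		 * Ensure no CPU can still reach the old shadow page
		 * via a stale TLB entry before the page is reused.
		 */
		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
		free_page(page);
	}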

Thanks,
Mark.
