From: Andy Lutomirski <luto@kernel.org>
To: Mark Rutland <mark.rutland@arm.com>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>,
	Daniel Axtens <dja@axtens.net>,
	 kasan-dev <kasan-dev@googlegroups.com>,
	Linux-MM <linux-mm@kvack.org>, X86 ML <x86@kernel.org>,
	 Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Alexander Potapenko <glider@google.com>,
	 Andrew Lutomirski <luto@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	 Dmitry Vyukov <dvyukov@google.com>,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	 Vasily Gorbik <gor@linux.ibm.com>
Subject: Re: [PATCH v4 1/3] kasan: support backing vmalloc space with real shadow memory
Date: Fri, 16 Aug 2019 10:41:00 -0700
Message-ID: <CALCETrUn4FNjvRoJW77DNi5vdwO+EURUC_46tysjPQD0MM3THQ@mail.gmail.com>
In-Reply-To: <20190816170813.GA7417@lakrids.cambridge.arm.com>

On Fri, Aug 16, 2019 at 10:08 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> Hi Christophe,
>
> On Fri, Aug 16, 2019 at 09:47:00AM +0200, Christophe Leroy wrote:
> > On 15/08/2019 at 02:16, Daniel Axtens wrote:
> > > Hook into vmalloc and vmap, and dynamically allocate real shadow
> > > memory to back the mappings.
> > >
> > > Most mappings in vmalloc space are small, requiring less than a full
> > > page of shadow space. Allocating a full shadow page per mapping would
> > > therefore be wasteful. Furthermore, to ensure that different mappings
> > > use different shadow pages, mappings would have to be aligned to
> > > KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
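
For concreteness, the address arithmetic behind this: each shadow byte covers
KASAN_SHADOW_SCALE_SIZE (8) bytes, so one 4 KiB shadow page covers 32 KiB of
vmalloc space, and giving every mapping its own shadow page would force 32 KiB
alignment. The helper below just mirrors kasan_mem_to_shadow() from
include/linux/kasan.h, renamed to make clear it is only an illustration:

#define KASAN_SHADOW_SCALE_SHIFT        3
#define KASAN_SHADOW_SCALE_SIZE         (1UL << KASAN_SHADOW_SCALE_SHIFT)

/* Same mapping as kasan_mem_to_shadow(); KASAN_SHADOW_OFFSET is arch-defined. */
static inline void *mem_to_shadow_sketch(const void *addr)
{
        return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                + KASAN_SHADOW_OFFSET;
}
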
> > >
> > > Instead, share backing space across multiple mappings. Allocate
> > > a backing page the first time a mapping in vmalloc space uses a
> > > particular page of the shadow region. Keep this page around
> > > regardless of whether the mapping is later freed - in the mean time
> > > the page could have become shared by another vmalloc mapping.
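
Very roughly, the scheme reads like the following sketch (not the patch
itself; shadow_page_is_mapped() and map_zeroed_shadow_page() are hypothetical
helpers standing in for the real page-table manipulation):

static int populate_vmalloc_shadow_sketch(unsigned long addr, unsigned long size)
{
        unsigned long start = (unsigned long)kasan_mem_to_shadow((void *)addr);
        unsigned long end = (unsigned long)kasan_mem_to_shadow((void *)addr + size);
        unsigned long s;

        start = round_down(start, PAGE_SIZE);
        end = round_up(end, PAGE_SIZE);

        /* Allocate backing only for shadow pages that nobody has mapped yet... */
        for (s = start; s < end; s += PAGE_SIZE) {
                if (shadow_page_is_mapped(s))           /* hypothetical */
                        continue;
                if (map_zeroed_shadow_page(s))          /* hypothetical */
                        return -ENOMEM;
        }

        /* ...and deliberately never tear them down when a mapping is freed. */
        return 0;
}
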
> > >
> > > This can in theory lead to unbounded memory growth, but the vmalloc
> > > allocator is pretty good at reusing addresses, so the practical memory
> > > usage grows at first but then stays fairly stable.
> >
> > I guess people with gigabytes of memory don't mind, but I'm concerned
> > about tiny targets with very little memory. I have boards with as little
> > as 32 MB of RAM. The shadow region for the linear space already takes one
> > eighth of the RAM. I'd rather avoid keeping unused shadow pages busy.
>
> I think this depends on how much shadow would be in constant use vs what
> would get left unused. If the amount in constant use is sufficiently
> large (or the residue is sufficiently small), then it may not be
> worthwhile to support KASAN_VMALLOC on such small systems.
>
> > Each page of shadow memory represents 8 pages of real memory. Could we use
> > page_ref to count how many pieces of a shadow page are in use, so that we
> > can free it when the refcount drops to 0?
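
Something along these lines, presumably (only a sketch of the suggested
scheme; unmap_shadow_pte() is a hypothetical helper, and locking and TLB
flushing are ignored):

/* Take a reference for every vmalloc mapping that touches this shadow page. */
static void shadow_page_get(struct page *shadow_page)
{
        page_ref_inc(shadow_page);
}

/* Drop the reference on vfree(); free the shadow page once it has no users. */
static void shadow_page_put(struct page *shadow_page, unsigned long shadow_addr)
{
        if (page_ref_dec_and_test(shadow_page)) {
                unmap_shadow_pte(shadow_addr);  /* hypothetical */
                __free_page(shadow_page);
        }
}
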
> >
> > > This requires architecture support to actually use: arches must stop
> > > mapping the read-only zero page over the portion of the shadow region
> > > that covers the vmalloc space and instead leave it unmapped.
> >
> > Why 'must'? Couldn't we switch back and forth between the zero page and a
> > real page on demand?
> >
> > If the zero page is not mapped for unused vmalloc space, bad memory accesses
> > will Oops on the shadow memory access instead of Oopsing on the real bad
> > access, making it more difficult to locate and identify the issue.
>
> I agree this isn't nice, though FWIW this can already happen today for
> bad addresses that fall outside of the usual kernel address space. We
> could make the !KASAN_INLINE checks resilient to this by using
> probe_kernel_read() to check the shadow, and treating unmapped shadow as
> poison.
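
For illustration, treating unmapped shadow as poison in the outline check
could look roughly like this (a sketch only; the real check also has to
handle partial poisoning of the last granule):

/* Read the shadow byte safely; report unmapped shadow the same as poison. */
static bool shadow_is_poisoned_sketch(const void *addr)
{
        s8 shadow;

        if (probe_kernel_read(&shadow, kasan_mem_to_shadow(addr), 1))
                return true;    /* shadow not mapped: treat as poisoned */

        return shadow != 0;     /* simplified: ignores partial granules */
}
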

Could we instead modify the page fault handlers to detect this case
and print a useful message?
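
For instance, something along these lines (hypothetical hook and message;
VMALLOC_START/VMALLOC_END are the usual arch-provided bounds):

/* Called from the arch fault path on an otherwise-unhandled kernel fault. */
static void kasan_fault_hint_sketch(unsigned long fault_addr)
{
        unsigned long shadow_start =
                (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START);
        unsigned long shadow_end =
                (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_END);

        if (fault_addr >= shadow_start && fault_addr < shadow_end)
                pr_alert("KASAN: fault on vmalloc shadow; the original bad access was probably to a vmalloc address\n");
}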


Thread overview: 12+ messages
2019-08-15  0:16 [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory Daniel Axtens
2019-08-15  0:16 ` [PATCH v4 1/3] " Daniel Axtens
2019-08-16  7:47   ` Christophe Leroy
2019-08-16 17:08     ` Mark Rutland
2019-08-16 17:41       ` Andy Lutomirski [this message]
2019-08-19 10:15         ` Mark Rutland
2019-08-19  3:58       ` Daniel Axtens
2019-08-19 22:20         ` Andy Lutomirski
2019-08-15  0:16 ` [PATCH v4 2/3] fork: support VMAP_STACK with KASAN_VMALLOC Daniel Axtens
2019-08-15  0:16 ` [PATCH v4 3/3] x86/kasan: support KASAN_VMALLOC Daniel Axtens
2019-08-16  8:04   ` Christophe Leroy
2019-08-15 11:28 ` [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory Mark Rutland
