From: Mark Rutland <mark.rutland@arm.com>
To: Daniel Axtens <dja@axtens.net>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>,
	kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
	glider@google.com, luto@kernel.org, linux-kernel@vger.kernel.org,
	dvyukov@google.com, christophe.leroy@c-s.fr,
	linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com
Subject: Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory
Date: Mon, 14 Oct 2019 16:27:17 +0100
Message-ID: <20191014152717.GA20438@lakrids.cambridge.arm.com>
In-Reply-To: <87ftjvtoo7.fsf@dja-thinkpad.axtens.net>

On Tue, Oct 15, 2019 at 12:57:44AM +1100, Daniel Axtens wrote:
> Hi Andrey,
> 
> 
> >> +	/*
> >> +	 * Ensure poisoning is visible before the shadow is made visible
> >> +	 * to other CPUs.
> >> +	 */
> >> +	smp_wmb();
> >
> > I don't quite understand what this barrier does and why it's needed.
> > And if it's really needed, there should be a pairing barrier
> > on the other side, which I don't see.
> 
> Mark might be better able to answer this, but my understanding is that
> we want to make sure that we never have a situation where the writes are
> reordered so that the PTE is installed before all the poisoning is written
> out. I think it follows the logic in __pte_alloc() in mm/memory.c:
> 
> 	/*
> 	 * Ensure all pte setup (eg. pte page lock and page clearing) are
> 	 * visible before the pte is made visible to other CPUs by being
> 	 * put into page tables.

Yup. We need to ensure that if a thread sees a populated shadow PTE, the
corresponding shadow memory has been zeroed. Thus, we need to ensure
that the zeroing is observed by other CPUs before we update the PTE.

We're relying on the absence of a TLB entry to prevent another CPU from
loading the corresponding shadow memory until its PTE has been
populated (after the zeroing is visible). Consequently there is no
barrier on the other side, just a control dependency (which would be
insufficient on its own).
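
Purely as an illustration of the ordering (not the actual code from the
patch; names like shadow_addr and ptep are placeholders, and allocation
failure handling is elided):

	/* Writer: populating one page of vmalloc shadow (simplified). */
	unsigned long page = __get_free_page(GFP_KERNEL);

	memset((void *)page, 0, PAGE_SIZE);	/* zero/poison the fresh shadow */
	smp_wmb();				/* order the memset before the PTE install */
	set_pte_at(&init_mm, shadow_addr, ptep,
		   pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL));

	/*
	 * Reader: any other CPU touching this shadow page. There is no
	 * explicit barrier; it can't load from the page until its page
	 * table walk finds the new PTE, and by that point the memset
	 * must already be visible. That's only a control dependency,
	 * which is why the spurious-fault issue below matters.
	 */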

There is a potential problem here, as Will Deacon wrote up at:

  https://lore.kernel.org/linux-arm-kernel/20190827131818.14724-1-will@kernel.org/

... in the section starting:

| *** Other architecture maintainers -- start here! ***

... whereby the CPU can spuriously fault on an access after observing a
valid PTE.

For arm64 we handle the spurious fault, and it looks like x86 would need
something like its vmalloc_fault() handling applied to the shadow region
to cater for this.
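
For reference, the spurious-fault handling boils down to a check along
these lines in the kernel fault handler: walk the kernel page tables
and, if a valid translation already exists, treat the fault as spurious
and retry the access. This is only a simplified sketch (range checks and
huge-mapping levels elided), not the actual arm64 or prospective x86
code:

	static bool shadow_fault_is_spurious(unsigned long addr)
	{
		pgd_t *pgd = pgd_offset_k(addr);
		p4d_t *p4d;
		pud_t *pud;
		pmd_t *pmd;

		if (pgd_none(*pgd))
			return false;
		p4d = p4d_offset(pgd, addr);
		if (p4d_none(*p4d))
			return false;
		pud = pud_offset(p4d, addr);
		if (pud_none(*pud))
			return false;
		pmd = pmd_offset(pud, addr);
		if (pmd_none(*pmd))
			return false;

		/* A present PTE in init_mm means the fault was spurious. */
		return pte_present(*pte_offset_kernel(pmd, addr));
	}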

Thanks,
Mark.

