linux-kernel.vger.kernel.org archive mirror
* [PATCH] x86/speculation/l1tf: Exempt zeroed PTEs from XOR conversion
@ 2018-08-16 20:46 Sean Christopherson
  2018-08-17 14:39 ` Andi Kleen
  2018-08-17 16:13 ` Linus Torvalds
  0 siblings, 2 replies; 6+ messages in thread
From: Sean Christopherson @ 2018-08-16 20:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, x86
  Cc: H. Peter Anvin, linux-kernel, Sean Christopherson, Andi Kleen,
	Josh Poimboeuf, Michal Hocko, Vlastimil Babka, Dave Hansen,
	Greg Kroah-Hartman

Entries zeroed by clear_page() do not go through the XOR logic that
inverts the address bits, i.e. PTE, PMD and PUD entries that have not
been individually written will have val=0 and so will trigger
__pte_needs_invert().  As a result, {pte,pmd,pud}_pfn() will return
the wrong PFN value, i.e. all ones (adjusted by the max PFN mask)
instead of zero.  A zeroed entry is ok because the page at physical
address 0 is reserved early in boot specifically to mitigate L1TF, so
explicitly exempt zeroed entries from the inversion when reading the
PFN.
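
To see the failure concretely, here is a minimal userspace sketch of
the inversion arithmetic.  It assumes a 46-bit physical address width
and models protnone_mask()/__pte_needs_invert() as they behave without
this patch; the constants and helper names are illustrative, not the
kernel's:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
/* Assumed 46-bit max physical address, purely for illustration. */
#define PHYS_MASK	((1ULL << 46) - 1)
#define PTE_PFN_MASK	(PHYS_MASK & ~((1ULL << PAGE_SHIFT) - 1))
#define _PAGE_PRESENT	(1ULL << 0)

/* Models __pte_needs_invert(): any not-present value is inverted. */
static int needs_invert(uint64_t val)
{
	return !(val & _PAGE_PRESENT);
}

/* Models protnone_mask(): all ones when the entry must be inverted. */
static uint64_t invert_mask(uint64_t val)
{
	return needs_invert(val) ? ~0ULL : 0;
}

int main(void)
{
	uint64_t pte = 0;	/* entry zeroed by clear_page() */
	uint64_t pfn = pte ^ invert_mask(pte);

	pfn = (pfn & PTE_PFN_MASK) >> PAGE_SHIFT;

	/* Prints the maximum PFN (all ones under the mask), not 0. */
	printf("pfn for zeroed entry: %#llx\n", (unsigned long long)pfn);
	return 0;
}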

This manifested as an unexpected mprotect(..., PROT_NONE) failure when
called on a VMA that has VM_PFNMAP and was mmap'd as something other
than PROT_NONE but never used.  mprotect() sends the PROT_NONE request
down prot_none_walk(), which walks the PTEs to check the PFNs.
prot_none_pte_entry() gets the bogus PFN from pte_pfn() and returns
-EACCES because it thinks mprotect() is trying to adjust a high MMIO
address.
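
The walker's rejection can be modelled the same way.  In the sketch
below the cutoff is an arbitrary stand-in for the architecture's
"too high to allow PROT_NONE" PFN limit, not the kernel's real value;
the bogus PFN is the all-ones value produced above for a zeroed entry
under the assumed 46-bit physical address width:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the per-arch high-MMIO PFN cutoff (assumed value). */
#define ASSUMED_PFN_LIMIT	(1ULL << 30)

/* Models the per-PTE decision made during the PROT_NONE walk. */
static int prot_none_allowed(uint64_t pfn)
{
	return pfn < ASSUMED_PFN_LIMIT;
}

int main(void)
{
	uint64_t bogus_pfn = (1ULL << 34) - 1;	/* from a zeroed entry */

	if (!prot_none_allowed(bogus_pfn))
		printf("mprotect(..., PROT_NONE) fails with -EACCES\n");
	return 0;
}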

Fixes: 6b28baca9b1f ("x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/include/asm/pgtable.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index e4ffa565a69f..f21a1df4ca89 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -195,21 +195,24 @@ static inline u64 protnone_mask(u64 val);
 static inline unsigned long pte_pfn(pte_t pte)
 {
 	phys_addr_t pfn = pte_val(pte);
-	pfn ^= protnone_mask(pfn);
+	if (pfn)
+		pfn ^= protnone_mask(pfn);
 	return (pfn & PTE_PFN_MASK) >> PAGE_SHIFT;
 }
 
 static inline unsigned long pmd_pfn(pmd_t pmd)
 {
 	phys_addr_t pfn = pmd_val(pmd);
-	pfn ^= protnone_mask(pfn);
+	if (pfn)
+		pfn ^= protnone_mask(pfn);
 	return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
 }
 
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	phys_addr_t pfn = pud_val(pud);
-	pfn ^= protnone_mask(pfn);
+	if (pfn)
+		pfn ^= protnone_mask(pfn);
 	return (pfn & pud_pfn_mask(pud)) >> PAGE_SHIFT;
 }
 
-- 
2.18.0



* Re: [PATCH] x86/speculation/l1tf: Exempt zeroed PTEs from XOR conversion
  2018-08-16 20:46 [PATCH] x86/speculation/l1tf: Exempt zeroed PTEs from XOR conversion Sean Christopherson
@ 2018-08-17 14:39 ` Andi Kleen
  2018-08-17 16:13 ` Linus Torvalds
  1 sibling, 0 replies; 6+ messages in thread
From: Andi Kleen @ 2018-08-17 14:39 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Thomas Gleixner, Ingo Molnar, x86, H. Peter Anvin, linux-kernel,
	Josh Poimboeuf, Michal Hocko, Vlastimil Babka, Dave Hansen,
	Greg Kroah-Hartman, torvalds

On Thu, Aug 16, 2018 at 01:46:38PM -0700, Sean Christopherson wrote:
> Entries zeroed by clear_page() do not go through the XOR logic that
> inverts the address bits, i.e. PTE, PMD and PUD entries that have not
> been individually written will have val=0 and so will trigger
> __pte_needs_invert().  As a result, {pte,pmd,pud}_pfn() will return
> the wrong PFN value, i.e. all ones (adjusted by the max PFN mask)
> instead of zero.  A zeroed entry is ok because the page at physical
> address 0 is reserved early in boot specifically to mitigate L1TF, so
> explicitly exempt zeroed entries from the inversion when reading the
> PFN.
> 
> This manifested as an unexpected mprotect(..., PROT_NONE) failure when
> called on a VMA that has VM_PFNMAP and was mmap'd as something other
> than PROT_NONE but never used.  mprotect() sends the PROT_NONE request
> down prot_none_walk(), which walks the PTEs to check the PFNs.
> prot_none_pte_entry() gets the bogus PFN from pte_pfn() and returns
> -EACCES because it thinks mprotect() is trying to adjust a high MMIO
> address.

Looks good to me. You're right that case was missed.

Reviewed-by: Andi Kleen <ak@linux.intel.com>

I think Thomas is still on vacation, copying Linus.

-Andi


* Re: [PATCH] x86/speculation/l1tf: Exempt zeroed PTEs from XOR conversion
  2018-08-16 20:46 [PATCH] x86/speculation/l1tf: Exempt zeroed PTEs from XOR conversion Sean Christopherson
  2018-08-17 14:39 ` Andi Kleen
@ 2018-08-17 16:13 ` Linus Torvalds
  2018-08-17 16:54   ` Andi Kleen
  2018-08-17 17:01   ` Sean Christopherson
  1 sibling, 2 replies; 6+ messages in thread
From: Linus Torvalds @ 2018-08-17 16:13 UTC (permalink / raw)
  To: sean.j.christopherson
  Cc: Thomas Gleixner, Ingo Molnar, the arch/x86 maintainers,
	Peter Anvin, Linux Kernel Mailing List, Andi Kleen,
	Josh Poimboeuf, Michal Hocko, Vlastimil Babka, Dave Hansen,
	Greg Kroah-Hartman

On Thu, Aug 16, 2018 at 1:47 PM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> Fixes: 6b28baca9b1f ("x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation")

This seems wrong.

That commit doesn't invert a cleared page table entry, because that
commit still required _PAGE_PROTNONE being set for a pte to be
inverted.

I'm assuming the real culprit is commit f22cc87f6c1f
("x86/speculation/l1tf: Invert all not present mappings") which made
it look at _just_ the present bit.

And yeah, that was wrong.

So I really think a much better patch would be the appended one-liner.

Note - it's whitespace-damaged by cut-and-paste, but it should be
obvious enough to apply by hand.

Can you test this one instead?

             Linus
---

 arch/x86/include/asm/pgtable-invert.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/pgtable-invert.h
b/arch/x86/include/asm/pgtable-invert.h
index 44b1203ece12..821438e91b77 100644
--- a/arch/x86/include/asm/pgtable-invert.h
+++ b/arch/x86/include/asm/pgtable-invert.h
@@ -6,7 +6,7 @@

 static inline bool __pte_needs_invert(u64 val)
 {
-       return !(val & _PAGE_PRESENT);
+       return val && !(val & _PAGE_PRESENT);
 }

 /* Get a mask to xor with the page table entry to get the correct pfn. */


* Re: [PATCH] x86/speculation/l1tf: Exempt zeroed PTEs from XOR conversion
  2018-08-17 16:13 ` Linus Torvalds
@ 2018-08-17 16:54   ` Andi Kleen
  2018-08-17 17:01   ` Sean Christopherson
  1 sibling, 0 replies; 6+ messages in thread
From: Andi Kleen @ 2018-08-17 16:54 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: sean.j.christopherson, Thomas Gleixner, Ingo Molnar,
	the arch/x86 maintainers, Peter Anvin, Linux Kernel Mailing List,
	Josh Poimboeuf, Michal Hocko, Vlastimil Babka, Dave Hansen,
	Greg Kroah-Hartman

> Note - it's whitespace-damaged by cut-and-paste, but it should be
> obvious enough to apply by hand.
> 
> Can you test this one instead?

Right, that seems like a better fix.

-Andi


* Re: [PATCH] x86/speculation/l1tf: Exempt zeroed PTEs from XOR conversion
  2018-08-17 16:13 ` Linus Torvalds
  2018-08-17 16:54   ` Andi Kleen
@ 2018-08-17 17:01   ` Sean Christopherson
  2018-08-17 17:05     ` Linus Torvalds
  1 sibling, 1 reply; 6+ messages in thread
From: Sean Christopherson @ 2018-08-17 17:01 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Thomas Gleixner, Ingo Molnar, the arch/x86 maintainers,
	Peter Anvin, Linux Kernel Mailing List, Andi Kleen,
	Josh Poimboeuf, Michal Hocko, Vlastimil Babka, Dave Hansen,
	Greg Kroah-Hartman

On Fri, Aug 17, 2018 at 09:13:51AM -0700, Linus Torvalds wrote:
> On Thu, Aug 16, 2018 at 1:47 PM Sean Christopherson
> <sean.j.christopherson@intel.com> wrote:
> >
> > Fixes: 6b28baca9b1f ("x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation")
> 
> This seems wrong.
> 
> That commit doesn't invert a cleared page table entry, because that
> commit still required _PAGE_PROTNONE being set for a pte to be
> inverted.
> 
> I'm assuming the real culprit is commit f22cc87f6c1f
> ("x86/speculation/l1tf: Invert all not present mappings") which made
> it look at _just_ the present bit.
> 
> And yeah, that was wrong.
> 
> So I really think a much better patch would be the appended one-liner.
> 
> Note - it's whitespace-damaged by cut-and-paste, but it should be
> obvious enough to apply by hand.
> 
> Can you test this one instead?

Checking for a non-zero val in __pte_needs_invert() also resolves the
issue.  I shied away from that change because prot_none_walk() doesn't
pass the full PTE to __pte_needs_invert(), it only passes the pgprot_t
bits.  This works because PAGE_NONE sets the global and accessed bits,
but it made me nervous nonetheless.
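
For reference, the property being relied on can be checked in
isolation.  The sketch below uses the usual x86 bit positions
(accessed = bit 5, global/protnone = bit 8) as assumptions and models
the proposed "val &&" check; it is not the kernel's code:

#include <assert.h>
#include <stdint.h>

#define _PAGE_PRESENT	(1ULL << 0)
#define _PAGE_ACCESSED	(1ULL << 5)
#define _PAGE_PROTNONE	(1ULL << 8)	/* shares the global bit */

/* Models the proposed check: zero values no longer request inversion. */
static int needs_invert(uint64_t val)
{
	return val && !(val & _PAGE_PRESENT);
}

int main(void)
{
	/* A zeroed (never written) entry is now exempt... */
	assert(!needs_invert(0));

	/*
	 * ...while the PAGE_NONE pgprot bits that prot_none_walk()
	 * passes are non-zero and not present, so they still invert.
	 */
	assert(needs_invert(_PAGE_PROTNONE | _PAGE_ACCESSED));

	return 0;
}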

>              Linus
> ---
> 
>  arch/x86/include/asm/pgtable-invert.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/pgtable-invert.h
> b/arch/x86/include/asm/pgtable-invert.h
> index 44b1203ece12..821438e91b77 100644
> --- a/arch/x86/include/asm/pgtable-invert.h
> +++ b/arch/x86/include/asm/pgtable-invert.h
> @@ -6,7 +6,7 @@
> 
>  static inline bool __pte_needs_invert(u64 val)
>  {
> -       return !(val & _PAGE_PRESENT);
> +       return val && !(val & _PAGE_PRESENT);
>  }
> 
>  /* Get a mask to xor with the page table entry to get the correct pfn. */


* Re: [PATCH] x86/speculation/l1tf: Exempt zeroed PTEs from XOR conversion
  2018-08-17 17:01   ` Sean Christopherson
@ 2018-08-17 17:05     ` Linus Torvalds
  0 siblings, 0 replies; 6+ messages in thread
From: Linus Torvalds @ 2018-08-17 17:05 UTC (permalink / raw)
  To: sean.j.christopherson
  Cc: Thomas Gleixner, Ingo Molnar, the arch/x86 maintainers,
	Peter Anvin, Linux Kernel Mailing List, Andi Kleen,
	Josh Poimboeuf, Michal Hocko, Vlastimil Babka, Dave Hansen,
	Greg Kroah-Hartman

On Fri, Aug 17, 2018 at 10:01 AM Sean Christopherson
<sean.j.christopherson@intel.com> wrote:
>
> Checking for a non-zero val in __pte_needs_invert() also resolves the
> issue.  I shied away from that change because prot_none_walk() doesn't
> pass the full PTE to __pte_needs_invert(), it only passes the pgprot_t
> bits.  This works because PAGE_NONE sets the global and accessed bits,
> but it made me nervous nonetheless.

Good point, and I do think that might merit a comment in the code.
Will add, and credit you. But I'd still prefer to just fix up
__pte_needs_invert().

Thanks,

              Linus


Thread overview: 6+ messages
2018-08-16 20:46 [PATCH] x86/speculation/l1tf: Exempt zeroed PTEs from XOR conversion Sean Christopherson
2018-08-17 14:39 ` Andi Kleen
2018-08-17 16:13 ` Linus Torvalds
2018-08-17 16:54   ` Andi Kleen
2018-08-17 17:01   ` Sean Christopherson
2018-08-17 17:05     ` Linus Torvalds
