From: Andi Kleen <ak@linux.intel.com>
To: speck@linutronix.de
Cc: Andi Kleen <ak@linux.intel.com>
Subject: [MODERATED] [PATCH v5 8/8] L1TFv4 1
Date: Wed, 23 May 2018 14:51:25 -0700
Message-ID: <7effd501b61578044d081befab84f74019204bac.1527111786.git.ak@linux.intel.com>
In-Reply-To: <cover.1527111786.git.ak@linux.intel.com>

For L1TF, PROT_NONE mappings are protected by inverting the PFN in the
page table entry. This sets the high bits in the CPU's address space,
thus making sure that an unmapped entry does not point to valid
cached memory.
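
To make the mechanism concrete, here is a minimal userspace sketch of
the inversion idea. _PAGE_PRESENT and the PFN mask below are illustrative
x86-64 values and mk_protnone() is a made-up helper for this sketch, not
one of the helpers added earlier in this series:

	#include <stdint.h>
	#include <stdio.h>

	#define _PAGE_PRESENT	0x001ULL
	/* Illustrative x86-64 PFN field of a 64-bit PTE. */
	#define PTE_PFN_MASK	0x000ffffffffff000ULL

	/*
	 * Illustration only: turn a present PTE value into a PROT_NONE one
	 * whose PFN bits are inverted, so a speculative L1 lookup through
	 * the entry resolves far above any cached physical memory.
	 */
	static uint64_t mk_protnone(uint64_t pte)
	{
		pte &= ~_PAGE_PRESENT;	/* mark the entry not present */
		pte ^= PTE_PFN_MASK;	/* invert the PFN bits */
		return pte;
	}

	int main(void)
	{
		uint64_t pte = 0x12345000ULL | _PAGE_PRESENT;

		printf("prot_none pte: 0x%016llx\n",
		       (unsigned long long)mk_protnone(pte));
		return 0;
	}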

Some server system BIOSes put the MMIO mappings high up in the physical
address space. If such a high mapping were exposed to an unprivileged
user, they could attack low memory by setting such a mapping to
PROT_NONE. This could happen through a special device driver
which is not access protected. Normal /dev/mem is of course
access protected.

To avoid this we forbid PROT_NONE mappings, and mprotect to PROT_NONE,
for high MMIO mappings.

Valid page mappings are allowed because the system is then unsafe
anyway.

We don't expect users to commonly use PROT_NONE on MMIO. But
to minimize any impact, we only do this check if the mapping actually
refers to a high MMIO address (defined as the MAX_PA-1 bit being set),
and we also skip the check for root.
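
From userspace the new restriction surfaces as an EACCES error. A sketch
of what an unprivileged caller would see, using a hypothetical
/dev/high_mmio_example device that maps such a high MMIO range:

	#include <errno.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		/* Hypothetical driver node mapping MMIO above the MAX_PA-1 limit. */
		int fd = open("/dev/high_mmio_example", O_RDWR);
		void *p;

		if (fd < 0)
			return 1;

		p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			return 1;

		/* Without CAP_SYS_ADMIN this now fails with EACCES instead of
		 * installing an invertible PROT_NONE mapping. */
		if (mprotect(p, 4096, PROT_NONE) < 0)
			fprintf(stderr, "mprotect: %s\n", strerror(errno));

		munmap(p, 4096);
		close(fd);
		return 0;
	}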

For mmaps this is straightforward and can be handled in vm_insert_pfn()
and in remap_pfn_range().

For mprotect it's a bit trickier. At the point where we're looking at the
actual PTEs a lot of state has already been changed and would be difficult
to undo on an error. Since this is an uncommon case we use a separate
early page table walk pass for MMIO PROT_NONE mappings that
checks for this condition early. For non-MMIO and non-PROT_NONE mappings
there are no changes.

v2: Use new helpers added earlier
v3: Fix inverted check added in v3
v4: Use l1tf_pfn_limit (Thomas)
Add comment for locked down kernels
Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/include/asm/pgtable.h |  4 ++++
 arch/x86/mm/mmap.c             | 21 ++++++++++++++++++
 include/asm-generic/pgtable.h  | 12 +++++++++++
 mm/memory.c                    | 37 ++++++++++++++++++++++---------
 mm/mprotect.c                  | 49 ++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 113 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 10dcd9e597c6..0aa96d5d392d 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1338,6 +1338,10 @@ static inline bool pud_access_permitted(pud_t pud, bool write)
 	return __pte_access_permitted(pud_val(pud), write);
 }
 
+#define __HAVE_ARCH_PFN_MODIFY_ALLOWED 1
+extern bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot);
+static inline bool arch_has_pfn_modify_check(void) { return true; }
+
 #include <asm-generic/pgtable.h>
 #endif	/* __ASSEMBLY__ */
 
diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index 48c591251600..3d7802d474c3 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -240,3 +240,24 @@ int valid_mmap_phys_addr_range(unsigned long pfn, size_t count)
 
 	return phys_addr_valid(addr + count - 1);
 }
+
+/*
+ * Only allow root to set high MMIO mappings to PROT_NONE.
+ * This prevents an unprivileged user from setting them to PROT_NONE and
+ * inverting them, thereby pointing to valid memory for L1TF speculation.
+ *
+ * Note: locked-down kernels may want to disable the root override.
+ */
+bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
+{
+	if (!boot_cpu_has(X86_BUG_L1TF))
+		return true;
+	if (!__pte_needs_invert(pgprot_val(prot)))
+		return true;
+	/* If it's real memory always allow */
+	if (pfn_valid(pfn))
+		return true;
+	if (pfn > l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN))
+		return false;
+	return true;
+}
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index f59639afaa39..0ecc1197084b 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1097,4 +1097,16 @@ static inline void init_espfix_bsp(void) { }
 #endif
 #endif
 
+#ifndef __HAVE_ARCH_PFN_MODIFY_ALLOWED
+static inline bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
+{
+	return true;
+}
+
+static inline bool arch_has_pfn_modify_check(void)
+{
+	return false;
+}
+#endif
+
 #endif /* _ASM_GENERIC_PGTABLE_H */
diff --git a/mm/memory.c b/mm/memory.c
index 01f5464e0fd2..fe497cecd2ab 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1891,6 +1891,9 @@ int vm_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
 	if (addr < vma->vm_start || addr >= vma->vm_end)
 		return -EFAULT;
 
+	if (!pfn_modify_allowed(pfn, pgprot))
+		return -EACCES;
+
 	track_pfn_insert(vma, &pgprot, __pfn_to_pfn_t(pfn, PFN_DEV));
 
 	ret = insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV), pgprot,
@@ -1926,6 +1929,9 @@ static int __vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 
 	track_pfn_insert(vma, &pgprot, pfn);
 
+	if (!pfn_modify_allowed(pfn_t_to_pfn(pfn), pgprot))
+		return -EACCES;
+
 	/*
 	 * If we don't have pte special, then we have to use the pfn_valid()
 	 * based VM_MIXEDMAP scheme (see vm_normal_page), and thus we *must*
@@ -1973,6 +1979,7 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
 {
 	pte_t *pte;
 	spinlock_t *ptl;
+	int err = 0;
 
 	pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
 	if (!pte)
@@ -1980,12 +1987,16 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
 	arch_enter_lazy_mmu_mode();
 	do {
 		BUG_ON(!pte_none(*pte));
+		if (!pfn_modify_allowed(pfn, prot)) {
+			err = -EACCES;
+			break;
+		}
 		set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
 		pfn++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(pte - 1, ptl);
-	return 0;
+	return err;
 }
 
 static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
@@ -1994,6 +2005,7 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
 {
 	pmd_t *pmd;
 	unsigned long next;
+	int err;
 
 	pfn -= addr >> PAGE_SHIFT;
 	pmd = pmd_alloc(mm, pud, addr);
@@ -2002,9 +2014,10 @@ static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
 	VM_BUG_ON(pmd_trans_huge(*pmd));
 	do {
 		next = pmd_addr_end(addr, end);
-		if (remap_pte_range(mm, pmd, addr, next,
-				pfn + (addr >> PAGE_SHIFT), prot))
-			return -ENOMEM;
+		err = remap_pte_range(mm, pmd, addr, next,
+				pfn + (addr >> PAGE_SHIFT), prot);
+		if (err)
+			return err;
 	} while (pmd++, addr = next, addr != end);
 	return 0;
 }
@@ -2015,6 +2028,7 @@ static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
 {
 	pud_t *pud;
 	unsigned long next;
+	int err;
 
 	pfn -= addr >> PAGE_SHIFT;
 	pud = pud_alloc(mm, p4d, addr);
@@ -2022,9 +2036,10 @@ static inline int remap_pud_range(struct mm_struct *mm, p4d_t *p4d,
 		return -ENOMEM;
 	do {
 		next = pud_addr_end(addr, end);
-		if (remap_pmd_range(mm, pud, addr, next,
-				pfn + (addr >> PAGE_SHIFT), prot))
-			return -ENOMEM;
+		err = remap_pmd_range(mm, pud, addr, next,
+				pfn + (addr >> PAGE_SHIFT), prot);
+		if (err)
+			return err;
 	} while (pud++, addr = next, addr != end);
 	return 0;
 }
@@ -2035,6 +2050,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
 {
 	p4d_t *p4d;
 	unsigned long next;
+	int err;
 
 	pfn -= addr >> PAGE_SHIFT;
 	p4d = p4d_alloc(mm, pgd, addr);
@@ -2042,9 +2058,10 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
 		return -ENOMEM;
 	do {
 		next = p4d_addr_end(addr, end);
-		if (remap_pud_range(mm, p4d, addr, next,
-				pfn + (addr >> PAGE_SHIFT), prot))
-			return -ENOMEM;
+		err = remap_pud_range(mm, p4d, addr, next,
+				pfn + (addr >> PAGE_SHIFT), prot);
+		if (err)
+			return err;
 	} while (p4d++, addr = next, addr != end);
 	return 0;
 }
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 625608bc8962..6d331620b9e5 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -306,6 +306,42 @@ unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
 	return pages;
 }
 
+static int prot_none_pte_entry(pte_t *pte, unsigned long addr,
+			       unsigned long next, struct mm_walk *walk)
+{
+	return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
+		0 : -EACCES;
+}
+
+static int prot_none_hugetlb_entry(pte_t *pte, unsigned long hmask,
+				   unsigned long addr, unsigned long next,
+				   struct mm_walk *walk)
+{
+	return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
+		0 : -EACCES;
+}
+
+static int prot_none_test(unsigned long addr, unsigned long next,
+			  struct mm_walk *walk)
+{
+	return 0;
+}
+
+static int prot_none_walk(struct vm_area_struct *vma, unsigned long start,
+			   unsigned long end, unsigned long newflags)
+{
+	pgprot_t new_pgprot = vm_get_page_prot(newflags);
+	struct mm_walk prot_none_walk = {
+		.pte_entry = prot_none_pte_entry,
+		.hugetlb_entry = prot_none_hugetlb_entry,
+		.test_walk = prot_none_test,
+		.mm = current->mm,
+		.private = &new_pgprot,
+	};
+
+	return walk_page_range(start, end, &prot_none_walk);
+}
+
 int
 mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	unsigned long start, unsigned long end, unsigned long newflags)
@@ -323,6 +359,19 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 		return 0;
 	}
 
+	/*
+	 * Do PROT_NONE PFN permission checks here when we can still
+	 * bail out without undoing a lot of state. This is a rather
+	 * uncommon case, so doesn't need to be very optimized.
+	 */
+	if (arch_has_pfn_modify_check() &&
+	    (vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
+	    (newflags & (VM_READ|VM_WRITE|VM_EXEC)) == 0) {
+		error = prot_none_walk(vma, start, end, newflags);
+		if (error)
+			return error;
+	}
+
 	/*
 	 * If we make a private mapping writable we increase our commit;
 	 * but (without finer accounting) cannot reduce our commit if we
-- 
2.14.3

Thread overview: 23+ messages
2018-05-23 21:51 [MODERATED] [PATCH v5 0/8] L1TFv4 5 Andi Kleen
2018-05-23 21:51 ` [MODERATED] [PATCH v5 1/8] L1TFv4 6 Andi Kleen
2018-05-23 21:51 ` [MODERATED] [PATCH v5 2/8] L1TFv4 7 Andi Kleen
2018-05-23 21:51 ` [MODERATED] [PATCH v5 3/8] L1TFv4 2 Andi Kleen
2018-05-23 21:51 ` [MODERATED] [PATCH v5 4/8] L1TFv4 8 Andi Kleen
2018-05-23 21:51 ` [MODERATED] [PATCH v5 5/8] L1TFv4 0 Andi Kleen
2018-05-23 21:51 ` [MODERATED] [PATCH v5 6/8] L1TFv4 4 Andi Kleen
2018-05-23 21:51 ` [MODERATED] [PATCH v5 7/8] L1TFv4 3 Andi Kleen
2018-05-23 21:51 ` Andi Kleen [this message]
     [not found] ` <20180523215658.63CAB61104@crypto-ml.lab.linutronix.de>
2018-05-23 22:22   ` [MODERATED] Re: [PATCH v5 5/8] L1TFv4 0 Borislav Petkov
     [not found] ` <20180523215726.A931B61157@crypto-ml.lab.linutronix.de>
2018-05-23 22:50   ` [MODERATED] Re: [PATCH v5 8/8] L1TFv4 1 Dave Hansen
     [not found] ` <20180523215737.7C50E61169@crypto-ml.lab.linutronix.de>
2018-05-23 23:15   ` [MODERATED] Re: [PATCH v5 1/8] L1TFv4 6 Dave Hansen
2018-05-23 23:52     ` Andrew Cooper
2018-05-24  9:09     ` Michal Hocko
2018-05-24 15:26     ` Andi Kleen
2018-05-24 17:00       ` Dave Hansen
     [not found] ` <20180523215136.EB16B610ED@crypto-ml.lab.linutronix.de>
2018-05-24  3:34   ` [MODERATED] Re: [PATCH v5 4/8] L1TFv4 8 Josh Poimboeuf
     [not found] ` <20180523215651.BFF82610ED@crypto-ml.lab.linutronix.de>
2018-05-24  4:04   ` [MODERATED] Re: [PATCH v5 6/8] L1TFv4 4 Josh Poimboeuf
2018-05-24 13:35     ` Andi Kleen
2018-05-24 15:45       ` Josh Poimboeuf
2018-05-24 16:53         ` Andi Kleen
2018-05-24 17:53           ` Josh Poimboeuf
2018-05-24 20:32             ` Andi Kleen
