From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org, Andi Kleen <ak@linux.intel.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Josh Poimboeuf <jpoimboe@redhat.com>,
	Dave Hansen <dave.hansen@intel.com>,
	David Woodhouse <dwmw@amazon.co.uk>,
	Guenter Roeck <linux@roeck-us.net>
Subject: [PATCH 4.4 32/43] x86/speculation/l1tf: Disallow non privileged high MMIO PROT_NONE mappings
Date: Tue, 14 Aug 2018 19:18:08 +0200
Message-ID: <20180814171519.219354177@linuxfoundation.org>
In-Reply-To: <20180814171517.014285600@linuxfoundation.org>

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Andi Kleen <ak@linux.intel.com>

commit 42e4089c7890725fcd329999252dc489b72f2921 upstream

For L1TF, PROT_NONE mappings are protected by inverting the PFN in the page
table entry. This sets the high bits in the CPU's address space, thus
making sure that an unmapped entry cannot point to valid cached memory.
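
(The inversion itself comes from the companion patch
"x86/speculation/l1tf: Protect PROT_NONE PTEs against speculation"
earlier in this series; a simplified sketch of the mechanism, not the
literal 4.4 code, looks roughly like this:)

static inline bool __pte_needs_invert(u64 val)
{
	/* PROT_NONE entries have _PAGE_PROTNONE set, _PAGE_PRESENT clear */
	return (val & (_PAGE_PRESENT | _PAGE_PROTNONE)) == _PAGE_PROTNONE;
}

static inline u64 protnone_mask(u64 val)
{
	/* All-ones for entries whose PFN is stored inverted */
	return __pte_needs_invert(val) ? ~0ull : 0;
}

static inline unsigned long pte_pfn(pte_t pte)
{
	u64 pfn = pte_val(pte);

	/* XOR applies (or undoes) the inversion before extracting the PFN */
	pfn ^= protnone_mask(pfn);
	return (pfn & PTE_PFN_MASK) >> PAGE_SHIFT;
}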

Some server system BIOSes put the MMIO mappings high up in the physical
address space. If such a high mapping were exposed to unprivileged users,
they could attack low memory by setting such a mapping to PROT_NONE. This
could happen through a special device driver that is not access
protected. Normal /dev/mem is of course access protected.

To avoid this, forbid PROT_NONE mappings or mprotect() calls on high MMIO
mappings.

Valid page mappings are allowed because the system is then unsafe anyway.

It's not expected that users commonly use PROT_NONE on MMIO. But to
minimize any impact, this is only enforced if the mapping actually refers
to a high MMIO address (defined as the MAX_PA-1 bit being set), and the
check is also skipped for root.
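
(l1tf_pfn_limit(), introduced by the sysfs-reporting patch earlier in
this series, encodes that definition; a simplified sketch, possibly
differing in detail from the 4.4 backport:)

static inline unsigned long long l1tf_pfn_limit(void)
{
	/*
	 * Last PFN whose physical address still has the MAX_PA-1 bit
	 * clear: x86_phys_bits is MAX_PA, and subtracting PAGE_SHIFT
	 * converts the address-bit position into a PFN-bit position.
	 */
	return BIT(boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT) - 1;
}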

For mmaps this is straightforward and can be handled in vm_insert_pfn()
and in remap_pfn_range().

For mprotect it's a bit trickier. At the point where the actual PTEs are
accessed, a lot of state has already been changed and it would be
difficult to undo on an error. Since this is an uncommon case, use a
separate early page table walk pass for MMIO PROT_NONE mappings that
checks for this condition early. For non-MMIO and non-PROT_NONE mappings
there are no changes.
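
(From userspace the new check surfaces as -EACCES. A hypothetical
demonstration -- the device node name and the presence of a BAR above
the MAX_PA-1 boundary are assumptions for illustration, not part of
this patch:)

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* "/dev/some_pfnmap_device" is a made-up VM_PFNMAP device */
	int fd = open("/dev/some_pfnmap_device", O_RDWR);
	if (fd < 0)
		return 1;

	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	/* As an unprivileged user, this fails with EACCES if the
	 * mapping's PFNs lie above l1tf_pfn_limit() */
	if (mprotect(p, 4096, PROT_NONE) < 0)
		printf("mprotect: %s\n", strerror(errno));

	munmap(p, 4096);
	close(fd);
	return 0;
}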

[dwmw2: Backport to 4.9]
[groeck: Backport to 4.4]

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/include/asm/pgtable.h |    8 ++++++
 arch/x86/mm/mmap.c             |   21 +++++++++++++++++
 include/asm-generic/pgtable.h  |   12 ++++++++++
 mm/memory.c                    |   29 ++++++++++++++++++------
 mm/mprotect.c                  |   49 +++++++++++++++++++++++++++++++++++++++++
 5 files changed, 112 insertions(+), 7 deletions(-)

--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -942,6 +942,14 @@ static inline pte_t pte_swp_clear_soft_d
 }
 #endif
 
+#define __HAVE_ARCH_PFN_MODIFY_ALLOWED 1
+extern bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot);
+
+static inline bool arch_has_pfn_modify_check(void)
+{
+	return boot_cpu_has_bug(X86_BUG_L1TF);
+}
+
 #include <asm-generic/pgtable.h>
 #endif	/* __ASSEMBLY__ */
 
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -121,3 +121,24 @@ const char *arch_vma_name(struct vm_area
 		return "[mpx]";
 	return NULL;
 }
+
+/*
+ * Only allow root to set high MMIO mappings to PROT_NONE.
+ * This prevents an unprivileged user from setting them to PROT_NONE
+ * and inverting them, thus pointing at valid memory for L1TF speculation.
+ *
+ * Note: locked-down kernels may want to disable the root override.
+ */
+bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
+{
+	if (!boot_cpu_has_bug(X86_BUG_L1TF))
+		return true;
+	if (!__pte_needs_invert(pgprot_val(prot)))
+		return true;
+	/* If it's real memory always allow */
+	if (pfn_valid(pfn))
+		return true;
+	if (pfn > l1tf_pfn_limit() && !capable(CAP_SYS_ADMIN))
+		return false;
+	return true;
+}
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -805,4 +805,16 @@ static inline int pmd_free_pte_page(pmd_
 #define io_remap_pfn_range remap_pfn_range
 #endif
 
+#ifndef __HAVE_ARCH_PFN_MODIFY_ALLOWED
+static inline bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
+{
+	return true;
+}
+
+static inline bool arch_has_pfn_modify_check(void)
+{
+	return false;
+}
+#endif
+
 #endif /* _ASM_GENERIC_PGTABLE_H */
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1645,6 +1645,9 @@ int vm_insert_pfn_prot(struct vm_area_st
 	if (track_pfn_insert(vma, &pgprot, pfn))
 		return -EINVAL;
 
+	if (!pfn_modify_allowed(pfn, pgprot))
+		return -EACCES;
+
 	ret = insert_pfn(vma, addr, pfn, pgprot);
 
 	return ret;
@@ -1663,6 +1666,9 @@ int vm_insert_mixed(struct vm_area_struc
 	if (track_pfn_insert(vma, &pgprot, pfn))
 		return -EINVAL;
 
+	if (!pfn_modify_allowed(pfn, pgprot))
+		return -EACCES;
+
 	/*
 	 * If we don't have pte special, then we have to use the pfn_valid()
 	 * based VM_MIXEDMAP scheme (see vm_normal_page), and thus we *must*
@@ -1691,6 +1697,7 @@ static int remap_pte_range(struct mm_str
 {
 	pte_t *pte;
 	spinlock_t *ptl;
+	int err = 0;
 
 	pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
 	if (!pte)
@@ -1698,12 +1705,16 @@ static int remap_pte_range(struct mm_str
 	arch_enter_lazy_mmu_mode();
 	do {
 		BUG_ON(!pte_none(*pte));
+		if (!pfn_modify_allowed(pfn, prot)) {
+			err = -EACCES;
+			break;
+		}
 		set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
 		pfn++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(pte - 1, ptl);
-	return 0;
+	return err;
 }
 
 static inline int remap_pmd_range(struct mm_struct *mm, pud_t *pud,
@@ -1712,6 +1723,7 @@ static inline int remap_pmd_range(struct
 {
 	pmd_t *pmd;
 	unsigned long next;
+	int err;
 
 	pfn -= addr >> PAGE_SHIFT;
 	pmd = pmd_alloc(mm, pud, addr);
@@ -1720,9 +1732,10 @@ static inline int remap_pmd_range(struct
 	VM_BUG_ON(pmd_trans_huge(*pmd));
 	do {
 		next = pmd_addr_end(addr, end);
-		if (remap_pte_range(mm, pmd, addr, next,
-				pfn + (addr >> PAGE_SHIFT), prot))
-			return -ENOMEM;
+		err = remap_pte_range(mm, pmd, addr, next,
+				pfn + (addr >> PAGE_SHIFT), prot);
+		if (err)
+			return err;
 	} while (pmd++, addr = next, addr != end);
 	return 0;
 }
@@ -1733,6 +1746,7 @@ static inline int remap_pud_range(struct
 {
 	pud_t *pud;
 	unsigned long next;
+	int err;
 
 	pfn -= addr >> PAGE_SHIFT;
 	pud = pud_alloc(mm, pgd, addr);
@@ -1740,9 +1754,10 @@ static inline int remap_pud_range(struct
 		return -ENOMEM;
 	do {
 		next = pud_addr_end(addr, end);
-		if (remap_pmd_range(mm, pud, addr, next,
-				pfn + (addr >> PAGE_SHIFT), prot))
-			return -ENOMEM;
+		err = remap_pmd_range(mm, pud, addr, next,
+				pfn + (addr >> PAGE_SHIFT), prot);
+		if (err)
+			return err;
 	} while (pud++, addr = next, addr != end);
 	return 0;
 }
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -255,6 +255,42 @@ unsigned long change_protection(struct v
 	return pages;
 }
 
+static int prot_none_pte_entry(pte_t *pte, unsigned long addr,
+			       unsigned long next, struct mm_walk *walk)
+{
+	return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
+		0 : -EACCES;
+}
+
+static int prot_none_hugetlb_entry(pte_t *pte, unsigned long hmask,
+				   unsigned long addr, unsigned long next,
+				   struct mm_walk *walk)
+{
+	return pfn_modify_allowed(pte_pfn(*pte), *(pgprot_t *)(walk->private)) ?
+		0 : -EACCES;
+}
+
+static int prot_none_test(unsigned long addr, unsigned long next,
+			  struct mm_walk *walk)
+{
+	return 0;
+}
+
+static int prot_none_walk(struct vm_area_struct *vma, unsigned long start,
+			   unsigned long end, unsigned long newflags)
+{
+	pgprot_t new_pgprot = vm_get_page_prot(newflags);
+	struct mm_walk prot_none_walk = {
+		.pte_entry = prot_none_pte_entry,
+		.hugetlb_entry = prot_none_hugetlb_entry,
+		.test_walk = prot_none_test,
+		.mm = current->mm,
+		.private = &new_pgprot,
+	};
+
+	return walk_page_range(start, end, &prot_none_walk);
+}
+
 int
 mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	unsigned long start, unsigned long end, unsigned long newflags)
@@ -273,6 +309,19 @@ mprotect_fixup(struct vm_area_struct *vm
 	}
 
 	/*
+	 * Do PROT_NONE PFN permission checks here when we can still
+	 * bail out without undoing a lot of state. This is a rather
+	 * uncommon case, so doesn't need to be very optimized.
+	 */
+	if (arch_has_pfn_modify_check() &&
+	    (vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) &&
+	    (newflags & (VM_READ|VM_WRITE|VM_EXEC)) == 0) {
+		error = prot_none_walk(vma, start, end, newflags);
+		if (error)
+			return error;
+	}
+
+	/*
 	 * If we make a private mapping writable we increase our commit;
 	 * but (without finer accounting) cannot reduce our commit if we
 	 * make it unwritable again. hugetlb mapping were accounted for


