[RFC,5/6] x86/alternatives: use temporary mm for text poking

Message ID 20180829081147.184610-6-namit@vmware.com
State New, archived
Series
  • x86: text_poke() fixes

Commit Message

Nadav Amit Aug. 29, 2018, 8:11 a.m. UTC
text_poke() can potentially compromise security, as it sets temporary
PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
from other cores, accidentally or maliciously, if an attacker gains the
ability to write to kernel memory.

Moreover, since remote TLBs are not flushed after the temporary PTEs are
removed, the time window in which the code is writable is not limited if
the fixmap PTEs are - maliciously or accidentally - cached in the TLB.

To address these potential security hazards, we use a temporary mm for
patching the code. Unfortunately, the temporary mm cannot be initialized
early enough during boot, and x86_late_time_init() needs to use
text_poke() before the temporary mm is ready. text_poke() therefore keeps
both poking variants - fixmap-based and temporary-mm-based - and chooses
between them accordingly.
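
Note: the temporary_mm_state_t type and the use_temporary_mm() /
unuse_temporary_mm() helpers used below are introduced earlier in this
series, not in this patch. A minimal sketch of how such helpers can be
built on top of switch_mm_irqs_off() (assuming the declarations pulled in
via <linux/mmu_context.h>; the actual implementation in the series may
differ in detail):

	typedef struct {
		struct mm_struct *prev;
	} temporary_mm_state_t;

	/*
	 * Load the given mm on this CPU; the caller must keep IRQs disabled
	 * for the whole use/unuse window so it cannot be preempted or
	 * observe an unexpected mm switch.
	 */
	static inline temporary_mm_state_t use_temporary_mm(struct mm_struct *mm)
	{
		temporary_mm_state_t state;

		lockdep_assert_irqs_disabled();
		state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
		switch_mm_irqs_off(NULL, mm, current);

		return state;
	}

	/* Restore the previously loaded mm. */
	static inline void unuse_temporary_mm(temporary_mm_state_t prev)
	{
		lockdep_assert_irqs_disabled();
		switch_mm_irqs_off(NULL, prev.prev, current);
	}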

More adventurous developers can try to reorder the init sequence or use
text_poke_early() instead of text_poke() to remove the use of fixmap for
patching completely.

Finally, text_poke() is also not conservative enough when mapping pages:
it always maps two pages, even when a single one is sufficient. Be more
conservative and do not map more than is needed.
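
Concretely, a second page is now only looked up and mapped when the
poked range actually crosses a page boundary (excerpted from the diff
below):

	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;

	pages[0] = vmalloc_to_page(addr);
	if (cross_page_boundary)
		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);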

Cc: Andy Lutomirski <luto@kernel.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/kernel/alternative.c | 154 +++++++++++++++++++++++++++++-----
 1 file changed, 133 insertions(+), 21 deletions(-)

Comments

Peter Zijlstra Aug. 29, 2018, 9:28 a.m. UTC | #1
On Wed, Aug 29, 2018 at 01:11:46AM -0700, Nadav Amit wrote:
> +static void text_poke_fixmap(void *addr, const void *opcode, size_t len,
> +			     struct page *pages[2])
> +{
> +	u8 *vaddr;
> +
> +	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
> +	if (pages[1])
> +		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
> +	vaddr = (u8 *)fix_to_virt(FIX_TEXT_POKE0);
> +	memcpy(vaddr + offset_in_page(addr), opcode, len);
> +
> +	/*
> +	 * clear_fixmap() performs a TLB flush, so no additional TLB
> +	 * flush is needed.
> +	 */
> +	clear_fixmap(FIX_TEXT_POKE0);
> +	if (pages[1])
> +		clear_fixmap(FIX_TEXT_POKE1);
> +	sync_core();
> +	/* Could also do a CLFLUSH here to speed up CPU recovery; but
> +	   that causes hangs on some VIA CPUs. */

Please take this opportunity to fix that comment style.
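
That is, the usual kernel multi-line comment style:

	/*
	 * Could also do a CLFLUSH here to speed up CPU recovery; but
	 * that causes hangs on some VIA CPUs.
	 */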

> +}
> +
> +__ro_after_init struct mm_struct *poking_mm;
> +__ro_after_init unsigned long poking_addr;
> +
> +/**
> + * text_poke_safe() - Pokes the text using a separate address space.
> + *
> + * This is the preferable way for patching the kernel after boot, as it does not
> + * allow other cores to accidentally or maliciously modify the code using the
> + * temporary PTEs.
> + */
> +static void text_poke_safe(void *addr, const void *opcode, size_t len,
> +			   struct page *pages[2])
> +{
> +	temporary_mm_state_t prev;
> +	pte_t pte, *ptep;
> +	spinlock_t *ptl;
> +
> +	/*
> +	 * The lock is not really needed, but this allows to avoid open-coding.
> +	 */
> +	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
> +
> +	pte = mk_pte(pages[0], PAGE_KERNEL);
> +	set_pte_at(poking_mm, poking_addr, ptep, pte);
> +
> +	if (pages[1]) {
> +		pte = mk_pte(pages[1], PAGE_KERNEL);
> +		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
> +	}
> +
> +	/*
> +	 * Loading the temporary mm behaves as a compiler barrier, which
> +	 * guarantees that the PTE will be set at the time memcpy() is done.
> +	 */
> +	prev = use_temporary_mm(poking_mm);
> +
> +	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
> +
> +	/*
> +	 * Ensure that the PTE is only cleared after copying is done by using a
> +	 * compiler barrier.
> +	 */
> +	barrier();

I tripped over the use of 'done', because even with TSO the store isn't
done once the instruction retires.

All we want to ensure is that the pte_clear() store is issued after the
copy, and that is indeed guaranteed by this.
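
(For context, barrier() is only the compiler-level barrier, essentially:

	#define barrier() __asm__ __volatile__("" : : : "memory")

so it constrains the compile-time ordering of the memcpy() and the
pte_clear() store, nothing more.)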

> +	pte_clear(poking_mm, poking_addr, ptep);
> +
> +	/*
> +	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
> +	 * as it also flushes the corresponding "user" address spaces, which
> +	 * does not exist.
> +	 *
> +	 * Poking, however, is already very inefficient since it does not try to
> +	 * batch updates, so we ignore this problem for the time being.
> +	 *
> +	 * Since the PTEs do not exist in other kernel address-spaces, we do
> +	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
> +	 * more unwarranted TLB flushes.
> +	 */

yuck :-), but yeah.

> +	__flush_tlb_one_user(poking_addr);
> +	if (pages[1]) {
> +		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
> +		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
> +	}
> +	/*
> +	 * Loading the previous page-table hierarchy requires a serializing
> +	 * instruction that already allows the core to see the updated version.
> +	 * Xen-PV is assumed to serialize execution in a similar manner.
> +	 */
> +	unuse_temporary_mm(prev);
> +
> +	pte_unmap_unlock(ptep, ptl);
> +}
Andy Lutomirski Aug. 29, 2018, 3:46 p.m. UTC | #2
On Wed, Aug 29, 2018 at 2:28 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Wed, Aug 29, 2018 at 01:11:46AM -0700, Nadav Amit wrote:

>> +     pte_clear(poking_mm, poking_addr, ptep);
>> +
>> +     /*
>> +      * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
>> +      * as it also flushes the corresponding "user" address spaces, which
>> +      * does not exist.
>> +      *
>> +      * Poking, however, is already very inefficient since it does not try to
>> +      * batch updates, so we ignore this problem for the time being.
>> +      *
>> +      * Since the PTEs do not exist in other kernel address-spaces, we do
>> +      * not use __flush_tlb_one_kernel(), which when PTI is on would cause
>> +      * more unwarranted TLB flushes.
>> +      */
>
> yuck :-), but yeah.

I'm sure we covered this ad nauseam when PTI was being developed, but
we were kind of in a rush, so:

Why do we do INVPCID at all?  The fallback path for non-INVPCID
systems uses invalidate_user_asid(), which should be faster than the
invpcid path.  And doesn't do a redundant flush in this case.

Can we just drop the INVPCID?  While we're at it, we could drop
X86_FEATURE_INVPCID_SINGLE entirely, since that's the only user.

--Andy
Peter Zijlstra Aug. 29, 2018, 4:14 p.m. UTC | #3
On Wed, Aug 29, 2018 at 08:46:04AM -0700, Andy Lutomirski wrote:
> On Wed, Aug 29, 2018 at 2:28 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> > On Wed, Aug 29, 2018 at 01:11:46AM -0700, Nadav Amit wrote:
> 
> >> +     pte_clear(poking_mm, poking_addr, ptep);
> >> +
> >> +     /*
> >> +      * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
> >> +      * as it also flushes the corresponding "user" address spaces, which
> >> +      * does not exist.
> >> +      *
> >> +      * Poking, however, is already very inefficient since it does not try to
> >> +      * batch updates, so we ignore this problem for the time being.
> >> +      *
> >> +      * Since the PTEs do not exist in other kernel address-spaces, we do
> >> +      * not use __flush_tlb_one_kernel(), which when PTI is on would cause
> >> +      * more unwarranted TLB flushes.
> >> +      */
> >
> > yuck :-), but yeah.
> 
> I'm sure we covered this ad nauseam when PTI was being developed, but
> we were kind of in a rush, so:
> 
> Why do we do INVPCID at all?  The fallback path for non-INVPCID
> systems uses invalidate_user_asid(), which should be faster than the
> invpcid path.  And doesn't do a redundant flush in this case.

I don't remember; and you forgot to (re)add dhansen.

Logically INVPCID_SINGLE should be faster since it pokes out a single
translation in another PCID instead of killing all user translations.

Is it just a matter of (current) chips implementing INVPCID_SINGLE
inefficiently, or something else?
Andy Lutomirski Aug. 29, 2018, 4:32 p.m. UTC | #4
On Wed, Aug 29, 2018 at 9:14 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Wed, Aug 29, 2018 at 08:46:04AM -0700, Andy Lutomirski wrote:
>> On Wed, Aug 29, 2018 at 2:28 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>> > On Wed, Aug 29, 2018 at 01:11:46AM -0700, Nadav Amit wrote:
>>
>> >> +     pte_clear(poking_mm, poking_addr, ptep);
>> >> +
>> >> +     /*
>> >> +      * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
>> >> +      * as it also flushes the corresponding "user" address spaces, which
>> >> +      * does not exist.
>> >> +      *
>> >> +      * Poking, however, is already very inefficient since it does not try to
>> >> +      * batch updates, so we ignore this problem for the time being.
>> >> +      *
>> >> +      * Since the PTEs do not exist in other kernel address-spaces, we do
>> >> +      * not use __flush_tlb_one_kernel(), which when PTI is on would cause
>> >> +      * more unwarranted TLB flushes.
>> >> +      */
>> >
>> > yuck :-), but yeah.
>>
>> I'm sure we covered this ad nauseam when PTI was being developed, but
>> we were kind of in a rush, so:
>>
>> Why do we do INVPCID at all?  The fallback path for non-INVPCID
>> systems uses invalidate_user_asid(), which should be faster than the
>> invpcid path.  And doesn't do a redundant flush in this case.
>
> I don't remember; and you forgot to (re)add dhansen.
>
> Logically INVPCID_SINGLE should be faster since it pokes out a single
> translation in another PCID instead of killing all user translations.
>
> Is it just a matter of (current) chips implementing INVPCID_SINGLE
> inefficiently, or something else?

It's two things.  Current chips (or at least Skylake, but I'm pretty
sure that older chips are the same) have INVPCID being slower than
writing CR3.  (Yes, that's right, it is considerably faster to flush
the a whole PCID by writing to CR3 than it is to ask INVPCID to do
anything at all.)  But INVPCID is also serializing, whereas just
marking an ASID for future flushing is essentially free.

It's plausible that there are workloads where the current code is
faster, such as where we're munmapping a single page via syscall and
we'd prefer to only flush that one TLB entry even if the flush
operation is slower as a result.

--Andy
Dave Hansen Aug. 29, 2018, 4:37 p.m. UTC | #5
On 08/29/2018 09:32 AM, Andy Lutomirski wrote:
> It's plausible that there are workloads where the current code is
> faster, such as where we're munmapping a single page via syscall and
> we'd prefer to only flush that one TLB entry even if the flush
> operation is slower as a result.

Yeah, I don't specifically remember testing it.  But, I know I wanted to
avoid throwing away thousands of TLB entries when we only want to rid
ourselves of one.
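
For reference, a rough sketch of the two flush strategies being compared
above, modeled on the current (~4.19) __flush_tlb_one_user() /
invalidate_user_asid() code (simplified here; details may differ):

	static void flush_one_user_addr_sketch(unsigned long addr)
	{
		u32 asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);

		/* Flush the kernel-side mapping of this address. */
		asm volatile("invlpg (%0)" ::"r" (addr) : "memory");

		if (!static_cpu_has(X86_FEATURE_PTI))
			return;

		if (this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
			/*
			 * Flush the one entry in the user PCID right away;
			 * INVPCID is itself serializing.
			 */
			invpcid_flush_one(user_pcid(asid), addr);
		} else {
			/*
			 * Defer: just mark the user ASID as stale; the flush
			 * is folded into the next CR3 write that loads it,
			 * which is essentially free here.
			 */
			invalidate_user_asid(asid);
		}
	}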

Patch

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 916c11b410c4..0feac3dfabe9 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -11,6 +11,7 @@ 
 #include <linux/stop_machine.h>
 #include <linux/slab.h>
 #include <linux/kdebug.h>
+#include <linux/mmu_context.h>
 #include <asm/text-patching.h>
 #include <asm/alternative.h>
 #include <asm/sections.h>
@@ -674,6 +675,113 @@  void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	return addr;
 }
 
+/**
+ * text_poke_fixmap - poke using the fixmap.
+ *
+ * Fallback function for poking the text using the fixmap. It is used during
+ * early boot and in the rare case in which initialization of safe poking fails.
+ *
+ * Poking in this manner should be avoided, since it allows other cores to use
+ * the fixmap entries, and can be exploited by an attacker to overwrite the code
+ * (assuming he gained the write access through another bug).
+ */
+static void text_poke_fixmap(void *addr, const void *opcode, size_t len,
+			     struct page *pages[2])
+{
+	u8 *vaddr;
+
+	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
+	if (pages[1])
+		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
+	vaddr = (u8 *)fix_to_virt(FIX_TEXT_POKE0);
+	memcpy(vaddr + offset_in_page(addr), opcode, len);
+
+	/*
+	 * clear_fixmap() performs a TLB flush, so no additional TLB
+	 * flush is needed.
+	 */
+	clear_fixmap(FIX_TEXT_POKE0);
+	if (pages[1])
+		clear_fixmap(FIX_TEXT_POKE1);
+	sync_core();
+	/* Could also do a CLFLUSH here to speed up CPU recovery; but
+	   that causes hangs on some VIA CPUs. */
+}
+
+__ro_after_init struct mm_struct *poking_mm;
+__ro_after_init unsigned long poking_addr;
+
+/**
+ * text_poke_safe() - Pokes the text using a separate address space.
+ *
+ * This is the preferable way for patching the kernel after boot, as it does not
+ * allow other cores to accidentally or maliciously modify the code using the
+ * temporary PTEs.
+ */
+static void text_poke_safe(void *addr, const void *opcode, size_t len,
+			   struct page *pages[2])
+{
+	temporary_mm_state_t prev;
+	pte_t pte, *ptep;
+	spinlock_t *ptl;
+
+	/*
+	 * The lock is not really needed, but this allows to avoid open-coding.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+
+	pte = mk_pte(pages[0], PAGE_KERNEL);
+	set_pte_at(poking_mm, poking_addr, ptep, pte);
+
+	if (pages[1]) {
+		pte = mk_pte(pages[1], PAGE_KERNEL);
+		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
+	}
+
+	/*
+	 * Loading the temporary mm behaves as a compiler barrier, which
+	 * guarantees that the PTE will be set at the time memcpy() is done.
+	 */
+	prev = use_temporary_mm(poking_mm);
+
+	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
+
+	/*
+	 * Ensure that the PTE is only cleared after copying is done by using a
+	 * compiler barrier.
+	 */
+	barrier();
+
+	pte_clear(poking_mm, poking_addr, ptep);
+
+	/*
+	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
+	 * as it also flushes the corresponding "user" address spaces, which
+	 * does not exist.
+	 *
+	 * Poking, however, is already very inefficient since it does not try to
+	 * batch updates, so we ignore this problem for the time being.
+	 *
+	 * Since the PTEs do not exist in other kernel address-spaces, we do
+	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
+	 * more unwarranted TLB flushes.
+	 */
+	__flush_tlb_one_user(poking_addr);
+	if (pages[1]) {
+		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
+		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
+	}
+
+	/*
+	 * Loading the previous page-table hierarchy requires a serializing
+	 * instruction that already allows the core to see the updated version.
+	 * Xen-PV is assumed to serialize execution in a similar manner.
+	 */
+	unuse_temporary_mm(prev);
+
+	pte_unmap_unlock(ptep, ptl);
+}
+
 /**
  * text_poke - Update instructions on a live kernel
  * @addr: address to modify
@@ -689,42 +797,46 @@  void *__init_or_module text_poke_early(void *addr, const void *opcode,
  */
 void *text_poke(void *addr, const void *opcode, size_t len)
 {
+	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
+	struct page *pages[2] = {0};
 	unsigned long flags;
-	char *vaddr;
-	struct page *pages[2];
-	int i;
 
 	/*
-	 * While boot memory allocator is runnig we cannot use struct
-	 * pages as they are not yet initialized.
+	 * While boot memory allocator is running we cannot use struct pages as
+	 * they are not yet initialized.
 	 */
 	BUG_ON(!after_bootmem);
 	lockdep_assert_held(&text_mutex);
 
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
-		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
 	} else {
 		pages[0] = virt_to_page(addr);
 		WARN_ON(!PageReserved(pages[0]));
-		pages[1] = virt_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = virt_to_page(addr + PAGE_SIZE);
 	}
 	BUG_ON(!pages[0]);
 	local_irq_save(flags);
-	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
-	if (pages[1])
-		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
-	vaddr = (char *)fix_to_virt(FIX_TEXT_POKE0);
-	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
-	clear_fixmap(FIX_TEXT_POKE0);
-	if (pages[1])
-		clear_fixmap(FIX_TEXT_POKE1);
-	local_flush_tlb();
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
-	for (i = 0; i < len; i++)
-		BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
+
+	/*
+	 * During initial boot, it is hard to initialize poking_mm due to
+	 * dependencies in boot order.
+	 */
+	if (poking_mm)
+		text_poke_safe(addr, opcode, len, pages);
+	else
+		text_poke_fixmap(addr, opcode, len, pages);
+
+	/*
+	 * To be on the safe side, do the comparison before enabling IRQs, as it
+	 * was done before. However, it makes more sense to allow the callers to
+	 * deal with potential failures and not to panic so easily.
+	 */
+	BUG_ON(memcmp(addr, opcode, len));
+
 	local_irq_restore(flags);
 	return addr;
 }
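
For completeness, callers of text_poke() must hold text_mutex (asserted
by the lockdep_assert_held() above); code that other CPUs may be
concurrently executing normally goes through text_poke_bp() instead. A
minimal, hypothetical usage sketch:

	/* Hypothetical example: overwrite an instruction with a 5-byte NOP. */
	static void patch_nop5(void *ip)
	{
		mutex_lock(&text_mutex);
		text_poke(ip, ideal_nops[NOP_ATOMIC5], 5);
		mutex_unlock(&text_mutex);
	}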