linux-kernel.vger.kernel.org archive mirror
* [patch V3 00/20] x86/iopl: Prevent user space from using CLI/STI with iopl(3)
@ 2019-11-13 20:42 Thomas Gleixner
  2019-11-13 20:42 ` [patch V3 01/20] x86/ptrace: Prevent truncation of bitmap size Thomas Gleixner
                   ` (20 more replies)
  0 siblings, 21 replies; 53+ messages in thread
From: Thomas Gleixner @ 2019-11-13 20:42 UTC (permalink / raw)
  To: LKML
  Cc: x86, Andy Lutomirski, Linus Torvalds, Stephen Hemminger,
	Willy Tarreau, Juergen Gross, Sean Christopherson,
	H. Peter Anvin

This is the third version of the attempt to confine the unwanted side
effects of iopl(). The first version of this series can be found here:

   https://lore.kernel.org/r/20191106193459.581614484@linutronix.de

Second version is here:

   https://lore.kernel.org/r/20191111220314.519933535@linutronix.de

The V1 cover letter also contains a longer variant of the
background. Summary:

iopl(level = 3) enables not only access to all 65536 I/O ports but also the
use of CLI/STI in user space.

Disabling interrupts in user space can lead to system lockups and breaks
the kernel's assumption that user space always runs with interrupts
enabled.

iopl() is often preferred over ioperm() as it avoids the overhead of
copying the task's I/O bitmap to the TSS bitmap on context switch. This
overhead can be avoided by providing an all-zeroes bitmap in the TSS and
switching the TSS bitmap offset to this permit-all I/O bitmap. It's
marginally slower than iopl(), which is a one-time setup, but it prevents
the usage of CLI/STI in user space.
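
For completeness, a minimal user space sketch (not part of this series) of
what the ioperm() based approach looks like from the application side. The
port number is only an example, it needs CAP_SYS_RAWIO, and it should be
built with gcc -O2 so the outb() inline from <sys/io.h> is emitted:

#include <stdio.h>
#include <sys/io.h>

int main(void)
{
	/* Grant access to a single port instead of raising IOPL to 3 */
	if (ioperm(0x80, 1, 1)) {
		perror("ioperm");
		return 1;
	}

	outb(0x00, 0x80);	/* port access works without iopl(3) */

	/*
	 * An asm("cli") here would still raise SIGSEGV because IOPL
	 * stays at 0 - exactly the property this series preserves while
	 * emulating iopl(3).
	 */
	return 0;
}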

The changes vs. V2:

    - Split out the restructuring of the first/subsequent ioperm()
      invocation into a separate patch to address the inconsistency which
      Andy detected in the patch which introduces the concept of
      invalidating the I/O bitmap base to speed up context switching.
      This change is moved in front so the subsequent changes are
      functionally correct.

    - Moved the non-HW TSS data related to I/O bitmap(s) into a separate
      data structure. Modified version of Ingo's proposed patch.

    - Made struct member names more consistent (Ingo)

    - Dropped the bitmap union. It is no longer necessary because V2
      already dropped the finer grained copying algorithm. The sequence
      count approach should avoid most of the copying overhead when the
      number of processes using ioperm() is very low, which is the normal
      case (see the sketch after this list).

    - Dropped the pointer storage of the bitmap in the TSS data as it is
      not required (Peter, Andy)

    - Fixed the missing refcount setting in the bitmap duplication code
      path. (Peter, Andy)

    - Updated changelog and comment to explain the bitmap invalidation
      logic. (Andy)

    - Removed TIF_IO_BITMAP from the TIF flags which are evaluated on the
      next task for entering the slow path.

    - Folded the NULL pointer check fix

    - Simplified the config option in the legacy removal patch (Andy)

    - Extended the scope of the config option to disable ioperm() along
      with iopl(), which also makes all related storage and functions
      compile time conditional. (Andy)

The series is also available from git:

  git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git WIP.x86/iopl

Thanks,

	tglx
---
 arch/x86/Kconfig                        |   18 ++
 arch/x86/entry/common.c                 |    4 
 arch/x86/include/asm/io_bitmap.h        |   29 ++++
 arch/x86/include/asm/paravirt.h         |    4 
 arch/x86/include/asm/paravirt_types.h   |    2 
 arch/x86/include/asm/pgtable_32_types.h |    2 
 arch/x86/include/asm/processor.h        |  113 ++++++++++-------
 arch/x86/include/asm/ptrace.h           |    6 
 arch/x86/include/asm/switch_to.h        |   10 +
 arch/x86/include/asm/thread_info.h      |   14 +-
 arch/x86/include/asm/xen/hypervisor.h   |    2 
 arch/x86/kernel/cpu/common.c            |  188 ++++++++++++----------------
 arch/x86/kernel/doublefault.c           |    2 
 arch/x86/kernel/ioport.c                |  209 +++++++++++++++++++++-----------
 arch/x86/kernel/paravirt.c              |    2 
 arch/x86/kernel/process.c               |  200 ++++++++++++++++++++++++------
 arch/x86/kernel/process_32.c            |   77 -----------
 arch/x86/kernel/process_64.c            |   86 -------------
 arch/x86/kernel/ptrace.c                |   12 +
 arch/x86/kvm/vmx/vmx.c                  |    8 -
 arch/x86/mm/cpu_entry_area.c            |    8 +
 arch/x86/xen/enlighten_pv.c             |   10 -
 tools/testing/selftests/x86/ioperm.c    |   16 ++
 tools/testing/selftests/x86/iopl.c      |  129 ++++++++++++++++++-
 24 files changed, 674 insertions(+), 477 deletions(-)


* [tip: x86/iopl] x86/cpu: Unify cpu_init()
@ 2019-11-16 11:51 tip-bot2 for Thomas Gleixner
  0 siblings, 0 replies; 53+ messages in thread
From: tip-bot2 for Thomas Gleixner @ 2019-11-16 11:51 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Thomas Gleixner, Andy Lutomirski, Ingo Molnar, Borislav Petkov,
	linux-kernel

The following commit has been merged into the x86/iopl branch of tip:

Commit-ID:     505b789996f64bdbfcc5847dd4b5076fc7c50274
Gitweb:        https://git.kernel.org/tip/505b789996f64bdbfcc5847dd4b5076fc7c50274
Author:        Thomas Gleixner <tglx@linutronix.de>
AuthorDate:    Mon, 11 Nov 2019 23:03:17 +01:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Sat, 16 Nov 2019 11:23:59 +01:00

x86/cpu: Unify cpu_init()

As with copy_thread_tls(), the 32-bit and 64-bit implementations of
cpu_init() are very similar, and unifying them avoids duplicate changes in
the future.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Andy Lutomirski <luto@kernel.org>

---
 arch/x86/kernel/cpu/common.c | 173 ++++++++++++----------------------
 1 file changed, 65 insertions(+), 108 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 9ae7d1b..d52ec1a 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -53,10 +53,7 @@
 #include <asm/microcode_intel.h>
 #include <asm/intel-family.h>
 #include <asm/cpu_device_id.h>
-
-#ifdef CONFIG_X86_LOCAL_APIC
 #include <asm/uv/uv.h>
-#endif
 
 #include "cpu.h"
 
@@ -1749,7 +1746,7 @@ static void wait_for_master_cpu(int cpu)
 }
 
 #ifdef CONFIG_X86_64
-static void setup_getcpu(int cpu)
+static inline void setup_getcpu(int cpu)
 {
 	unsigned long cpudata = vdso_encode_cpunode(cpu, early_cpu_to_node(cpu));
 	struct desc_struct d = { };
@@ -1769,7 +1766,43 @@ static void setup_getcpu(int cpu)
 
 	write_gdt_entry(get_cpu_gdt_rw(cpu), GDT_ENTRY_CPUNODE, &d, DESCTYPE_S);
 }
+
+static inline void ucode_cpu_init(int cpu)
+{
+	if (cpu)
+		load_ucode_ap();
+}
+
+static inline void tss_setup_ist(struct tss_struct *tss)
+{
+	/* Set up the per-CPU TSS IST stacks */
+	tss->x86_tss.ist[IST_INDEX_DF] = __this_cpu_ist_top_va(DF);
+	tss->x86_tss.ist[IST_INDEX_NMI] = __this_cpu_ist_top_va(NMI);
+	tss->x86_tss.ist[IST_INDEX_DB] = __this_cpu_ist_top_va(DB);
+	tss->x86_tss.ist[IST_INDEX_MCE] = __this_cpu_ist_top_va(MCE);
+}
+
+static inline void gdt_setup_doublefault_tss(int cpu) { }
+
+#else /* CONFIG_X86_64 */
+
+static inline void setup_getcpu(int cpu) { }
+
+static inline void ucode_cpu_init(int cpu)
+{
+	show_ucode_info_early();
+}
+
+static inline void tss_setup_ist(struct tss_struct *tss) { }
+
+static inline void gdt_setup_doublefault_tss(int cpu)
+{
+#ifdef CONFIG_DOUBLEFAULT
+	/* Set up the doublefault TSS pointer in the GDT */
+	__set_tss_desc(cpu, GDT_ENTRY_DOUBLEFAULT_TSS, &doublefault_tss);
 #endif
+}
+#endif /* !CONFIG_X86_64 */
 
 /*
  * cpu_init() initializes state that is per-CPU. Some data is already
@@ -1777,21 +1810,15 @@ static void setup_getcpu(int cpu)
  * and IDT. We reload them nevertheless, this function acts as a
  * 'CPU state barrier', nothing should get across.
  */
-#ifdef CONFIG_X86_64
-
 void cpu_init(void)
 {
+	struct tss_struct *tss = this_cpu_ptr(&cpu_tss_rw);
+	struct task_struct *cur = current;
 	int cpu = raw_smp_processor_id();
-	struct task_struct *me;
-	struct tss_struct *t;
-	int i;
 
 	wait_for_master_cpu(cpu);
 
-	if (cpu)
-		load_ucode_ap();
-
-	t = &per_cpu(cpu_tss_rw, cpu);
+	ucode_cpu_init(cpu);
 
 #ifdef CONFIG_NUMA
 	if (this_cpu_read(numa_node) == 0 &&
@@ -1800,63 +1827,48 @@ void cpu_init(void)
 #endif
 	setup_getcpu(cpu);
 
-	me = current;
-
 	pr_debug("Initializing CPU#%d\n", cpu);
 
-	cr4_clear_bits(X86_CR4_VME|X86_CR4_PVI|X86_CR4_TSD|X86_CR4_DE);
+	if (IS_ENABLED(CONFIG_X86_64) || cpu_feature_enabled(X86_FEATURE_VME) ||
+	    boot_cpu_has(X86_FEATURE_TSC) || boot_cpu_has(X86_FEATURE_DE))
+		cr4_clear_bits(X86_CR4_VME|X86_CR4_PVI|X86_CR4_TSD|X86_CR4_DE);
 
 	/*
 	 * Initialize the per-CPU GDT with the boot GDT,
 	 * and set up the GDT descriptor:
 	 */
-
 	switch_to_new_gdt(cpu);
-	loadsegment(fs, 0);
-
 	load_current_idt();
 
-	memset(me->thread.tls_array, 0, GDT_ENTRY_TLS_ENTRIES * 8);
-	syscall_init();
+	if (IS_ENABLED(CONFIG_X86_64)) {
+		loadsegment(fs, 0);
+		memset(cur->thread.tls_array, 0, GDT_ENTRY_TLS_ENTRIES * 8);
+		syscall_init();
 
-	wrmsrl(MSR_FS_BASE, 0);
-	wrmsrl(MSR_KERNEL_GS_BASE, 0);
-	barrier();
+		wrmsrl(MSR_FS_BASE, 0);
+		wrmsrl(MSR_KERNEL_GS_BASE, 0);
+		barrier();
 
-	x86_configure_nx();
-	x2apic_setup();
-
-	/*
-	 * set up and load the per-CPU TSS
-	 */
-	if (!t->x86_tss.ist[0]) {
-		t->x86_tss.ist[IST_INDEX_DF] = __this_cpu_ist_top_va(DF);
-		t->x86_tss.ist[IST_INDEX_NMI] = __this_cpu_ist_top_va(NMI);
-		t->x86_tss.ist[IST_INDEX_DB] = __this_cpu_ist_top_va(DB);
-		t->x86_tss.ist[IST_INDEX_MCE] = __this_cpu_ist_top_va(MCE);
+		x2apic_setup();
 	}
 
-	t->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET;
-
-	/*
-	 * <= is required because the CPU will access up to
-	 * 8 bits beyond the end of the IO permission bitmap.
-	 */
-	for (i = 0; i <= IO_BITMAP_LONGS; i++)
-		t->io_bitmap[i] = ~0UL;
-
 	mmgrab(&init_mm);
-	me->active_mm = &init_mm;
-	BUG_ON(me->mm);
+	cur->active_mm = &init_mm;
+	BUG_ON(cur->mm);
 	initialize_tlbstate_and_flush();
-	enter_lazy_tlb(&init_mm, me);
+	enter_lazy_tlb(&init_mm, cur);
 
-	/*
-	 * Initialize the TSS.  sp0 points to the entry trampoline stack
-	 * regardless of what task is running.
-	 */
+	/* Initialize the TSS. */
+	tss_setup_ist(tss);
+	tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET;
+	memset(tss->io_bitmap, 0xff, sizeof(tss->io_bitmap));
 	set_tss_desc(cpu, &get_cpu_entry_area(cpu)->tss.x86_tss);
+
 	load_TR_desc();
+	/*
+	 * sp0 points to the entry trampoline stack regardless of what task
+	 * is running.
+	 */
 	load_sp0((unsigned long)(cpu_entry_stack(cpu) + 1));
 
 	load_mm_ldt(&init_mm);
@@ -1864,6 +1876,8 @@ void cpu_init(void)
 	clear_all_debug_regs();
 	dbg_restore_debug_regs();
 
+	gdt_setup_doublefault_tss(cpu);
+
 	fpu__init_cpu();
 
 	if (is_uv_system())
@@ -1872,63 +1886,6 @@ void cpu_init(void)
 	load_fixmap_gdt(cpu);
 }
 
-#else
-
-void cpu_init(void)
-{
-	int cpu = smp_processor_id();
-	struct task_struct *curr = current;
-	struct tss_struct *t = &per_cpu(cpu_tss_rw, cpu);
-
-	wait_for_master_cpu(cpu);
-
-	show_ucode_info_early();
-
-	pr_info("Initializing CPU#%d\n", cpu);
-
-	if (cpu_feature_enabled(X86_FEATURE_VME) ||
-	    boot_cpu_has(X86_FEATURE_TSC) ||
-	    boot_cpu_has(X86_FEATURE_DE))
-		cr4_clear_bits(X86_CR4_VME|X86_CR4_PVI|X86_CR4_TSD|X86_CR4_DE);
-
-	load_current_idt();
-	switch_to_new_gdt(cpu);
-
-	/*
-	 * Set up and load the per-CPU TSS and LDT
-	 */
-	mmgrab(&init_mm);
-	curr->active_mm = &init_mm;
-	BUG_ON(curr->mm);
-	initialize_tlbstate_and_flush();
-	enter_lazy_tlb(&init_mm, curr);
-
-	/*
-	 * Initialize the TSS.  sp0 points to the entry trampoline stack
-	 * regardless of what task is running.
-	 */
-	set_tss_desc(cpu, &get_cpu_entry_area(cpu)->tss.x86_tss);
-	load_TR_desc();
-	load_sp0((unsigned long)(cpu_entry_stack(cpu) + 1));
-
-	load_mm_ldt(&init_mm);
-
-	t->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET;
-
-#ifdef CONFIG_DOUBLEFAULT
-	/* Set up doublefault TSS pointer in the GDT */
-	__set_tss_desc(cpu, GDT_ENTRY_DOUBLEFAULT_TSS, &doublefault_tss);
-#endif
-
-	clear_all_debug_regs();
-	dbg_restore_debug_regs();
-
-	fpu__init_cpu();
-
-	load_fixmap_gdt(cpu);
-}
-#endif
-
 /*
  * The microcode loader calls this upon late microcode load to recheck features,
  * only when microcode has been updated. Caller holds microcode_mutex and CPU


Thread overview: 53+ messages
2019-11-13 20:42 [patch V3 00/20] x86/iopl: Prevent user space from using CLI/STI with iopl(3) Thomas Gleixner
2019-11-13 20:42 ` [patch V3 01/20] x86/ptrace: Prevent truncation of bitmap size Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 02/20] x86/process: Unify copy_thread_tls() Thomas Gleixner
2019-11-13 21:10   ` Linus Torvalds
2019-11-13 21:41     ` Thomas Gleixner
2019-11-13 22:10       ` Linus Torvalds
2019-11-13 22:33         ` Thomas Gleixner
2019-11-13 21:44     ` Brian Gerst
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 03/20] x86/cpu: Unify cpu_init() Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 04/20] x86/tss: Fix and move VMX BUILD_BUG_ON() Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 05/20] x86/iopl: Cleanup include maze Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 06/20] x86/ioperm: Simplify first ioperm() invocation logic Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 07/20] x86/ioperm: Avoid bitmap allocation if no permissions are set Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 08/20] x86/io: Speedup schedule out of I/O bitmap user Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 09/20] x86/tss: Move I/O bitmap data into a seperate struct Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 10/20] x86/ioperm: Move iobitmap data into a struct Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 11/20] x86/ioperm: Add bitmap sequence number Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 12/20] x86/ioperm: Move TSS bitmap update to exit to user work Thomas Gleixner
2019-11-13 21:19   ` Linus Torvalds
2019-11-13 21:21     ` Linus Torvalds
2019-11-13 21:44       ` Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 13/20] x86/ioperm: Remove bitmap if all permissions dropped Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 14/20] x86/ioperm: Share I/O bitmap if identical Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 15/20] selftests/x86/ioperm: Extend testing so the shared bitmap is exercised Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 16/20] x86/iopl: Fixup misleading comment Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 17/20] x86/iopl: Restrict iopl() permission scope Thomas Gleixner
2019-11-14 18:13   ` Borislav Petkov
2019-11-14 18:39     ` Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 18/20] x86/iopl: Remove legacy IOPL option Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:42 ` [patch V3 19/20] x86/ioperm: Extend IOPL config to control ioperm() as well Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-13 20:43 ` [patch V3 20/20] selftests/x86/iopl: Extend test to cover IOPL emulation Thomas Gleixner
2019-11-15 21:12   ` [tip: x86/iopl] " tip-bot2 for Thomas Gleixner
2019-11-14  8:43 ` [patch V3 00/20] x86/iopl: Prevent user space from using CLI/STI with iopl(3) Peter Zijlstra
2019-11-16 11:51 [tip: x86/iopl] x86/cpu: Unify cpu_init() tip-bot2 for Thomas Gleixner
