* [PATCH v3 0/5] Optionally flush L1D on context switch
@ 2020-04-08  9:02 Balbir Singh
  2020-04-08  9:02 ` [PATCH v3 1/5] arch/x86/kvm: Refactor l1d flush lifecycle management Balbir Singh
                   ` (4 more replies)
  0 siblings, 5 replies; 19+ messages in thread
From: Balbir Singh @ 2020-04-08  9:02 UTC (permalink / raw)
  To: tglx, linux-kernel
  Cc: jpoimboe, tony.luck, keescook, benh, x86, dave.hansen, Balbir Singh

Provide a mechanism to flush the L1D cache on context switch.  The goal
is to allow tasks that are paranoid, due to the recent snoop-assisted
data sampling vulnerabilities, to flush their L1D when being switched
out.  This protects their data from being snooped or leaked via side
channels after the task has context switched out.
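
Tasks opt in via the prctl() interface added in patch 4. A minimal
userspace sketch (assuming the PR_SET_L1D_FLUSH/PR_GET_L1D_FLUSH
definitions from this series are available in the installed headers;
error handling trimmed):

	#include <sys/prctl.h>

	static int opt_in_to_l1d_flush(void)
	{
		/* arg3..arg5 must be zero; arg2 enables (1) or disables (0) */
		if (prctl(PR_SET_L1D_FLUSH, 1UL, 0UL, 0UL, 0UL))
			return -1;

		/* Returns 1 if the flag is set for the calling task, 0 otherwise */
		return prctl(PR_GET_L1D_FLUSH, 0UL, 0UL, 0UL, 0UL);
	}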

The core of the series is patch 3; the rest largely refactors the code
so that common bits can be reused.

Changelog v3:
 - Rework the return value of flush_l1d_cache_hw()
 - Refactor the code, so that the generic setup bits come first
   (patch 3 from previous posting is now patches 3 and 4)
 - Move from arch_prctl() to the prctl() interface as recommended
   in the reviews.
Changelog v2:
 - Fix a miss of mutex_unlock (caught by Borislav Petkov <bp@alien8.de>)
 - Add documentation about the changes (Josh Poimboeuf
   <jpoimboe@redhat.com>)

Changelog:
 - Refactor the code and reuse cond_ibpb() - code bits provided by tglx
 - Merge mm state tracking for ibpb and l1d flush
 - Rename TIF_L1D_FLUSH to TIF_SPEC_FLUSH_L1D

Changelog RFC:
 - Reuse existing code for allocation and flush
 - Simplify the goto logic in the actual l1d_flush function
 - Optimize the code path with jump labels/static functions

The previous version of this series was posted at:

https://lore.kernel.org/lkml/20200406031946.11815-1-sblbir@amazon.com/

Balbir Singh (5):
  arch/x86/kvm: Refactor l1d flush lifecycle management
  arch/x86: Refactor tlbflush and l1d flush
  arch/x86/mm: Refactor cond_ibpb() to support other use cases
  arch/x86: Optionally flush L1D on context switch
  arch/x86: Add L1D flushing Documentation

 Documentation/admin-guide/hw-vuln/index.rst   |   1 +
 .../admin-guide/hw-vuln/l1d_flush.rst         |  40 +++++++
 arch/x86/include/asm/cacheflush.h             |   6 +
 arch/x86/include/asm/thread_info.h            |   6 +-
 arch/x86/include/asm/tlbflush.h               |   2 +-
 arch/x86/kernel/Makefile                      |   1 +
 arch/x86/kernel/l1d_flush.c                   |  85 ++++++++++++++
 arch/x86/kvm/vmx/vmx.c                        |  56 ++-------
 arch/x86/mm/tlb.c                             | 109 ++++++++++++++----
 include/uapi/linux/prctl.h                    |   4 +
 kernel/sys.c                                  |  20 ++++
 11 files changed, 259 insertions(+), 71 deletions(-)
 create mode 100644 Documentation/admin-guide/hw-vuln/l1d_flush.rst
 create mode 100644 arch/x86/kernel/l1d_flush.c

-- 
2.17.1



* [PATCH v3 1/5] arch/x86/kvm: Refactor l1d flush lifecycle management
  2020-04-08  9:02 [PATCH v3 0/5] Optionally flush L1D on context switch Balbir Singh
@ 2020-04-08  9:02 ` Balbir Singh
  2020-04-17 12:57   ` Thomas Gleixner
  2020-04-08  9:02 ` [PATCH v3 2/5] arch/x86: Refactor tlbflush and l1d flush Balbir Singh
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Balbir Singh @ 2020-04-08  9:02 UTC (permalink / raw)
  To: tglx, linux-kernel
  Cc: jpoimboe, tony.luck, keescook, benh, x86, dave.hansen, Balbir Singh

Split out the allocation and free routines so that they can be reused
in a follow-up set of patches (for L1D flushing).

Signed-off-by: Balbir Singh <sblbir@amazon.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/cacheflush.h |  3 +++
 arch/x86/kernel/Makefile          |  1 +
 arch/x86/kernel/l1d_flush.c       | 36 +++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c            | 25 +++------------------
 4 files changed, 43 insertions(+), 22 deletions(-)
 create mode 100644 arch/x86/kernel/l1d_flush.c

diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index 63feaf2a5f93..6419a4cef0e8 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -6,6 +6,9 @@
 #include <asm-generic/cacheflush.h>
 #include <asm/special_insns.h>
 
+#define L1D_CACHE_ORDER 4
 void clflush_cache_range(void *addr, unsigned int size);
+void *alloc_l1d_flush_pages(void);
+void cleanup_l1d_flush_pages(void *l1d_flush_pages);
 
 #endif /* _ASM_X86_CACHEFLUSH_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index d6d61c4455fa..48f443e6c2de 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -160,3 +160,4 @@ ifeq ($(CONFIG_X86_64),y)
 endif
 
 obj-$(CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT)	+= ima_arch.o
+obj-y						+= l1d_flush.o
diff --git a/arch/x86/kernel/l1d_flush.c b/arch/x86/kernel/l1d_flush.c
new file mode 100644
index 000000000000..05f375c33423
--- /dev/null
+++ b/arch/x86/kernel/l1d_flush.c
@@ -0,0 +1,36 @@
+#include <linux/mm.h>
+#include <asm/cacheflush.h>
+
+void *alloc_l1d_flush_pages(void)
+{
+	struct page *page;
+	void *l1d_flush_pages = NULL;
+	int i;
+
+	/*
+	 * This allocation for l1d_flush_pages is not tied to a VM/task's
+	 * lifetime and so should not be charged to a memcg.
+	 */
+	page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER);
+	if (!page)
+		return NULL;
+	l1d_flush_pages = page_address(page);
+
+	/*
+	 * Initialize each page with a different pattern in
+	 * order to protect against KSM in the nested
+	 * virtualization case.
+	 */
+	for (i = 0; i < 1u << L1D_CACHE_ORDER; ++i) {
+		memset(l1d_flush_pages + i * PAGE_SIZE, i + 1,
+				PAGE_SIZE);
+	}
+	return l1d_flush_pages;
+}
+EXPORT_SYMBOL_GPL(alloc_l1d_flush_pages);
+
+void cleanup_l1d_flush_pages(void *l1d_flush_pages)
+{
+	free_pages((unsigned long)l1d_flush_pages, L1D_CACHE_ORDER);
+}
+EXPORT_SYMBOL_GPL(cleanup_l1d_flush_pages);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 91749f1254e8..f40c30f9b4d8 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -203,14 +203,10 @@ static const struct {
 	[VMENTER_L1D_FLUSH_NOT_REQUIRED] = {"not required", false},
 };
 
-#define L1D_CACHE_ORDER 4
 static void *vmx_l1d_flush_pages;
 
 static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
 {
-	struct page *page;
-	unsigned int i;
-
 	if (!boot_cpu_has_bug(X86_BUG_L1TF)) {
 		l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_NOT_REQUIRED;
 		return 0;
@@ -253,24 +249,9 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
 
 	if (l1tf != VMENTER_L1D_FLUSH_NEVER && !vmx_l1d_flush_pages &&
 	    !boot_cpu_has(X86_FEATURE_FLUSH_L1D)) {
-		/*
-		 * This allocation for vmx_l1d_flush_pages is not tied to a VM
-		 * lifetime and so should not be charged to a memcg.
-		 */
-		page = alloc_pages(GFP_KERNEL, L1D_CACHE_ORDER);
-		if (!page)
+		vmx_l1d_flush_pages = alloc_l1d_flush_pages();
+		if (!vmx_l1d_flush_pages)
 			return -ENOMEM;
-		vmx_l1d_flush_pages = page_address(page);
-
-		/*
-		 * Initialize each page with a different pattern in
-		 * order to protect against KSM in the nested
-		 * virtualization case.
-		 */
-		for (i = 0; i < 1u << L1D_CACHE_ORDER; ++i) {
-			memset(vmx_l1d_flush_pages + i * PAGE_SIZE, i + 1,
-			       PAGE_SIZE);
-		}
 	}
 
 	l1tf_vmx_mitigation = l1tf;
@@ -7999,7 +7980,7 @@ static struct kvm_x86_init_ops vmx_init_ops __initdata = {
 static void vmx_cleanup_l1d_flush(void)
 {
 	if (vmx_l1d_flush_pages) {
-		free_pages((unsigned long)vmx_l1d_flush_pages, L1D_CACHE_ORDER);
+		cleanup_l1d_flush_pages(vmx_l1d_flush_pages);
 		vmx_l1d_flush_pages = NULL;
 	}
 	/* Restore state so sysfs ignores VMX */
-- 
2.17.1



* [PATCH v3 2/5] arch/x86: Refactor tlbflush and l1d flush
  2020-04-08  9:02 [PATCH v3 0/5] Optionally flush L1D on context switch Balbir Singh
  2020-04-08  9:02 ` [PATCH v3 1/5] arch/x86/kvm: Refactor l1d flush lifecycle management Balbir Singh
@ 2020-04-08  9:02 ` Balbir Singh
  2020-04-17 13:03   ` Thomas Gleixner
  2020-04-08  9:02 ` [PATCH v3 3/5] arch/x86/mm: Refactor cond_ibpb() to support other use cases Balbir Singh
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Balbir Singh @ 2020-04-08  9:02 UTC (permalink / raw)
  To: tglx, linux-kernel
  Cc: jpoimboe, tony.luck, keescook, benh, x86, dave.hansen, Balbir Singh

Refactor the existing assembly bits into smaller helper functions
and also abstract L1D_FLUSH into a helper function. Use these
functions in kvm for L1D flushing.

Signed-off-by: Balbir Singh <sblbir@amazon.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/cacheflush.h |  3 ++
 arch/x86/kernel/l1d_flush.c       | 49 +++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c            | 31 ++++---------------
 3 files changed, 57 insertions(+), 26 deletions(-)

diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index 6419a4cef0e8..66a46db7aadd 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -10,5 +10,8 @@
 void clflush_cache_range(void *addr, unsigned int size);
 void *alloc_l1d_flush_pages(void);
 void cleanup_l1d_flush_pages(void *l1d_flush_pages);
+void populate_tlb_with_flush_pages(void *l1d_flush_pages);
+void flush_l1d_cache_sw(void *l1d_flush_pages);
+int flush_l1d_cache_hw(void);
 
 #endif /* _ASM_X86_CACHEFLUSH_H */
diff --git a/arch/x86/kernel/l1d_flush.c b/arch/x86/kernel/l1d_flush.c
index 05f375c33423..0842369bac26 100644
--- a/arch/x86/kernel/l1d_flush.c
+++ b/arch/x86/kernel/l1d_flush.c
@@ -34,3 +34,52 @@ void cleanup_l1d_flush_pages(void *l1d_flush_pages)
 	free_pages((unsigned long)l1d_flush_pages, L1D_CACHE_ORDER);
 }
 EXPORT_SYMBOL_GPL(cleanup_l1d_flush_pages);
+
+void populate_tlb_with_flush_pages(void *l1d_flush_pages)
+{
+	int size = PAGE_SIZE << L1D_CACHE_ORDER;
+
+	asm volatile(
+		/* First ensure the pages are in the TLB */
+		"xorl	%%eax, %%eax\n"
+		".Lpopulate_tlb:\n\t"
+		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
+		"addl	$4096, %%eax\n\t"
+		"cmpl	%%eax, %[size]\n\t"
+		"jne	.Lpopulate_tlb\n\t"
+		"xorl	%%eax, %%eax\n\t"
+		"cpuid\n\t"
+		:: [flush_pages] "r" (l1d_flush_pages),
+		    [size] "r" (size)
+		: "eax", "ebx", "ecx", "edx");
+}
+EXPORT_SYMBOL_GPL(populate_tlb_with_flush_pages);
+
+int flush_l1d_cache_hw(void)
+{
+	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
+		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
+		return 0;
+	}
+	return -ENOTSUPP;
+}
+EXPORT_SYMBOL_GPL(flush_l1d_cache_hw);
+
+void flush_l1d_cache_sw(void *l1d_flush_pages)
+{
+	int size = PAGE_SIZE << L1D_CACHE_ORDER;
+
+	asm volatile(
+			/* Fill the cache */
+			"xorl	%%eax, %%eax\n"
+			".Lfill_cache:\n"
+			"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
+			"addl	$64, %%eax\n\t"
+			"cmpl	%%eax, %[size]\n\t"
+			"jne	.Lfill_cache\n\t"
+			"lfence\n"
+			:: [flush_pages] "r" (l1d_flush_pages),
+			[size] "r" (size)
+			: "eax", "ecx");
+}
+EXPORT_SYMBOL_GPL(flush_l1d_cache_sw);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f40c30f9b4d8..2bb91bfb8f53 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5956,8 +5956,6 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu,
  */
 static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 {
-	int size = PAGE_SIZE << L1D_CACHE_ORDER;
-
 	/*
 	 * This code is only executed when the the flush mode is 'cond' or
 	 * 'always'
@@ -5986,32 +5984,13 @@ static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 
 	vcpu->stat.l1d_flush++;
 
-	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
-		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
+	if (flush_l1d_cache_hw() == 0)
 		return;
-	}
 
-	asm volatile(
-		/* First ensure the pages are in the TLB */
-		"xorl	%%eax, %%eax\n"
-		".Lpopulate_tlb:\n\t"
-		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
-		"addl	$4096, %%eax\n\t"
-		"cmpl	%%eax, %[size]\n\t"
-		"jne	.Lpopulate_tlb\n\t"
-		"xorl	%%eax, %%eax\n\t"
-		"cpuid\n\t"
-		/* Now fill the cache */
-		"xorl	%%eax, %%eax\n"
-		".Lfill_cache:\n"
-		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
-		"addl	$64, %%eax\n\t"
-		"cmpl	%%eax, %[size]\n\t"
-		"jne	.Lfill_cache\n\t"
-		"lfence\n"
-		:: [flush_pages] "r" (vmx_l1d_flush_pages),
-		    [size] "r" (size)
-		: "eax", "ebx", "ecx", "edx");
+	preempt_disable();
+	populate_tlb_with_flush_pages(vmx_l1d_flush_pages);
+	flush_l1d_cache_sw(vmx_l1d_flush_pages);
+	preempt_enable();
 }
 
 static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
-- 
2.17.1



* [PATCH v3 3/5] arch/x86/mm: Refactor cond_ibpb() to support other use cases
  2020-04-08  9:02 [PATCH v3 0/5] Optionally flush L1D on context switch Balbir Singh
  2020-04-08  9:02 ` [PATCH v3 1/5] arch/x86/kvm: Refactor l1d flush lifecycle management Balbir Singh
  2020-04-08  9:02 ` [PATCH v3 2/5] arch/x86: Refactor tlbflush and l1d flush Balbir Singh
@ 2020-04-08  9:02 ` Balbir Singh
  2020-04-17 13:07   ` Thomas Gleixner
  2020-04-08  9:02 ` [PATCH v3 4/5] arch/x86: Optionally flush L1D on context switch Balbir Singh
  2020-04-08  9:02 ` [PATCH v3 5/5] arch/x86: Add L1D flushing Documentation Balbir Singh
  4 siblings, 1 reply; 19+ messages in thread
From: Balbir Singh @ 2020-04-08  9:02 UTC (permalink / raw)
  To: tglx, linux-kernel
  Cc: jpoimboe, tony.luck, keescook, benh, x86, dave.hansen, Balbir Singh

cond_ibpb() has the bits required to track the previous mm in
switch_mm_irqs_off(). This can be reused for other use cases like
L1D flushing (on context switch out).

Signed-off-by: Balbir Singh <sblbir@amazon.com>
---
 arch/x86/include/asm/tlbflush.h |  2 +-
 arch/x86/mm/tlb.c               | 43 +++++++++++++++++----------------
 2 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 6f66d841262d..69e6ea20679c 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -172,7 +172,7 @@ struct tlb_state {
 	/* Last user mm for optimizing IBPB */
 	union {
 		struct mm_struct	*last_user_mm;
-		unsigned long		last_user_mm_ibpb;
+		unsigned long		last_user_mm_spec;
 	};
 
 	u16 loaded_mm_asid;
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 66f96f21a7b6..da5c94286c7d 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -33,10 +33,11 @@
  */
 
 /*
- * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is
- * stored in cpu_tlb_state.last_user_mm_ibpb.
+ * Bits to mangle the TIF_SPEC_IB state into the mm pointer which is
+ * stored in cpu_tlb_state.last_user_mm_spec.
  */
 #define LAST_USER_MM_IBPB	0x1UL
+#define LAST_USER_MM_SPEC_MASK	(LAST_USER_MM_IBPB)
 
 /*
  * We get here when we do something requiring a TLB invalidation
@@ -189,19 +190,24 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
 	}
 }
 
-static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next)
+static inline unsigned long mm_mangle_tif_spec_bits(struct task_struct *next)
 {
 	unsigned long next_tif = task_thread_info(next)->flags;
-	unsigned long ibpb = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB;
+	unsigned long spec_bits = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_SPEC_MASK;
 
-	return (unsigned long)next->mm | ibpb;
+	return (unsigned long)next->mm | spec_bits;
 }
 
-static void cond_ibpb(struct task_struct *next)
+static void cond_mitigation(struct task_struct *next)
 {
+	unsigned long prev_mm, next_mm;
+
 	if (!next || !next->mm)
 		return;
 
+	next_mm = mm_mangle_tif_spec_bits(next);
+	prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_spec);
+
 	/*
 	 * Both, the conditional and the always IBPB mode use the mm
 	 * pointer to avoid the IBPB when switching between tasks of the
@@ -212,8 +218,6 @@ static void cond_ibpb(struct task_struct *next)
 	 * exposed data is not really interesting.
 	 */
 	if (static_branch_likely(&switch_mm_cond_ibpb)) {
-		unsigned long prev_mm, next_mm;
-
 		/*
 		 * This is a bit more complex than the always mode because
 		 * it has to handle two cases:
@@ -243,20 +247,14 @@ static void cond_ibpb(struct task_struct *next)
 		 * Optimize this with reasonably small overhead for the
 		 * above cases. Mangle the TIF_SPEC_IB bit into the mm
 		 * pointer of the incoming task which is stored in
-		 * cpu_tlbstate.last_user_mm_ibpb for comparison.
-		 */
-		next_mm = mm_mangle_tif_spec_ib(next);
-		prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_ibpb);
-
-		/*
+		 * cpu_tlbstate.last_user_mm_spec for comparison.
+		 *
 		 * Issue IBPB only if the mm's are different and one or
 		 * both have the IBPB bit set.
 		 */
 		if (next_mm != prev_mm &&
 		    (next_mm | prev_mm) & LAST_USER_MM_IBPB)
 			indirect_branch_prediction_barrier();
-
-		this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, next_mm);
 	}
 
 	if (static_branch_unlikely(&switch_mm_always_ibpb)) {
@@ -265,11 +263,12 @@ static void cond_ibpb(struct task_struct *next)
 		 * different context than the user space task which ran
 		 * last on this CPU.
 		 */
-		if (this_cpu_read(cpu_tlbstate.last_user_mm) != next->mm) {
+		if ((prev_mm & ~LAST_USER_MM_SPEC_MASK) !=
+					(unsigned long)next->mm)
 			indirect_branch_prediction_barrier();
-			this_cpu_write(cpu_tlbstate.last_user_mm, next->mm);
-		}
 	}
+
+	this_cpu_write(cpu_tlbstate.last_user_mm_spec, next_mm);
 }
 
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
@@ -374,8 +373,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		 * Avoid user/user BTB poisoning by flushing the branch
 		 * predictor when switching between processes. This stops
 		 * one process from doing Spectre-v2 attacks on another.
+		 * The hook can also be used for mitigations that rely
+		 * on switch_mm for hooks.
 		 */
-		cond_ibpb(tsk);
+		cond_mitigation(tsk);
 
 		if (IS_ENABLED(CONFIG_VMAP_STACK)) {
 			/*
@@ -501,7 +502,7 @@ void initialize_tlbstate_and_flush(void)
 	write_cr3(build_cr3(mm->pgd, 0));
 
 	/* Reinitialize tlbstate. */
-	this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB);
+	this_cpu_write(cpu_tlbstate.last_user_mm_spec, LAST_USER_MM_IBPB);
 	this_cpu_write(cpu_tlbstate.loaded_mm_asid, 0);
 	this_cpu_write(cpu_tlbstate.next_asid, 1);
 	this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, mm->context.ctx_id);
-- 
2.17.1



* [PATCH v3 4/5] arch/x86: Optionally flush L1D on context switch
  2020-04-08  9:02 [PATCH v3 0/5] Optionally flush L1D on context switch Balbir Singh
                   ` (2 preceding siblings ...)
  2020-04-08  9:02 ` [PATCH v3 3/5] arch/x86/mm: Refactor cond_ibpb() to support other use cases Balbir Singh
@ 2020-04-08  9:02 ` Balbir Singh
  2020-04-17 14:41   ` Thomas Gleixner
  2020-04-08  9:02 ` [PATCH v3 5/5] arch/x86: Add L1D flushing Documentation Balbir Singh
  4 siblings, 1 reply; 19+ messages in thread
From: Balbir Singh @ 2020-04-08  9:02 UTC (permalink / raw)
  To: tglx, linux-kernel
  Cc: jpoimboe, tony.luck, keescook, benh, x86, dave.hansen, Balbir Singh

Implement a mechanism to selectively flush the L1D cache. The goal is to
allow tasks that are paranoid, due to the recent snoop-assisted data
sampling vulnerabilities, to flush their L1D when being switched out.
This protects their data from being snooped or leaked via side channels
after the task has context switched out.

There are two scenarios we might want to protect against: a task leaving
the CPU with data still in L1D (which is the main concern of this patch),
and a malicious, less trusted task coming in, for which we would want to
clean up the cache before it starts. Only the former case is addressed
here.

Add prctl()s to opt in to flushing the L1D cache on context switch out;
the existing mechanism of tracking prev_mm via cpu_tlbstate is reused.

A new thread_info flag, TIF_SPEC_FLUSH_L1D, is added to track tasks which
opt in to L1D flushing. The TIF flag is converted into per-CPU mm state
via cpu_tlbstate.last_user_mm_spec in cond_mitigation(), which is then
used to decide when to call flush_l1d().
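
In outline, the switch path then does the following (a simplified sketch
of the cond_mitigation() hunk below):

	next_mm = mm_mangle_tif_spec_bits(next);	/* mm pointer | TIF bits */
	prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_spec);

	if (prev_mm & LAST_USER_MM_FLUSH_L1D)
		flush_l1d();

	this_cpu_write(cpu_tlbstate.last_user_mm_spec, next_mm);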

The current version benefited from discussions with Kees and Thomas.
Thomas suggested and provided the code snippet for refactoring the
existing cond_ibpb() code.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Balbir Singh <sblbir@amazon.com>
---
 arch/x86/include/asm/thread_info.h |  6 ++-
 arch/x86/mm/tlb.c                  | 69 +++++++++++++++++++++++++++++-
 include/uapi/linux/prctl.h         |  4 ++
 kernel/sys.c                       | 20 +++++++++
 4 files changed, 96 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 8de8ceccb8bc..be25cc0c677d 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -84,7 +84,7 @@ struct thread_info {
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
 #define TIF_SPEC_IB		9	/* Indirect branch speculation mitigation */
-#define TIF_SPEC_FORCE_UPDATE	10	/* Force speculation MSR update in context switch */
+#define TIF_SPEC_FLUSH_L1D	10	/* Flush L1D on mm switches (processes) */
 #define TIF_USER_RETURN_NOTIFY	11	/* notify kernel of userspace return */
 #define TIF_UPROBE		12	/* breakpointed or singlestepping */
 #define TIF_PATCH_PENDING	13	/* pending live patching update */
@@ -96,6 +96,7 @@ struct thread_info {
 #define TIF_MEMDIE		20	/* is terminating due to OOM killer */
 #define TIF_POLLING_NRFLAG	21	/* idle is polling for TIF_NEED_RESCHED */
 #define TIF_IO_BITMAP		22	/* uses I/O bitmap */
+#define TIF_SPEC_FORCE_UPDATE	23	/* Force speculation MSR update in context switch */
 #define TIF_FORCED_TF		24	/* true if TF in eflags artificially */
 #define TIF_BLOCKSTEP		25	/* set when we want DEBUGCTLMSR_BTF */
 #define TIF_LAZY_MMU_UPDATES	27	/* task is updating the mmu lazily */
@@ -132,6 +133,7 @@ struct thread_info {
 #define _TIF_ADDR32		(1 << TIF_ADDR32)
 #define _TIF_X32		(1 << TIF_X32)
 #define _TIF_FSCHECK		(1 << TIF_FSCHECK)
+#define _TIF_SPEC_FLUSH_L1D	(1 << TIF_SPEC_FLUSH_L1D)
 
 /* Work to do before invoking the actual syscall. */
 #define _TIF_WORK_SYSCALL_ENTRY	\
@@ -239,6 +241,8 @@ extern void arch_task_cache_init(void);
 extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
 extern void arch_release_task_struct(struct task_struct *tsk);
 extern void arch_setup_new_exec(void);
+extern int arch_prctl_l1d_flush_set(struct task_struct *tsk, unsigned long enable);
+extern int arch_prctl_l1d_flush_get(struct task_struct *tsk);
 #define arch_setup_new_exec arch_setup_new_exec
 #endif	/* !__ASSEMBLY__ */
 
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index da5c94286c7d..85b8eec0ff07 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -13,6 +13,7 @@
 #include <asm/mmu_context.h>
 #include <asm/nospec-branch.h>
 #include <asm/cache.h>
+#include <asm/cacheflush.h>
 #include <asm/apic.h>
 #include <asm/uv/uv.h>
 
@@ -33,11 +34,12 @@
  */
 
 /*
- * Bits to mangle the TIF_SPEC_IB state into the mm pointer which is
+ * Bits to mangle the TIF_SPEC_* state into the mm pointer which is
  * stored in cpu_tlb_state.last_user_mm_spec.
  */
 #define LAST_USER_MM_IBPB	0x1UL
-#define LAST_USER_MM_SPEC_MASK	(LAST_USER_MM_IBPB)
+#define LAST_USER_MM_FLUSH_L1D	0x2UL
+#define LAST_USER_MM_SPEC_MASK	(LAST_USER_MM_IBPB | LAST_USER_MM_FLUSH_L1D)
 
 /*
  * We get here when we do something requiring a TLB invalidation
@@ -152,6 +154,64 @@ void leave_mm(int cpu)
 }
 EXPORT_SYMBOL_GPL(leave_mm);
 
+static void *l1d_flush_pages;
+static DEFINE_MUTEX(l1d_flush_mutex);
+
+int enable_l1d_flush_for_task(struct task_struct *tsk)
+{
+	struct page *page;
+	int ret = 0;
+
+	if (static_cpu_has(X86_FEATURE_FLUSH_L1D))
+		goto done;
+
+	page = READ_ONCE(l1d_flush_pages);
+	if (unlikely(!page)) {
+		mutex_lock(&l1d_flush_mutex);
+		if (!l1d_flush_pages) {
+			l1d_flush_pages = alloc_l1d_flush_pages();
+			if (!l1d_flush_pages) {
+				mutex_unlock(&l1d_flush_mutex);
+				return -ENOMEM;
+			}
+		}
+		mutex_unlock(&l1d_flush_mutex);
+	}
+done:
+	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_FLUSH_L1D);
+	return ret;
+}
+
+int disable_l1d_flush_for_task(struct task_struct *tsk)
+{
+	clear_ti_thread_flag(&tsk->thread_info, TIF_SPEC_FLUSH_L1D);
+	return 0;
+}
+
+int arch_prctl_l1d_flush_get(struct task_struct *tsk)
+{
+	return test_ti_thread_flag(&tsk->thread_info, TIF_SPEC_FLUSH_L1D);
+}
+
+int arch_prctl_l1d_flush_set(struct task_struct *tsk, unsigned long enable)
+{
+	if (enable)
+		return enable_l1d_flush_for_task(tsk);
+	return disable_l1d_flush_for_task(tsk);
+}
+
+/*
+ * Flush the L1D cache for this CPU. We want to this at switch mm time,
+ * this is a pessimistic security measure and an opt-in for those tasks
+ * that host sensitive information.
+ */
+static void flush_l1d(void)
+{
+	if (flush_l1d_cache_hw() == 0)
+		return;
+	flush_l1d_cache_sw(l1d_flush_pages);
+}
+
 void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	       struct task_struct *tsk)
 {
@@ -195,6 +255,8 @@ static inline unsigned long mm_mangle_tif_spec_bits(struct task_struct *next)
 	unsigned long next_tif = task_thread_info(next)->flags;
 	unsigned long spec_bits = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_SPEC_MASK;
 
+	BUILD_BUG_ON(TIF_SPEC_FLUSH_L1D != TIF_SPEC_IB + 1);
+
 	return (unsigned long)next->mm | spec_bits;
 }
 
@@ -268,6 +330,9 @@ static void cond_mitigation(struct task_struct *next)
 			indirect_branch_prediction_barrier();
 	}
 
+	if (prev_mm & LAST_USER_MM_FLUSH_L1D)
+		flush_l1d();
+
 	this_cpu_write(cpu_tlbstate.last_user_mm_spec, next_mm);
 }
 
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 07b4f8131e36..42cb3038c81a 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -238,4 +238,8 @@ struct prctl_mm_map {
 #define PR_SET_IO_FLUSHER		57
 #define PR_GET_IO_FLUSHER		58
 
+/* Flush L1D on context switch (mm) */
+#define PR_SET_L1D_FLUSH		59
+#define PR_GET_L1D_FLUSH		60
+
 #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/sys.c b/kernel/sys.c
index d325f3ab624a..578aa8b6d87e 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2262,6 +2262,16 @@ int __weak arch_prctl_spec_ctrl_set(struct task_struct *t, unsigned long which,
 	return -EINVAL;
 }
 
+int __weak arch_prctl_l1d_flush_set(struct task_struct *tsk, unsigned long enable)
+{
+	return -EINVAL;
+}
+
+int __weak arch_prctl_l1d_flush_get(struct task_struct *t)
+{
+	return -EINVAL;
+}
+
 #define PR_IO_FLUSHER (PF_MEMALLOC_NOIO | PF_LESS_THROTTLE)
 
 SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
@@ -2514,6 +2524,16 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 
 		error = (current->flags & PR_IO_FLUSHER) == PR_IO_FLUSHER;
 		break;
+	case PR_SET_L1D_FLUSH:
+		if (arg3 || arg4 || arg5)
+			return -EINVAL;
+		error = arch_prctl_l1d_flush_set(me, arg2);
+		break;
+	case PR_GET_L1D_FLUSH:
+		if (arg2 || arg3 || arg4 || arg5)
+			return -EINVAL;
+		error = arch_prctl_l1d_flush_get(me);
+		break;
 	default:
 		error = -EINVAL;
 		break;
-- 
2.17.1



* [PATCH v3 5/5] arch/x86: Add L1D flushing Documentation
  2020-04-08  9:02 [PATCH v3 0/5] Optionally flush L1D on context switch Balbir Singh
                   ` (3 preceding siblings ...)
  2020-04-08  9:02 ` [PATCH v3 4/5] arch/x86: Optionally flush L1D on context switch Balbir Singh
@ 2020-04-08  9:02 ` Balbir Singh
  4 siblings, 0 replies; 19+ messages in thread
From: Balbir Singh @ 2020-04-08  9:02 UTC (permalink / raw)
  To: tglx, linux-kernel
  Cc: jpoimboe, tony.luck, keescook, benh, x86, dave.hansen, Balbir Singh

Add documentation for L1D flushing: explain the need for the feature
and how it can be used.

Signed-off-by: Balbir Singh <sblbir@amazon.com>
---
 Documentation/admin-guide/hw-vuln/index.rst   |  1 +
 .../admin-guide/hw-vuln/l1d_flush.rst         | 40 +++++++++++++++++++
 2 files changed, 41 insertions(+)
 create mode 100644 Documentation/admin-guide/hw-vuln/l1d_flush.rst

diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
index 0795e3c2643f..35633b299d45 100644
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -14,3 +14,4 @@ are configurable at compile, boot or run time.
    mds
    tsx_async_abort
    multihit.rst
+   l1d_flush
diff --git a/Documentation/admin-guide/hw-vuln/l1d_flush.rst b/Documentation/admin-guide/hw-vuln/l1d_flush.rst
new file mode 100644
index 000000000000..7d515b8c29f1
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/l1d_flush.rst
@@ -0,0 +1,40 @@
+L1D Flushing for the paranoid
+=============================
+
+With an increasing number of vulnerabilities being reported around data
+leaks from L1D, a new user space mechanism to flush the L1D cache on
+context switch is added to the kernel. This should help address
+CVE-2020-0550 and, for paranoid applications, keep them safe from any
+yet-to-be-discovered vulnerabilities related to leaks from the L1D
+cache.
+
+Tasks can opt in to this mechanism by using a prctl (implemented only
+for x86 at the moment).
+
+Related CVES
+------------
+At present, the following CVEs can be addressed by this
+mechanism:
+
+    =============       ========================     ==================
+    CVE-2020-0550       Improper Data Forwarding     OS related aspects
+    =============       ========================     ==================
+
+Usage Guidelines
+----------------
+Applications can call ``prctl(2)`` with one of these two arguments
+
+1. PR_SET_L1D_FLUSH - flush the L1D cache on context switch (out)
+2. PR_GET_L1D_FLUSH - get the current state of the L1D cache flush, returns 1
+   if set and 0 if not set.
+
+**NOTE**: The feature is disabled by default; applications need to specifically
+opt in to the feature to enable it.
+
+Mitigation
+----------
+When PR_SET_L1D_FLUSH is enabled for a task, a flush of the L1D cache is
+performed for that task when it leaves the CPU (on a task switch where
+the address space changes). If the underlying CPU supports L1D flushing
+in hardware, the hardware mechanism is used; otherwise a software
+fallback, similar to the mechanism used for the L1TF mitigation, is used.
-- 
2.17.1



* Re: [PATCH v3 1/5] arch/x86/kvm: Refactor l1d flush lifecycle management
  2020-04-08  9:02 ` [PATCH v3 1/5] arch/x86/kvm: Refactor l1d flush lifecycle management Balbir Singh
@ 2020-04-17 12:57   ` Thomas Gleixner
  2020-04-17 22:34     ` Singh, Balbir
  0 siblings, 1 reply; 19+ messages in thread
From: Thomas Gleixner @ 2020-04-17 12:57 UTC (permalink / raw)
  To: Balbir Singh, linux-kernel
  Cc: jpoimboe, tony.luck, keescook, benh, x86, dave.hansen, Balbir Singh

Balbir Singh <sblbir@amazon.com> writes:
>  #include <asm-generic/cacheflush.h>
>  #include <asm/special_insns.h>
>  
> +#define L1D_CACHE_ORDER 4

Newline between constants and declarations please

>  void clflush_cache_range(void *addr, unsigned int size);
> +void *alloc_l1d_flush_pages(void);
> +void cleanup_l1d_flush_pages(void *l1d_flush_pages);

Can we please have a consistent name space prefix?

l1d_flush_*()

Thanks,

        tglx


* Re: [PATCH v3 2/5] arch/x86: Refactor tlbflush and l1d flush
  2020-04-08  9:02 ` [PATCH v3 2/5] arch/x86: Refactor tlbflush and l1d flush Balbir Singh
@ 2020-04-17 13:03   ` Thomas Gleixner
  2020-04-17 22:58     ` Singh, Balbir
  0 siblings, 1 reply; 19+ messages in thread
From: Thomas Gleixner @ 2020-04-17 13:03 UTC (permalink / raw)
  To: Balbir Singh, linux-kernel
  Cc: jpoimboe, tony.luck, keescook, benh, x86, dave.hansen, Balbir Singh

Balbir Singh <sblbir@amazon.com> writes:
> +void populate_tlb_with_flush_pages(void *l1d_flush_pages);
> +void flush_l1d_cache_sw(void *l1d_flush_pages);
> +int flush_l1d_cache_hw(void);

l1d_flush_populate_pages();
l1d_flush_sw()
l1d_flush_hw()

Hmm?

> +void populate_tlb_with_flush_pages(void *l1d_flush_pages)
> +{
> +	int size = PAGE_SIZE << L1D_CACHE_ORDER;
> +
> +	asm volatile(
> +		/* First ensure the pages are in the TLB */
> +		"xorl	%%eax, %%eax\n"
> +		".Lpopulate_tlb:\n\t"
> +		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
> +		"addl	$4096, %%eax\n\t"
> +		"cmpl	%%eax, %[size]\n\t"
> +		"jne	.Lpopulate_tlb\n\t"
> +		"xorl	%%eax, %%eax\n\t"
> +		"cpuid\n\t"
> +		:: [flush_pages] "r" (l1d_flush_pages),
> +		    [size] "r" (size)
> +		: "eax", "ebx", "ecx", "edx");
> +}
> +EXPORT_SYMBOL_GPL(populate_tlb_with_flush_pages);

I probably missed the fine print in the change log why this is separate
from the SW flush function.

> +int flush_l1d_cache_hw(void)
> +{
> +	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
> +		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
> +		return 0;
> +	}
> +	return -ENOTSUPP;
> +}
> +EXPORT_SYMBOL_GPL(flush_l1d_cache_hw);

along with the explanation why this needs to be two functions.

> -	if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
> -		wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
> +	if (flush_l1d_cache_hw() == 0)
>  		return;
> -	}

	if (!l1d_flush_hw())
		return;

> -	asm volatile(
> -		/* First ensure the pages are in the TLB */
> -		"xorl	%%eax, %%eax\n"
> -		".Lpopulate_tlb:\n\t"
> -		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
> -		"addl	$4096, %%eax\n\t"
> -		"cmpl	%%eax, %[size]\n\t"
> -		"jne	.Lpopulate_tlb\n\t"
> -		"xorl	%%eax, %%eax\n\t"
> -		"cpuid\n\t"
> -		/* Now fill the cache */
> -		"xorl	%%eax, %%eax\n"
> -		".Lfill_cache:\n"
> -		"movzbl	(%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
> -		"addl	$64, %%eax\n\t"
> -		"cmpl	%%eax, %[size]\n\t"
> -		"jne	.Lfill_cache\n\t"
> -		"lfence\n"
> -		:: [flush_pages] "r" (vmx_l1d_flush_pages),
> -		    [size] "r" (size)
> -		: "eax", "ebx", "ecx", "edx");
> +	preempt_disable();
> +	populate_tlb_with_flush_pages(vmx_l1d_flush_pages);
> +	flush_l1d_cache_sw(vmx_l1d_flush_pages);
> +	preempt_enable();

The preempt_disable/enable was not there before, right? Why do we need
that now? If this is a fix, then that should be a separate patch.

Thanks,

        tglx


* Re: [PATCH v3 3/5] arch/x86/mm: Refactor cond_ibpb() to support other use cases
  2020-04-08  9:02 ` [PATCH v3 3/5] arch/x86/mm: Refactor cond_ibpb() to support other use cases Balbir Singh
@ 2020-04-17 13:07   ` Thomas Gleixner
  2020-04-17 23:02     ` Singh, Balbir
  0 siblings, 1 reply; 19+ messages in thread
From: Thomas Gleixner @ 2020-04-17 13:07 UTC (permalink / raw)
  To: Balbir Singh, linux-kernel
  Cc: jpoimboe, tony.luck, keescook, benh, x86, dave.hansen, Balbir Singh

Balbir Singh <sblbir@amazon.com> writes:
>  
>  /*
> - * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is
> - * stored in cpu_tlb_state.last_user_mm_ibpb.
> + * Bits to mangle the TIF_SPEC_IB state into the mm pointer which is
> + * stored in cpu_tlb_state.last_user_mm_spec.
>   */
>  #define LAST_USER_MM_IBPB	0x1UL
> +#define LAST_USER_MM_SPEC_MASK	(LAST_USER_MM_IBPB)
>  
>  	/* Reinitialize tlbstate. */
> -	this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB);
> +	this_cpu_write(cpu_tlbstate.last_user_mm_spec, LAST_USER_MM_IBPB);

Shouldn't that be LAST_USER_MM_MASK?

Thanks,

        tglx


* Re: [PATCH v3 4/5] arch/x86: Optionally flush L1D on context switch
  2020-04-08  9:02 ` [PATCH v3 4/5] arch/x86: Optionally flush L1D on context switch Balbir Singh
@ 2020-04-17 14:41   ` Thomas Gleixner
  2020-04-18  1:30     ` Singh, Balbir
  0 siblings, 1 reply; 19+ messages in thread
From: Thomas Gleixner @ 2020-04-17 14:41 UTC (permalink / raw)
  To: Balbir Singh, linux-kernel
  Cc: jpoimboe, tony.luck, keescook, benh, x86, dave.hansen, Balbir Singh

Balbir Singh <sblbir@amazon.com> writes:
 
> +static void *l1d_flush_pages;
> +static DEFINE_MUTEX(l1d_flush_mutex);
> +
> +int enable_l1d_flush_for_task(struct task_struct *tsk)

static ?

> +{
> +	struct page *page;
> +	int ret = 0;
> +
> +	if (static_cpu_has(X86_FEATURE_FLUSH_L1D))
> +		goto done;
> +
> +	page = READ_ONCE(l1d_flush_pages);
> +	if (unlikely(!page)) {
> +		mutex_lock(&l1d_flush_mutex);
> +		if (!l1d_flush_pages) {
> +			l1d_flush_pages = alloc_l1d_flush_pages();
> +			if (!l1d_flush_pages) {
> +				mutex_unlock(&l1d_flush_mutex);
> +				return -ENOMEM;
> +			}
> +		}
> +		mutex_unlock(&l1d_flush_mutex);
> +	}

So this is +/- the mutex the same code as KVM has. Why is this not moved
into l1d_flush.c, i.e. 

static void *l1d_flush_pages;
static DEFINE_MUTEX(l1d_flush_mutex);

int l1d_flush_init(void)
{
	int ret;
        
	if (static_cpu_has(X86_FEATURE_FLUSH_L1D) || l1d_flush_pages)
		return 0;

	mutex_lock(&l1d_flush_mutex);
	if (!l1d_flush_pages)
		l1d_flush_pages = l1d_flush_alloc_pages();
        ret = l1d_flush_pages ? 0 : -ENOMEM;        
	mutex_unlock(&l1d_flush_mutex);
        return ret;
}
EXPORT_SYMBOL_GPL(l1d_flush_init);

which removes the export of l1d_flush_alloc_pages() and gets rid of the
cleanup counterpart. In a real world deployment unloading of VMX if used
once is unlikely and with the task based one you end up with these pages
'leaked' anyway if used once.

Can we please make this simple and consistent?

> +done:
> +	set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_FLUSH_L1D);
> +	return ret;
> +}
> +
> +int disable_l1d_flush_for_task(struct task_struct *tsk)

static void?

> +{
> +	clear_ti_thread_flag(&tsk->thread_info, TIF_SPEC_FLUSH_L1D);
> +	return 0;
> +}
> +
> +int arch_prctl_l1d_flush_get(struct task_struct *tsk)
> +{
> +	return test_ti_thread_flag(&tsk->thread_info, TIF_SPEC_FLUSH_L1D);
> +}
> +
> +int arch_prctl_l1d_flush_set(struct task_struct *tsk, unsigned long enable)
> +{
> +	if (enable)
> +		return enable_l1d_flush_for_task(tsk);
> +	return disable_l1d_flush_for_task(tsk);
> +}

If any other architecture enables this, then it will have _ALL_ of this
code duplicated. So we should rather have:

  - CONFIG_FLUSH_L1D which gets selected by features requesting
    it, i.e. X86 + VMX

  - CONFIG_FLUSH_L1D_PRCTL which gets selected by architectures
    supporting it, i.e. X86. This selects CONFIG_FLUSH_L1D and enables
    the prctl logic.

  - arch/xxx/include/asm/l1d_flush.h which has for x86:

    #include <linux/l1d_flush.h>
    #include <asm/thread_info.h>

    #define L1D_CACHE_ORDER 4

    static inline bool arch_has_l1d_flush_hw(void)
    {
    	return static_cpu_has(X86_FEATURE_FLUSH_L1D);
    }

    // This is to make the allocation function conditional or if an
    // architecture knows upfront compile time optimized.

    static inline void arch_l1d_flush(void *pages, unsigned long options)
    {
        if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
        	...
                return;
        }

        if (options & POPULATE_TLB)
        	l1d_flush_populate_tlb(pages);
        l1d_flush_sw(pages);
    }

    // The option bits go into linux/l1d_flush.h and the asm header has
    // exactly one file which includes it: lib/l1d_flush.c
    //
    // All other places (VMX, arch context switch code) include 
    // linux/l1d_flush.h which also contains the prototypes for
    // l1d_flush_init() and l1d_flush().

  - Have l1d_flush_init() and the alloc function in lib/l1d_flush.c

  - The flush invocation in lib/l1d_flush.c:

    void l1d_flush(unsigned long options)
    {
   	arch_l1d_flush(l1d_flush_pages, options);
    }
    EXPORT_SYMBOL_GPL(l1d_flush);

  - All architectures have to use TIF_SPEC_FLUSH_L1D if they want to
    support the prctl.

    So all of the above arch_prctl*() stuff becomes generic and sits in
    lib/l1d_flush.c hidden behind CONFIG_FLUSH_L1D_PRCTL.

    The only architecture specific bits are in the actual context switch
    code.

Hmm?

> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> index 07b4f8131e36..42cb3038c81a 100644
> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -238,4 +238,8 @@ struct prctl_mm_map {
>  #define PR_SET_IO_FLUSHER		57
>  #define PR_GET_IO_FLUSHER		58
>  
> +/* Flush L1D on context switch (mm) */
> +#define PR_SET_L1D_FLUSH		59
> +#define PR_GET_L1D_FLUSH		60
> +
>  #endif /* _LINUX_PRCTL_H */
> diff --git a/kernel/sys.c b/kernel/sys.c
> index d325f3ab624a..578aa8b6d87e 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -2262,6 +2262,16 @@ int __weak arch_prctl_spec_ctrl_set(struct task_struct *t, unsigned long which,
>  	return -EINVAL;
>  }
>  
> +int __weak arch_prctl_l1d_flush_set(struct task_struct *tsk, unsigned long enable)
> +{
> +	return -EINVAL;
> +}
> +
> +int __weak arch_prctl_l1d_flush_get(struct task_struct *t)
> +{
> +	return -EINVAL;
> +}
> +
>  #define PR_IO_FLUSHER (PF_MEMALLOC_NOIO | PF_LESS_THROTTLE)
>  
>  SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
> @@ -2514,6 +2524,16 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>  
>  		error = (current->flags & PR_IO_FLUSHER) == PR_IO_FLUSHER;
>  		break;
> +	case PR_SET_L1D_FLUSH:
> +		if (arg3 || arg4 || arg5)
> +			return -EINVAL;
> +		error = arch_prctl_l1d_flush_set(me, arg2);
> +		break;
> +	case PR_GET_L1D_FLUSH:
> +		if (arg2 || arg3 || arg4 || arg5)
> +			return -EINVAL;
> +		error = arch_prctl_l1d_flush_get(me);
> +		break;
>  	default:
>  		error = -EINVAL;
>  		break;

The prctl itself looks fine to me.

Thanks,

        tglx



* Re: [PATCH v3 1/5] arch/x86/kvm: Refactor l1d flush lifecycle management
  2020-04-17 12:57   ` Thomas Gleixner
@ 2020-04-17 22:34     ` Singh, Balbir
  0 siblings, 0 replies; 19+ messages in thread
From: Singh, Balbir @ 2020-04-17 22:34 UTC (permalink / raw)
  To: tglx, linux-kernel; +Cc: keescook, tony.luck, benh, jpoimboe, x86, dave.hansen

On Fri, 2020-04-17 at 14:57 +0200, Thomas Gleixner wrote:
> 
> Balbir Singh <sblbir@amazon.com> writes:
> >  #include <asm-generic/cacheflush.h>
> >  #include <asm/special_insns.h>
> > 
> > +#define L1D_CACHE_ORDER 4
> 
> Newline between constants and declarations please
> 
> >  void clflush_cache_range(void *addr, unsigned int size);
> > +void *alloc_l1d_flush_pages(void);
> > +void cleanup_l1d_flush_pages(void *l1d_flush_pages);
> 
> Can we please have a consistent name space prefix?
> 
> l1d_flush_*()
> 

I used l1d_flush_pages as a noun with the verb in front of it to denote
the action, as in alloc_l1d_flush_pages(). Happy to change it over; I
don't feel too strongly about it.

Balbir



* Re: [PATCH v3 2/5] arch/x86: Refactor tlbflush and l1d flush
  2020-04-17 13:03   ` Thomas Gleixner
@ 2020-04-17 22:58     ` Singh, Balbir
  0 siblings, 0 replies; 19+ messages in thread
From: Singh, Balbir @ 2020-04-17 22:58 UTC (permalink / raw)
  To: tglx, linux-kernel; +Cc: keescook, tony.luck, benh, jpoimboe, x86, dave.hansen

On Fri, 2020-04-17 at 15:03 +0200, Thomas Gleixner wrote:
> 
> Balbir Singh <sblbir@amazon.com> writes:
> > +void populate_tlb_with_flush_pages(void *l1d_flush_pages);
> > +void flush_l1d_cache_sw(void *l1d_flush_pages);
> > +int flush_l1d_cache_hw(void);
> 
> l1d_flush_populate_pages();
> l1d_flush_sw()
> l1d_flush_hw()
> 
> Hmm?
> 

I can rename them

> > +void populate_tlb_with_flush_pages(void *l1d_flush_pages)
> > +{
> > +     int size = PAGE_SIZE << L1D_CACHE_ORDER;
> > +
> > +     asm volatile(
> > +             /* First ensure the pages are in the TLB */
> > +             "xorl   %%eax, %%eax\n"
> > +             ".Lpopulate_tlb:\n\t"
> > +             "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
> > +             "addl   $4096, %%eax\n\t"
> > +             "cmpl   %%eax, %[size]\n\t"
> > +             "jne    .Lpopulate_tlb\n\t"
> > +             "xorl   %%eax, %%eax\n\t"
> > +             "cpuid\n\t"
> > +             :: [flush_pages] "r" (l1d_flush_pages),
> > +                 [size] "r" (size)
> > +             : "eax", "ebx", "ecx", "edx");
> > +}
> > +EXPORT_SYMBOL_GPL(populate_tlb_with_flush_pages);
> 
> I probably missed the fine print in the change log why this is separate
> from the SW flush function.

In the RFC we discussed whether we really need to prefetch the pages into
the TLB prior to the flush; I thought that the TLB prefetch was not
required for these patches (L1D flush), so I split it out.

> 
> > +int flush_l1d_cache_hw(void)
> > +{
> > +     if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
> > +             wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
> > +             return 0;
> > +     }
> > +     return -ENOTSUPP;
> > +}
> > +EXPORT_SYMBOL_GPL(flush_l1d_cache_hw);
> 
> along with the explanation why this needs to be two functions.
> 

Are you suggesting I abstract the hw and sw flushes into one function? I can
do that.

> > -     if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
> > -             wrmsrl(MSR_IA32_FLUSH_CMD, L1D_FLUSH);
> > +     if (flush_l1d_cache_hw() == 0)
> >               return;
> > -     }
> 
>         if (!l1d_flush_hw())
>                 return;
> 
> > -     asm volatile(
> > -             /* First ensure the pages are in the TLB */
> > -             "xorl   %%eax, %%eax\n"
> > -             ".Lpopulate_tlb:\n\t"
> > -             "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
> > -             "addl   $4096, %%eax\n\t"
> > -             "cmpl   %%eax, %[size]\n\t"
> > -             "jne    .Lpopulate_tlb\n\t"
> > -             "xorl   %%eax, %%eax\n\t"
> > -             "cpuid\n\t"
> > -             /* Now fill the cache */
> > -             "xorl   %%eax, %%eax\n"
> > -             ".Lfill_cache:\n"
> > -             "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
> > -             "addl   $64, %%eax\n\t"
> > -             "cmpl   %%eax, %[size]\n\t"
> > -             "jne    .Lfill_cache\n\t"
> > -             "lfence\n"
> > -             :: [flush_pages] "r" (vmx_l1d_flush_pages),
> > -                 [size] "r" (size)
> > -             : "eax", "ebx", "ecx", "edx");
> > +     preempt_disable();
> > +     populate_tlb_with_flush_pages(vmx_l1d_flush_pages);
> > +     flush_l1d_cache_sw(vmx_l1d_flush_pages);
> > +     preempt_enable();
> 
> The preempt_disable/enable was not there before, right? Why do we need
> that now? If this is a fix, then that should be a separate patch.
> 

No, they were not. I added them because I was concerned about preemption;
it's a speculative change, as my concern was that we could fill the TLB and
then get preempted. Looking at the caller context, we do run with interrupts
disabled, so I might have been too conservative and we don't need this. I'll
remove it.

Balbir



* Re: [PATCH v3 3/5] arch/x86/mm: Refactor cond_ibpb() to support other use cases
  2020-04-17 13:07   ` Thomas Gleixner
@ 2020-04-17 23:02     ` Singh, Balbir
  2020-04-18  9:59       ` Thomas Gleixner
  0 siblings, 1 reply; 19+ messages in thread
From: Singh, Balbir @ 2020-04-17 23:02 UTC (permalink / raw)
  To: tglx, linux-kernel; +Cc: keescook, tony.luck, benh, jpoimboe, x86, dave.hansen

On Fri, 2020-04-17 at 15:07 +0200, Thomas Gleixner wrote:
> 
> Balbir Singh <sblbir@amazon.com> writes:
> > 
> >  /*
> > - * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is
> > - * stored in cpu_tlb_state.last_user_mm_ibpb.
> > + * Bits to mangle the TIF_SPEC_IB state into the mm pointer which is
> > + * stored in cpu_tlb_state.last_user_mm_spec.
> >   */
> >  #define LAST_USER_MM_IBPB    0x1UL
> > +#define LAST_USER_MM_SPEC_MASK       (LAST_USER_MM_IBPB)
> > 
> >       /* Reinitialize tlbstate. */
> > -     this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB);
> > +     this_cpu_write(cpu_tlbstate.last_user_mm_spec, LAST_USER_MM_IBPB);
> 
> Shouldn't that be LAST_USER_MM_MASK?
> 
> 

No, that crashes the system for SW flushes, because it tries to flush the L1D
via the software loop, and that early we don't have the l1d_flush_pages
allocated yet. LAST_USER_MM_MASK has the LAST_USER_MM_FLUSH_L1D bit set.

Balbir Singh.


* Re: [PATCH v3 4/5] arch/x86: Optionally flush L1D on context switch
  2020-04-17 14:41   ` Thomas Gleixner
@ 2020-04-18  1:30     ` Singh, Balbir
  2020-04-18 10:17       ` Thomas Gleixner
  0 siblings, 1 reply; 19+ messages in thread
From: Singh, Balbir @ 2020-04-18  1:30 UTC (permalink / raw)
  To: tglx, linux-kernel; +Cc: keescook, tony.luck, benh, jpoimboe, x86, dave.hansen

On Fri, 2020-04-17 at 16:41 +0200, Thomas Gleixner wrote:
> 
> Balbir Singh <sblbir@amazon.com> writes:
> 
> > +static void *l1d_flush_pages;
> > +static DEFINE_MUTEX(l1d_flush_mutex);
> > +
> > +int enable_l1d_flush_for_task(struct task_struct *tsk)
> 
> static ?
> 
> > +{
> > +     struct page *page;
> > +     int ret = 0;
> > +
> > +     if (static_cpu_has(X86_FEATURE_FLUSH_L1D))
> > +             goto done;
> > +
> > +     page = READ_ONCE(l1d_flush_pages);
> > +     if (unlikely(!page)) {
> > +             mutex_lock(&l1d_flush_mutex);
> > +             if (!l1d_flush_pages) {
> > +                     l1d_flush_pages = alloc_l1d_flush_pages();
> > +                     if (!l1d_flush_pages) {
> > +                             mutex_unlock(&l1d_flush_mutex);
> > +                             return -ENOMEM;
> > +                     }
> > +             }
> > +             mutex_unlock(&l1d_flush_mutex);
> > +     }
> 
> So this is +/- the mutex the same code as KVM has. Why is this not moved
> into l1d_flush.c, i.e.
> 
> static void *l1d_flush_pages;
> static DEFINE_MUTEX(l1d_flush_mutex);
> 
> int l1d_flush_init(void)
> {
>         int ret;
> 
>         if (static_cpu_has(X86_FEATURE_FLUSH_L1D) || l1d_flush_pages)
>                 return 0;
> 
>         mutex_lock(&l1d_flush_mutex);
>         if (!l1d_flush_pages)
>                 l1d_flush_pages = l1d_flush_alloc_pages();
>         ret = l1d_flush_pages ? 0 : -ENOMEM;
>         mutex_unlock(&l1d_flush_mutex);
>         return ret;
> }
> EXPORT_SYMBOL_GPL(l1d_flush_init);
> 
> which removes the export of l1d_flush_alloc_pages() and gets rid of the
> cleanup counterpart. In a real world deployment unloading of VMX if used
> once is unlikely and with the task based one you end up with these pages
> 'leaked' anyway if used once.
> 

I don't want the patches to enforce that one cannot unload the kvm module,
but I can refactor those bits a bit more.


> Can we please make this simple and consistent?



> 
> > +done:
> > +     set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_FLUSH_L1D);
> > +     return ret;
> > +}
> > +
> > +int disable_l1d_flush_for_task(struct task_struct *tsk)
> 
> static void?
> 
> > +{
> > +     clear_ti_thread_flag(&tsk->thread_info, TIF_SPEC_FLUSH_L1D);
> > +     return 0;
> > +}
> > +
> > +int arch_prctl_l1d_flush_get(struct task_struct *tsk)
> > +{
> > +     return test_ti_thread_flag(&tsk->thread_info, TIF_SPEC_FLUSH_L1D);
> > +}
> > +
> > +int arch_prctl_l1d_flush_set(struct task_struct *tsk, unsigned long
> > enable)
> > +{
> > +     if (enable)
> > +             return enable_l1d_flush_for_task(tsk);
> > +     return disable_l1d_flush_for_task(tsk);
> > +}
> 
> If any other architecture enables this, then it will have _ALL_ of this
> code duplicated. So we should rather have:

But that is being a bit prescriptive in asking architectures to implement
their L1D flushing using TIF flags; architectures should be free to use bits
in their mm struct instead if they prefer.

> 
>   - CONFIG_FLUSH_L1D which gets selected by features requesting
>     it, i.e. X86 + VMX
> 
>   - CONFIG_FLUSH_L1D_PRCTL which gets selected by architectures
>     supporting it, i.e. X86. This selects CONFIG_FLUSH_L1D and enables
>     the prctl logic.
> 
>   - arch/xxx/include/asm/l1d_flush.h which has for x86:
> 
>     #include <linux/l1d_flush.h>
>     #include <asm/thread_info.h>
> 
>     #define L1D_CACHE_ORDER 4
> 
>     static inline bool arch_has_l1d_flush_hw(void)
>     {
>         return static_cpu_has(X86_FEATURE_FLUSH_L1D);
>     }
> 
>     // This is to make the allocation function conditional or if an
>     // architecture knows upfront compile time optimized.
> 
>     static inline void arch_l1d_flush(void *pages, unsigned long options)
>     {
>         if (static_cpu_has(X86_FEATURE_FLUSH_L1D)) {
>                 ...
>                 return;
>         }
> 
>         if (options & POPULATE_TLB)
>                 l1d_flush_populate_tlb(pages);
>         l1d_flush_sw(pages);
>     }
> 
>     // The option bits go into linux/l1d_flush.h and the asm header has
>     // exactly one file which includes it: lib/l1d_flush.c
>     //
>     // All other places (VMX, arch context switch code) include
>     // linux/l1d_flush.h which also contains the prototypes for
>     // l1d_flush_init() and l1d_flush().
> 
>   - Have l1d_flush_init() and the alloc function in lib/l1d_flush.c
> 
>   - The flush invocation in lib/l1d_flush.c:
> 
>     void l1d_flush(unsigned long options)
>     {
>         arch_l1d_flush(l1d_flush_pages, options);
>     }
>     EXPORT_SYMBOL_GPL(l1d_flush);
> 
>   - All architectures have to use TIF_SPEC_FLUSH_L1D if they want to
>     support the prctl.
> 

That is a concern (see above), should we enforce this?


>     So all of the above arch_prctl*() stuff becomes generic and sits in
>     lib/l1d_flush.c hidden behind CONFIG_FLUSH_L1D_PRCTL.
> 
>     The only architecture specific bits are in the actual context switch
>     code.
> 
> Hmm?
> 

Let me revisit the code and see if I can refactor it.

Balbir

> > diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> > index 07b4f8131e36..42cb3038c81a 100644
> > --- a/include/uapi/linux/prctl.h
> > +++ b/include/uapi/linux/prctl.h
> > @@ -238,4 +238,8 @@ struct prctl_mm_map {
> >  #define PR_SET_IO_FLUSHER            57
> >  #define PR_GET_IO_FLUSHER            58
> > 
> > +/* Flush L1D on context switch (mm) */
> > +#define PR_SET_L1D_FLUSH             59
> > +#define PR_GET_L1D_FLUSH             60
> > +
> >  #endif /* _LINUX_PRCTL_H */
> > diff --git a/kernel/sys.c b/kernel/sys.c
> > index d325f3ab624a..578aa8b6d87e 100644
> > --- a/kernel/sys.c
> > +++ b/kernel/sys.c
> > @@ -2262,6 +2262,16 @@ int __weak arch_prctl_spec_ctrl_set(struct
> > task_struct *t, unsigned long which,
> >       return -EINVAL;
> >  }
> > 
> > +int __weak arch_prctl_l1d_flush_set(struct task_struct *tsk, unsigned
> > long enable)
> > +{
> > +     return -EINVAL;
> > +}
> > +
> > +int __weak arch_prctl_l1d_flush_get(struct task_struct *t)
> > +{
> > +     return -EINVAL;
> > +}
> > +
> >  #define PR_IO_FLUSHER (PF_MEMALLOC_NOIO | PF_LESS_THROTTLE)
> > 
> >  SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long,
> > arg3,
> > @@ -2514,6 +2524,16 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long,
> > arg2, unsigned long, arg3,
> > 
> >               error = (current->flags & PR_IO_FLUSHER) == PR_IO_FLUSHER;
> >               break;
> > +     case PR_SET_L1D_FLUSH:
> > +             if (arg3 || arg4 || arg5)
> > +                     return -EINVAL;
> > +             error = arch_prctl_l1d_flush_set(me, arg2);
> > +             break;
> > +     case PR_GET_L1D_FLUSH:
> > +             if (arg2 || arg3 || arg4 || arg5)
> > +                     return -EINVAL;
> > +             error = arch_prctl_l1d_flush_get(me);
> > +             break;
> >       default:
> >               error = -EINVAL;
> >               break;
> 
> The prctl itself looks fine to me.
> 
> Thanks,
> 
>         tglx
> 

Thanks,
Balbir


* Re: [PATCH v3 3/5] arch/x86/mm: Refactor cond_ibpb() to support other use cases
  2020-04-17 23:02     ` Singh, Balbir
@ 2020-04-18  9:59       ` Thomas Gleixner
  2020-04-21  3:46         ` Singh, Balbir
  0 siblings, 1 reply; 19+ messages in thread
From: Thomas Gleixner @ 2020-04-18  9:59 UTC (permalink / raw)
  To: Singh, Balbir, linux-kernel
  Cc: keescook, tony.luck, benh, jpoimboe, x86, dave.hansen

"Singh, Balbir" <sblbir@amazon.com> writes:
> On Fri, 2020-04-17 at 15:07 +0200, Thomas Gleixner wrote:
>> 
>> Balbir Singh <sblbir@amazon.com> writes:
>> > 
>> >  /*
>> > - * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is
>> > - * stored in cpu_tlb_state.last_user_mm_ibpb.
>> > + * Bits to mangle the TIF_SPEC_IB state into the mm pointer which is
>> > + * stored in cpu_tlb_state.last_user_mm_spec.
>> >   */
>> >  #define LAST_USER_MM_IBPB    0x1UL
>> > +#define LAST_USER_MM_SPEC_MASK       (LAST_USER_MM_IBPB)
>> > 
>> >       /* Reinitialize tlbstate. */
>> > -     this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB);
>> > +     this_cpu_write(cpu_tlbstate.last_user_mm_spec, LAST_USER_MM_IBPB);
>> 
>> Shouldn't that be LAST_USER_MM_MASK?
>> 
> No, that crashes the system for SW flushes, because it tries to flush the L1D
> via the software loop, and that early in boot the l1d_flush_pages are not yet
> allocated. LAST_USER_MM_MASK has the LAST_USER_MM_FLUSH_L1D bit set.

You can trivially prevent this by checking l1d_flush_pages != NULL.

Thanks,

        tglx
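
For readers following the mask discussion: the mangling in question folds the
speculation-control TIF bits of the incoming task into the low bits of its
page-aligned mm pointer, so a single per-CPU word can be compared on context
switch. A simplified sketch, modelled on the existing cond_ibpb() code and
assuming the two TIF flags are adjacent; the bit values are assumptions, not
the exact patch:

    /* Sketch of the bit mangling under discussion (bit values assumed). */
    #define LAST_USER_MM_IBPB       0x1UL
    #define LAST_USER_MM_FLUSH_L1D  0x2UL
    #define LAST_USER_MM_SPEC_MASK  (LAST_USER_MM_IBPB | LAST_USER_MM_FLUSH_L1D)

    static inline unsigned long mm_mangle_tif_spec_bits(struct task_struct *next)
    {
            unsigned long next_tif = task_thread_info(next)->flags;

            /*
             * Shifting by TIF_SPEC_IB puts that flag at bit 0; this only
             * works if TIF_SPEC_FLUSH_L1D is defined as TIF_SPEC_IB + 1 so
             * it lands at bit 1, matching LAST_USER_MM_FLUSH_L1D.
             */
            unsigned long spec = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_SPEC_MASK;

            return (unsigned long)next->mm | spec;
    }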


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 4/5] arch/x86: Optionally flush L1D on context switch
  2020-04-18  1:30     ` Singh, Balbir
@ 2020-04-18 10:17       ` Thomas Gleixner
  2020-04-20  0:24         ` Singh, Balbir
  0 siblings, 1 reply; 19+ messages in thread
From: Thomas Gleixner @ 2020-04-18 10:17 UTC (permalink / raw)
  To: Singh, Balbir, linux-kernel
  Cc: keescook, tony.luck, benh, jpoimboe, x86, dave.hansen

"Singh, Balbir" <sblbir@amazon.com> writes:
> On Fri, 2020-04-17 at 16:41 +0200, Thomas Gleixner wrote:
>> Balbir Singh <sblbir@amazon.com> writes:
>> static void *l1d_flush_pages;
>> static DEFINE_MUTEX(l1d_flush_mutex);
>> 
>> int l1d_flush_init(void)
>> {
>>         int ret;
>> 
>>         if (static_cpu_has(X86_FEATURE_FLUSH_L1D) || l1d_flush_pages)
>>                 return 0;
>> 
>>         mutex_lock(&l1d_flush_mutex);
>>         if (!l1d_flush_pages)
>>                 l1d_flush_pages = l1d_flush_alloc_pages();
>>         ret = l1d_flush_pages ? 0 : -ENOMEM;
>>         mutex_unlock(&l1d_flush_mutex);
>>         return ret;
>> }
>> EXPORT_SYMBOL_GPL(l1d_flush_init);
>> 
>> which removes the export of l1d_flush_alloc_pages() and gets rid of the
>> cleanup counterpart. In a real world deployment, unloading VMX after it has
>> been used once is unlikely, and with the task based flush you end up with
>> these pages 'leaked' anyway once a single task has used it.
>> 
> I don't want the patches to enforce that one cannot unload the kvm module,
> but I can refactor those bits a bit more.

Not freeing the l1d flush pages does not prevent unloading the kvm
module. It just keeps them around. It's the same problem with your L1D
flush for tasks. If one task uses it then the pages stay around until
the system reboots.

>> If any other architecture enables this, then it will have _ALL_ of this
>> code duplicated. So we should rather have:
>
> But that is being a bit prescriptive, forcing architectures to implement their
> L1D flushing using TIF flags; architectures should be free to use bits in
> mm_struct if they prefer.

>>   - All architectures have to use TIF_SPEC_FLUSH_L1D if they want to
>>     support the prctl.
>> 
>
> That is a concern (see above); should we enforce this?

Fair enough, but it's trivial enough to have:

  static inline void arch_task_l1d_flush_update(bool enable)
  static inline bool arch_task_l1d_flush_state(void)

and the rest of the logic is just identical.

Thanks,

        tglx
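
To make that concrete, the generic prctl side built on those two hooks could be
as small as the sketch below. It reuses the weak arch_prctl_l1d_flush_*()
handlers from the patch quoted earlier and the l1d_flush_init() helper proposed
above; the body is an assumption, not merged code, and since prctl only
operates on the calling task the task argument is unused here:

    /* Hypothetical generic layer, e.g. lib/l1d_flush.c behind a config option. */
    int arch_prctl_l1d_flush_get(struct task_struct *t)
    {
            return arch_task_l1d_flush_state();
    }

    int arch_prctl_l1d_flush_set(struct task_struct *t, unsigned long enable)
    {
            if (enable) {
                    /* Make sure the SW fallback pages exist before enabling. */
                    int ret = l1d_flush_init();

                    if (ret)
                            return ret;
            }

            arch_task_l1d_flush_update(enable);
            return 0;
    }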


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re:  [PATCH v3 4/5] arch/x86: Optionally flush L1D on context switch
  2020-04-18 10:17       ` Thomas Gleixner
@ 2020-04-20  0:24         ` Singh, Balbir
  0 siblings, 0 replies; 19+ messages in thread
From: Singh, Balbir @ 2020-04-20  0:24 UTC (permalink / raw)
  To: tglx, linux-kernel; +Cc: keescook, tony.luck, benh, jpoimboe, x86, dave.hansen

On Sat, 2020-04-18 at 12:17 +0200, Thomas Gleixner wrote:
> 
> 
> "Singh, Balbir" <sblbir@amazon.com> writes:
> > On Fri, 2020-04-17 at 16:41 +0200, Thomas Gleixner wrote:
> > > Balbir Singh <sblbir@amazon.com> writes:
> > > static void *l1d_flush_pages;
> > > static DEFINE_MUTEX(l1d_flush_mutex);
> > > 
> > > int l1d_flush_init(void)
> > > {
> > >         int ret;
> > > 
> > >         if (static_cpu_has(X86_FEATURE_FLUSH_L1D) || l1d_flush_pages)
> > >                 return 0;
> > > 
> > >         mutex_lock(&l1d_flush_mutex);
> > >         if (!l1d_flush_pages)
> > >                 l1d_flush_pages = l1d_flush_alloc_pages();
> > >         ret = l1d_flush_pages ? 0 : -ENOMEM;
> > >         mutex_unlock(&l1d_flush_mutex);
> > >         return ret;
> > > }
> > > EXPORT_SYMBOL_GPL(l1d_flush_init);
> > > 
> > > which removes the export of l1d_flush_alloc_pages() and gets rid of the
> > > cleanup counterpart. In a real world deployment, unloading VMX after it
> > > has been used once is unlikely, and with the task based flush you end up
> > > with these pages 'leaked' anyway once a single task has used it.
> > > 
> > 
> > I don't want the patches to enforce that one cannot unload the kvm module,
> > but I can refactor those bits a bit more.
> 
> Not freeing the l1d flush pages does not prevent unloading the kvm
> module. It just keeps them around. It's the same problem with your L1D
> flush for tasks. If one task uses it then the pages stay around until
> the system reboots.

Yes, fair enough. You also seem to suggest that the same set of pages can be
shared across VMX and L1D flushes (which is fine by me). I suspect at some
point we'd need to do per-NUMA-node allocations, but let's not prematurely
optimize.

> 
> > > If any other architecture enables this, then it will have _ALL_ of this
> > > code duplicated. So we should rather have:
> > 
> > But that is being a bit prescriptive, forcing architectures to implement
> > their L1D flushing using TIF flags; architectures should be free to use
> > bits in mm_struct if they prefer.
> > >   - All architectures have to use TIF_SPEC_FLUSH_L1D if they want to
> > >     support the prctl.
> > > 
> > 
> > That is a concern (see above); should we enforce this?
> 
> Fair enough, but it's trivial enough to have:
> 
>   static inline void arch_task_l1d_flush_update(bool enable)
>   static inline bool arch_task_l1d_flush_state(void)
> 
> and the rest of the logic is just identical.
> 

OK, so you'd still like to see the logic moved to lib/l1d_flush.c? Let me get
to work on that and see what the changes look like.

Thanks for the review,
Balbir Singh

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re:  [PATCH v3 3/5] arch/x86/mm: Refactor cond_ibpb() to support other use cases
  2020-04-18  9:59       ` Thomas Gleixner
@ 2020-04-21  3:46         ` Singh, Balbir
  2020-04-21  9:02           ` Thomas Gleixner
  0 siblings, 1 reply; 19+ messages in thread
From: Singh, Balbir @ 2020-04-21  3:46 UTC (permalink / raw)
  To: tglx, linux-kernel; +Cc: keescook, tony.luck, benh, jpoimboe, x86, dave.hansen

On Sat, 2020-04-18 at 11:59 +0200, Thomas Gleixner wrote:
> 
> "Singh, Balbir" <sblbir@amazon.com> writes:
> > On Fri, 2020-04-17 at 15:07 +0200, Thomas Gleixner wrote:
> > > 
> > > Balbir Singh <sblbir@amazon.com> writes:
> > > > 
> > > >  /*
> > > > - * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is
> > > > - * stored in cpu_tlb_state.last_user_mm_ibpb.
> > > > + * Bits to mangle the TIF_SPEC_IB state into the mm pointer which is
> > > > + * stored in cpu_tlb_state.last_user_mm_spec.
> > > >   */
> > > >  #define LAST_USER_MM_IBPB    0x1UL
> > > > +#define LAST_USER_MM_SPEC_MASK       (LAST_USER_MM_IBPB)
> > > > 
> > > >       /* Reinitialize tlbstate. */
> > > > -     this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB);
> > > > +     this_cpu_write(cpu_tlbstate.last_user_mm_spec, LAST_USER_MM_IBPB);
> > > 
> > > Shouldn't that be LAST_USER_MM_MASK?
> > > 
> > 
> > No, that crashes the system for SW flushes, because it tries to flush the
> > L1D via the software loop, and that early in boot the l1d_flush_pages are
> > not yet allocated. LAST_USER_MM_MASK has the LAST_USER_MM_FLUSH_L1D bit set.
> 
> You can trivially prevent this by checking l1d_flush_pages != NULL.
> 

But why would we want to flush on reinit? The CPU is either coming back from a
low power state or initialising; is it worth adding a != NULL check every time
we flush just to handle this case?

Thanks,
Balbir


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re:  [PATCH v3 3/5] arch/x86/mm: Refactor cond_ibpb() to support other use cases
  2020-04-21  3:46         ` Singh, Balbir
@ 2020-04-21  9:02           ` Thomas Gleixner
  0 siblings, 0 replies; 19+ messages in thread
From: Thomas Gleixner @ 2020-04-21  9:02 UTC (permalink / raw)
  To: Singh, Balbir, linux-kernel
  Cc: keescook, tony.luck, benh, jpoimboe, x86, dave.hansen

"Singh, Balbir" <sblbir@amazon.com> writes:
> On Sat, 2020-04-18 at 11:59 +0200, Thomas Gleixner wrote:
>> 
>> "Singh, Balbir" <sblbir@amazon.com> writes:
>> > On Fri, 2020-04-17 at 15:07 +0200, Thomas Gleixner wrote:
>> > > 
>> > > Balbir Singh <sblbir@amazon.com> writes:
>> > > > 
>> > > >  /*
>> > > > - * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is
>> > > > - * stored in cpu_tlb_state.last_user_mm_ibpb.
>> > > > + * Bits to mangle the TIF_SPEC_IB state into the mm pointer which is
>> > > > + * stored in cpu_tlb_state.last_user_mm_spec.
>> > > >   */
>> > > >  #define LAST_USER_MM_IBPB    0x1UL
>> > > > +#define LAST_USER_MM_SPEC_MASK       (LAST_USER_MM_IBPB)
>> > > > 
>> > > >       /* Reinitialize tlbstate. */
>> > > > -     this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB);
>> > > > +     this_cpu_write(cpu_tlbstate.last_user_mm_spec, LAST_USER_MM_IBPB);
>> > > 
>> > > Shouldn't that be LAST_USER_MM_MASK?
>> > > 
>> > 
>> > No, that crashes the system for SW flushes, because it tries to flush the
>> > L1D via the software loop, and that early in boot the l1d_flush_pages are
>> > not yet allocated. LAST_USER_MM_MASK has the LAST_USER_MM_FLUSH_L1D bit
>> > set.
>> 
>> You can trivially prevent this by checking l1d_flush_pages != NULL.
>> 
>
> But why would we want to flush on reinit? The CPU is either coming back from
> a low power state or initialising; is it worth adding a != NULL check every
> time we flush just to handle this case?

Fair enough. Please add a comment so the same question does not come
back 3 months from now.

Thanks,

        tglx
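
The requested comment could be as simple as the sketch below, placed next to
the tlbstate reinit; the wording is only a suggestion:

    /*
     * Deliberately leave the L1D flush bit out of the reinit value: this
     * path runs at CPU bringup / resume, possibly before the software
     * flush pages have been allocated, and there is nothing sensitive in
     * the L1D at this point anyway.
     */
    this_cpu_write(cpu_tlbstate.last_user_mm_spec, LAST_USER_MM_IBPB);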

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2020-04-21  9:03 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-04-08  9:02 [PATCH v3 0/5] Optionally flush L1D on context switch Balbir Singh
2020-04-08  9:02 ` [PATCH v3 1/5] arch/x86/kvm: Refactor l1d flush lifecycle management Balbir Singh
2020-04-17 12:57   ` Thomas Gleixner
2020-04-17 22:34     ` Singh, Balbir
2020-04-08  9:02 ` [PATCH v3 2/5] arch/x86: Refactor tlbflush and l1d flush Balbir Singh
2020-04-17 13:03   ` Thomas Gleixner
2020-04-17 22:58     ` Singh, Balbir
2020-04-08  9:02 ` [PATCH v3 3/5] arch/x86/mm: Refactor cond_ibpb() to support other use cases Balbir Singh
2020-04-17 13:07   ` Thomas Gleixner
2020-04-17 23:02     ` Singh, Balbir
2020-04-18  9:59       ` Thomas Gleixner
2020-04-21  3:46         ` Singh, Balbir
2020-04-21  9:02           ` Thomas Gleixner
2020-04-08  9:02 ` [PATCH v3 4/5] arch/x86: Optionally flush L1D on context switch Balbir Singh
2020-04-17 14:41   ` Thomas Gleixner
2020-04-18  1:30     ` Singh, Balbir
2020-04-18 10:17       ` Thomas Gleixner
2020-04-20  0:24         ` Singh, Balbir
2020-04-08  9:02 ` [PATCH v3 5/5] arch/x86: Add L1D flushing Documentation Balbir Singh
