* [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
@ 2014-04-01  9:46 Feng Wu
  2014-04-01  9:46 ` [PATCH v4 1/4] KVM: Remove SMAP bit from CR4_RESERVED_BITS Feng Wu
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Feng Wu @ 2014-04-01  9:46 UTC (permalink / raw)
  To: pbonzini, gleb, hpa, kvm; +Cc: Feng Wu

Supervisor Mode Access Prevention (SMAP) is a new security feature
disclosed by Intel; for details, please refer to the following document:

http://software.intel.com/sites/default/files/319433-014.pdf
 
Every access to a linear address is either a supervisor-mode access
or a user-mode access. All accesses performed while the current
privilege level (CPL) is less than 3 are supervisor-mode accesses.
If CPL = 3, accesses are generally user-mode accesses. However, some
operations implicitly access system data structures, and the resulting
accesses to those data structures are supervisor-mode accesses regardless
of CPL. Examples of such implicit supervisor accesses include the following:
accesses to the global descriptor table (GDT) or local descriptor table
(LDT) to load a segment descriptor; accesses to the interrupt descriptor
table (IDT) when delivering an interrupt or exception; and accesses to the
task-state segment (TSS) as part of a task switch or change of CPL.

If CR4.SMAP = 1, supervisor-mode data accesses are not allowed to linear
addresses that are accessible in user mode. If CPL < 3, SMAP protections
are disabled if EFLAGS.AC = 1. If CPL = 3, SMAP applies to all supervisor-mode
data accesses (these are implicit supervisor accesses) regardless of the
value of EFLAGS.AC.
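The rules above can be condensed into a tiny standalone model (this is illustrative C for the cover letter, not KVM code; the predicate and its parameters are invented for the sketch):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative model of the SMAP rules described above (not KVM code).
 * Returns true if a data access to a user-accessible page faults when
 * CR4.SMAP = 1.
 */
static bool smap_faults(int cpl, bool eflags_ac, bool implicit_access)
{
	/* Accesses at CPL < 3 (and implicit accesses) are supervisor-mode. */
	bool supervisor = (cpl < 3) || implicit_access;

	if (!supervisor)
		return false;	/* ordinary user-mode accesses are unaffected */

	/* Implicit supervisor accesses at CPL = 3 ignore EFLAGS.AC. */
	if (cpl == 3)
		return true;

	/* At CPL < 3, EFLAGS.AC = 1 disables SMAP protection. */
	return !eflags_ac;
}
```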

This patchset passes the SMAP feature through to guests, letting them
benefit from it.

Version 1:
  * Remove SMAP bit from CR4_RESERVED_BITS.
  * Add SMAP support when setting CR4
  * Disable SMAP for guests in EPT realmode and EPT unpaging mode
  * Expose SMAP feature to guest

Version 2:
  * Change the logic of updating mmu permission bitmap for SMAP violation
  * Expose SMAP feature to guest in the last patch of this series.

Version 3:
  * Changes in update_permission_bitmask().
  * Use a branchless way suggested by Paolo Bonzini to detect SMAP
    violation in permission_fault(). 

Version 4:
  * Changes to some comments and code style.

Feng Wu (4):
  KVM: Remove SMAP bit from CR4_RESERVED_BITS.
  KVM: Add SMAP support when setting CR4
  KVM: Disable SMAP for guests in EPT realmode and EPT unpaging mode
  KVM: expose SMAP feature to guest

 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/cpuid.c            |  2 +-
 arch/x86/kvm/cpuid.h            |  8 ++++++++
 arch/x86/kvm/mmu.c              | 34 ++++++++++++++++++++++++++++---
 arch/x86/kvm/mmu.h              | 44 +++++++++++++++++++++++++++++++++--------
 arch/x86/kvm/paging_tmpl.h      |  2 +-
 arch/x86/kvm/vmx.c              | 11 ++++++-----
 arch/x86/kvm/x86.c              |  9 ++++++++-
 8 files changed, 92 insertions(+), 20 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v4 1/4] KVM: Remove SMAP bit from CR4_RESERVED_BITS.
  2014-04-01  9:46 [PATCH v4 0/4] KVM: enable Intel SMAP for KVM Feng Wu
@ 2014-04-01  9:46 ` Feng Wu
  2014-04-01  9:46 ` [PATCH v4 2/4] KVM: Add SMAP support when setting CR4 Feng Wu
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 18+ messages in thread
From: Feng Wu @ 2014-04-01  9:46 UTC (permalink / raw)
  To: pbonzini, gleb, hpa, kvm; +Cc: Feng Wu

This patch removes the SMAP bit from CR4_RESERVED_BITS.

Signed-off-by: Feng Wu <feng.wu@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index fdf83af..4eeb049 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -60,7 +60,7 @@
 			  | X86_CR4_PSE | X86_CR4_PAE | X86_CR4_MCE     \
 			  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
 			  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
-			  | X86_CR4_OSXMMEXCPT | X86_CR4_VMXE))
+			  | X86_CR4_OSXMMEXCPT | X86_CR4_VMXE | X86_CR4_SMAP))
 
 #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
 
-- 
1.8.3.1



* [PATCH v4 2/4] KVM: Add SMAP support when setting CR4
  2014-04-01  9:46 [PATCH v4 0/4] KVM: enable Intel SMAP for KVM Feng Wu
  2014-04-01  9:46 ` [PATCH v4 1/4] KVM: Remove SMAP bit from CR4_RESERVED_BITS Feng Wu
@ 2014-04-01  9:46 ` Feng Wu
  2014-04-10 20:12   ` Marcelo Tosatti
  2014-04-01  9:46 ` [PATCH v4 3/4] KVM: Disable SMAP for guests in EPT realmode and EPT unpaging mode Feng Wu
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Feng Wu @ 2014-04-01  9:46 UTC (permalink / raw)
  To: pbonzini, gleb, hpa, kvm; +Cc: Feng Wu

This patch adds SMAP handling logic when setting CR4 for guests.

Thanks a lot to Paolo Bonzini for his suggestion to use the branchless
way to detect SMAP violation.
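The branchless computation can be sketched in isolation as follows (illustrative code with the relevant constants inlined; the helper names are invented for this sketch):

```c
#include <assert.h>

#define X86_EFLAGS_AC_BIT	18
#define X86_EFLAGS_AC		(1UL << X86_EFLAGS_AC_BIT)
#define PFERR_RSVD_BIT		3

/*
 * Sketch of the branchless SMAP-override check used in
 * permission_fault(): (cpl - 3) is negative (all upper bits set) for
 * CPL < 3 and zero for CPL = 3, so ANDing it with EFLAGS.AC keeps the
 * AC bit only when CPL < 3.
 */
static unsigned long smap_override(int cpl, unsigned long rflags)
{
	return (cpl - 3) & (rflags & X86_EFLAGS_AC);
}

/*
 * The override is then shifted down so it lands on the PFERR_RSVD
 * position of the permission-bitmap index (pfec >> 1 drops the
 * present bit).
 */
static unsigned pfec_index(unsigned pfec, unsigned long smap)
{
	return (pfec >> 1) +
	       (unsigned)(smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
}
```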

Signed-off-by: Feng Wu <feng.wu@intel.com>
---
 arch/x86/kvm/cpuid.h       |  8 ++++++++
 arch/x86/kvm/mmu.c         | 34 +++++++++++++++++++++++++++++++---
 arch/x86/kvm/mmu.h         | 44 ++++++++++++++++++++++++++++++++++++--------
 arch/x86/kvm/paging_tmpl.h |  2 +-
 arch/x86/kvm/x86.c         |  9 ++++++++-
 5 files changed, 84 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index a2a1bb7..eeecbed 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -48,6 +48,14 @@ static inline bool guest_cpuid_has_smep(struct kvm_vcpu *vcpu)
 	return best && (best->ebx & bit(X86_FEATURE_SMEP));
 }
 
+static inline bool guest_cpuid_has_smap(struct kvm_vcpu *vcpu)
+{
+	struct kvm_cpuid_entry2 *best;
+
+	best = kvm_find_cpuid_entry(vcpu, 7, 0);
+	return best && (best->ebx & bit(X86_FEATURE_SMAP));
+}
+
 static inline bool guest_cpuid_has_fsgsbase(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpuid_entry2 *best;
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9b53135..a183783 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3601,20 +3601,27 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
 	}
 }
 
-static void update_permission_bitmask(struct kvm_vcpu *vcpu,
+void update_permission_bitmask(struct kvm_vcpu *vcpu,
 		struct kvm_mmu *mmu, bool ept)
 {
 	unsigned bit, byte, pfec;
 	u8 map;
-	bool fault, x, w, u, wf, uf, ff, smep;
+	bool fault, x, w, u, wf, uf, ff, smapf, cr4_smap, smep, smap = 0;
 
 	smep = kvm_read_cr4_bits(vcpu, X86_CR4_SMEP);
+	cr4_smap = kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
 	for (byte = 0; byte < ARRAY_SIZE(mmu->permissions); ++byte) {
 		pfec = byte << 1;
 		map = 0;
 		wf = pfec & PFERR_WRITE_MASK;
 		uf = pfec & PFERR_USER_MASK;
 		ff = pfec & PFERR_FETCH_MASK;
+		/*
+		 * PFERR_RSVD_MASK bit is set in PFEC if the access is not
+		 * subject to SMAP restrictions, and cleared otherwise. The
+		 * bit is only meaningful if the SMAP bit is set in CR4.
+		 */
+		smapf = !(pfec & PFERR_RSVD_MASK);
 		for (bit = 0; bit < 8; ++bit) {
 			x = bit & ACC_EXEC_MASK;
 			w = bit & ACC_WRITE_MASK;
@@ -3627,11 +3634,32 @@ static void update_permission_bitmask(struct kvm_vcpu *vcpu,
 				w |= !is_write_protection(vcpu) && !uf;
 				/* Disallow supervisor fetches of user code if cr4.smep */
 				x &= !(smep && u && !uf);
+
+				/*
+				 * SMAP:kernel-mode data accesses from user-mode
+				 * mappings should fault. A fault is considered
+				 * as a SMAP violation if all of the following
+				 * conditions are true:
+				 *   - X86_CR4_SMAP is set in CR4
+				 *   - A user page is accessed
+				 *   - Page fault in kernel mode
+				 *   - if CPL = 3 or X86_EFLAGS_AC is clear
+				 *
+				 *   Here, we cover the first three conditions.
+				 *   The fourth is computed dynamically in
+				 *   permission_fault() and is in smapf.
+				 *
+				 *   Also, SMAP does not affect instruction
+				 *   fetches, add the !ff check here to make it
+				 *   clearer.
+				 */
+				smap = cr4_smap && u && !uf && !ff;
 			} else
 				/* Not really needed: no U/S accesses on ept  */
 				u = 1;
 
-			fault = (ff && !x) || (uf && !u) || (wf && !w);
+			fault = (ff && !x) || (uf && !u) || (wf && !w) ||
+				(smapf && smap);
 			map |= fault << bit;
 		}
 		mmu->permissions[byte] = map;
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 2926152..3842e70 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -44,11 +44,17 @@
 #define PT_DIRECTORY_LEVEL 2
 #define PT_PAGE_TABLE_LEVEL 1
 
-#define PFERR_PRESENT_MASK (1U << 0)
-#define PFERR_WRITE_MASK (1U << 1)
-#define PFERR_USER_MASK (1U << 2)
-#define PFERR_RSVD_MASK (1U << 3)
-#define PFERR_FETCH_MASK (1U << 4)
+#define PFERR_PRESENT_BIT 0
+#define PFERR_WRITE_BIT 1
+#define PFERR_USER_BIT 2
+#define PFERR_RSVD_BIT 3
+#define PFERR_FETCH_BIT 4
+
+#define PFERR_PRESENT_MASK (1U << PFERR_PRESENT_BIT)
+#define PFERR_WRITE_MASK (1U << PFERR_WRITE_BIT)
+#define PFERR_USER_MASK (1U << PFERR_USER_BIT)
+#define PFERR_RSVD_MASK (1U << PFERR_RSVD_BIT)
+#define PFERR_FETCH_MASK (1U << PFERR_FETCH_BIT)
 
 int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64 sptes[4]);
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask);
@@ -73,6 +79,8 @@ int handle_mmio_page_fault_common(struct kvm_vcpu *vcpu, u64 addr, bool direct);
 void kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
 void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context,
 		bool execonly);
+void update_permission_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+		bool ept);
 
 static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
 {
@@ -110,10 +118,30 @@ static inline bool is_write_protection(struct kvm_vcpu *vcpu)
  * Will a fault with a given page-fault error code (pfec) cause a permission
  * fault with the given access (in ACC_* format)?
  */
-static inline bool permission_fault(struct kvm_mmu *mmu, unsigned pte_access,
-				    unsigned pfec)
+static inline bool permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
+				    unsigned pte_access, unsigned pfec)
 {
-	return (mmu->permissions[pfec >> 1] >> pte_access) & 1;
+	int cpl = kvm_x86_ops->get_cpl(vcpu);
+	unsigned long rflags = kvm_x86_ops->get_rflags(vcpu);
+
+	/*
+	 * If CPL < 3, SMAP protections are disabled if EFLAGS.AC = 1.
+	 *
+	 * If CPL = 3, SMAP applies to all supervisor-mode data accesses
+	 * (these are implicit supervisor accesses) regardless of the value
+	 * of EFLAGS.AC.
+	 *
+	 * This computes (cpl < 3) && (rflags & X86_EFLAGS_AC), leaving
+	 * the result in X86_EFLAGS_AC. We then insert it in place of
+	 * the PFERR_RSVD_MASK bit; this bit will always be zero in pfec,
+	 * but it will be one in index if SMAP checks are being overridden.
+	 * It is important to keep this branchless.
+	 */
+	unsigned long smap = (cpl - 3) & (rflags & X86_EFLAGS_AC);
+	int index = (pfec >> 1) +
+		    (smap >> (X86_EFLAGS_AC_BIT - PFERR_RSVD_BIT + 1));
+
+	return (mmu->permissions[index] >> pte_access) & 1;
 }
 
 void kvm_mmu_invalidate_zap_all_pages(struct kvm *kvm);
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index cba218a..4107765 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -353,7 +353,7 @@ retry_walk:
 		walker->ptes[walker->level - 1] = pte;
 	} while (!is_last_gpte(mmu, walker->level, pte));
 
-	if (unlikely(permission_fault(mmu, pte_access, access))) {
+	if (unlikely(permission_fault(vcpu, mmu, pte_access, access))) {
 		errcode |= PFERR_PRESENT_MASK;
 		goto error;
 	}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2b85784..5869c6d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -646,6 +646,9 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	if (!guest_cpuid_has_smep(vcpu) && (cr4 & X86_CR4_SMEP))
 		return 1;
 
+	if (!guest_cpuid_has_smap(vcpu) && (cr4 & X86_CR4_SMAP))
+		return 1;
+
 	if (!guest_cpuid_has_fsgsbase(vcpu) && (cr4 & X86_CR4_FSGSBASE))
 		return 1;
 
@@ -674,6 +677,9 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 	    (!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
 		kvm_mmu_reset_context(vcpu);
 
+	if ((cr4 ^ old_cr4) & X86_CR4_SMAP)
+		update_permission_bitmask(vcpu, vcpu->arch.walk_mmu, false);
+
 	if ((cr4 ^ old_cr4) & X86_CR4_OSXSAVE)
 		kvm_update_cpuid(vcpu);
 
@@ -4108,7 +4114,8 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
 		| (write ? PFERR_WRITE_MASK : 0);
 
 	if (vcpu_match_mmio_gva(vcpu, gva)
-	    && !permission_fault(vcpu->arch.walk_mmu, vcpu->arch.access, access)) {
+	    && !permission_fault(vcpu, vcpu->arch.walk_mmu,
+				 vcpu->arch.access, access)) {
 		*gpa = vcpu->arch.mmio_gfn << PAGE_SHIFT |
 					(gva & (PAGE_SIZE - 1));
 		trace_vcpu_match_mmio(gva, *gpa, write, false);
-- 
1.8.3.1



* [PATCH v4 3/4] KVM: Disable SMAP for guests in EPT realmode and EPT unpaging mode
  2014-04-01  9:46 [PATCH v4 0/4] KVM: enable Intel SMAP for KVM Feng Wu
  2014-04-01  9:46 ` [PATCH v4 1/4] KVM: Remove SMAP bit from CR4_RESERVED_BITS Feng Wu
  2014-04-01  9:46 ` [PATCH v4 2/4] KVM: Add SMAP support when setting CR4 Feng Wu
@ 2014-04-01  9:46 ` Feng Wu
  2014-04-01  9:46 ` [PATCH v4 4/4] KVM: expose SMAP feature to guest Feng Wu
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 18+ messages in thread
From: Feng Wu @ 2014-04-01  9:46 UTC (permalink / raw)
  To: pbonzini, gleb, hpa, kvm; +Cc: Feng Wu

SMAP is disabled if the CPU is in non-paging mode in hardware.
However, KVM always uses paging mode to emulate guest non-paging
mode with TDP. To emulate this behavior, SMAP needs to be
manually disabled when the guest switches to non-paging mode.
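The adjustment can be sketched as follows (illustrative code, not the actual vmx.c logic; the function name is invented, while the bit positions are the architectural CR4.SMEP/CR4.SMAP bits):

```c
#include <assert.h>

#define X86_CR4_SMEP	(1UL << 20)
#define X86_CR4_SMAP	(1UL << 21)

/*
 * Sketch of the behavior described above: the guest-visible CR4 keeps
 * SMEP/SMAP, but the value loaded into hardware drops both bits while
 * the guest runs unpaged under EPT.
 */
static unsigned long hw_cr4_for_guest(unsigned long guest_cr4, int paging)
{
	unsigned long hw_cr4 = guest_cr4;

	if (!paging)
		hw_cr4 &= ~(X86_CR4_SMEP | X86_CR4_SMAP);

	return hw_cr4;
}
```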

Signed-off-by: Feng Wu <feng.wu@intel.com>
---
 arch/x86/kvm/vmx.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 3927528..e58cb5f 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3452,13 +3452,14 @@ static int vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 			hw_cr4 &= ~X86_CR4_PAE;
 			hw_cr4 |= X86_CR4_PSE;
 			/*
-			 * SMEP is disabled if CPU is in non-paging mode in
-			 * hardware. However KVM always uses paging mode to
+			 * SMEP/SMAP is disabled if CPU is in non-paging mode
+			 * in hardware. However KVM always uses paging mode to
 			 * emulate guest non-paging mode with TDP.
-			 * To emulate this behavior, SMEP needs to be manually
-			 * disabled when guest switches to non-paging mode.
+			 * To emulate this behavior, SMEP/SMAP needs to be
+			 * manually disabled when guest switches to non-paging
+			 * mode.
 			 */
-			hw_cr4 &= ~X86_CR4_SMEP;
+			hw_cr4 &= ~(X86_CR4_SMEP | X86_CR4_SMAP);
 		} else if (!(cr4 & X86_CR4_PAE)) {
 			hw_cr4 &= ~X86_CR4_PAE;
 		}
-- 
1.8.3.1



* [PATCH v4 4/4] KVM: expose SMAP feature to guest
  2014-04-01  9:46 [PATCH v4 0/4] KVM: enable Intel SMAP for KVM Feng Wu
                   ` (2 preceding siblings ...)
  2014-04-01  9:46 ` [PATCH v4 3/4] KVM: Disable SMAP for guests in EPT realmode and EPT unpaging mode Feng Wu
@ 2014-04-01  9:46 ` Feng Wu
  2014-04-03 16:46 ` [PATCH v4 0/4] KVM: enable Intel SMAP for KVM Paolo Bonzini
  2014-04-10 20:16 ` Marcelo Tosatti
  5 siblings, 0 replies; 18+ messages in thread
From: Feng Wu @ 2014-04-01  9:46 UTC (permalink / raw)
  To: pbonzini, gleb, hpa, kvm; +Cc: Feng Wu

This patch exposes the SMAP feature to the guest.

Signed-off-by: Feng Wu <feng.wu@intel.com>
---
 arch/x86/kvm/cpuid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index c697625..deb5f9b 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -303,7 +303,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 	/* cpuid 7.0.ebx */
 	const u32 kvm_supported_word9_x86_features =
 		F(FSGSBASE) | F(BMI1) | F(HLE) | F(AVX2) | F(SMEP) |
-		F(BMI2) | F(ERMS) | f_invpcid | F(RTM);
+		F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | F(SMAP);
 
 	/* all calls to cpuid_count() should be made on the same cpu */
 	get_cpu();
-- 
1.8.3.1



* Re: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
  2014-04-01  9:46 [PATCH v4 0/4] KVM: enable Intel SMAP for KVM Feng Wu
                   ` (3 preceding siblings ...)
  2014-04-01  9:46 ` [PATCH v4 4/4] KVM: expose SMAP feature to guest Feng Wu
@ 2014-04-03 16:46 ` Paolo Bonzini
  2014-04-04  2:22   ` Wu, Feng
  2014-04-10 20:16 ` Marcelo Tosatti
  5 siblings, 1 reply; 18+ messages in thread
From: Paolo Bonzini @ 2014-04-03 16:46 UTC (permalink / raw)
  To: Feng Wu, gleb, hpa, kvm

Il 01/04/2014 11:46, Feng Wu ha scritto:
> Feng Wu (4):
>   KVM: Remove SMAP bit from CR4_RESERVED_BITS.
>   KVM: Add SMAP support when setting CR4
>   KVM: Disable SMAP for guests in EPT realmode and EPT unpaging mode
>   KVM: expose SMAP feature to guest
>

Hi,

I prepared some testcases.  You can find them in branch "smap" of 
git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git

To compile them for 32-bits:
	git clean -xdf
	./configure --arch=i386
	make
	./x86-run x86/smap.flat -cpu host

For 64-bits:
	git clean -xdf
	./configure
	make
	./x86-run x86/smap.flat -cpu host

I tried them with QEMU and they all pass.  The output should be 
something like this:

	enabling apic
	paging enabled
	cr0 = 80010011
	cr3 = 7fff000
	cr4 = 10
	testing without INVLPG
	PASS: write to supervisor page
	PASS: read from user page with AC=1
	PASS: read from user page with AC=0
	PASS: write to user page with AC=1
	PASS: read from user page with AC=0
	PASS: write to user stack with AC=1
	PASS: write to user stack with AC=0
	PASS: executing on user page with AC=0
	testing with INVLPG
	PASS: write to supervisor page
	PASS: read from user page with AC=1
	PASS: read from user page with AC=0
	PASS: write to user page with AC=1
	PASS: read from user page with AC=0
	PASS: write to user stack with AC=1
	PASS: write to user stack with AC=0
	PASS: executing on user page with AC=0
	
	SUMMARY: 16 tests, 0 failures

Please test them (both 32- and 64-bits) with both ept=1 and ept=0.  If 
the tests pass, the series is okay.

The only part that is not covered is the implicit kernel accesses at
CPL=3, which QEMU doesn't implement (I fixed it, but didn't have
time to think about tests).  Since I'm going on vacation next week, I
wanted to throw this out today.  I'll post the test patches when I'm back.

Paolo


* RE: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
  2014-04-03 16:46 ` [PATCH v4 0/4] KVM: enable Intel SMAP for KVM Paolo Bonzini
@ 2014-04-04  2:22   ` Wu, Feng
  2014-04-04  7:27     ` Paolo Bonzini
  0 siblings, 1 reply; 18+ messages in thread
From: Wu, Feng @ 2014-04-04  2:22 UTC (permalink / raw)
  To: Paolo Bonzini, gleb, hpa, kvm

[-- Attachment #1: Type: text/plain, Size: 3338 bytes --]



> -----Original Message-----
> From: kvm-owner@vger.kernel.org [mailto:kvm-owner@vger.kernel.org] On
> Behalf Of Paolo Bonzini
> Sent: Friday, April 04, 2014 12:46 AM
> To: Wu, Feng; gleb@redhat.com; hpa@zytor.com; kvm@vger.kernel.org
> Subject: Re: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
> 
> Il 01/04/2014 11:46, Feng Wu ha scritto:
> > Feng Wu (4):
> >   KVM: Remove SMAP bit from CR4_RESERVED_BITS.
> >   KVM: Add SMAP support when setting CR4
> >   KVM: Disable SMAP for guests in EPT realmode and EPT unpaging mode
> >   KVM: expose SMAP feature to guest
> >
> 
> Hi,
> 
> I prepared some testcases.  You can find them in branch "smap" of
> git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
> 
> To compile them for 32-bits:
> 	git clean -xdf
> 	./configure --arch=i386
> 	make
> 	./x86-run x86/smap.flat -cpu host
> 
> For 64-bits:
> 	git clean -xdf
> 	./configure
> 	make
> 	./x86-run x86/smap.flat -cpu host
> 
> I tried them with QEMU and they all pass.  The output should be
> something like this:
> 
> 	enabling apic
> 	paging enabled
> 	cr0 = 80010011
> 	cr3 = 7fff000
> 	cr4 = 10
> 	testing without INVLPG
> 	PASS: write to supervisor page
> 	PASS: read from user page with AC=1
> 	PASS: read from user page with AC=0
> 	PASS: write to user page with AC=1
> 	PASS: read from user page with AC=0
> 	PASS: write to user stack with AC=1
> 	PASS: write to user stack with AC=0
> 	PASS: executing on user page with AC=0
> 	testing with INVLPG
> 	PASS: write to supervisor page
> 	PASS: read from user page with AC=1
> 	PASS: read from user page with AC=0
> 	PASS: write to user page with AC=1
> 	PASS: read from user page with AC=0
> 	PASS: write to user stack with AC=1
> 	PASS: write to user stack with AC=0
> 	PASS: executing on user page with AC=0
> 
> 	SUMMARY: 16 tests, 0 failures
> 
> Please test them (both 32- and 64-bits) with both ept=1 and ept=0.  If
> the tests pass, the series is okay.

Thank you for providing these test cases. I tested them on the relevant
hardware (both 32- and 64-bit) with both ept=1 and ept=0, and they all pass.

I also did some similar testing before posting the patch set. SMAP is
already supported in the Linux kernel, which brackets functions like
copy_from_user() and copy_to_user() with stac() and clac(). In my
previous tests, a Linux guest ran well on top of KVM with SMAP enabled,
which I think covers the AC-bit logic. I also verified that accessing
user pages in kernel mode with the AC bit cleared induces an SMAP
violation: I got the expected SMAP fault in the guest in that case.
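The stac()/clac() bracketing mentioned here can be modeled in a few lines (a toy model, not kernel code; the flag and helper functions are invented for the sketch):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of EFLAGS.AC as seen by SMAP. */
static bool ac_flag;

static void stac(void) { ac_flag = true; }   /* temporarily allow user accesses */
static void clac(void) { ac_flag = false; }  /* re-enable SMAP protection */

/* Returns true if a kernel access to a user page would fault now. */
static bool smap_would_fault(void)
{
	return !ac_flag;
}

/*
 * Model of the bracketing pattern: the user-space access sits between
 * stac() and clac(), so only that window is exempt from SMAP.
 */
static bool model_copy_from_user(void)
{
	bool faulted;

	stac();
	faulted = smap_would_fault();	/* the actual access happens here */
	clac();
	return faulted;
}
```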

Besides that, I have some patches for SMAP testing on native Linux,
which are attached; I applied them to the guest kernel and everything
works well.

> 
> The only part that is not covered is the implicit kernel accesses at
> CPL=3, which QEMU doesn't implement that (I fixed it, but didn't have
> time to think about tests).  Since I'm going on vacation next week, I
> wanted to throw this out today.  I'll post the test patches when I'm back.
> 
> Paolo

[-- Attachment #2: 0001-x86-smap-add-debug-code-for-local-interrupts.patch --]
[-- Type: application/octet-stream, Size: 1973 bytes --]

From 23da102c05ceda22d09499f3a16134d5cdddddb3 Mon Sep 17 00:00:00 2001
From: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Date: Thu, 8 Aug 2013 18:08:39 -0700
Subject: [PATCH 1/8] x86, smap: add debug code for local interrupts

When interrupts are disabled, we are in kernel mode. Thus, SMAP
should prevent access to user-space memory. We make sure that the SMAP
feature enforces this by checking that EFLAGS.AC is clear.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
---
 arch/x86/Kconfig                |  9 +++++++++
 arch/x86/include/asm/irqflags.h | 12 ++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 15b5cef..59f34bf 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1546,6 +1546,15 @@ config X86_SMAP
 
 	  If unsure, say Y.
 
+config X86_SMAP_DEBUG_INTERRUPTS
+	def_bool n
+	prompt "Debug interrupts for SMAP" if X86_SMAP
+	---help---
+	  Enable code to debug problems related to the SMAP feature: when
+	  interrupts are disabled (i.e., only in kernel mode), EFLAGS.AC
+	  should be clear. Thus, access to user-space memory while in kernel
+	  mode is prevented by SMAP.
+
 config EFI
 	bool "EFI runtime service support"
 	depends on ACPI
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index bba3cf8..53f8c58 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -2,6 +2,15 @@
 #define _X86_IRQFLAGS_H_
 
 #include <asm/processor-flags.h>
+#ifndef __ASSEMBLY__
+#include <asm/bug.h>
+
+#ifdef X86_SMAP_DEBUG_INTERRUPTS
+# define ENFORCE_CLEAR_EFLAGS_AC(x) BUG_ON((x) & X86_EFLAGS_AC)
+#else
+# define ENFORCE_CLEAR_EFLAGS_AC(x)
+#endif
+#endif
 
 #ifndef __ASSEMBLY__
 /*
@@ -107,6 +116,9 @@ static inline notrace unsigned long arch_local_irq_save(void)
 {
 	unsigned long flags = arch_local_save_flags();
 	arch_local_irq_disable();
+
+	ENFORCE_CLEAR_EFLAGS_AC(flags);
+
 	return flags;
 }
 #else
-- 
1.8.3.1


[-- Attachment #3: 0002-x86-smap-add-debug-code-for-STAC-opcode.patch --]
[-- Type: application/octet-stream, Size: 4948 bytes --]

From 5d3b919cfd9b6f180e025a7027c27233b4d07be9 Mon Sep 17 00:00:00 2001
From: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Date: Wed, 21 Aug 2013 15:52:56 -0700
Subject: [PATCH 2/8] x86, smap: add debug code for STAC opcode

When SMAP is temporarily disabled to legally access user-space address
space in kernel mode, we expect EFLAGS.AC to be clear beforehand. This
code verifies that this condition is satisfied; otherwise, an invalid
opcode exception is generated.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
---
 arch/x86/Kconfig            |  8 +++++
 arch/x86/include/asm/smap.h | 85 +++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 86 insertions(+), 7 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 59f34bf..34bc8ba 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1555,6 +1555,14 @@ config X86_SMAP_DEBUG_INTERRUPTS
 	  should be clear. Thus, access to user-space memory while in kernel
 	  mode is prevented by SMAP.
 
+config X86_SMAP_DEBUG_STAC
+	def_bool n
+	prompt "Debug STAC for SMAP" if X86_SMAP
+	---help---
+	  Enable code to debug problems related to the SMAP feature. When STAC
+	  is called, EFLAGS.AC should be clear. This option verifies that this
+	  condition holds; otherwise, it generates an invalid opcode exception.
+
 config EFI
 	bool "EFI runtime service support"
 	depends on ACPI
diff --git a/arch/x86/include/asm/smap.h b/arch/x86/include/asm/smap.h
index 8d3120f..ecfd7a4 100644
--- a/arch/x86/include/asm/smap.h
+++ b/arch/x86/include/asm/smap.h
@@ -12,17 +12,52 @@
 
 #ifndef _ASM_X86_SMAP_H
 #define _ASM_X86_SMAP_H
+#define X86_EFLAGS_AC_BIT 18
 
 #include <linux/stringify.h>
 #include <asm/nops.h>
 #include <asm/cpufeature.h>
+#include <asm/processor-flags.h>
 
 /* "Raw" instruction opcodes */
-#define __ASM_CLAC	.byte 0x0f,0x01,0xca
-#define __ASM_STAC	.byte 0x0f,0x01,0xcb
+#define ___ASM_CLAC_OPCODE_	.byte 0x0f,0x01,0xca
+#define ___ASM_STAC_OPCODE_	.byte 0x0f,0x01,0xcb
+
+#define __ASM_CLAC ___ASM_CLAC_OPCODE_
 
 #ifdef __ASSEMBLY__
 
+#ifdef CONFIG_X86_SMAP_DEBUG_STAC
+
+#define __ASM_STAC_NOP	ASM_NOP8 ;					\
+			ASM_NOP8 ;					\
+			ASM_NOP2 ;
+
+#ifdef CONFIG_X86_64
+#define __ASM_STAC							\
+			pushfq ;					\
+			btq $X86_EFLAGS_AC_BIT, (%rsp) ;		\
+			jnc 937f ;					\
+			ud2 ;						\
+		937:	addq $8, %rsp ;					\
+			___ASM_STAC_OPCODE_ ;
+#else
+#define __ASM_STAC							\
+			pushfl ;					\
+			btl $X86_EFLAGS_AC_BIT, (%esp) ;		\
+			jnc 937f ;					\
+			ud2 ;						\
+		937:	addl $4, %esp ;					\
+			___ASM_STAC_OPCODE_ ;
+#endif /* CONFIG_X86_64 */
+
+#else /* CONFIG_X86_SMAP_DEBUG_STAC */
+
+#define __ASM_STAC ___ASM_STAC_OPCODE_
+#define __ASM_STAC_NOP	ASM_NOP3
+
+#endif /* CONFIG_X86_SMAP_DEBUG_STAC */
+
 #include <asm/alternative-asm.h>
 
 #ifdef CONFIG_X86_SMAP
@@ -37,12 +72,15 @@
 	.popsection
 
 #define ASM_STAC							\
-	661: ASM_NOP3 ;							\
+	661: __ASM_STAC_NOP ;						\
+	662:								\
 	.pushsection .altinstr_replacement, "ax" ;			\
-	662: __ASM_STAC ;						\
+	663: __ASM_STAC ;						\
+	664:								\
 	.popsection ;							\
 	.pushsection .altinstructions, "a" ;				\
-	altinstruction_entry 661b, 662b, X86_FEATURE_SMAP, 3, 3 ;	\
+	altinstruction_entry 661b, 663b, X86_FEATURE_SMAP, 662b-661b,	\
+							664b-663b ;	\
 	.popsection
 
 #else /* CONFIG_X86_SMAP */
@@ -56,10 +94,42 @@
 
 #include <asm/alternative.h>
 
+#ifdef CONFIG_X86_SMAP_DEBUG_STAC
+
+#define __ASM_STAC_NOP	ASM_NOP8					\
+			ASM_NOP8					\
+			ASM_NOP2
+
+#ifdef CONFIG_X86_64
+#define __ASM_STAC	"pushfq\n"					\
+			"btq $" __stringify(X86_EFLAGS_AC_BIT)		\
+							", (%%rsp)\n"	\
+			"jnc 937f\n"					\
+			"ud2\n"						\
+			"937: addq $8, %%rsp\n"				\
+			__stringify(___ASM_STAC_OPCODE_)
+#else
+#define __ASM_STAC	"pushfl\n"					\
+			"btl $"__stringify(X86_EFLAGS_AC_BIT)	 	\
+							", (%%esp)\n"	\
+			"jnc 937f\n"					\
+			"ud2\n"						\
+			"937: addl $4, %%esp\n"				\
+			__stringify(___ASM_STAC_OPCODE_)
+#endif
+
+#else /* CONFIG_X86_SMAP_DEBUG_STAC */
+
+#define __ASM_STAC_NOP	ASM_NOP3
+#define __ASM_STAC	__stringify(___ASM_STAC_OPCODE_)
+
+#endif /* CONFIG_X86_SMAP_DEBUG_STAC */
+
 #ifdef CONFIG_X86_SMAP
 
 static __always_inline void clac(void)
 {
+
 	/* Note: a barrier is implicit in alternative() */
 	alternative(ASM_NOP3, __stringify(__ASM_CLAC), X86_FEATURE_SMAP);
 }
@@ -67,14 +137,15 @@ static __always_inline void clac(void)
 static __always_inline void stac(void)
 {
 	/* Note: a barrier is implicit in alternative() */
-	alternative(ASM_NOP3, __stringify(__ASM_STAC), X86_FEATURE_SMAP);
+	alternative(__ASM_STAC_NOP, __ASM_STAC, X86_FEATURE_SMAP);
 }
 
 /* These macros can be used in asm() statements */
+
 #define ASM_CLAC \
 	ALTERNATIVE(ASM_NOP3, __stringify(__ASM_CLAC), X86_FEATURE_SMAP)
 #define ASM_STAC \
-	ALTERNATIVE(ASM_NOP3, __stringify(__ASM_STAC), X86_FEATURE_SMAP)
+	ALTERNATIVE(__ASM_STAC_NOP, __ASM_STAC, X86_FEATURE_SMAP)
 
 #else /* CONFIG_X86_SMAP */
 
-- 
1.8.3.1


[-- Attachment #4: 0003-x86-smap-add-debug-code-for-CLAC-opcode.patch --]
[-- Type: application/octet-stream, Size: 4147 bytes --]

From d75cc118d0fdb75a4649a951e16e341ca87af199 Mon Sep 17 00:00:00 2001
From: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Date: Tue, 10 Sep 2013 17:41:03 -0700
Subject: [PATCH 3/8] x86, smap: add debug code for CLAC opcode

In most cases, when SMAP is enforced by calling CLAC, we expect it to
have been previously disabled (i.e., EFLAGS.AC=1) by a call to STAC.
This debug code verifies that this condition is met.

There are cases, however, in which CLAC is called while EFLAGS.AC is
already clear; this is the case of error paths. For this reason, the
debug functions are defined separately from the actual CLAC macros and
functions, so that debug points can be placed selectively, only where
required.

The CLAC debug code is implemented as an ALTERNATIVE; this prevents
CPUs that do not support SMAP from executing this code even when it is
enabled at build time.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
---
 arch/x86/Kconfig            |  9 ++++++
 arch/x86/include/asm/smap.h | 77 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 86 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 34bc8ba..1ece076 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1563,6 +1563,15 @@ config X86_SMAP_DEBUG_STAC
 	  is called, EFLAGS.AC should be clear. This option verifies that this
 	  condition holds; otherwise, it generates an invalid opcode exception.
 
+config X86_SMAP_DEBUG_CLAC
+	def_bool n
+	prompt "Debug CLAC for SMAP" if X86_SMAP
+	---help---
+	  Enable code to debug problems related to the SMAP feature. When CLAC
+	  is called, EFLAGS.AC should be set in the majority of the cases. This
+	  option verifies that this condition holds; otherwise, it generates an
+	  invalid opcode exception.
+
 config EFI
 	bool "EFI runtime service support"
 	depends on ACPI
diff --git a/arch/x86/include/asm/smap.h b/arch/x86/include/asm/smap.h
index ecfd7a4..1954656 100644
--- a/arch/x86/include/asm/smap.h
+++ b/arch/x86/include/asm/smap.h
@@ -58,6 +58,44 @@
 
 #endif /* CONFIG_X86_SMAP_DEBUG_STAC */
 
+#ifdef CONFIG_X86_SMAP_DEBUG_CLAC
+
+#define __ASM_CLAC_NOP	ASM_NOP8 ;					\
+			ASM_NOP7 ;
+
+#ifdef CONFIG_X86_64
+#define __ASM_CLAC_DEBUG						\
+			pushfq ;					\
+			btq $X86_EFLAGS_AC_BIT, (%rsp) ;		\
+			jc 845f ;					\
+			ud2 ;						\
+		845:	addq $8, %rsp ;
+#else
+#define __ASM_CLAC_DEBUG						\
+			pushfl ;					\
+			btl $X86_EFLAGS_AC_BIT, (%esp) ;		\
+			jc 845f ;					\
+			ud2 ;						\
+		845:	addl $4, %esp ;
+#endif /* CONFIG_X86_64 */
+
+#define ASM_CLAC_DEBUG							\
+	661: __ASM_CLAC_NOP ;						\
+	662:								\
+	.pushsection .altinstr_replacement, "ax" ;			\
+	663: __ASM_CLAC_DEBUG ;						\
+	664:								\
+	.popsection ;							\
+	.pushsection .altinstructions, "a" ;				\
+	altinstruction_entry 661b, 663b, X86_FEATURE_SMAP, 662b-661b,	\
+							664b-663b ;	\
+	.popsection
+#else  /* CONFIG_X86_SMAP_DEBUG_CLAC */
+
+#define ASM_CLAC_DEBUG
+
+#endif /* CONFIG_X86_SMAP_DEBUG_CLAC */
+
 #include <asm/alternative-asm.h>
 
 #ifdef CONFIG_X86_SMAP
@@ -125,6 +163,45 @@
 
 #endif /* CONFIG_X86_SMAP_DEBUG_STAC */
 
+#ifdef CONFIG_X86_SMAP_DEBUG_CLAC
+
+#define __ASM_CLAC_DEBUG_NOP						\
+			ASM_NOP8					\
+			ASM_NOP7
+
+#ifdef CONFIG_X86_64
+#define __ASM_CLAC_DEBUG						\
+			"pushfq\n"					\
+			"btq $"__stringify(X86_EFLAGS_AC_BIT)		\
+							", (%%rsp)\n"	\
+			"jc 845f\n"					\
+			"ud2\n"						\
+			"845: addq $8, %%rsp\n"
+#else /* CONFIG_X86_64 */
+#define __ASM_CLAC_DEBUG						\
+			"pushfl\n"					\
+			"btl $"__stringify(X86_EFLAGS_AC_BIT)		\
+							", (%%esp)\n"	\
+			"jc 845f\n"					\
+			"ud2\n"						\
+			"845: addl $4, %%esp\n"
+#endif /* CONFIG_X86_64 */
+
+static __always_inline void clac_debug(void)
+{
+	alternative(__ASM_CLAC_DEBUG_NOP, __ASM_CLAC_DEBUG, X86_FEATURE_SMAP);
+}
+
+#define ASM_CLAC_DEBUG \
+	ALTERNATIVE(__ASM_CLAC_DEBUG_NOP, __ASM_CLAC_DEBUG, X86_FEATURE_SMAP)
+
+#else /* CONFIG_X86_SMAP_DEBUG_CLAC */
+
+#define ASM_CLAC_DEBUG
+static inline void clac_debug(void) { }
+
+#endif /* CONFIG_X86_SMAP_DEBUG_CLAC */
+
 #ifdef CONFIG_X86_SMAP
 
 static __always_inline void clac(void)
-- 
1.8.3.1


[-- Attachment #5: 0004-x86-smap-add-debug-points-for-CLAC-calls.patch --]
[-- Type: application/octet-stream, Size: 10154 bytes --]

From 3478516096c793e64bb4fb6635f8a33bd9774832 Mon Sep 17 00:00:00 2001
From: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Date: Tue, 10 Sep 2013 17:46:41 -0700
Subject: [PATCH 4/8] x86, smap: add debug points for CLAC calls

Add debug points for CLAC only where we know that a previous
STAC must have been issued; in other words, error paths are
not considered.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
---
 arch/x86/ia32/ia32entry.S           |  2 ++
 arch/x86/include/asm/fpu-internal.h |  3 ++-
 arch/x86/include/asm/futex.h        |  9 ++++++---
 arch/x86/include/asm/uaccess.h      | 12 ++++++++----
 arch/x86/include/asm/xsave.h        |  6 ++++--
 arch/x86/kernel/entry_32.S          |  1 +
 arch/x86/lib/copy_user_64.S         |  3 +++
 arch/x86/lib/copy_user_nocache_64.S |  1 +
 arch/x86/lib/getuser.S              |  5 +++++
 arch/x86/lib/putuser.S              |  3 ++-
 arch/x86/lib/usercopy_32.c          |  8 +++++++-
 11 files changed, 41 insertions(+), 12 deletions(-)

diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S
index 474dc1b..6741122 100644
--- a/arch/x86/ia32/ia32entry.S
+++ b/arch/x86/ia32/ia32entry.S
@@ -150,6 +150,7 @@ ENTRY(ia32_sysenter_target)
 	ASM_STAC
 1:	movl	(%rbp),%ebp
 	_ASM_EXTABLE(1b,ia32_badarg)
+	ASM_CLAC_DEBUG
 	ASM_CLAC
 	orl     $TS_COMPAT,TI_status+THREAD_INFO(%rsp,RIP-ARGOFFSET)
 	testl   $_TIF_WORK_SYSCALL_ENTRY,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
@@ -307,6 +308,7 @@ ENTRY(ia32_cstar_target)
 	ASM_STAC
 1:	movl	(%r8),%r9d
 	_ASM_EXTABLE(1b,ia32_badarg)
+	ASM_CLAC_DEBUG
 	ASM_CLAC
 	orl     $TS_COMPAT,TI_status+THREAD_INFO(%rsp,RIP-ARGOFFSET)
 	testl   $_TIF_WORK_SYSCALL_ENTRY,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index e25cc33..c226c67 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -128,7 +128,8 @@ static inline void sanitize_i387_state(struct task_struct *tsk)
 	int err;							\
 	asm volatile(ASM_STAC "\n"					\
 		     "1:" #insn "\n\t"					\
-		     "2: " ASM_CLAC "\n"				\
+		     "2: " ASM_CLAC_DEBUG "\n"				\
+		     "   " ASM_CLAC "\n"				\
 		     ".section .fixup,\"ax\"\n"				\
 		     "3:  movl $-1,%[err]\n"				\
 		     "    jmp  2b\n"					\
diff --git a/arch/x86/include/asm/futex.h b/arch/x86/include/asm/futex.h
index be27ba1..da519e3 100644
--- a/arch/x86/include/asm/futex.h
+++ b/arch/x86/include/asm/futex.h
@@ -14,7 +14,8 @@
 #define __futex_atomic_op1(insn, ret, oldval, uaddr, oparg)	\
 	asm volatile("\t" ASM_STAC "\n"				\
 		     "1:\t" insn "\n"				\
-		     "2:\t" ASM_CLAC "\n"			\
+		     "2:\t" ASM_CLAC_DEBUG "\n"			\
+		     "\t" ASM_CLAC "\n"				\
 		     "\t.section .fixup,\"ax\"\n"		\
 		     "3:\tmov\t%3, %1\n"			\
 		     "\tjmp\t2b\n"				\
@@ -30,7 +31,8 @@
 		     "\t" insn "\n"				\
 		     "2:\t" LOCK_PREFIX "cmpxchgl %3, %2\n"	\
 		     "\tjnz\t1b\n"				\
-		     "3:\t" ASM_CLAC "\n"			\
+		     "3:\t" ASM_CLAC_DEBUG "\n"			\
+		     "\t" ASM_CLAC "\n"				\
 		     "\t.section .fixup,\"ax\"\n"		\
 		     "4:\tmov\t%5, %1\n"			\
 		     "\tjmp\t3b\n"				\
@@ -117,7 +119,8 @@ static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
 
 	asm volatile("\t" ASM_STAC "\n"
 		     "1:\t" LOCK_PREFIX "cmpxchgl %4, %2\n"
-		     "2:\t" ASM_CLAC "\n"
+		     "2:\t" ASM_CLAC_DEBUG "\n"
+		     "\t" ASM_CLAC "\n"
 		     "\t.section .fixup, \"ax\"\n"
 		     "3:\tmov     %3, %0\n"
 		     "\tjmp     2b\n"
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 5ee2687..f6c0863 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -183,7 +183,8 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
 	asm volatile(ASM_STAC "\n"					\
 		     "1:	movl %%eax,0(%2)\n"			\
 		     "2:	movl %%edx,4(%2)\n"			\
-		     "3: " ASM_CLAC "\n"				\
+		     "3: " ASM_CLAC_DEBUG "\n"				\
+		     "   " ASM_CLAC "\n"				\
 		     ".section .fixup,\"ax\"\n"				\
 		     "4:	movl %3,%0\n"				\
 		     "	jmp 3b\n"					\
@@ -197,7 +198,8 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
 	asm volatile(ASM_STAC "\n"					\
 		     "1:	movl %%eax,0(%1)\n"			\
 		     "2:	movl %%edx,4(%1)\n"			\
-		     "3: " ASM_CLAC "\n"				\
+		     "3: " ASM_CLAC_DEBUG "\n"				\
+		     "   " ASM_CLAC "\n"				\
 		     _ASM_EXTABLE_EX(1b, 2b)				\
 		     _ASM_EXTABLE_EX(2b, 3b)				\
 		     : : "A" (x), "r" (addr))
@@ -346,7 +348,8 @@ do {									\
 #define __get_user_asm(x, addr, err, itype, rtype, ltype, errret)	\
 	asm volatile(ASM_STAC "\n"					\
 		     "1:	mov"itype" %2,%"rtype"1\n"		\
-		     "2: " ASM_CLAC "\n"				\
+		     "2: " ASM_CLAC_DEBUG "\n"				\
+		     "   " ASM_CLAC "\n"				\
 		     ".section .fixup,\"ax\"\n"				\
 		     "3:	mov %3,%0\n"				\
 		     "	xor"itype" %"rtype"1,%"rtype"1\n"		\
@@ -411,7 +414,8 @@ struct __large_struct { unsigned long buf[100]; };
 #define __put_user_asm(x, addr, err, itype, rtype, ltype, errret)	\
 	asm volatile(ASM_STAC "\n"					\
 		     "1:	mov"itype" %"rtype"1,%2\n"		\
-		     "2: " ASM_CLAC "\n"				\
+		     "2: " ASM_CLAC_DEBUG "\n"				\
+		     "   " ASM_CLAC "\n"				\
 		     ".section .fixup,\"ax\"\n"				\
 		     "3:	mov %3,%0\n"				\
 		     "	jmp 2b\n"					\
diff --git a/arch/x86/include/asm/xsave.h b/arch/x86/include/asm/xsave.h
index 0415cda..c738b72 100644
--- a/arch/x86/include/asm/xsave.h
+++ b/arch/x86/include/asm/xsave.h
@@ -72,7 +72,8 @@ static inline int xsave_user(struct xsave_struct __user *buf)
 
 	__asm__ __volatile__(ASM_STAC "\n"
 			     "1: .byte " REX_PREFIX "0x0f,0xae,0x27\n"
-			     "2: " ASM_CLAC "\n"
+			     "2: " ASM_CLAC_DEBUG "\n"
+			     "   " ASM_CLAC "\n"
 			     ".section .fixup,\"ax\"\n"
 			     "3:  movl $-1,%[err]\n"
 			     "    jmp  2b\n"
@@ -93,7 +94,8 @@ static inline int xrestore_user(struct xsave_struct __user *buf, u64 mask)
 
 	__asm__ __volatile__(ASM_STAC "\n"
 			     "1: .byte " REX_PREFIX "0x0f,0xae,0x2f\n"
-			     "2: " ASM_CLAC "\n"
+			     "2: " ASM_CLAC_DEBUG "\n"
+			     "   " ASM_CLAC "\n"
 			     ".section .fixup,\"ax\"\n"
 			     "3:  movl $-1,%[err]\n"
 			     "    jmp  2b\n"
diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
index 8f3e2de..b1bf048 100644
--- a/arch/x86/kernel/entry_32.S
+++ b/arch/x86/kernel/entry_32.S
@@ -424,6 +424,7 @@ sysenter_past_esp:
 	jae syscall_fault
 	ASM_STAC
 1:	movl (%ebp),%ebp
+	ASM_CLAC_DEBUG
 	ASM_CLAC
 	movl %ebp,PT_EBP(%esp)
 	_ASM_EXTABLE(1b,syscall_fault)
diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S
index a30ca15..7207fad 100644
--- a/arch/x86/lib/copy_user_64.S
+++ b/arch/x86/lib/copy_user_64.S
@@ -179,6 +179,7 @@ ENTRY(copy_user_generic_unrolled)
 	decl %ecx
 	jnz 21b
 23:	xor %eax,%eax
+	ASM_CLAC_DEBUG
 	ASM_CLAC
 	ret
 
@@ -250,6 +251,7 @@ ENTRY(copy_user_generic_string)
 3:	rep
 	movsb
 4:	xorl %eax,%eax
+	ASM_CLAC_DEBUG
 	ASM_CLAC
 	ret
 
@@ -285,6 +287,7 @@ ENTRY(copy_user_enhanced_fast_string)
 1:	rep
 	movsb
 2:	xorl %eax,%eax
+	ASM_CLAC_DEBUG
 	ASM_CLAC
 	ret
 
diff --git a/arch/x86/lib/copy_user_nocache_64.S b/arch/x86/lib/copy_user_nocache_64.S
index 6a4f43c..09980ea 100644
--- a/arch/x86/lib/copy_user_nocache_64.S
+++ b/arch/x86/lib/copy_user_nocache_64.S
@@ -97,6 +97,7 @@ ENTRY(__copy_user_nocache)
 	decl %ecx
 	jnz 21b
 23:	xorl %eax,%eax
+	ASM_CLAC_DEBUG
 	ASM_CLAC
 	sfence
 	ret
diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
index a451235..fe75870 100644
--- a/arch/x86/lib/getuser.S
+++ b/arch/x86/lib/getuser.S
@@ -43,6 +43,7 @@ ENTRY(__get_user_1)
 	ASM_STAC
 1:	movzbl (%_ASM_AX),%edx
 	xor %eax,%eax
+	ASM_CLAC_DEBUG
 	ASM_CLAC
 	ret
 	CFI_ENDPROC
@@ -58,6 +59,7 @@ ENTRY(__get_user_2)
 	ASM_STAC
 2:	movzwl -1(%_ASM_AX),%edx
 	xor %eax,%eax
+	ASM_CLAC_DEBUG
 	ASM_CLAC
 	ret
 	CFI_ENDPROC
@@ -73,6 +75,7 @@ ENTRY(__get_user_4)
 	ASM_STAC
 3:	movl -3(%_ASM_AX),%edx
 	xor %eax,%eax
+	ASM_CLAC_DEBUG
 	ASM_CLAC
 	ret
 	CFI_ENDPROC
@@ -89,6 +92,7 @@ ENTRY(__get_user_8)
 	ASM_STAC
 4:	movq -7(%_ASM_AX),%rdx
 	xor %eax,%eax
+	ASM_CLAC_DEBUG
 	ASM_CLAC
 	ret
 #else
@@ -101,6 +105,7 @@ ENTRY(__get_user_8)
 4:	movl -7(%_ASM_AX),%edx
 5:	movl -3(%_ASM_AX),%ecx
 	xor %eax,%eax
+	ASM_CLAC_DEBUG
 	ASM_CLAC
 	ret
 #endif
diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S
index fc6ba17..b59fb60 100644
--- a/arch/x86/lib/putuser.S
+++ b/arch/x86/lib/putuser.S
@@ -32,7 +32,8 @@
 
 #define ENTER	CFI_STARTPROC ; \
 		GET_THREAD_INFO(%_ASM_BX)
-#define EXIT	ASM_CLAC ;	\
+#define EXIT	ASM_CLAC_DEBUG ;\
+		ASM_CLAC ;	\
 		ret ;		\
 		CFI_ENDPROC
 
diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
index f0312d7..70a1f0a 100644
--- a/arch/x86/lib/usercopy_32.c
+++ b/arch/x86/lib/usercopy_32.c
@@ -46,7 +46,8 @@ do {									\
 		"0:	rep; stosl\n"					\
 		"	movl %2,%0\n"					\
 		"1:	rep; stosb\n"					\
-		"2: " ASM_CLAC "\n"					\
+		"2: " ASM_CLAC_DEBUG "\n"				\
+		"   " ASM_CLAC "\n"					\
 		".section .fixup,\"ax\"\n"				\
 		"3:	lea 0(%2,%0,4),%0\n"				\
 		"	jmp 2b\n"					\
@@ -575,6 +576,7 @@ unsigned long __copy_to_user_ll(void __user *to, const void *from,
 		__copy_user(to, from, n);
 	else
 		n = __copy_user_intel(to, from, n);
+	clac_debug();
 	clac();
 	return n;
 }
@@ -588,6 +590,7 @@ unsigned long __copy_from_user_ll(void *to, const void __user *from,
 		__copy_user_zeroing(to, from, n);
 	else
 		n = __copy_user_zeroing_intel(to, from, n);
+	clac_debug();
 	clac();
 	return n;
 }
@@ -602,6 +605,7 @@ unsigned long __copy_from_user_ll_nozero(void *to, const void __user *from,
 	else
 		n = __copy_user_intel((void __user *)to,
 				      (const void *)from, n);
+	clac_debug();
 	clac();
 	return n;
 }
@@ -619,6 +623,7 @@ unsigned long __copy_from_user_ll_nocache(void *to, const void __user *from,
 #else
 	__copy_user_zeroing(to, from, n);
 #endif
+	clac_debug();
 	clac();
 	return n;
 }
@@ -636,6 +641,7 @@ unsigned long __copy_from_user_ll_nocache_nozero(void *to, const void __user *fr
 #else
 	__copy_user(to, from, n);
 #endif
+	clac_debug();
 	clac();
 	return n;
 }
-- 
1.8.3.1


[-- Attachment #6: 0007-x86-smap-add-more-debug-points-for-CLAC-calls.patch --]
[-- Type: application/octet-stream, Size: 1824 bytes --]

From 9dcac6c7aebae26deea336fc507bfdd7fea3fadf Mon Sep 17 00:00:00 2001
From: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Date: Tue, 17 Sep 2013 16:57:11 -0700
Subject: [PATCH 7/8] x86, smap: add more debug points for CLAC calls

Add debug points for CLAC in the checksum functions.

Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
---
 arch/x86/include/asm/checksum_32.h | 2 ++
 arch/x86/lib/csum-wrappers_64.c    | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/checksum_32.h b/arch/x86/include/asm/checksum_32.h
index f50de69..f3df574 100644
--- a/arch/x86/include/asm/checksum_32.h
+++ b/arch/x86/include/asm/checksum_32.h
@@ -55,6 +55,7 @@ static inline __wsum csum_partial_copy_from_user(const void __user *src,
 	stac();
 	ret = csum_partial_copy_generic((__force void *)src, dst,
 					len, sum, err_ptr, NULL);
+	clac_debug();
 	clac();
 
 	return ret;
@@ -189,6 +190,7 @@ static inline __wsum csum_and_copy_to_user(const void *src,
 		stac();
 		ret = csum_partial_copy_generic(src, (__force void *)dst,
 						len, sum, NULL, err_ptr);
+		clac_debug();
 		clac();
 		return ret;
 	}
diff --git a/arch/x86/lib/csum-wrappers_64.c b/arch/x86/lib/csum-wrappers_64.c
index 7609e0e..c598185 100644
--- a/arch/x86/lib/csum-wrappers_64.c
+++ b/arch/x86/lib/csum-wrappers_64.c
@@ -56,6 +56,7 @@ csum_partial_copy_from_user(const void __user *src, void *dst,
 	stac();
 	isum = csum_partial_copy_generic((__force const void *)src,
 				dst, len, isum, errp, NULL);
+	clac_debug();
 	clac();
 	if (unlikely(*errp))
 		goto out_err;
@@ -113,6 +114,7 @@ csum_partial_copy_to_user(const void *src, void __user *dst,
 	stac();
 	ret = csum_partial_copy_generic(src, (void __force *)dst,
 					len, isum, NULL, errp);
+	clac_debug();
 	clac();
 	return ret;
 }
-- 
1.8.3.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
  2014-04-04  2:22   ` Wu, Feng
@ 2014-04-04  7:27     ` Paolo Bonzini
  2014-04-08  1:06       ` Wu, Feng
  0 siblings, 1 reply; 18+ messages in thread
From: Paolo Bonzini @ 2014-04-04  7:27 UTC (permalink / raw)
  To: Wu, Feng, gleb, hpa, kvm

On 04/04/2014 04:22, Wu, Feng wrote:
> Thank you for providing these test cases. I tested it in related hardware
> (both 32- and 64-bits) with both ept=1 and ept=0, they all pass.
>
> I also did some similar testing before posting the patch set. Since SMAP
> has been already supported in Linux kernel, in which, stac() and clac() are
> added in functions like copy_from_user(), copy_to_user(), etc.. From my
> previous test, Linux guest can run well on top of KVM with SMAP enabled.
> I think this covers the AC bit logic for testing. I also tested whether it can
> induce an SMAP violation when accessing user pages in kernel mode with
> AC bit cleared, I successfully got the SMAP violation fault in guest in that case.

Thanks, that is useful to know.  Knowing that you tried the failure path 
is good (next time point it out when submitting the patch).

I made unit tests because I'm not sure how much the new code is 
stimulated in normal runs of Linux with ept=1.  After the EPT tables are 
built on the first access, the processor will take care of doing SMAP 
checks.  With ept=0, more page faults should happen on the first access 
to a page.  Still, it seemed safer to have unit tests and have them try 
both without and with invlpg.

Even though the tests do not cover the CPL=3/implicit access case, the 
logic to compute PFERR_RSVD_MASK dynamically is already covered by AC=1. 
  So I'm quite happy with the coverage.  Series is

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* RE: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
  2014-04-04  7:27     ` Paolo Bonzini
@ 2014-04-08  1:06       ` Wu, Feng
  2014-04-08 20:38         ` Paolo Bonzini
  0 siblings, 1 reply; 18+ messages in thread
From: Wu, Feng @ 2014-04-08  1:06 UTC (permalink / raw)
  To: Paolo Bonzini, gleb, hpa, kvm



> -----Original Message-----
> From: kvm-owner@vger.kernel.org [mailto:kvm-owner@vger.kernel.org] On
> Behalf Of Paolo Bonzini
> Sent: Friday, April 04, 2014 3:28 PM
> To: Wu, Feng; gleb@redhat.com; hpa@zytor.com; kvm@vger.kernel.org
> Subject: Re: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
> 
> On 04/04/2014 04:22, Wu, Feng wrote:
> > Thank you for providing these test cases. I tested it in related hardware
> > (both 32- and 64-bits) with both ept=1 and ept=0, they all pass.
> >
> > I also did some similar testing before posting the patch set. Since SMAP
> > has been already supported in Linux kernel, in which, stac() and clac() are
> > added in functions like copy_from_user(), copy_to_user(), etc.. From my
> > previous test, Linux guest can run well on top of KVM with SMAP enabled.
> > I think this covers the AC bit logic for testing. I also tested whether it can
> > induce an SMAP violation when accessing user pages in kernel mode with
> > AC bit cleared, I successfully got the SMAP violation fault in guest in that case.
> 
> Thanks, that is useful to know.  Knowing that you tried the failure path
> is good (next time point it out when submitting the patch).

Sure, will do this next time! :)
> 
> I made unit tests because I'm not sure how much the new code is
> stimulated in normal runs of Linux with ept=1.  After the EPT tables are
> built on the first access, the processor will take care of doing SMAP
> checks.  With ept=0, more page faults should happen on the first access
> to a page.  Still, it seemed safer to have unit tests and have them try
> both without and with invlpg.

Yes, having the unit test code really helps with code stability. I really
appreciate the effort you put into these test cases!

> 
> Even though the tests do not cover the CPL=3/implicit access case, the
> logic to compute PFERR_RSVD_MASK dynamically is already covered by AC=1.
>   So I'm quite happy with the coverage.  Series is
> 
> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>]

Thanks very much for your review on this.
BTW: since the 3.15 merge window is still open, I am wondering whether there is
any possibility of getting SMAP into 3.15 with another pull request.

> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

Thanks,
Feng

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
  2014-04-08  1:06       ` Wu, Feng
@ 2014-04-08 20:38         ` Paolo Bonzini
  2014-04-10 20:01           ` Marcelo Tosatti
  0 siblings, 1 reply; 18+ messages in thread
From: Paolo Bonzini @ 2014-04-08 20:38 UTC (permalink / raw)
  To: Wu, Feng, gleb, hpa, kvm, Marcelo Tosatti

On 07/04/2014 21:06, Wu, Feng wrote:
>> > Even though the tests do not cover the CPL=3/implicit access case, the
>> > logic to compute PFERR_RSVD_MASK dynamically is already covered by AC=1.
>> >   So I'm quite happy with the coverage.  Series is
>> >
>> > Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>]
> Thanks very much for your review on this.
> BTW: Since 3.15 merge window is still open, I am wondering whether there is
> any possibility to make SMAP into 3.15 with another pull request.
>

This is up to Marcelo who is currently managing the KVM tree.

Paolo

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
  2014-04-08 20:38         ` Paolo Bonzini
@ 2014-04-10 20:01           ` Marcelo Tosatti
  2014-04-12  0:16             ` Paolo Bonzini
  0 siblings, 1 reply; 18+ messages in thread
From: Marcelo Tosatti @ 2014-04-10 20:01 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Wu, Feng, gleb, hpa, kvm

On Tue, Apr 08, 2014 at 04:38:08PM -0400, Paolo Bonzini wrote:
> On 07/04/2014 21:06, Wu, Feng wrote:
> >>> Even though the tests do not cover the CPL=3/implicit access case, the
> >>> logic to compute PFERR_RSVD_MASK dynamically is already covered by AC=1.
> >>>   So I'm quite happy with the coverage.  Series is
> >>>
> >>> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>]
> >Thanks very much for your review on this.
> >BTW: Since 3.15 merge window is still open, I am wondering whether there is
> >any possibility to make SMAP into 3.15 with another pull request.
> >
> 
> This is up to Marcelo who is currently managing the KVM tree.
> 
> Paolo

The merge window is for patches which have been tested in queue/next
for some time. This patchset has received no testing other than the
developer's own testing.

Doesn't the lack of support in KVM's emulator for the implicit
supervisor-mode accesses performed by some instructions (the "Examples of
such implicit..." cases in section 9.3.2) make the feature incomplete?

(would have to signal those accesses as supervisor ones).



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v4 2/4] KVM: Add SMAP support when setting CR4
  2014-04-01  9:46 ` [PATCH v4 2/4] KVM: Add SMAP support when setting CR4 Feng Wu
@ 2014-04-10 20:12   ` Marcelo Tosatti
  2014-04-12  0:14     ` Paolo Bonzini
  0 siblings, 1 reply; 18+ messages in thread
From: Marcelo Tosatti @ 2014-04-10 20:12 UTC (permalink / raw)
  To: Feng Wu; +Cc: pbonzini, gleb, hpa, kvm

On Tue, Apr 01, 2014 at 05:46:34PM +0800, Feng Wu wrote:
> This patch adds SMAP handling logic when setting CR4 for guests
> 
> Thanks a lot to Paolo Bonzini for his suggestion to use the branchless
> way to detect SMAP violation.
> 
> Signed-off-by: Feng Wu <feng.wu@intel.com>
>
> @@ -110,10 +118,30 @@ static inline bool is_write_protection(struct kvm_vcpu *vcpu)
>   * Will a fault with a given page-fault error code (pfec) cause a permission
>   * fault with the given access (in ACC_* format)?
>   */
> -static inline bool permission_fault(struct kvm_mmu *mmu, unsigned pte_access,
> -				    unsigned pfec)
> +static inline bool permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
> +				    unsigned pte_access, unsigned pfec)
>  {
> -	return (mmu->permissions[pfec >> 1] >> pte_access) & 1;
> +	int cpl = kvm_x86_ops->get_cpl(vcpu);
> +	unsigned long rflags = kvm_x86_ops->get_rflags(vcpu);

Some branches added here (can't think of anything useful to avoid them).


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
  2014-04-01  9:46 [PATCH v4 0/4] KVM: enable Intel SMAP for KVM Feng Wu
                   ` (4 preceding siblings ...)
  2014-04-03 16:46 ` [PATCH v4 0/4] KVM: enable Intel SMAP for KVM Paolo Bonzini
@ 2014-04-10 20:16 ` Marcelo Tosatti
  5 siblings, 0 replies; 18+ messages in thread
From: Marcelo Tosatti @ 2014-04-10 20:16 UTC (permalink / raw)
  To: Feng Wu; +Cc: pbonzini, gleb, hpa, kvm

On Tue, Apr 01, 2014 at 05:46:32PM +0800, Feng Wu wrote:
> Supervisor Mode Access Prevention (SMAP) is a new security feature 
> disclosed by Intel, please refer to the following document: 
> 
> http://software.intel.com/sites/default/files/319433-014.pdf
>  
> Every access to a linear address is either a supervisor-mode access
> or a user-mode access. All accesses performed while the current
> privilege level (CPL) is less than 3 are supervisor-mode accesses.
> If CPL = 3, accesses are generally user-mode accesses. However, some
> operations implicitly access system data structures, and the resulting
> accesses to those data structures are supervisor-mode accesses regardless
> of CPL. Examples of such implicit supervisor accesses include the following:
> accesses to the global descriptor table (GDT) or local descriptor table
> (LDT) to load a segment descriptor; accesses to the interrupt descriptor
> table (IDT) when delivering an interrupt or exception; and accesses to the
> task-state segment (TSS) as part of a task switch or change of CPL.
> 
> If CR4.SMAP = 1, supervisor-mode data accesses are not allowed to linear
> addresses that are accessible in user mode. If CPL < 3, SMAP protections
> are disabled if EFLAGS.AC = 1. If CPL = 3, SMAP applies to all supervisor-mode
> data accesses (these are implicit supervisor accesses) regardless of the
> value of EFLAGS.AC.
> 
> This patchset pass-through SMAP feature to guests, and let guests
> benefit from it.
> 
> Version 1:
>   * Remove SMAP bit from CR4_RESERVED_BITS.
>   * Add SMAP support when setting CR4
>   * Disable SMAP for guests in EPT realmode and EPT unpaging mode
>   * Expose SMAP feature to guest
> 
> Version 2:
>   * Change the logic of updating mmu permission bitmap for SMAP violation
>   * Expose SMAP feature to guest in the last patch of this series.
> 
> Version 3:
>   * Changes in update_permission_bitmask().
>   * Use a branchless way suggested by Paolo Bonzini to detect SMAP
>     violation in permission_fault(). 
> 
> Version 4:
>   * Changes to some comments and code style.
> 
> Feng Wu (4):
>   KVM: Remove SMAP bit from CR4_RESERVED_BITS.
>   KVM: Add SMAP support when setting CR4
>   KVM: Disable SMAP for guests in EPT realmode and EPT unpaging mode
>   KVM: expose SMAP feature to guest
> 
>  arch/x86/include/asm/kvm_host.h |  2 +-
>  arch/x86/kvm/cpuid.c            |  2 +-
>  arch/x86/kvm/cpuid.h            |  8 ++++++++
>  arch/x86/kvm/mmu.c              | 34 ++++++++++++++++++++++++++++---
>  arch/x86/kvm/mmu.h              | 44 +++++++++++++++++++++++++++++++++--------
>  arch/x86/kvm/paging_tmpl.h      |  2 +-
>  arch/x86/kvm/vmx.c              | 11 ++++++-----
>  arch/x86/kvm/x86.c              |  9 ++++++++-
>  8 files changed, 92 insertions(+), 20 deletions(-)

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v4 2/4] KVM: Add SMAP support when setting CR4
  2014-04-10 20:12   ` Marcelo Tosatti
@ 2014-04-12  0:14     ` Paolo Bonzini
  0 siblings, 0 replies; 18+ messages in thread
From: Paolo Bonzini @ 2014-04-12  0:14 UTC (permalink / raw)
  To: Marcelo Tosatti, Feng Wu; +Cc: gleb, hpa, kvm

On 10/04/2014 16:12, Marcelo Tosatti wrote:
> On Tue, Apr 01, 2014 at 05:46:34PM +0800, Feng Wu wrote:
>> This patch adds SMAP handling logic when setting CR4 for guests
>>
>> Thanks a lot to Paolo Bonzini for his suggestion to use the branchless
>> way to detect SMAP violation.
>>
>> Signed-off-by: Feng Wu <feng.wu@intel.com>
>>
>> @@ -110,10 +118,30 @@ static inline bool is_write_protection(struct kvm_vcpu *vcpu)
>>   * Will a fault with a given page-fault error code (pfec) cause a permission
>>   * fault with the given access (in ACC_* format)?
>>   */
>> -static inline bool permission_fault(struct kvm_mmu *mmu, unsigned pte_access,
>> -				    unsigned pfec)
>> +static inline bool permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
>> +				    unsigned pte_access, unsigned pfec)
>>  {
>> -	return (mmu->permissions[pfec >> 1] >> pte_access) & 1;
>> +	int cpl = kvm_x86_ops->get_cpl(vcpu);
>> +	unsigned long rflags = kvm_x86_ops->get_rflags(vcpu);
>
> Some branches added here (can't think of anything useful to avoid them).

Yeah, but most of them should be well predicted.

Paolo

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
  2014-04-10 20:01           ` Marcelo Tosatti
@ 2014-04-12  0:16             ` Paolo Bonzini
  2014-04-12  0:17               ` Paolo Bonzini
  2014-04-13 21:57               ` Marcelo Tosatti
  0 siblings, 2 replies; 18+ messages in thread
From: Paolo Bonzini @ 2014-04-12  0:16 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: Wu, Feng, gleb, hpa, kvm

On 10/04/2014 16:01, Marcelo Tosatti wrote:
> On Tue, Apr 08, 2014 at 04:38:08PM -0400, Paolo Bonzini wrote:
>> On 07/04/2014 21:06, Wu, Feng wrote:
>>>>> Even though the tests do not cover the CPL=3/implicit access case, the
>>>>> logic to compute PFERR_RSVD_MASK dynamically is already covered by AC=1.
>>>>>   So I'm quite happy with the coverage.  Series is
>>>>>
>>>>> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>]
>>> Thanks very much for your review on this.
>>> BTW: Since 3.15 merge window is still open, I am wondering whether there is
>>> any possibility to make SMAP into 3.15 with another pull request.
>>>
>>
>> This is up to Marcelo who is currently managing the KVM tree.
>>
>> Paolo
>
> The merge window is for patches which have been tested in queue/next
> for sometime. This patch has received no testing other than the
> developer testing.

This is not going to change unfortunately since this is not shipping in 
any real silicon.  The only hope could be to use QEMU's SVM and SMAP 
emulation.

> Lack of implicit supervisor mode by instructions such as "Examples of
> such implicit..." in section 9.3.2, in KVM's emulator, makes the feature
> incomplete, does it not ?

Implicit supervisor mode is handled by KVM emulator using 
read/write_std.  These accesses do not set PFERR_USER_MASK, and should 
work fine with SMAP.  Am I misunderstanding?

Paolo

> (would have to signal those accesses as supervisor ones).

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
  2014-04-12  0:16             ` Paolo Bonzini
@ 2014-04-12  0:17               ` Paolo Bonzini
  2014-04-13 21:57               ` Marcelo Tosatti
  1 sibling, 0 replies; 18+ messages in thread
From: Paolo Bonzini @ 2014-04-12  0:17 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: Wu, Feng, gleb, hpa, kvm

On 11/04/2014 20:16, Paolo Bonzini wrote:
> On 10/04/2014 16:01, Marcelo Tosatti wrote:
>> On Tue, Apr 08, 2014 at 04:38:08PM -0400, Paolo Bonzini wrote:
>>> On 07/04/2014 21:06, Wu, Feng wrote:
>>>>>> Even though the tests do not cover the CPL=3/implicit access case,
>>>>>> the
>>>>>> logic to compute PFERR_RSVD_MASK dynamically is already covered by
>>>>>> AC=1.
>>>>>>   So I'm quite happy with the coverage.  Series is
>>>>>>
>>>>>> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>]
>>>> Thanks very much for your review on this.
>>>> BTW: Since 3.15 merge window is still open, I am wondering whether
>>>> there is
>>>> any possibility to make SMAP into 3.15 with another pull request.
>>>>
>>>
>>> This is up to Marcelo who is currently managing the KVM tree.
>>>
>>> Paolo
>>
>> The merge window is for patches which have been tested in queue/next
>> for sometime. This patch has received no testing other than the
>> developer testing.
>
> This is not going to change unfortunately since this is not shipping in
> any real silicon.  The only hope could be to use QEMU's SVM and SMAP
> emulation.

Note this is not speaking in favor or against merging this (I would be 
slightly against if I were managing the tree, for the record).

Paolo

>> Lack of implicit supervisor mode by instructions such as "Examples of
>> such implicit..." in section 9.3.2, in KVM's emulator, makes the feature
>> incomplete, does it not ?
>
> Implicit supervisor mode is handled by KVM emulator using
> read/write_std.  These accesses do not set PFERR_USER_MASK, and should
> work fine with SMAP.  Am I misunderstanding?
>
> Paolo
>
>> (would have to signal those accesses as supervisor ones).
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
  2014-04-12  0:16             ` Paolo Bonzini
  2014-04-12  0:17               ` Paolo Bonzini
@ 2014-04-13 21:57               ` Marcelo Tosatti
  2014-04-13 22:48                 ` H. Peter Anvin
  1 sibling, 1 reply; 18+ messages in thread
From: Marcelo Tosatti @ 2014-04-13 21:57 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Wu, Feng, gleb, hpa, kvm

On Fri, Apr 11, 2014 at 08:16:28PM -0400, Paolo Bonzini wrote:
> Il 10/04/2014 16:01, Marcelo Tosatti ha scritto:
> >On Tue, Apr 08, 2014 at 04:38:08PM -0400, Paolo Bonzini wrote:
> >>Il 07/04/2014 21:06, Wu, Feng ha scritto:
> >>>>>Even though the tests do not cover the CPL=3/implicit access case, the
> >>>>>logic to compute PFERR_RSVD_MASK dynamically is already covered by AC=1.
> >>>>>  So I'm quite happy with the coverage.  Series is
> >>>>>
> >>>>>Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
> >>>Thanks very much for your review on this.
> >>>BTW: Since 3.15 merge window is still open, I am wondering whether there is
> >>>any possibility to make SMAP into 3.15 with another pull request.
> >>>
> >>
> >>This is up to Marcelo who is currently managing the KVM tree.
> >>
> >>Paolo
> >
> >The merge window is for patches which have been tested in queue/next
> >for some time. This patch has received no testing other than the
> >developer testing.
> 
> This is not going to change unfortunately since this is not shipping
> in any real silicon.  The only hope could be to use QEMU's SVM and
> SMAP emulation.

Well, let me know if you want an exception to the rule, so I can
merge this patchset and submit it for 3.15.

> 
> >Lack of implicit supervisor mode by instructions such as "Examples of
> >such implicit..." in section 9.3.2, in KVM's emulator, makes the feature
> >incomplete, does it not?
> 
> Implicit supervisor mode is handled by KVM emulator using
> read/write_std.  These accesses do not set PFERR_USER_MASK, and
> should work fine with SMAP.  Am I misunderstanding?
> 
> Paolo

Right.


* Re: [PATCH v4 0/4] KVM: enable Intel SMAP for KVM
  2014-04-13 21:57               ` Marcelo Tosatti
@ 2014-04-13 22:48                 ` H. Peter Anvin
  0 siblings, 0 replies; 18+ messages in thread
From: H. Peter Anvin @ 2014-04-13 22:48 UTC (permalink / raw)
  To: Marcelo Tosatti, Paolo Bonzini; +Cc: Wu, Feng, gleb, kvm

I would like to see this in 3.15.

     -hpa

On April 13, 2014 2:57:38 PM PDT, Marcelo Tosatti <mtosatti@redhat.com> wrote:
>On Fri, Apr 11, 2014 at 08:16:28PM -0400, Paolo Bonzini wrote:
>> Il 10/04/2014 16:01, Marcelo Tosatti ha scritto:
>> >On Tue, Apr 08, 2014 at 04:38:08PM -0400, Paolo Bonzini wrote:
>> >>Il 07/04/2014 21:06, Wu, Feng ha scritto:
>> >>>>>Even though the tests do not cover the CPL=3/implicit access
>case, the
>> >>>>>logic to compute PFERR_RSVD_MASK dynamically is already covered
>by AC=1.
>> >>>>>  So I'm quite happy with the coverage.  Series is
>> >>>>>
>> >>>>>Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
>> >>>Thanks very much for your review on this.
>> >>>BTW: Since 3.15 merge window is still open, I am wondering whether
>there is
>> >>>any possibility to make SMAP into 3.15 with another pull request.
>> >>>
>> >>
>> >>This is up to Marcelo who is currently managing the KVM tree.
>> >>
>> >>Paolo
>> >
>> >The merge window is for patches which have been tested in queue/next
>> >for some time. This patch has received no testing other than the
>> >developer testing.
>> 
>> This is not going to change unfortunately since this is not shipping
>> in any real silicon.  The only hope could be to use QEMU's SVM and
>> SMAP emulation.
>
>Well, let me know if you want an exception to the rule, so I can
>merge this patchset and submit it for 3.15.
>
>> 
>> >Lack of implicit supervisor mode by instructions such as "Examples
>of
>> >such implicit..." in section 9.3.2, in KVM's emulator, makes the
>feature
>> >incomplete, does it not?
>> 
>> Implicit supervisor mode is handled by KVM emulator using
>> read/write_std.  These accesses do not set PFERR_USER_MASK, and
>> should work fine with SMAP.  Am I misunderstanding?
>> 
>> Paolo
>
>Right.

-- 
Sent from my mobile phone.  Please pardon brevity and lack of formatting.


Thread overview: 18+ messages
2014-04-01  9:46 [PATCH v4 0/4] KVM: enable Intel SMAP for KVM Feng Wu
2014-04-01  9:46 ` [PATCH v4 1/4] KVM: Remove SMAP bit from CR4_RESERVED_BITS Feng Wu
2014-04-01  9:46 ` [PATCH v4 2/4] KVM: Add SMAP support when setting CR4 Feng Wu
2014-04-10 20:12   ` Marcelo Tosatti
2014-04-12  0:14     ` Paolo Bonzini
2014-04-01  9:46 ` [PATCH v4 3/4] KVM: Disable SMAP for guests in EPT realmode and EPT unpaging mode Feng Wu
2014-04-01  9:46 ` [PATCH v4 4/4] KVM: expose SMAP feature to guest Feng Wu
2014-04-03 16:46 ` [PATCH v4 0/4] KVM: enable Intel SMAP for KVM Paolo Bonzini
2014-04-04  2:22   ` Wu, Feng
2014-04-04  7:27     ` Paolo Bonzini
2014-04-08  1:06       ` Wu, Feng
2014-04-08 20:38         ` Paolo Bonzini
2014-04-10 20:01           ` Marcelo Tosatti
2014-04-12  0:16             ` Paolo Bonzini
2014-04-12  0:17               ` Paolo Bonzini
2014-04-13 21:57               ` Marcelo Tosatti
2014-04-13 22:48                 ` H. Peter Anvin
2014-04-10 20:16 ` Marcelo Tosatti
