kvm.vger.kernel.org archive mirror
* [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support
@ 2019-08-14  7:03 Yang Weijiang
  2019-08-14  7:03 ` [PATCH RESEND v4 1/9] Documentation: Introduce EPT based Subpage Protection Yang Weijiang
                   ` (9 more replies)
  0 siblings, 10 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-14  7:03 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar, Yang Weijiang

EPT-based Sub-Page write Protection (SPP) is a hardware capability which
allows the Virtual Machine Monitor (VMM) to specify write permission for
guest physical memory at a sub-page (128-byte) granularity. When this
capability is enabled, the CPU enforces write-access checks for sub-pages
within a 4KB page.

The feature targets fine-grained memory protection for usages such as
device virtualization, memory check-pointing and VM introspection.

SPP is active when the "sub-page write protection" bit (bit 23) is set in
the Secondary VM-Execution Controls. The feature is backed by a Sub-Page
Permission Table (SPPT), which is referenced via a 64-bit control field
called the Sub-Page Permission Table Pointer (SPPTP) that contains a
4K-aligned physical address.

Right now, only 4KB physical pages are supported for SPP. To enable SPP
for a given physical page, we first make the physical page
write-protected, then set bit 61 of the corresponding EPT leaf entry.
While the hardware walks the EPT, if bit 61 is set, it traverses the
SPPT with the guest physical address to find the sub-page permissions
at the leaf entry. If the corresponding bit is set, write to the
sub-page is permitted; otherwise, an SPP-induced EPT violation is generated.
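
To make the 128-byte granularity concrete, here is a minimal sketch
(illustrative only, not part of the patches; the helper name is made up)
of how a 32-bit per-page access map relates to sub-page offsets:

  #include <stdbool.h>
  #include <stdint.h>

  /* One 4KB page is covered by 32 sub-pages of 128 bytes each. */
  #define SPP_SUBPAGE_SIZE 128u

  /*
   * Bit i of the access map controls write access to offsets
   * [i * 128, i * 128 + 127] within the 4KB page; a set bit means
   * the sub-page is writable.
   */
  static bool spp_subpage_writable(uint32_t access_map, uint32_t page_offset)
  {
          uint32_t index = (page_offset & 0xfffu) / SPP_SUBPAGE_SIZE;

          return access_map & (1u << index);
  }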

This patch series passed the SPP function test and selftest on an Ice Lake platform.

Please refer to the SPP introduction document in this patch set and
Intel SDM for details:

Intel SDM:
https://software.intel.com/sites/default/files/managed/39/c5/325462-sdm-vol-1-2abcd-3abcd.pdf

SPP selftest patch:
https://lkml.org/lkml/2019/6/18/1197

Previous patch:
https://lkml.org/lkml/2019/6/6/695

Patch 1: Introduction to SPP.
Patch 2: Add SPP related flags and control bits.
Patch 3: Functions for SPPT setup.
Patch 4: Add SPP access bitmaps for memslots.
Patch 5: Low level implementation of SPP operations.
Patch 6: Implement User space access IOCTLs.
Patch 7: Handle SPP induced VMExit and EPT violation.
Patch 8: Enable lazy mode SPPT setup.
Patch 9: Handle memory remapping and reclaim.


Change logs:

V3 -> V4:
  1. Modified documentation to make it consistent with patches.
  2. Allocated SPPT root page in init_spp() instead of vmx_set_cr3() to
     avoid SPPT miss error.
  3. Added back co-developers and sign-offs.

V2 -> V3:                                                                
  1. Rebased patches to kernel 5.1 release                                
  2. Deferred SPPT setup to EPT fault handler if the page is not
     available while set_subpage() is being called.
  3. Added init IOCTL to reduce extra cost if SPP is not used.
  4. Refactored patch structure, cleaned up cross referenced functions.
  5. Added code to deal with memory swapping/migration/shrinker cases.

V1 -> V2:
  1. Rebased to 4.20-rc1.
  2. Moved VMCS changes to a separate patch.
  3. Code refinement and bug fixes.


Yang Weijiang (9):
  Documentation: Introduce EPT based Subpage Protection
  KVM: VMX: Add control flags for SPP enabling
  KVM: VMX: Implement functions for SPPT paging setup
  KVM: VMX: Introduce SPP access bitmap and operation functions
  KVM: VMX: Add init/set/get functions for SPP
  KVM: VMX: Introduce SPP user-space IOCTLs
  KVM: VMX: Handle SPP induced vmexit and page fault
  KVM: MMU: Enable Lazy mode SPPT setup
  KVM: MMU: Handle host memory remapping and reclaim

 Documentation/virtual/kvm/spp_kvm.txt | 173 ++++++++++
 arch/x86/include/asm/cpufeatures.h    |   1 +
 arch/x86/include/asm/kvm_host.h       |  26 +-
 arch/x86/include/asm/vmx.h            |  10 +
 arch/x86/include/uapi/asm/vmx.h       |   2 +
 arch/x86/kernel/cpu/intel.c           |   4 +
 arch/x86/kvm/mmu.c                    | 480 ++++++++++++++++++++++++++
 arch/x86/kvm/mmu.h                    |   1 +
 arch/x86/kvm/vmx/capabilities.h       |   5 +
 arch/x86/kvm/vmx/vmx.c                | 129 +++++++
 arch/x86/kvm/x86.c                    | 141 ++++++++
 include/linux/kvm_host.h              |   9 +
 include/uapi/linux/kvm.h              |  17 +
 13 files changed, 997 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/virtual/kvm/spp_kvm.txt

-- 
2.17.2



* [PATCH RESEND v4 1/9] Documentation: Introduce EPT based Subpage Protection
  2019-08-14  7:03 [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Yang Weijiang
@ 2019-08-14  7:03 ` Yang Weijiang
  2019-08-14  7:03 ` [PATCH RESEND v4 2/9] KVM: VMX: Add control flags for SPP enabling Yang Weijiang
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-14  7:03 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar, Yang Weijiang

Co-developed-by: yi.z.zhang@linux.intel.com
Signed-off-by: yi.z.zhang@linux.intel.com
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 Documentation/virtual/kvm/spp_kvm.txt | 173 ++++++++++++++++++++++++++
 1 file changed, 173 insertions(+)
 create mode 100644 Documentation/virtual/kvm/spp_kvm.txt

diff --git a/Documentation/virtual/kvm/spp_kvm.txt b/Documentation/virtual/kvm/spp_kvm.txt
new file mode 100644
index 000000000000..072fa86f14d3
--- /dev/null
+++ b/Documentation/virtual/kvm/spp_kvm.txt
@@ -0,0 +1,173 @@
+EPT-Based Sub-Page Protection (SPP) for KVM
+====================================================
+
+1.Overview
+  EPT-based Sub-Page Protection (SPP) allows the VMM to specify
+  fine-grained (128 bytes per sub-page) write-protection for guest
+  physical memory. When it's enabled, the CPU enforces write-access
+  permission for the sub-pages within a 4KB page: if the corresponding
+  bit is set in the permission vector, writes to the sub-page region
+  are allowed; otherwise, they are prevented with an EPT violation.
+
+2.SPP Operation
+  Sub-Page Protection Table (SPPT) is introduced to manage sub-page
+  write-access permission.
+
+  It is active when:
+  a) large paging is disabled on the host side.
+  b) the "sub-page write protection" VM-execution control is 1.
+  c) SPP is initialized successfully with the KVM_INIT_SPP ioctl.
+  d) sub-page permissions are set successfully with the
+     KVM_SUBPAGES_SET_ACCESS ioctl. See below sections for details.
+
+  __________________________________________________________________________
+
+  How SPP hardware works:
+  __________________________________________________________________________
+
+  Guest write access --> GPA --> Walk EPT --> EPT leaf entry -----|
+  |---------------------------------------------------------------|
+  |-> if VMexec_control.spp && ept_leaf_entry.spp_bit (bit 61)
+       |
+       |-> <false> --> EPT legacy behavior
+       |
+       |
+       |-> <true>  --> if ept_leaf_entry.writable
+                        |
+                        |-> <true>  --> Ignore SPP
+                        |
+                        |-> <false> --> GPA --> Walk SPP 4-level table--|
+                                                                        |
+  |------------<--------get-the-SPPT-pointer-from-VMCS-field-----<------|
+  |
+  Walk SPP L4E table
+  |
+  |---> if-entry-misconfiguration ------------>-------|-------<---------|
+   |                                                  |                 |
+  else                                                |                 |
+   |                                                  |                 |
+   |   |------------------SPP VMexit<-----------------|                 |
+   |   |                                                                |
+   |   |-> exit_qualification & sppt_misconfig --> sppt misconfig       |
+   |   |                                                                |
+   |   |-> exit_qualification & sppt_miss --> sppt miss                 |
+   |---|                                                                |
+       |                                                                |
+  walk SPPT L3E--|--> if-entry-misconfiguration------------>------------|
+                 |                                                      |
+                else                                                    |
+                 |                                                      |
+                 |                                                      |
+          walk SPPT L2E --|--> if-entry-misconfiguration-------->-------|
+                          |                                             |
+                         else                                           |
+                          |                                             |
+                          |                                             |
+                   walk SPPT L1E --|-> if-entry-misconfiguration--->----|
+                                   |
+                                 else
+                                   |
+                                   |-> if sub-page writable
+                                   |-> <true>  allow, write access
+                                   |-> <false> disallow, EPT violation
+  ______________________________________________________________________________
+
+3.IOCTL Interfaces
+
+    KVM_INIT_SPP:
+    Allocate storage for sub-page permission vectors and SPPT root page.
+
+    KVM_SUBPAGES_GET_ACCESS:
+    Get sub-page write permission vectors for the given contiguous guest pages.
+
+    KVM_SUBPAGES_SET_ACCESS:
+    Set sub-page write permission vectors for the given contiguous guest pages.
+
+    /* for KVM_SUBPAGES_GET_ACCESS and KVM_SUBPAGES_SET_ACCESS */
+    struct kvm_subpage {
+       __u64 base_gfn;  /* the first gfn of the contiguous pages */
+       __u64 npages;    /* number of 4K pages, up to SUBPAGE_MAX_BITMAP (64) */
+       __u32 access_map[SUBPAGE_MAX_BITMAP]; /* sub-page write-access bitmap array */
+    };
+
+    #define KVM_SUBPAGES_GET_ACCESS   _IOR(KVMIO,  0x49, __u64)
+    #define KVM_SUBPAGES_SET_ACCESS   _IOW(KVMIO,  0x4a, __u64)
+    #define KVM_INIT_SPP              _IOW(KVMIO,  0x4b, __u64)
+
+4.Set Sub-Page Permission
+
+  * To enable SPP protection, system admin sets sub-page permission via
+    KVM_SUBPAGES_SET_ACCESS ioctl:
+
+    (1) If the target 4KB pages are already mapped, it locates the EPT
+        leaf entries via the guest physical addresses, sets bit 61 of the
+        corresponding entries, then sets up the SPPT paging structure.
+    (2) Otherwise, it stores the [gfn, permission] mappings in a KVM data
+        structure. When an EPT page fault is later generated by access to
+        the target page, it settles the EPT entry configuration together
+        with the SPPT setup; this is called lazy mode setup.
+
+   The SPPT paging structure format is as below:
+
+   Format of the SPPT L4E, L3E, L2E:
+   | Bit    | Contents                                                                 |
+   | :----- | :------------------------------------------------------------------------|
+   | 0      | Valid entry when set; indicates whether the entry is present             |
+   | 11:1   | Reserved (0)                                                             |
+   | N-1:12 | Physical address of 4KB aligned SPPT LX-1 Table referenced by this entry |
+   | 51:N   | Reserved (0)                                                             |
+   | 63:52  | Reserved (0)                                                             |
+   Note: N is the physical address width supported by the processor. X is the page level
+
+   Format of the SPPT L1E:
+   | Bit   | Contents                                                          |
+   | :---- | :---------------------------------------------------------------- |
+   | 0+2i  | Write permission for i-th 128 byte sub-page region.               |
+   | 1+2i  | Reserved (0).                                                     |
+   Note: 0<=i<=31
+
+5.SPPT-induced VM exit
+
+  * SPPT miss and misconfiguration induced VM exit
+
+    An SPPT miss VM exit occurs when, while walking the SPPT, there is
+    no SPPT misconfiguration but a paging-structure entry is not
+    present in any of the L4E/L3E/L2E entries.
+
+    An SPPT misconfiguration VM exit occurs when reserved bits or
+    unsupported values are set in an SPPT entry.
+
+    *NOTE* SPPT miss and SPPT misconfigurations can occur only due to an
+    attempt to write memory with a guest physical address.
+
+  * SPP permission induced VM exit
+    An SPP sub-page permission induced violation is reported as an EPT
+    violation and therefore causes a VM exit.
+
+6.SPPT-induced VM exit handling
+
+  #define EXIT_REASON_SPP                 66
+
+  static int (*const kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
+    ...
+    [EXIT_REASON_SPP]                     = handle_spp,
+    ...
+  };
+
+  New exit qualification for SPPT-induced vmexits.
+
+  | Bit   | Contents                                                          |
+  | :---- | :---------------------------------------------------------------- |
+  | 10:0  | Reserved (0).                                                     |
+  | 11    | SPPT VM exit type. Set for SPPT Miss, cleared for SPPT Misconfig. |
+  | 12    | NMI unblocking due to IRET                                        |
+  | 63:13 | Reserved (0)                                                      |
+
+  In addition to the exit qualification, guest linear address and guest
+  physical address fields will be reported.
+
+  * SPPT miss and misconfiguration induced VM exit
+    Allocate a physical page for the SPPT and set the entry correctly.
+
+  * SPP permission induced VM exit
+    This kind of VM exit is left to the VMI tool to handle.
-- 
2.17.2



* [PATCH RESEND v4 2/9] KVM: VMX: Add control flags for SPP enabling
  2019-08-14  7:03 [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Yang Weijiang
  2019-08-14  7:03 ` [PATCH RESEND v4 1/9] Documentation: Introduce EPT based Subpage Protection Yang Weijiang
@ 2019-08-14  7:03 ` Yang Weijiang
  2019-08-14  7:03 ` [PATCH RESEND v4 3/9] KVM: VMX: Implement functions for SPPT paging setup Yang Weijiang
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-14  7:03 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar, Yang Weijiang

Check the SPP capability in MSR_IA32_VMX_PROCBASED_CTLS2; its bit 23
indicates SPP support. Mark the SPP bit in the CPU capabilities bitmap
if it's supported.

Co-developed-by: He Chen <he.chen@linux.intel.com>
Signed-off-by: He Chen <he.chen@linux.intel.com>
Co-developed-by: Zhang Yi <yi.z.zhang@linux.intel.com>
Signed-off-by: Zhang Yi <yi.z.zhang@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/asm/cpufeatures.h |  1 +
 arch/x86/include/asm/vmx.h         |  1 +
 arch/x86/kernel/cpu/intel.c        |  4 ++++
 arch/x86/kvm/vmx/capabilities.h    |  5 +++++
 arch/x86/kvm/vmx/vmx.c             | 10 ++++++++++
 5 files changed, 21 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 981ff9479648..5bcc356741e6 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -228,6 +228,7 @@
 #define X86_FEATURE_FLEXPRIORITY	( 8*32+ 2) /* Intel FlexPriority */
 #define X86_FEATURE_EPT			( 8*32+ 3) /* Intel Extended Page Table */
 #define X86_FEATURE_VPID		( 8*32+ 4) /* Intel Virtual Processor ID */
+#define X86_FEATURE_SPP			( 8*32+ 5) /* Intel EPT-based Sub-Page Write Protection */
 
 #define X86_FEATURE_VMMCALL		( 8*32+15) /* Prefer VMMCALL to VMCALL */
 #define X86_FEATURE_XENPV		( 8*32+16) /* "" Xen paravirtual guest */
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 4e4133e86484..a2c9e18e0ad7 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -81,6 +81,7 @@
 #define SECONDARY_EXEC_XSAVES			0x00100000
 #define SECONDARY_EXEC_PT_USE_GPA		0x01000000
 #define SECONDARY_EXEC_MODE_BASED_EPT_EXEC	0x00400000
+#define SECONDARY_EXEC_ENABLE_SPP		0x00800000
 #define SECONDARY_EXEC_TSC_SCALING              0x02000000
 
 #define PIN_BASED_EXT_INTR_MASK                 0x00000001
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 3142fd7a9b32..e937a44c8757 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -476,6 +476,7 @@ static void detect_vmx_virtcap(struct cpuinfo_x86 *c)
 #define X86_VMX_FEATURE_PROC_CTLS2_EPT		0x00000002
 #define X86_VMX_FEATURE_PROC_CTLS2_VPID		0x00000020
 #define x86_VMX_FEATURE_EPT_CAP_AD		0x00200000
+#define X86_VMX_FEATURE_PROC_CTLS2_SPP		0x00800000
 
 	u32 vmx_msr_low, vmx_msr_high, msr_ctl, msr_ctl2;
 	u32 msr_vpid_cap, msr_ept_cap;
@@ -486,6 +487,7 @@ static void detect_vmx_virtcap(struct cpuinfo_x86 *c)
 	clear_cpu_cap(c, X86_FEATURE_EPT);
 	clear_cpu_cap(c, X86_FEATURE_VPID);
 	clear_cpu_cap(c, X86_FEATURE_EPT_AD);
+	clear_cpu_cap(c, X86_FEATURE_SPP);
 
 	rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, vmx_msr_low, vmx_msr_high);
 	msr_ctl = vmx_msr_high | vmx_msr_low;
@@ -509,6 +511,8 @@ static void detect_vmx_virtcap(struct cpuinfo_x86 *c)
 		}
 		if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VPID)
 			set_cpu_cap(c, X86_FEATURE_VPID);
+		if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_SPP)
+			set_cpu_cap(c, X86_FEATURE_SPP);
 	}
 }
 
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 854e144131c6..8221ecbf6516 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -239,6 +239,11 @@ static inline bool cpu_has_vmx_pml(void)
 	return vmcs_config.cpu_based_2nd_exec_ctrl & SECONDARY_EXEC_ENABLE_PML;
 }
 
+static inline bool cpu_has_vmx_ept_spp(void)
+{
+	return vmcs_config.cpu_based_2nd_exec_ctrl & SECONDARY_EXEC_ENABLE_SPP;
+}
+
 static inline bool vmx_xsaves_supported(void)
 {
 	return vmcs_config.cpu_based_2nd_exec_ctrl &
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0c955bb286ff..a0753a36155d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -114,6 +114,8 @@ static u64 __read_mostly host_xss;
 bool __read_mostly enable_pml = 1;
 module_param_named(pml, enable_pml, bool, S_IRUGO);
 
+static bool __read_mostly spp_supported = 0;
+
 #define MSR_BITMAP_MODE_X2APIC		1
 #define MSR_BITMAP_MODE_X2APIC_APICV	2
 
@@ -2240,6 +2242,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 			SECONDARY_EXEC_RDSEED_EXITING |
 			SECONDARY_EXEC_RDRAND_EXITING |
 			SECONDARY_EXEC_ENABLE_PML |
+			SECONDARY_EXEC_ENABLE_SPP |
 			SECONDARY_EXEC_TSC_SCALING |
 			SECONDARY_EXEC_PT_USE_GPA |
 			SECONDARY_EXEC_PT_CONCEAL_VMX |
@@ -3895,6 +3898,9 @@ static void vmx_compute_secondary_exec_control(struct vcpu_vmx *vmx)
 	if (!enable_pml)
 		exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
 
+	if (!spp_supported)
+		exec_control &= ~SECONDARY_EXEC_ENABLE_SPP;
+
 	if (vmx_xsaves_supported()) {
 		/* Exposing XSAVES only when XSAVE is exposed */
 		bool xsaves_enabled =
@@ -7470,6 +7476,10 @@ static __init int hardware_setup(void)
 	if (!cpu_has_vmx_flexpriority())
 		flexpriority_enabled = 0;
 
+	if (cpu_has_vmx_ept_spp() && enable_ept &&
+	    boot_cpu_has(X86_FEATURE_SPP))
+		spp_supported = 1;
+
 	if (!cpu_has_virtual_nmis())
 		enable_vnmi = 0;
 
-- 
2.17.2



* [PATCH RESEND v4 3/9] KVM: VMX: Implement functions for SPPT paging setup
  2019-08-14  7:03 [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Yang Weijiang
  2019-08-14  7:03 ` [PATCH RESEND v4 1/9] Documentation: Introduce EPT based Subpage Protection Yang Weijiang
  2019-08-14  7:03 ` [PATCH RESEND v4 2/9] KVM: VMX: Add control flags for SPP enabling Yang Weijiang
@ 2019-08-14  7:03 ` Yang Weijiang
  2019-08-14  7:03 ` [PATCH RESEND v4 4/9] KVM: VMX: Introduce SPP access bitmap and operation functions Yang Weijiang
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-14  7:03 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar, Yang Weijiang

SPPT is a 4-level paging structure similar to EPT. When SPP is
enabled for a target physical page, bit 61 of the corresponding
EPT entry is flagged, then the SPPT is traversed with the gfn to
build up entries. The leaf entry of the SPPT contains the access
bitmap for the subpages inside the target 4KB physical page, one
bit per 128-byte subpage.

SPPT entries are set up in the following cases:
1. The EPT faulted page is SPP protected.
2. An SPP misconfig induced vmexit is handled.
3. The user configures SPP protected pages via the SPP IOCTLs.
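
As an illustration of the SPPT leaf-entry (L1E) format targeted here
(write permission for the i-th 128-byte sub-page in bit 2*i, odd bits
reserved), a standalone sketch of the same conversion that
format_spp_spte() performs below:

  #include <stdint.h>

  static uint64_t spp_access_map_to_l1e(uint32_t access_map)
  {
          uint64_t l1e = 0;
          int i;

          /* Bit i of the u32 bitmap maps to bit 2*i of the 64-bit L1E. */
          for (i = 0; i < 32; i++)
                  if (access_map & (1u << i))
                          l1e |= 1ULL << (2 * i);

          return l1e;
  }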

Co-developed-by: He Chen <he.chen@linux.intel.com>
Signed-off-by: He Chen <he.chen@linux.intel.com>
Co-developed-by: Zhang Yi <yi.z.zhang@linux.intel.com>
Signed-off-by: Zhang Yi <yi.z.zhang@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/asm/kvm_host.h |   7 +-
 arch/x86/kvm/mmu.c              | 207 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/mmu.h              |   1 +
 include/linux/kvm_host.h        |   3 +
 4 files changed, 217 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index c79abe7ca093..98e5158287d5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -271,7 +271,8 @@ union kvm_mmu_page_role {
 		unsigned smap_andnot_wp:1;
 		unsigned ad_disabled:1;
 		unsigned guest_mode:1;
-		unsigned :6;
+		unsigned spp:1;
+		unsigned reserved:5;
 
 		/*
 		 * This is left at the top of the word so that
@@ -1407,6 +1408,10 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
 
 int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t gva, u64 error_code,
 		       void *insn, int insn_len);
+
+int kvm_mmu_setup_spp_structure(struct kvm_vcpu *vcpu,
+				u32 access_map, gfn_t gfn);
+
 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva);
 void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid);
 void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3, bool skip_tlb_flush);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d9c7b45d231f..494ad2038f36 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -208,6 +208,11 @@ static const union kvm_mmu_page_role mmu_base_role_mask = {
 		({ spte = mmu_spte_get_lockless(_walker.sptep); 1; });	\
 	     __shadow_walk_next(&(_walker), spte))
 
+#define for_each_shadow_spp_entry(_vcpu, _addr, _walker)    \
+	for (shadow_spp_walk_init(&(_walker), _vcpu, _addr);	\
+	     shadow_walk_okay(&(_walker));			\
+	     shadow_walk_next(&(_walker)))
+
 static struct kmem_cache *pte_list_desc_cache;
 static struct kmem_cache *mmu_page_header_cache;
 static struct percpu_counter kvm_total_used_mmu_pages;
@@ -516,6 +521,11 @@ static int is_shadow_present_pte(u64 pte)
 	return (pte != 0) && !is_mmio_spte(pte);
 }
 
+static int is_spp_shadow_present(u64 pte)
+{
+	return pte & PT_PRESENT_MASK;
+}
+
 static int is_large_pte(u64 pte)
 {
 	return pte & PT_PAGE_SIZE_MASK;
@@ -535,6 +545,11 @@ static bool is_executable_pte(u64 spte)
 	return (spte & (shadow_x_mask | shadow_nx_mask)) == shadow_x_mask;
 }
 
+static bool is_spp_spte(struct kvm_mmu_page *sp)
+{
+	return sp->role.spp;
+}
+
 static kvm_pfn_t spte_to_pfn(u64 pte)
 {
 	return (pte & PT64_BASE_ADDR_MASK) >> PAGE_SHIFT;
@@ -1703,6 +1718,87 @@ int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static bool __rmap_open_subpage_bit(struct kvm *kvm,
+				    struct kvm_rmap_head *rmap_head)
+{
+	struct rmap_iterator iter;
+	bool flush = false;
+	u64 *sptep;
+	u64 spte;
+
+	for_each_rmap_spte(rmap_head, &iter, sptep) {
+		/*
+		 * SPP works only when the page is write-protected
+		 * and SPP bit is set in EPT leaf entry.
+		 */
+		flush |= spte_write_protect(sptep, false);
+		spte = *sptep | PT_SPP_MASK;
+		flush |= mmu_spte_update(sptep, spte);
+	}
+
+	return flush;
+}
+
+static int kvm_mmu_open_subpage_write_protect(struct kvm *kvm,
+					      struct kvm_memory_slot *slot,
+					      gfn_t gfn)
+{
+	struct kvm_rmap_head *rmap_head;
+	bool flush = false;
+
+	/*
+	 * SPP is only supported on 4KB (level-1) memory pages; check
+	 * whether the page is mapped by an EPT leaf entry.
+	 */
+	rmap_head = __gfn_to_rmap(gfn, PT_PAGE_TABLE_LEVEL, slot);
+
+	if (!rmap_head->val)
+		return -EFAULT;
+
+	flush |= __rmap_open_subpage_bit(kvm, rmap_head);
+
+	if (flush)
+		kvm_flush_remote_tlbs(kvm);
+
+	return 0;
+}
+
+static bool __rmap_clear_subpage_bit(struct kvm *kvm,
+				     struct kvm_rmap_head *rmap_head)
+{
+	struct rmap_iterator iter;
+	bool flush = false;
+	u64 *sptep;
+	u64 spte;
+
+	for_each_rmap_spte(rmap_head, &iter, sptep) {
+		spte = (*sptep & ~PT_SPP_MASK) | PT_WRITABLE_MASK;
+		flush |= mmu_spte_update(sptep, spte);
+	}
+
+	return flush;
+}
+
+static int kvm_mmu_clear_subpage_write_protect(struct kvm *kvm,
+					       struct kvm_memory_slot *slot,
+					       gfn_t gfn)
+{
+	struct kvm_rmap_head *rmap_head;
+	bool flush = false;
+
+	rmap_head = __gfn_to_rmap(gfn, PT_PAGE_TABLE_LEVEL, slot);
+
+	if (!rmap_head->val)
+		return -EFAULT;
+
+	flush |= __rmap_clear_subpage_bit(kvm, rmap_head);
+
+	if (flush)
+		kvm_flush_remote_tlbs(kvm);
+
+	return 0;
+}
+
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 				    struct kvm_memory_slot *slot, u64 gfn)
 {
@@ -2410,6 +2506,30 @@ static void clear_sp_write_flooding_count(u64 *spte)
 	__clear_sp_write_flooding_count(sp);
 }
 
+struct kvm_mmu_page *kvm_mmu_get_spp_page(struct kvm_vcpu *vcpu,
+						 gfn_t gfn,
+						 unsigned int level)
+
+{
+	struct kvm_mmu_page *sp;
+	union kvm_mmu_page_role role;
+
+	role = vcpu->arch.mmu->mmu_role.base;
+	role.level = level;
+	role.direct = true;
+	role.spp = true;
+
+	sp = kvm_mmu_alloc_page(vcpu, true);
+	sp->gfn = gfn;
+	sp->role = role;
+	hlist_add_head(&sp->hash_link,
+		       &vcpu->kvm->arch.mmu_page_hash
+		       [kvm_page_table_hashfn(gfn)]);
+	clear_page(sp->spt);
+	return sp;
+}
+EXPORT_SYMBOL_GPL(kvm_mmu_get_spp_page);
+
 static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 					     gfn_t gfn,
 					     gva_t gaddr,
@@ -2536,6 +2656,16 @@ static void shadow_walk_init(struct kvm_shadow_walk_iterator *iterator,
 				    addr);
 }
 
+static void shadow_spp_walk_init(struct kvm_shadow_walk_iterator *iterator,
+				 struct kvm_vcpu *vcpu, u64 addr)
+{
+	iterator->addr = addr;
+	iterator->shadow_addr = vcpu->arch.mmu->sppt_root;
+
+	/* SPP Table is a 4-level paging structure */
+	iterator->level = 4;
+}
+
 static bool shadow_walk_okay(struct kvm_shadow_walk_iterator *iterator)
 {
 	if (iterator->level < PT_PAGE_TABLE_LEVEL)
@@ -2586,6 +2716,18 @@ static void link_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
 		mark_unsync(sptep);
 }
 
+static void link_spp_shadow_page(struct kvm_vcpu *vcpu, u64 *sptep,
+				 struct kvm_mmu_page *sp)
+{
+	u64 spte;
+
+	spte = __pa(sp->spt) | PT_PRESENT_MASK;
+
+	mmu_spte_set(sptep, spte);
+
+	mmu_page_add_parent_pte(vcpu, sp, sptep);
+}
+
 static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 				   unsigned direct_access)
 {
@@ -4157,6 +4299,71 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 	return RET_PF_RETRY;
 }
 
+static u64 format_spp_spte(u32 spp_wp_bitmap)
+{
+	u64 new_spte = 0;
+	int i = 0;
+
+	/*
+	 * One 4K page contains 32 sub-pages. In the SPPT leaf entry (L1E),
+	 * odd bits are reserved, so we need to convert the u32 subpage
+	 * write-protect bitmap into the u64 SPPT L1E format.
+	 */
+	while (i < 32) {
+		if (spp_wp_bitmap & (1ULL << i))
+			new_spte |= 1ULL << (i * 2);
+
+		i++;
+	}
+
+	return new_spte;
+}
+
+static void mmu_spp_spte_set(u64 *sptep, u64 new_spte)
+{
+	__set_spte(sptep, new_spte);
+}
+
+int kvm_mmu_setup_spp_structure(struct kvm_vcpu *vcpu,
+				u32 access_map, gfn_t gfn)
+{
+	struct kvm_shadow_walk_iterator iter;
+	struct kvm_mmu_page *sp;
+	gfn_t pseudo_gfn;
+	u64 old_spte, spp_spte;
+	int ret = -EFAULT;
+
+	/* direct_map spp start */
+	if (!VALID_PAGE(vcpu->arch.mmu->sppt_root))
+		return -EFAULT;
+
+	for_each_shadow_spp_entry(vcpu, (u64)gfn << PAGE_SHIFT, iter) {
+		if (iter.level == PT_PAGE_TABLE_LEVEL) {
+			spp_spte = format_spp_spte(access_map);
+			old_spte = mmu_spte_get_lockless(iter.sptep);
+			if (old_spte != spp_spte) {
+				mmu_spp_spte_set(iter.sptep, spp_spte);
+				kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
+			}
+
+			ret = 0;
+			break;
+		}
+
+		if (!is_spp_shadow_present(*iter.sptep)) {
+			u64 base_addr = iter.addr;
+
+			base_addr &= PT64_LVL_ADDR_MASK(iter.level);
+			pseudo_gfn = base_addr >> PAGE_SHIFT;
+			sp = kvm_mmu_get_spp_page(vcpu, pseudo_gfn,
+						  iter.level - 1);
+			link_spp_shadow_page(vcpu, iter.sptep, sp);
+		}
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_mmu_setup_spp_structure);
 static void nonpaging_init_context(struct kvm_vcpu *vcpu,
 				   struct kvm_mmu *context)
 {
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 54c2a377795b..ba1e85326713 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -26,6 +26,7 @@
 #define PT_PAGE_SIZE_MASK (1ULL << PT_PAGE_SIZE_SHIFT)
 #define PT_PAT_MASK (1ULL << 7)
 #define PT_GLOBAL_MASK (1ULL << 8)
+#define PT_SPP_MASK (1ULL << 61)
 #define PT64_NX_SHIFT 63
 #define PT64_NX_MASK (1ULL << PT64_NX_SHIFT)
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 640a03642766..7f904d4d788c 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -847,6 +847,9 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu);
 bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu);
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
 
+struct kvm_mmu_page *kvm_mmu_get_spp_page(struct kvm_vcpu *vcpu,
+			gfn_t gfn, unsigned int level);
+
 #ifndef __KVM_HAVE_ARCH_VM_ALLOC
 /*
  * All architectures that want to use vzalloc currently also
-- 
2.17.2



* [PATCH RESEND v4 4/9] KVM: VMX: Introduce SPP access bitmap and operation functions
  2019-08-14  7:03 [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Yang Weijiang
                   ` (2 preceding siblings ...)
  2019-08-14  7:03 ` [PATCH RESEND v4 3/9] KVM: VMX: Implement functions for SPPT paging setup Yang Weijiang
@ 2019-08-14  7:03 ` Yang Weijiang
  2019-08-14  7:03 ` [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP Yang Weijiang
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-14  7:03 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar, Yang Weijiang

Create an access bitmap for SPP subpages; since 4KB/128B = 32,
32 bits are required for each 4KB physical page. The bitmap can
be easily accessed with a gfn. The initial access bitmap for each
physical page is 0xFFFFFFFF, meaning SPP is not enabled for the
subpages.

Co-developed-by: He Chen <he.chen@linux.intel.com>
Signed-off-by: He Chen <he.chen@linux.intel.com>
Co-developed-by: Zhang Yi <yi.z.zhang@linux.intel.com>
Signed-off-by: Zhang Yi <yi.z.zhang@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/mmu.c              | 50 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.c              | 11 ++++++++
 3 files changed, 62 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 98e5158287d5..44f6e1757861 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -790,6 +790,7 @@ struct kvm_lpage_info {
 
 struct kvm_arch_memory_slot {
 	struct kvm_rmap_head *rmap[KVM_NR_PAGE_SIZES];
+	u32 *subpage_wp_info;
 	struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
 	unsigned short *gfn_track[KVM_PAGE_TRACK_MAX];
 };
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 494ad2038f36..16e390fd6e58 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1485,6 +1485,56 @@ static u64 *rmap_get_next(struct rmap_iterator *iter)
 	return sptep;
 }
 
+#define FULL_SPP_ACCESS		((u32)((1ULL << 32) - 1))
+
+static int kvm_subpage_create_bitmaps(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int i, j, ret;
+	u32 *buff;
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot(memslot, slots) {
+			buff = kvzalloc(memslot->npages*
+				sizeof(*memslot->arch.subpage_wp_info),
+				GFP_KERNEL);
+
+			if (!buff) {
+			      ret = -ENOMEM;
+			      goto out_free;
+			}
+			memslot->arch.subpage_wp_info = buff;
+
+			for (j = 0; j < memslot->npages; j++)
+			      buff[j] = FULL_SPP_ACCESS;
+		}
+	}
+
+	return 0;
+out_free:
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot(memslot, slots) {
+			if (memslot->arch.subpage_wp_info) {
+				kvfree(memslot->arch.subpage_wp_info);
+				memslot->arch.subpage_wp_info = NULL;
+			}
+		}
+	}
+
+	return ret;
+}
+
+static u32 *gfn_to_subpage_wp_info(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	unsigned long idx;
+
+	idx = gfn_to_index(gfn, slot->base_gfn, PT_PAGE_TABLE_LEVEL);
+	return &slot->arch.subpage_wp_info[idx];
+}
+
 #define for_each_rmap_spte(_rmap_head_, _iter_, _spte_)			\
 	for (_spte_ = rmap_get_first(_rmap_head_, _iter_);		\
 	     _spte_; _spte_ = rmap_get_next(_iter_))
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b5edc8e3ce1d..26e4c574731c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9317,6 +9317,17 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_hv_destroy_vm(kvm);
 }
 
+void kvm_subpage_free_memslot(struct kvm_memory_slot *free,
+			      struct kvm_memory_slot *dont)
+{
+
+	if (!dont || free->arch.subpage_wp_info !=
+	    dont->arch.subpage_wp_info) {
+		kvfree(free->arch.subpage_wp_info);
+		free->arch.subpage_wp_info = NULL;
+	}
+}
+
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 			   struct kvm_memory_slot *dont)
 {
-- 
2.17.2



* [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-14  7:03 [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Yang Weijiang
                   ` (3 preceding siblings ...)
  2019-08-14  7:03 ` [PATCH RESEND v4 4/9] KVM: VMX: Introduce SPP access bitmap and operation functions Yang Weijiang
@ 2019-08-14  7:03 ` Yang Weijiang
  2019-08-14 12:43   ` Vitaly Kuznetsov
                     ` (2 more replies)
  2019-08-14  7:04 ` [PATCH RESEND v4 6/9] KVM: VMX: Introduce SPP user-space IOCTLs Yang Weijiang
                   ` (4 subsequent siblings)
  9 siblings, 3 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-14  7:03 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar, Yang Weijiang

init_spp() must be called before the {get, set}_subpage
functions; it creates subpage access bitmaps for memory pages
and issues a KVM request to set up the SPPT root pages.

kvm_mmu_set_subpages() enables the SPP bit in the EPT leaf entry
and sets up the corresponding SPPT entries. The mmu_lock is held
around these operations. If it's called from the EPT fault or
SPPT misconfig induced handlers, mmu_lock is acquired outside
the function; otherwise, it's acquired inside it.

kvm_mmu_get_subpages() is used to query the access bitmap for a
protected page; it's also used in the EPT fault handler to check
whether the faulting page is SPP protected.
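
To summarize the locking convention above, an illustrative sketch of the
two kinds of call sites (not taken verbatim from the patches):

  /* ioctl path: kvm_mmu_set_subpages() takes mmu_lock itself. */
  kvm_mmu_set_subpages(kvm, &spp_info, false);

  /* EPT fault / SPPT misconfig path: mmu_lock is already held. */
  spin_lock(&kvm->mmu_lock);
  kvm_mmu_set_subpages(kvm, &spp_info, true);
  spin_unlock(&kvm->mmu_lock);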

Co-developed-by: He Chen <he.chen@linux.intel.com>
Signed-off-by: He Chen <he.chen@linux.intel.com>
Co-developed-by: Zhang Yi <yi.z.zhang@linux.intel.com>
Signed-off-by: Zhang Yi <yi.z.zhang@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/asm/kvm_host.h |  18 ++++
 arch/x86/include/asm/vmx.h      |   2 +
 arch/x86/kvm/mmu.c              | 160 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c          |  48 ++++++++++
 arch/x86/kvm/x86.c              |  40 ++++++++
 include/linux/kvm_host.h        |   4 +-
 include/uapi/linux/kvm.h        |   9 ++
 7 files changed, 280 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 44f6e1757861..5c4882015acc 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -398,8 +398,13 @@ struct kvm_mmu {
 	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
 	void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			   u64 *spte, const void *pte);
+	int (*get_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
+	int (*set_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
+	int (*init_spp)(struct kvm *kvm);
+
 	hpa_t root_hpa;
 	gpa_t root_cr3;
+	hpa_t sppt_root;
 	union kvm_mmu_role mmu_role;
 	u8 root_level;
 	u8 shadow_root_level;
@@ -927,6 +932,8 @@ struct kvm_arch {
 
 	bool guest_can_read_msr_platform_info;
 	bool exception_payload_enabled;
+
+	bool spp_active;
 };
 
 struct kvm_vm_stat {
@@ -1199,6 +1206,11 @@ struct kvm_x86_ops {
 	uint16_t (*nested_get_evmcs_version)(struct kvm_vcpu *vcpu);
 
 	bool (*need_emulation_on_page_fault)(struct kvm_vcpu *vcpu);
+
+	bool (*get_spp_status)(void);
+	int (*get_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
+	int (*set_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
+	int (*init_spp)(struct kvm *kvm);
 };
 
 struct kvm_arch_async_pf {
@@ -1417,6 +1429,12 @@ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva);
 void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid);
 void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3, bool skip_tlb_flush);
 
+int kvm_mmu_get_subpages(struct kvm *kvm, struct kvm_subpage *spp_info,
+			 bool mmu_locked);
+int kvm_mmu_set_subpages(struct kvm *kvm, struct kvm_subpage *spp_info,
+			 bool mmu_locked);
+int kvm_mmu_init_spp(struct kvm *kvm);
+
 void kvm_enable_tdp(void);
 void kvm_disable_tdp(void);
 
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index a2c9e18e0ad7..6cb05ac07453 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -224,6 +224,8 @@ enum vmcs_field {
 	XSS_EXIT_BITMAP_HIGH            = 0x0000202D,
 	ENCLS_EXITING_BITMAP		= 0x0000202E,
 	ENCLS_EXITING_BITMAP_HIGH	= 0x0000202F,
+	SPPT_POINTER			= 0x00002030,
+	SPPT_POINTER_HIGH		= 0x00002031,
 	TSC_MULTIPLIER                  = 0x00002032,
 	TSC_MULTIPLIER_HIGH             = 0x00002033,
 	GUEST_PHYSICAL_ADDRESS          = 0x00002400,
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 16e390fd6e58..16d3ca544b67 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3756,6 +3756,9 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 		    (mmu->root_level >= PT64_ROOT_4LEVEL || mmu->direct_map)) {
 			mmu_free_root_page(vcpu->kvm, &mmu->root_hpa,
 					   &invalid_list);
+			if (vcpu->kvm->arch.spp_active)
+				mmu_free_root_page(vcpu->kvm, &mmu->sppt_root,
+						   &invalid_list);
 		} else {
 			for (i = 0; i < 4; ++i)
 				if (mmu->pae_root[i] != 0)
@@ -4414,6 +4417,158 @@ int kvm_mmu_setup_spp_structure(struct kvm_vcpu *vcpu,
 	return ret;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_setup_spp_structure);
+
+int kvm_mmu_init_spp(struct kvm *kvm)
+{
+	int i, ret;
+	struct kvm_vcpu *vcpu;
+	int root_level;
+	struct kvm_mmu_page *ssp_sp;
+
+
+	if (!kvm_x86_ops->get_spp_status())
+	      return -ENODEV;
+
+	if (kvm->arch.spp_active)
+	      return 0;
+
+	ret = kvm_subpage_create_bitmaps(kvm);
+
+	if (ret)
+	      return ret;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		/* prepare caches for SPP setup.*/
+		mmu_topup_memory_caches(vcpu);
+		root_level = vcpu->arch.mmu->shadow_root_level;
+		ssp_sp = kvm_mmu_get_spp_page(vcpu, 0, root_level);
+		++ssp_sp->root_count;
+		vcpu->arch.mmu->sppt_root = __pa(ssp_sp->spt);
+		kvm_make_request(KVM_REQ_LOAD_CR3, vcpu);
+	}
+
+	kvm->arch.spp_active = true;
+	return 0;
+}
+
+int kvm_mmu_get_subpages(struct kvm *kvm, struct kvm_subpage *spp_info,
+			 bool mmu_locked)
+{
+	u32 *access = spp_info->access_map;
+	gfn_t gfn = spp_info->base_gfn;
+	int npages = spp_info->npages;
+	struct kvm_memory_slot *slot;
+	int i;
+	int ret;
+
+	if (!kvm->arch.spp_active)
+	      return -ENODEV;
+
+	if (!mmu_locked)
+	      spin_lock(&kvm->mmu_lock);
+
+	for (i = 0; i < npages; i++, gfn++) {
+		slot = gfn_to_memslot(kvm, gfn);
+		if (!slot) {
+			ret = -EFAULT;
+			goto out_unlock;
+		}
+		access[i] = *gfn_to_subpage_wp_info(slot, gfn);
+	}
+
+	ret = i;
+
+out_unlock:
+	if (!mmu_locked)
+	      spin_unlock(&kvm->mmu_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_mmu_get_subpages);
+
+int kvm_mmu_set_subpages(struct kvm *kvm, struct kvm_subpage *spp_info,
+			 bool mmu_locked)
+{
+	u32 *access = spp_info->access_map;
+	gfn_t gfn = spp_info->base_gfn;
+	int npages = spp_info->npages;
+	struct kvm_memory_slot *slot;
+	struct kvm_vcpu *vcpu;
+	struct kvm_rmap_head *rmap_head;
+	int i, k;
+	u32 *wp_map;
+	int ret = -EFAULT;
+
+	if (!kvm->arch.spp_active)
+		return -ENODEV;
+
+	if (!mmu_locked)
+	      spin_lock(&kvm->mmu_lock);
+
+	for (i = 0; i < npages; i++, gfn++) {
+		slot = gfn_to_memslot(kvm, gfn);
+		if (!slot)
+			goto out_unlock;
+
+		/*
+		 * Check whether the target 4KB page exists in the EPT leaf
+		 * entries. If it's there, we can set up SPP protection now;
+		 * otherwise, defer it to the EPT page fault handler.
+		 */
+		rmap_head = __gfn_to_rmap(gfn, PT_PAGE_TABLE_LEVEL, slot);
+
+		if (rmap_head->val) {
+			/*
+			 * If not all subpages are writable, set the SPP bit
+			 * in the EPT leaf entry to enable SPP protection for
+			 * the corresponding page.
+			 */
+			if (access[i] != FULL_SPP_ACCESS) {
+				ret = kvm_mmu_open_subpage_write_protect(kvm,
+						slot, gfn);
+
+				if (ret)
+					goto out_err;
+
+				kvm_for_each_vcpu(k, vcpu, kvm)
+					kvm_mmu_setup_spp_structure(vcpu,
+						access[i], gfn);
+			} else {
+				ret = kvm_mmu_clear_subpage_write_protect(kvm,
+						slot, gfn);
+				if (ret)
+					goto out_err;
+			}
+
+		} else
+			pr_info("%s - No EPT entry, gfn = 0x%llx, access = 0x%x.\n", __func__, gfn, access[i]);
+
+		/* If this function is called from tdp_page_fault() or
+		 * spp_handler(), mmu_locked is true and the SPP access
+		 * bitmap is being consumed; otherwise, it's being stored.
+		 */
+		if (!mmu_locked) {
+			wp_map = gfn_to_subpage_wp_info(slot, gfn);
+			*wp_map = access[i];
+		}
+	}
+
+	ret = i;
+out_err:
+	if (ret < 0)
+	      pr_info("SPP-Error, didn't get the gfn:"
+		      "%llx from EPT leaf.\n"
+		      "Currently we don't support SPP on"
+		      " huge pages.\n"
+		      "Please disable huge pages and have"
+		      " another try.\n", gfn);
+out_unlock:
+	if (!mmu_locked)
+	      spin_unlock(&kvm->mmu_lock);
+
+	return ret;
+}
+
 static void nonpaging_init_context(struct kvm_vcpu *vcpu,
 				   struct kvm_mmu *context)
 {
@@ -5104,6 +5259,9 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
 	context->get_cr3 = get_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
+	context->get_subpages = kvm_x86_ops->get_subpages;
+	context->set_subpages = kvm_x86_ops->set_subpages;
+	context->init_spp = kvm_x86_ops->init_spp;
 
 	if (!is_paging(vcpu)) {
 		context->nx = false;
@@ -5309,6 +5467,8 @@ void kvm_init_mmu(struct kvm_vcpu *vcpu, bool reset_roots)
 		uint i;
 
 		vcpu->arch.mmu->root_hpa = INVALID_PAGE;
+		if (!vcpu->kvm->arch.spp_active)
+			vcpu->arch.mmu->sppt_root = INVALID_PAGE;
 
 		for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
 			vcpu->arch.mmu->prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a0753a36155d..855d02dd94c7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2847,11 +2847,17 @@ u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa)
 	return eptp;
 }
 
+static inline u64 construct_spptp(unsigned long root_hpa)
+{
+	return root_hpa & PAGE_MASK;
+}
+
 void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 {
 	struct kvm *kvm = vcpu->kvm;
 	unsigned long guest_cr3;
 	u64 eptp;
+	u64 spptp;
 
 	guest_cr3 = cr3;
 	if (enable_ept) {
@@ -2874,6 +2880,12 @@ void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 		ept_load_pdptrs(vcpu);
 	}
 
+	if (kvm->arch.spp_active && VALID_PAGE(vcpu->arch.mmu->sppt_root)) {
+		spptp = construct_spptp(vcpu->arch.mmu->sppt_root);
+		vmcs_write64(SPPT_POINTER, spptp);
+		vmx_flush_tlb(vcpu, true);
+	}
+
 	vmcs_writel(GUEST_CR3, guest_cr3);
 }
 
@@ -5735,6 +5747,9 @@ void dump_vmcs(void)
 		pr_err("PostedIntrVec = 0x%02x\n", vmcs_read16(POSTED_INTR_NV));
 	if ((secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT))
 		pr_err("EPT pointer = 0x%016llx\n", vmcs_read64(EPT_POINTER));
+	if ((secondary_exec_control & SECONDARY_EXEC_ENABLE_SPP))
+		pr_err("SPPT pointer = 0x%016llx\n", vmcs_read64(SPPT_POINTER));
+
 	n = vmcs_read32(CR3_TARGET_COUNT);
 	for (i = 0; i + 1 < n; i += 4)
 		pr_err("CR3 target%u=%016lx target%u=%016lx\n",
@@ -7546,6 +7561,12 @@ static __init int hardware_setup(void)
 		kvm_x86_ops->enable_log_dirty_pt_masked = NULL;
 	}
 
+	if (!spp_supported) {
+		kvm_x86_ops->get_subpages = NULL;
+		kvm_x86_ops->set_subpages = NULL;
+		kvm_x86_ops->init_spp = NULL;
+	}
+
 	if (!cpu_has_vmx_preemption_timer())
 		kvm_x86_ops->request_immediate_exit = __kvm_request_immediate_exit;
 
@@ -7592,6 +7613,28 @@ static __exit void hardware_unsetup(void)
 	free_kvm_area();
 }
 
+static bool vmx_get_spp_status(void)
+{
+	return spp_supported;
+}
+
+static int vmx_get_subpages(struct kvm *kvm,
+			    struct kvm_subpage *spp_info)
+{
+	return kvm_get_subpages(kvm, spp_info);
+}
+
+static int vmx_set_subpages(struct kvm *kvm,
+			    struct kvm_subpage *spp_info)
+{
+	return kvm_set_subpages(kvm, spp_info);
+}
+
+static int vmx_init_spp(struct kvm *kvm)
+{
+	return kvm_init_spp(kvm);
+}
+
 static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.cpu_has_kvm_support = cpu_has_kvm_support,
 	.disabled_by_bios = vmx_disabled_by_bios,
@@ -7740,6 +7783,11 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
 	.get_vmcs12_pages = NULL,
 	.nested_enable_evmcs = NULL,
 	.need_emulation_on_page_fault = vmx_need_emulation_on_page_fault,
+
+	.get_spp_status = vmx_get_spp_status,
+	.get_subpages = vmx_get_subpages,
+	.set_subpages = vmx_set_subpages,
+	.init_spp = vmx_init_spp,
 };
 
 static void vmx_cleanup_l1d_flush(void)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 26e4c574731c..c7cb17941344 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4589,6 +4589,44 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 	return r;
 }
 
+int kvm_get_subpages(struct kvm *kvm,
+		     struct kvm_subpage *spp_info)
+{
+	int ret;
+
+	mutex_lock(&kvm->slots_lock);
+	ret = kvm_mmu_get_subpages(kvm, spp_info, false);
+	mutex_unlock(&kvm->slots_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_get_subpages);
+
+int kvm_set_subpages(struct kvm *kvm,
+		     struct kvm_subpage *spp_info)
+{
+	int ret;
+
+	mutex_lock(&kvm->slots_lock);
+	ret = kvm_mmu_set_subpages(kvm, spp_info, false);
+	mutex_unlock(&kvm->slots_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_set_subpages);
+
+int kvm_init_spp(struct kvm *kvm)
+{
+	int ret;
+
+	mutex_lock(&kvm->slots_lock);
+	ret = kvm_mmu_init_spp(kvm);
+	mutex_unlock(&kvm->slots_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_init_spp);
+
 long kvm_arch_vm_ioctl(struct file *filp,
 		       unsigned int ioctl, unsigned long arg)
 {
@@ -9349,6 +9387,8 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 	}
 
 	kvm_page_track_free_memslot(free, dont);
+	if (kvm->arch.spp_active)
+	      kvm_subpage_free_memslot(free, dont);
 }
 
 int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7f904d4d788c..da30fcbb2727 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -849,7 +849,9 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
 
 struct kvm_mmu_page *kvm_mmu_get_spp_page(struct kvm_vcpu *vcpu,
 			gfn_t gfn, unsigned int level);
-
+int kvm_get_subpages(struct kvm *kvm, struct kvm_subpage *spp_info);
+int kvm_set_subpages(struct kvm *kvm, struct kvm_subpage *spp_info);
+int kvm_init_spp(struct kvm *kvm);
 #ifndef __KVM_HAVE_ARCH_VM_ALLOC
 /*
  * All architectures that want to use vzalloc currently also
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 6d4ea4b6c922..2c75a87ab3b5 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -102,6 +102,15 @@ struct kvm_userspace_memory_region {
 	__u64 userspace_addr; /* start of the userspace allocated memory */
 };
 
+/* for KVM_SUBPAGES_GET_ACCESS and KVM_SUBPAGES_SET_ACCESS */
+#define SUBPAGE_MAX_BITMAP   64
+struct kvm_subpage {
+	__u64 base_gfn;
+	__u64 npages;
+	 /* sub-page write-access bitmap array */
+	__u32 access_map[SUBPAGE_MAX_BITMAP];
+};
+
 /*
  * The bit 0 ~ bit 15 of kvm_memory_region::flags are visible for userspace,
  * other bits are reserved for kvm internal use which are defined in
-- 
2.17.2



* [PATCH RESEND v4 6/9] KVM: VMX: Introduce SPP user-space IOCTLs
  2019-08-14  7:03 [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Yang Weijiang
                   ` (4 preceding siblings ...)
  2019-08-14  7:03 ` [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP Yang Weijiang
@ 2019-08-14  7:04 ` Yang Weijiang
  2019-08-14  7:04 ` [PATCH RESEND v4 7/9] KVM: VMX: Handle SPP induced vmexit and page fault Yang Weijiang
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-14  7:04 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar, Yang Weijiang

A user application, e.g., QEMU or a VMI tool, must initialize SPP
before getting/setting SPP subpages; the dynamic initialization is
meant to avoid the extra storage cost if the SPP feature is not used.
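
For reference, a minimal user-space sketch of how these IOCTLs are
expected to be driven (illustrative only: error handling is omitted, the
VM fd is assumed to exist, and the uapi additions from this series are
assumed to be present in <linux/kvm.h>):

  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Make the first 128-byte sub-page of one guest page read-only. */
  static int spp_protect_first_subpage(int vm_fd, uint64_t gfn)
  {
          struct kvm_subpage spp_info;

          /* One-time setup: allocate access bitmaps and the SPPT root. */
          if (ioctl(vm_fd, KVM_INIT_SPP, 0) < 0)
                  return -1;

          memset(&spp_info, 0, sizeof(spp_info));
          spp_info.base_gfn = gfn;
          spp_info.npages = 1;
          /* Bit 0 cleared: sub-page 0 write-protected, the rest stay writable. */
          spp_info.access_map[0] = 0xFFFFFFFE;

          return ioctl(vm_fd, KVM_SUBPAGES_SET_ACCESS, &spp_info);
  }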

Co-developed-by: He Chen <he.chen@linux.intel.com>
Signed-off-by: He Chen <he.chen@linux.intel.com>
Co-developed-by: Zhang Yi <yi.z.zhang@linux.intel.com>
Signed-off-by: Zhang Yi <yi.z.zhang@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/kvm/x86.c       | 90 ++++++++++++++++++++++++++++++++++++++++
 include/linux/kvm_host.h |  4 ++
 include/uapi/linux/kvm.h |  3 ++
 3 files changed, 97 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c7cb17941344..54a1d2423a17 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4589,6 +4589,23 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 	return r;
 }
 
+static int kvm_vm_ioctl_get_subpages(struct kvm *kvm,
+				     struct kvm_subpage *spp_info)
+{
+	return kvm_arch_get_subpages(kvm, spp_info);
+}
+
+static int kvm_vm_ioctl_set_subpages(struct kvm *kvm,
+				     struct kvm_subpage *spp_info)
+{
+	return kvm_arch_set_subpages(kvm, spp_info);
+}
+
+static int kvm_vm_ioctl_init_spp(struct kvm *kvm)
+{
+	return kvm_arch_init_spp(kvm);
+}
+
 int kvm_get_subpages(struct kvm *kvm,
 		     struct kvm_subpage *spp_info)
 {
@@ -4922,8 +4939,56 @@ long kvm_arch_vm_ioctl(struct file *filp,
 		if (copy_from_user(&hvevfd, argp, sizeof(hvevfd)))
 			goto out;
 		r = kvm_vm_ioctl_hv_eventfd(kvm, &hvevfd);
+		break;
+	}
+	case KVM_SUBPAGES_GET_ACCESS: {
+		struct kvm_subpage spp_info;
+
+		if (!kvm->arch.spp_active) {
+			r = -ENODEV;
+			goto out;
+		}
+
+		r = -EFAULT;
+		if (copy_from_user(&spp_info, argp, sizeof(spp_info)))
+			goto out;
+
+		r = -EINVAL;
+		if (spp_info.npages == 0 ||
+		    spp_info.npages > SUBPAGE_MAX_BITMAP)
+			goto out;
+
+		r = kvm_vm_ioctl_get_subpages(kvm, &spp_info);
+		if (copy_to_user(argp, &spp_info, sizeof(spp_info))) {
+			r = -EFAULT;
+			goto out;
+		}
+		break;
+	}
+	case KVM_SUBPAGES_SET_ACCESS: {
+		struct kvm_subpage spp_info;
+
+		if (!kvm->arch.spp_active) {
+			r = -ENODEV;
+			goto out;
+		}
+
+		r = -EFAULT;
+		if (copy_from_user(&spp_info, argp, sizeof(spp_info)))
+			goto out;
+
+		r = -EINVAL;
+		if (spp_info.npages == 0 ||
+		    spp_info.npages > SUBPAGE_MAX_BITMAP)
+			goto out;
+
+		r = kvm_vm_ioctl_set_subpages(kvm, &spp_info);
 		break;
 	}
+	case KVM_INIT_SPP: {
+		r = kvm_vm_ioctl_init_spp(kvm);
+		break;
+	 }
 	default:
 		r = -ENOTTY;
 	}
@@ -9925,6 +9989,32 @@ int kvm_arch_update_irqfd_routing(struct kvm *kvm, unsigned int host_irq,
 	return kvm_x86_ops->update_pi_irte(kvm, host_irq, guest_irq, set);
 }
 
+int kvm_arch_get_subpages(struct kvm *kvm,
+			  struct kvm_subpage *spp_info)
+{
+	if (!kvm_x86_ops->get_subpages)
+		return -EINVAL;
+
+	return kvm_x86_ops->get_subpages(kvm, spp_info);
+}
+
+int kvm_arch_set_subpages(struct kvm *kvm,
+			  struct kvm_subpage *spp_info)
+{
+	if (!kvm_x86_ops->set_subpages)
+		return -EINVAL;
+
+	return kvm_x86_ops->set_subpages(kvm, spp_info);
+}
+
+int kvm_arch_init_spp(struct kvm *kvm)
+{
+	if (!kvm_x86_ops->init_spp)
+		return -EINVAL;
+
+	return kvm_x86_ops->init_spp(kvm);
+}
+
 bool kvm_vector_hashing_enabled(void)
 {
 	return vector_hashing;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index da30fcbb2727..b5ae112f209f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -852,6 +852,10 @@ struct kvm_mmu_page *kvm_mmu_get_spp_page(struct kvm_vcpu *vcpu,
 int kvm_get_subpages(struct kvm *kvm, struct kvm_subpage *spp_info);
 int kvm_set_subpages(struct kvm *kvm, struct kvm_subpage *spp_info);
 int kvm_init_spp(struct kvm *kvm);
+int kvm_arch_get_subpages(struct kvm *kvm, struct kvm_subpage *spp_info);
+int kvm_arch_set_subpages(struct kvm *kvm, struct kvm_subpage *spp_info);
+int kvm_arch_init_spp(struct kvm *kvm);
+
 #ifndef __KVM_HAVE_ARCH_VM_ALLOC
 /*
  * All architectures that want to use vzalloc currently also
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 2c75a87ab3b5..5754f8d21e7d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1246,6 +1246,9 @@ struct kvm_vfio_spapr_tce {
 					struct kvm_userspace_memory_region)
 #define KVM_SET_TSS_ADDR          _IO(KVMIO,   0x47)
 #define KVM_SET_IDENTITY_MAP_ADDR _IOW(KVMIO,  0x48, __u64)
+#define KVM_SUBPAGES_GET_ACCESS   _IOR(KVMIO,  0x49, __u64)
+#define KVM_SUBPAGES_SET_ACCESS   _IOW(KVMIO,  0x4a, __u64)
+#define KVM_INIT_SPP              _IOW(KVMIO,  0x4b, __u64)
 
 /* enable ucontrol for s390 */
 struct kvm_s390_ucas_mapping {
-- 
2.17.2



* [PATCH RESEND v4 7/9] KVM: VMX: Handle SPP induced vmexit and page fault
  2019-08-14  7:03 [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Yang Weijiang
                   ` (5 preceding siblings ...)
  2019-08-14  7:04 ` [PATCH RESEND v4 6/9] KVM: VMX: Introduce SPP user-space IOCTLs Yang Weijiang
@ 2019-08-14  7:04 ` Yang Weijiang
  2019-08-19 14:43   ` Paolo Bonzini
  2019-08-14  7:04 ` [PATCH RESEND v4 8/9] KVM: MMU: Enable Lazy mode SPPT setup Yang Weijiang
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 40+ messages in thread
From: Yang Weijiang @ 2019-08-14  7:04 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar, Yang Weijiang

If a write to a subpage is not allowed, an EPT violation is
generated and propagated to QEMU or the VMI tool to handle.

If the target page is SPP protected but an SPPT miss is
encountered while traversing the SPPT with the gfn, a vmexit is
generated so that KVM can handle the issue. Any SPPT misconfig
will be propagated to QEMU or the VMI tool.

An SPP specific bit (11) is added to exit_qualification and a new
exit reason (66) is introduced for SPP.
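
For clarity, a small sketch of how the new exit qualification bit is
interpreted (per the documentation patch: bit 11 set means SPPT miss,
cleared means SPPT misconfig; the helper name is made up):

  #include <stdbool.h>

  #define SPPT_INDUCED_EXIT_TYPE_BIT     11
  #define SPPT_INDUCED_EXIT_TYPE         (1 << SPPT_INDUCED_EXIT_TYPE_BIT)

  /* Returns true for an SPPT miss, false for an SPPT misconfiguration. */
  static bool spp_exit_is_sppt_miss(unsigned long exit_qualification)
  {
          return exit_qualification & SPPT_INDUCED_EXIT_TYPE;
  }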

Co-developed-by: He Chen <he.chen@linux.intel.com>
Signed-off-by: He Chen <he.chen@linux.intel.com>
Co-developed-by: Zhang Yi <yi.z.zhang@linux.intel.com>
Signed-off-by: Zhang Yi <yi.z.zhang@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/asm/vmx.h      |  7 ++++
 arch/x86/include/uapi/asm/vmx.h |  2 +
 arch/x86/kvm/mmu.c              | 17 ++++++++
 arch/x86/kvm/vmx/vmx.c          | 71 +++++++++++++++++++++++++++++++++
 include/uapi/linux/kvm.h        |  5 +++
 5 files changed, 102 insertions(+)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 6cb05ac07453..11ca64ced578 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -547,6 +547,13 @@ struct vmx_msr_entry {
 #define EPT_VIOLATION_EXECUTABLE	(1 << EPT_VIOLATION_EXECUTABLE_BIT)
 #define EPT_VIOLATION_GVA_TRANSLATED	(1 << EPT_VIOLATION_GVA_TRANSLATED_BIT)
 
+/*
+ * Exit Qualifications for SPPT-Induced vmexits
+ */
+#define SPPT_INDUCED_EXIT_TYPE_BIT     11
+#define SPPT_INDUCED_EXIT_TYPE         (1 << SPPT_INDUCED_EXIT_TYPE_BIT)
+#define SPPT_INTR_INFO_UNBLOCK_NMI     INTR_INFO_UNBLOCK_NMI
+
 /*
  * VM-instruction error numbers
  */
diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
index d213ec5c3766..fd89ebf321c9 100644
--- a/arch/x86/include/uapi/asm/vmx.h
+++ b/arch/x86/include/uapi/asm/vmx.h
@@ -85,6 +85,7 @@
 #define EXIT_REASON_PML_FULL            62
 #define EXIT_REASON_XSAVES              63
 #define EXIT_REASON_XRSTORS             64
+#define EXIT_REASON_SPP                 66
 
 #define VMX_EXIT_REASONS \
 	{ EXIT_REASON_EXCEPTION_NMI,         "EXCEPTION_NMI" }, \
@@ -141,6 +142,7 @@
 	{ EXIT_REASON_ENCLS,                 "ENCLS" }, \
 	{ EXIT_REASON_RDSEED,                "RDSEED" }, \
 	{ EXIT_REASON_PML_FULL,              "PML_FULL" }, \
+	{ EXIT_REASON_SPP,                   "SPP" }, \
 	{ EXIT_REASON_XSAVES,                "XSAVES" }, \
 	{ EXIT_REASON_XRSTORS,               "XRSTORS" }
 
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 16d3ca544b67..419878301375 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3602,6 +3602,19 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 		if ((error_code & PFERR_WRITE_MASK) &&
 		    spte_can_locklessly_be_made_writable(spte))
 		{
+			/*
+			 * Record write protect fault caused by
+			 * Sub-page Protection, let VMI decide
+			 * the next step.
+			 */
+			if (spte & PT_SPP_MASK) {
+				fault_handled = true;
+				vcpu->run->exit_reason = KVM_EXIT_SPP;
+				vcpu->run->spp.addr = gva;
+				kvm_skip_emulated_instruction(vcpu);
+				break;
+			}
+
 			new_spte |= PT_WRITABLE_MASK;
 
 			/*
@@ -5786,6 +5799,10 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
 		r = vcpu->arch.mmu->page_fault(vcpu, cr2,
 					       lower_32_bits(error_code),
 					       false);
+
+		if (vcpu->run->exit_reason == KVM_EXIT_SPP)
+			return 0;
+
 		WARN_ON(r == RET_PF_INVALID);
 	}
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 855d02dd94c7..2506ca958277 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5327,6 +5327,76 @@ static int handle_monitor(struct kvm_vcpu *vcpu)
 	return handle_nop(vcpu);
 }
 
+static int handle_spp(struct kvm_vcpu *vcpu)
+{
+	unsigned long exit_qualification;
+	struct kvm_memory_slot *slot;
+	gpa_t gpa;
+	gfn_t gfn;
+
+	exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
+
+	/*
+	 * SPP VM exit happened while executing iret from NMI,
+	 * "blocked by NMI" bit has to be set before next VM entry.
+	 * There are errata that may cause this bit to not be set:
+	 * AAK134, BY25.
+	 */
+	if (!(to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK) &&
+	    (exit_qualification & SPPT_INTR_INFO_UNBLOCK_NMI))
+		vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
+			      GUEST_INTR_STATE_NMI);
+
+	vcpu->arch.exit_qualification = exit_qualification;
+	if (exit_qualification & SPPT_INDUCED_EXIT_TYPE) {
+		struct kvm_subpage spp_info = {0};
+		int ret;
+
+		/*
+		 * SPPT miss:
+		 * SPP write access hasn't been set up for the
+		 * corresponding GPA yet, so construct the SPP
+		 * table entries here.
+		 */
+		pr_info("SPP - SPPT entry missing!\n");
+		gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
+		gfn = gpa >> PAGE_SHIFT;
+		slot = gfn_to_memslot(vcpu->kvm, gfn);
+		if (!slot)
+		      return -EFAULT;
+
+		/*
+		 * If the target gfn is not protected but the SPPT is
+		 * being traversed now, regard this as a fault.
+		 */
+		spp_info.base_gfn = gfn;
+		spp_info.npages = 1;
+
+		spin_lock(&(vcpu->kvm->mmu_lock));
+		ret = kvm_mmu_get_subpages(vcpu->kvm, &spp_info, true);
+		if (ret == 1) {
+			kvm_mmu_setup_spp_structure(vcpu,
+				spp_info.access_map[0], gfn);
+		}
+		spin_unlock(&(vcpu->kvm->mmu_lock));
+
+		return 1;
+
+	}
+
+	/*
+	 * SPPT misconfig
+	 * This is probably caused by some misconfiguration of the
+	 * SPPT entries; it cannot be handled here, so escalate the
+	 * fault to userspace.
+	 */
+	WARN_ON(1);
+	vcpu->run->exit_reason = KVM_EXIT_UNKNOWN;
+	vcpu->run->hw.hardware_exit_reason = EXIT_REASON_SPP;
+	pr_alert("SPP - SPPT Misconfiguration!\n");
+	return 0;
+}
+
 static int handle_invpcid(struct kvm_vcpu *vcpu)
 {
 	u32 vmx_instruction_info;
@@ -5530,6 +5600,7 @@ static int (*kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = {
 	[EXIT_REASON_INVVPID]                 = handle_vmx_instruction,
 	[EXIT_REASON_RDRAND]                  = handle_invalid_op,
 	[EXIT_REASON_RDSEED]                  = handle_invalid_op,
+	[EXIT_REASON_SPP]                     = handle_spp,
 	[EXIT_REASON_XSAVES]                  = handle_xsaves,
 	[EXIT_REASON_XRSTORS]                 = handle_xrstors,
 	[EXIT_REASON_PML_FULL]		      = handle_pml_full,
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 5754f8d21e7d..caff0ed1eeaf 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -244,6 +244,7 @@ struct kvm_hyperv_exit {
 #define KVM_EXIT_S390_STSI        25
 #define KVM_EXIT_IOAPIC_EOI       26
 #define KVM_EXIT_HYPERV           27
+#define KVM_EXIT_SPP              28
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed. */
@@ -399,6 +400,10 @@ struct kvm_run {
 		struct {
 			__u8 vector;
 		} eoi;
+		/* KVM_EXIT_SPP */
+		struct {
+			__u64 addr;
+		} spp;
 		/* KVM_EXIT_HYPERV */
 		struct kvm_hyperv_exit hyperv;
 		/* Fix the size of the union. */
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH RESEND v4 8/9] KVM: MMU: Enable Lazy mode SPPT setup
  2019-08-14  7:03 [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Yang Weijiang
                   ` (6 preceding siblings ...)
  2019-08-14  7:04 ` [PATCH RESEND v4 7/9] KVM: VMX: Handle SPP induced vmexit and page fault Yang Weijiang
@ 2019-08-14  7:04 ` Yang Weijiang
  2019-08-19 14:46   ` Paolo Bonzini
  2019-08-14  7:04 ` [PATCH RESEND v4 9/9] KVM: MMU: Handle host memory remapping and reclaim Yang Weijiang
  2019-08-14 12:36 ` [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Paolo Bonzini
  9 siblings, 1 reply; 40+ messages in thread
From: Yang Weijiang @ 2019-08-14  7:04 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar, Yang Weijiang

If SPP sub-page permissions are set while the physical page is not
yet present in an EPT leaf entry, the mapping is first stored in the
SPP access bitmap buffer. SPPT setup is deferred until the protected
page is accessed; the SPPT entries are then set up in the EPT page
fault handler.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/kvm/mmu.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 419878301375..f017fe6cd67b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4304,6 +4304,26 @@ check_hugepage_cache_consistency(struct kvm_vcpu *vcpu, gfn_t gfn, int level)
 	return kvm_mtrr_check_gfn_range_consistency(vcpu, gfn, page_num);
 }
 
+static int kvm_enable_spp_protection(struct kvm *kvm, u64 gfn)
+{
+	struct kvm_subpage spp_info = {0};
+	struct kvm_memory_slot *slot;
+
+	slot = gfn_to_memslot(kvm, gfn);
+	if (!slot)
+		return -EFAULT;
+
+	spp_info.base_gfn = gfn;
+	spp_info.npages = 1;
+
+	if (kvm_mmu_get_subpages(kvm, &spp_info, true) < 0)
+		return -EFAULT;
+
+	if (spp_info.access_map[0] != FULL_SPP_ACCESS)
+		kvm_mmu_set_subpages(kvm, &spp_info, true);
+
+	return 0;
+}
 static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 			  bool prefault)
 {
@@ -4355,6 +4375,10 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 	if (likely(!force_pt_level))
 		transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);
 	r = __direct_map(vcpu, write, map_writable, level, gfn, pfn, prefault);
+
+	if (vcpu->kvm->arch.spp_active && level == PT_PAGE_TABLE_LEVEL)
+		kvm_enable_spp_protection(vcpu->kvm, gfn);
+
 	spin_unlock(&vcpu->kvm->mmu_lock);
 
 	return r;
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [PATCH RESEND v4 9/9] KVM: MMU: Handle host memory remapping and reclaim
  2019-08-14  7:03 [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Yang Weijiang
                   ` (7 preceding siblings ...)
  2019-08-14  7:04 ` [PATCH RESEND v4 8/9] KVM: MMU: Enable Lazy mode SPPT setup Yang Weijiang
@ 2019-08-14  7:04 ` Yang Weijiang
  2019-08-14 12:36 ` [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Paolo Bonzini
  9 siblings, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-14  7:04 UTC (permalink / raw)
  To: kvm, linux-kernel, pbonzini, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar, Yang Weijiang

Host page swapping/migration may change the translation in an EPT
leaf entry. If the target page is SPP protected, re-enable SPP
protection for it from the MMU notifier. When an SPPT shadow page is
reclaimed, the level-1 pages don't have rmaps to clear.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/kvm/mmu.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f017fe6cd67b..6aab8902c808 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1926,6 +1926,24 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			new_spte &= ~PT_WRITABLE_MASK;
 			new_spte &= ~SPTE_HOST_WRITEABLE;
 
+			/*
+			 * If it's an EPT leaf entry and the physical page
+			 * is SPP protected, re-enable SPP protection for
+			 * the page.
+			 */
+			if (kvm->arch.spp_active &&
+			    level == PT_PAGE_TABLE_LEVEL) {
+				struct kvm_subpage spp_info = {0};
+				int i;
+
+				spp_info.base_gfn = gfn;
+				spp_info.npages = 1;
+				i = kvm_mmu_get_subpages(kvm, &spp_info, true);
+				if (i == 1 &&
+				    spp_info.access_map[0] != FULL_SPP_ACCESS)
+					new_spte |= PT_SPP_MASK;
+			}
+
 			new_spte = mark_spte_for_access_track(new_spte);
 
 			mmu_spte_clear_track_bits(sptep);
@@ -2809,6 +2827,10 @@ static bool mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 	pte = *spte;
 	if (is_shadow_present_pte(pte)) {
 		if (is_last_spte(pte, sp->role.level)) {
+			/* SPPT leaf entries don't have rmaps */
+			if (sp->role.level == PT_PAGE_TABLE_LEVEL &&
+			    is_spp_spte(sp))
+				return true;
 			drop_spte(kvm, spte);
 			if (is_large_pte(pte))
 				--kvm->stat.lpages;
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support
  2019-08-14  7:03 [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Yang Weijiang
                   ` (8 preceding siblings ...)
  2019-08-14  7:04 ` [PATCH RESEND v4 9/9] KVM: MMU: Handle host memory remapping and reclaim Yang Weijiang
@ 2019-08-14 12:36 ` Paolo Bonzini
  2019-08-14 14:02   ` Yang Weijiang
  9 siblings, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2019-08-14 12:36 UTC (permalink / raw)
  To: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar

On 14/08/19 09:03, Yang Weijiang wrote:
> EPT-Based Sub-Page write Protection(SPP)is a HW capability which allows
> Virtual Machine Monitor(VMM) to specify write-permission for guest
> physical memory at a sub-page(128 byte) granularity. When this
> capability is enabled, the CPU enforces write-access check for sub-pages
> within a 4KB page.
> 
> The feature is targeted to provide fine-grained memory protection for
> usages such as device virtualization, memory check-point and VM
> introspection etc.
> 
> SPP is active when the "sub-page write protection" (bit 23) is 1 in
> Secondary VM-Execution Controls. The feature is backed with a Sub-Page
> Permission Table(SPPT), SPPT is referenced via a 64-bit control field
> called Sub-Page Permission Table Pointer (SPPTP) which contains a
> 4K-aligned physical address.
> 
> Right now, only 4KB physical pages are supported for SPP. To enable SPP
> for certain physical page, we need to first make the physical page
> write-protected, then set bit 61 of the corresponding EPT leaf entry. 
> While HW walks EPT, if bit 61 is set, it traverses SPPT with the guset
> physical address to find out the sub-page permissions at the leaf entry.
> If the corresponding bit is set, write to sub-page is permitted,
> otherwise, SPP induced EPT violation is generated.

Still no testcases?

Paolo

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-14  7:03 ` [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP Yang Weijiang
@ 2019-08-14 12:43   ` Vitaly Kuznetsov
  2019-08-14 14:34     ` Yang Weijiang
  2019-08-15 13:43     ` Yang Weijiang
  2019-08-19 15:05   ` Paolo Bonzini
  2019-08-19 15:13   ` Paolo Bonzini
  2 siblings, 2 replies; 40+ messages in thread
From: Vitaly Kuznetsov @ 2019-08-14 12:43 UTC (permalink / raw)
  To: Yang Weijiang, kvm, linux-kernel
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar, Yang Weijiang,
	pbonzini, sean.j.christopherson

Yang Weijiang <weijiang.yang@intel.com> writes:

> init_spp() must be called before {get, set}_subpage
> functions, it creates subpage access bitmaps for memory pages
> and issues a KVM request to setup SPPT root pages.
>
> kvm_mmu_set_subpages() is to enable SPP bit in EPT leaf page
> and setup corresponding SPPT entries. The mmu_lock
> is held before above operation. If it's called in EPT fault and
> SPPT mis-config induced handler, mmu_lock is acquired outside
> the function, otherwise, it's acquired inside it.
>
> kvm_mmu_get_subpages() is used to query access bitmap for
> protected page, it's also used in EPT fault handler to check
> whether the fault EPT page is SPP protected as well.
>
> Co-developed-by: He Chen <he.chen@linux.intel.com>
> Signed-off-by: He Chen <he.chen@linux.intel.com>
> Co-developed-by: Zhang Yi <yi.z.zhang@linux.intel.com>
> Signed-off-by: Zhang Yi <yi.z.zhang@linux.intel.com>
> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  18 ++++
>  arch/x86/include/asm/vmx.h      |   2 +
>  arch/x86/kvm/mmu.c              | 160 ++++++++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/vmx.c          |  48 ++++++++++
>  arch/x86/kvm/x86.c              |  40 ++++++++
>  include/linux/kvm_host.h        |   4 +-
>  include/uapi/linux/kvm.h        |   9 ++
>  7 files changed, 280 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 44f6e1757861..5c4882015acc 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -398,8 +398,13 @@ struct kvm_mmu {
>  	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
>  	void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
>  			   u64 *spte, const void *pte);
> +	int (*get_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
> +	int (*set_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
> +	int (*init_spp)(struct kvm *kvm);
> +
>  	hpa_t root_hpa;
>  	gpa_t root_cr3;
> +	hpa_t sppt_root;

(I'm sorry if this was previously discussed, I didn't look into previous
submissions).

What happens when we launch a nested guest and switch vcpu->arch.mmu to
point at arch.guest_mmu? sppt_root will point to INVALID_PAGE and SPP
won't be enabled in VMCS?

(I'm sorry again, I'm likely missing something obvious)

-- 
Vitaly

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support
  2019-08-14 12:36 ` [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Paolo Bonzini
@ 2019-08-14 14:02   ` Yang Weijiang
  2019-08-14 14:06     ` Paolo Bonzini
  0 siblings, 1 reply; 40+ messages in thread
From: Yang Weijiang @ 2019-08-14 14:02 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson, mst,
	rkrcmar, jmattson, yu.c.zhang, alazar

On Wed, Aug 14, 2019 at 02:36:30PM +0200, Paolo Bonzini wrote:
> On 14/08/19 09:03, Yang Weijiang wrote:
> > EPT-Based Sub-Page write Protection(SPP)is a HW capability which allows
> > Virtual Machine Monitor(VMM) to specify write-permission for guest
> > physical memory at a sub-page(128 byte) granularity. When this
> > capability is enabled, the CPU enforces write-access check for sub-pages
> > within a 4KB page.
> > 
> > The feature is targeted to provide fine-grained memory protection for
> > usages such as device virtualization, memory check-point and VM
> > introspection etc.
> > 
> > SPP is active when the "sub-page write protection" (bit 23) is 1 in
> > Secondary VM-Execution Controls. The feature is backed with a Sub-Page
> > Permission Table(SPPT), SPPT is referenced via a 64-bit control field
> > called Sub-Page Permission Table Pointer (SPPTP) which contains a
> > 4K-aligned physical address.
> > 
> > Right now, only 4KB physical pages are supported for SPP. To enable SPP
> > for certain physical page, we need to first make the physical page
> > write-protected, then set bit 61 of the corresponding EPT leaf entry. 
> > While HW walks EPT, if bit 61 is set, it traverses SPPT with the guset
> > physical address to find out the sub-page permissions at the leaf entry.
> > If the corresponding bit is set, write to sub-page is permitted,
> > otherwise, SPP induced EPT violation is generated.
> 
> Still no testcases?
> 
> Paolo

Hi, Paolo,
The testcases are included in selftest: https://lkml.org/lkml/2019/6/18/1197


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support
  2019-08-14 14:02   ` Yang Weijiang
@ 2019-08-14 14:06     ` Paolo Bonzini
  0 siblings, 0 replies; 40+ messages in thread
From: Paolo Bonzini @ 2019-08-14 14:06 UTC (permalink / raw)
  To: Yang Weijiang
  Cc: kvm, linux-kernel, sean.j.christopherson, mst, rkrcmar, jmattson,
	yu.c.zhang, alazar

On 14/08/19 16:02, Yang Weijiang wrote:
> On Wed, Aug 14, 2019 at 02:36:30PM +0200, Paolo Bonzini wrote:
>> On 14/08/19 09:03, Yang Weijiang wrote:
>>> EPT-Based Sub-Page write Protection(SPP)is a HW capability which allows
>>> Virtual Machine Monitor(VMM) to specify write-permission for guest
>>> physical memory at a sub-page(128 byte) granularity. When this
>>> capability is enabled, the CPU enforces write-access check for sub-pages
>>> within a 4KB page.
>>>
>>> The feature is targeted to provide fine-grained memory protection for
>>> usages such as device virtualization, memory check-point and VM
>>> introspection etc.
>>>
>>> SPP is active when the "sub-page write protection" (bit 23) is 1 in
>>> Secondary VM-Execution Controls. The feature is backed with a Sub-Page
>>> Permission Table(SPPT), SPPT is referenced via a 64-bit control field
>>> called Sub-Page Permission Table Pointer (SPPTP) which contains a
>>> 4K-aligned physical address.
>>>
>>> Right now, only 4KB physical pages are supported for SPP. To enable SPP
>>> for certain physical page, we need to first make the physical page
>>> write-protected, then set bit 61 of the corresponding EPT leaf entry. 
>>> While HW walks EPT, if bit 61 is set, it traverses SPPT with the guset
>>> physical address to find out the sub-page permissions at the leaf entry.
>>> If the corresponding bit is set, write to sub-page is permitted,
>>> otherwise, SPP induced EPT violation is generated.
>>
>> Still no testcases?
>>
>> Paolo
> 
> Hi, Paolo,
> The testcases are included in selftest: https://lkml.org/lkml/2019/6/18/1197
> 

Good, thanks!

Paolo

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-14 12:43   ` Vitaly Kuznetsov
@ 2019-08-14 14:34     ` Yang Weijiang
  2019-08-15 13:43     ` Yang Weijiang
  1 sibling, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-14 14:34 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Yang Weijiang, kvm, linux-kernel, mst, rkrcmar, jmattson,
	yu.c.zhang, alazar, pbonzini, sean.j.christopherson

On Wed, Aug 14, 2019 at 02:43:39PM +0200, Vitaly Kuznetsov wrote:
> Yang Weijiang <weijiang.yang@intel.com> writes:
> 
> > init_spp() must be called before {get, set}_subpage
> > functions, it creates subpage access bitmaps for memory pages
> > and issues a KVM request to setup SPPT root pages.
> >
> > kvm_mmu_set_subpages() is to enable SPP bit in EPT leaf page
> > and setup corresponding SPPT entries. The mmu_lock
> > is held before above operation. If it's called in EPT fault and
> > SPPT mis-config induced handler, mmu_lock is acquired outside
> > the function, otherwise, it's acquired inside it.
> >
> > kvm_mmu_get_subpages() is used to query access bitmap for
> > protected page, it's also used in EPT fault handler to check
> > whether the fault EPT page is SPP protected as well.
> >
> > Co-developed-by: He Chen <he.chen@linux.intel.com>
> > Signed-off-by: He Chen <he.chen@linux.intel.com>
> > Co-developed-by: Zhang Yi <yi.z.zhang@linux.intel.com>
> > Signed-off-by: Zhang Yi <yi.z.zhang@linux.intel.com>
> > Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |  18 ++++
> >  arch/x86/include/asm/vmx.h      |   2 +
> >  arch/x86/kvm/mmu.c              | 160 ++++++++++++++++++++++++++++++++
> >  arch/x86/kvm/vmx/vmx.c          |  48 ++++++++++
> >  arch/x86/kvm/x86.c              |  40 ++++++++
> >  include/linux/kvm_host.h        |   4 +-
> >  include/uapi/linux/kvm.h        |   9 ++
> >  7 files changed, 280 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 44f6e1757861..5c4882015acc 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -398,8 +398,13 @@ struct kvm_mmu {
> >  	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
> >  	void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> >  			   u64 *spte, const void *pte);
> > +	int (*get_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
> > +	int (*set_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
> > +	int (*init_spp)(struct kvm *kvm);
> > +
> >  	hpa_t root_hpa;
> >  	gpa_t root_cr3;
> > +	hpa_t sppt_root;
> 
> (I'm sorry if this was previously discussed, I didn't look into previous
> submissions).
> 
> What happens when we launch a nested guest and switch vcpu->arch.mmu to
> point at arch.guest_mmu? sppt_root will point to INVALID_PAGE and SPP
> won't be enabled in VMCS?
> 
> (I'm sorry again, I'm likely missing something obvious)
> 
> -- 
> Vitaly

Hi, Vitaly,
Thanks for raising a good question, I must have missed the nested case;
I'll double check how to support it.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-14 12:43   ` Vitaly Kuznetsov
  2019-08-14 14:34     ` Yang Weijiang
@ 2019-08-15 13:43     ` Yang Weijiang
  2019-08-15 14:03       ` Vitaly Kuznetsov
  2019-08-15 16:25       ` Jim Mattson
  1 sibling, 2 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-15 13:43 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Yang Weijiang, kvm, linux-kernel, mst, rkrcmar, jmattson,
	yu.c.zhang, alazar, pbonzini, sean.j.christopherson

On Wed, Aug 14, 2019 at 02:43:39PM +0200, Vitaly Kuznetsov wrote:
> Yang Weijiang <weijiang.yang@intel.com> writes:
> 
> > init_spp() must be called before {get, set}_subpage
> > functions, it creates subpage access bitmaps for memory pages
> > and issues a KVM request to setup SPPT root pages.
> >
> > kvm_mmu_set_subpages() is to enable SPP bit in EPT leaf page
> > and setup corresponding SPPT entries. The mmu_lock
> > is held before above operation. If it's called in EPT fault and
> > SPPT mis-config induced handler, mmu_lock is acquired outside
> > the function, otherwise, it's acquired inside it.
> >
> > kvm_mmu_get_subpages() is used to query access bitmap for
> > protected page, it's also used in EPT fault handler to check
> > whether the fault EPT page is SPP protected as well.
> >
> > Co-developed-by: He Chen <he.chen@linux.intel.com>
> > Signed-off-by: He Chen <he.chen@linux.intel.com>
> > Co-developed-by: Zhang Yi <yi.z.zhang@linux.intel.com>
> > Signed-off-by: Zhang Yi <yi.z.zhang@linux.intel.com>
> > Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> > ---
> >  arch/x86/include/asm/kvm_host.h |  18 ++++
> >  arch/x86/include/asm/vmx.h      |   2 +
> >  arch/x86/kvm/mmu.c              | 160 ++++++++++++++++++++++++++++++++
> >  arch/x86/kvm/vmx/vmx.c          |  48 ++++++++++
> >  arch/x86/kvm/x86.c              |  40 ++++++++
> >  include/linux/kvm_host.h        |   4 +-
> >  include/uapi/linux/kvm.h        |   9 ++
> >  7 files changed, 280 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 44f6e1757861..5c4882015acc 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -398,8 +398,13 @@ struct kvm_mmu {
> >  	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
> >  	void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> >  			   u64 *spte, const void *pte);
> > +	int (*get_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
> > +	int (*set_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
> > +	int (*init_spp)(struct kvm *kvm);
> > +
> >  	hpa_t root_hpa;
> >  	gpa_t root_cr3;
> > +	hpa_t sppt_root;
> 
> (I'm sorry if this was previously discussed, I didn't look into previous
> submissions).
> 
> What happens when we launch a nested guest and switch vcpu->arch.mmu to
> point at arch.guest_mmu? sppt_root will point to INVALID_PAGE and SPP
> won't be enabled in VMCS?
> 
> (I'm sorry again, I'm likely missing something obvious)
> 
> -- 
> Vitaly
Hi, Vitaly,
After looking into this issue and others, I feel that making SPP
co-exist with nested VM is not good. The major reason is that L1 pages
protected by SPP are transparent to the L1 VM; if it launches an L2 VM,
those pages could well be allocated to the L2 VM, and that will cause
trouble for both L1 and L2. Given the feature is new and I don't see
nested VMs benefiting from it right now, I would like to make SPP and
the nested feature mutually exclusive, i.e., detect whether one is
active before activating the other. What do you think of it?
thanks! 

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-15 13:43     ` Yang Weijiang
@ 2019-08-15 14:03       ` Vitaly Kuznetsov
  2019-08-19 14:06         ` Yang Weijiang
  2019-08-15 16:25       ` Jim Mattson
  1 sibling, 1 reply; 40+ messages in thread
From: Vitaly Kuznetsov @ 2019-08-15 14:03 UTC (permalink / raw)
  To: Yang Weijiang
  Cc: kvm, linux-kernel, mst, rkrcmar, jmattson, yu.c.zhang, alazar,
	pbonzini, sean.j.christopherson

Yang Weijiang <weijiang.yang@intel.com> writes:

> After looked into the issue and others, I feel to make SPP co-existing
> with nested VM is not good, the major reason is, L1 pages protected by
> SPP are transparent to L1 VM, if it launches L2 VM, probably the
> pages would be allocated to L2 VM, and that will bother to L1 and L2.
> Given the feature is new and I don't see nested VM can benefit
> from it right now, I would like to make SPP and nested feature mutually
> exclusive, i.e., detecting if the other part is active before activate one
> feature,what do you think of it? 

I was mostly worried about creating a loophole (if I understand
correctly) for guests to defeat SPP protection: just launching a nested
guest and giving it a protected page. I don't see a problem if we limit
SPP to non-nested guests as step 1: we, however, need to document this
side-effect of the ioctl. Also, if you decide to do this enforcement,
I'd suggest you forbid VMLAUNCH/VMRESUME and not VMXON as the kvm module
loads in linux guests automatically when the hardware is suitable.

Thanks,

-- 
Vitaly

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-15 13:43     ` Yang Weijiang
  2019-08-15 14:03       ` Vitaly Kuznetsov
@ 2019-08-15 16:25       ` Jim Mattson
  2019-08-15 16:38         ` Sean Christopherson
  1 sibling, 1 reply; 40+ messages in thread
From: Jim Mattson @ 2019-08-15 16:25 UTC (permalink / raw)
  To: Yang Weijiang
  Cc: Vitaly Kuznetsov, kvm list, LKML, Michael S. Tsirkin,
	Radim Krčmář,
	yu.c.zhang, alazar, Paolo Bonzini, Sean Christopherson

On Thu, Aug 15, 2019 at 6:41 AM Yang Weijiang <weijiang.yang@intel.com> wrote:

> Hi, Vitaly,
> After looked into the issue and others, I feel to make SPP co-existing
> with nested VM is not good, the major reason is, L1 pages protected by
> SPP are transparent to L1 VM, if it launches L2 VM, probably the
> pages would be allocated to L2 VM, and that will bother to L1 and L2.
> Given the feature is new and I don't see nested VM can benefit
> from it right now, I would like to make SPP and nested feature mutually
> exclusive, i.e., detecting if the other part is active before activate one
> feature,what do you think of it?
> thanks!

How do you propose making the features mutually exclusive?

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-15 16:25       ` Jim Mattson
@ 2019-08-15 16:38         ` Sean Christopherson
  2019-08-16 13:31           ` Yang Weijiang
  0 siblings, 1 reply; 40+ messages in thread
From: Sean Christopherson @ 2019-08-15 16:38 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Yang Weijiang, Vitaly Kuznetsov, kvm list, LKML,
	Michael S. Tsirkin, Radim Krčmář,
	yu.c.zhang, alazar, Paolo Bonzini

On Thu, Aug 15, 2019 at 09:25:41AM -0700, Jim Mattson wrote:
> On Thu, Aug 15, 2019 at 6:41 AM Yang Weijiang <weijiang.yang@intel.com> wrote:
> 
> > Hi, Vitaly,
> > After looked into the issue and others, I feel to make SPP co-existing
> > with nested VM is not good, the major reason is, L1 pages protected by
> > SPP are transparent to L1 VM, if it launches L2 VM, probably the
> > pages would be allocated to L2 VM, and that will bother to L1 and L2.
> > Given the feature is new and I don't see nested VM can benefit
> > from it right now, I would like to make SPP and nested feature mutually
> > exclusive, i.e., detecting if the other part is active before activate one
> > feature,what do you think of it?
> > thanks!
> 
> How do you propose making the features mutually exclusive?

I haven't looked at the details or the end to end flow, but would it make
sense to exit to userspace on nested VMLAUNCH/VMRESUME if there are SPP
mappings?  And have the SPP ioctl() kick vCPUs out of guest.

KVM already exits on SPP violations, so presumably this is something that
can be punted to userspace.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-15 16:38         ` Sean Christopherson
@ 2019-08-16 13:31           ` Yang Weijiang
  2019-08-16 18:19             ` Jim Mattson
  0 siblings, 1 reply; 40+ messages in thread
From: Yang Weijiang @ 2019-08-16 13:31 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Jim Mattson, Yang Weijiang, Vitaly Kuznetsov, kvm list, LKML,
	Michael S. Tsirkin, Radim Krčmář,
	yu.c.zhang, alazar, Paolo Bonzini

On Thu, Aug 15, 2019 at 09:38:44AM -0700, Sean Christopherson wrote:
> On Thu, Aug 15, 2019 at 09:25:41AM -0700, Jim Mattson wrote:
> > On Thu, Aug 15, 2019 at 6:41 AM Yang Weijiang <weijiang.yang@intel.com> wrote:
> > 
> > > Hi, Vitaly,
> > > After looked into the issue and others, I feel to make SPP co-existing
> > > with nested VM is not good, the major reason is, L1 pages protected by
> > > SPP are transparent to L1 VM, if it launches L2 VM, probably the
> > > pages would be allocated to L2 VM, and that will bother to L1 and L2.
> > > Given the feature is new and I don't see nested VM can benefit
> > > from it right now, I would like to make SPP and nested feature mutually
> > > exclusive, i.e., detecting if the other part is active before activate one
> > > feature,what do you think of it?
> > > thanks!
> > 
> > How do you propose making the features mutually exclusive?
> 
> I haven't looked at the details or the end to end flow, but would it make
> sense to exit to userspace on nested VMLAUNCH/VMRESUME if there are SPP
> mappings?  And have the SPP ioctl() kick vCPUs out of guest.
> 
> KVM already exits on SPP violations, so presumably this is something that
> can be punted to userspace.
Thanks Jim and Sean! Could we add a new flag in kvm to identify if nested VM is on
or off? That would make things easier. When VMLAUNCH is trapped,
set the flag, if VMXOFF is trapped, clear the flag.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-16 13:31           ` Yang Weijiang
@ 2019-08-16 18:19             ` Jim Mattson
  2019-08-19  2:08               ` Yang Weijiang
  0 siblings, 1 reply; 40+ messages in thread
From: Jim Mattson @ 2019-08-16 18:19 UTC (permalink / raw)
  To: Yang Weijiang
  Cc: Sean Christopherson, Vitaly Kuznetsov, kvm list, LKML,
	Michael S. Tsirkin, Radim Krčmář,
	yu.c.zhang, alazar, Paolo Bonzini

On Fri, Aug 16, 2019 at 6:29 AM Yang Weijiang <weijiang.yang@intel.com> wrote:

> Thanks Jim and Sean! Could we add a new flag in kvm to identify if nested VM is on
> or off? That would make things easier. When VMLAUNCH is trapped,
> set the flag, if VMXOFF is trapped, clear the flag.

KVM_GET_NESTED_STATE has the requested information. If
data.vmx.vmxon_pa is anything other than -1, then the vCPU is in VMX
operation. If (flags & KVM_STATE_NESTED_GUEST_MODE), then L2 is
active.
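
For illustration, a userspace check along those lines might look like the
sketch below (field paths follow the kvm_nested_state layout of the kernel
headers in use, so treat them as an assumption):

	struct kvm_nested_state state = { .size = sizeof(state) };
	int in_vmx_operation, l2_active;

	if (ioctl(vcpu_fd, KVM_GET_NESTED_STATE, &state) == 0) {
		in_vmx_operation = state.hdr.vmx.vmxon_pa != -1ull;
		l2_active = !!(state.flags & KVM_STATE_NESTED_GUEST_MODE);

		/* e.g. refuse to enable SPP if either of these is set */
	}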

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-16 18:19             ` Jim Mattson
@ 2019-08-19  2:08               ` Yang Weijiang
  2019-08-19 15:15                 ` Paolo Bonzini
  0 siblings, 1 reply; 40+ messages in thread
From: Yang Weijiang @ 2019-08-19  2:08 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Yang Weijiang, Sean Christopherson, Vitaly Kuznetsov, kvm list,
	LKML, Michael S. Tsirkin, Radim Krčmář,
	yu.c.zhang, alazar, Paolo Bonzini

On Fri, Aug 16, 2019 at 11:19:46AM -0700, Jim Mattson wrote:
> On Fri, Aug 16, 2019 at 6:29 AM Yang Weijiang <weijiang.yang@intel.com> wrote:
> 
> > Thanks Jim and Sean! Could we add a new flag in kvm to identify if nested VM is on
> > or off? That would make things easier. When VMLAUNCH is trapped,
> > set the flag, if VMXOFF is trapped, clear the flag.
> 
> KVM_GET_NESTED_STATE has the requested information. If
> data.vmx.vmxon_pa is anything other than -1, then the vCPU is in VMX
> operation. If (flags & KVM_STATE_NESTED_GUEST_MODE), then L2 is
> active.
Thanks Jim, I'll reference the code and make the necessary changes in
the next SPP patch release.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-15 14:03       ` Vitaly Kuznetsov
@ 2019-08-19 14:06         ` Yang Weijiang
  0 siblings, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-19 14:06 UTC (permalink / raw)
  To: Vitaly Kuznetsov
  Cc: Yang Weijiang, kvm, linux-kernel, mst, rkrcmar, jmattson,
	yu.c.zhang, alazar, pbonzini, sean.j.christopherson

On Thu, Aug 15, 2019 at 04:03:31PM +0200, Vitaly Kuznetsov wrote:
> Yang Weijiang <weijiang.yang@intel.com> writes:
> 
> > After looked into the issue and others, I feel to make SPP co-existing
> > with nested VM is not good, the major reason is, L1 pages protected by
> > SPP are transparent to L1 VM, if it launches L2 VM, probably the
> > pages would be allocated to L2 VM, and that will bother to L1 and L2.
> > Given the feature is new and I don't see nested VM can benefit
> > from it right now, I would like to make SPP and nested feature mutually
> > exclusive, i.e., detecting if the other part is active before activate one
> > feature,what do you think of it? 
> 
> I was mostly worried about creating a loophole (if I understand
> correctly) for guests to defeat SPP protection: just launching a nested
> guest and giving it a protected page. I don't see a problem if we limit
> SPP to non-nested guests as step 1: we, however, need to document this
> side-effect of the ioctl. Also, if you decide to do this enforcement,
> I'd suggest you forbid VMLAUNCH/VMRESUME and not VMXON as the kvm module
> loads in linux guests automatically when the hardware is suitable.
> 
> Thanks,
> 
> -- 
> Vitaly
OK, I'll follow your suggestion to add the exclusion, thanks!

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 7/9] KVM: VMX: Handle SPP induced vmexit and page fault
  2019-08-14  7:04 ` [PATCH RESEND v4 7/9] KVM: VMX: Handle SPP induced vmexit and page fault Yang Weijiang
@ 2019-08-19 14:43   ` Paolo Bonzini
  2019-08-19 15:04     ` Paolo Bonzini
  0 siblings, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2019-08-19 14:43 UTC (permalink / raw)
  To: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar

On 14/08/19 09:04, Yang Weijiang wrote:
> +			/*
> +			 * Record write protect fault caused by
> +			 * Sub-page Protection, let VMI decide
> +			 * the next step.
> +			 */
> +			if (spte & PT_SPP_MASK) {

Should this be "if (spte & PT_WRITABLE_MASK)" instead?  That is, if the
page is already writable, the fault must be an SPP fault.

Paolo

> +				fault_handled = true;
> +				vcpu->run->exit_reason = KVM_EXIT_SPP;
> +				vcpu->run->spp.addr = gva;
> +				kvm_skip_emulated_instruction(vcpu);
> +				break;
> +			}
> +
>  			new_spte |= PT_WRITABLE_MASK;


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 8/9] KVM: MMU: Enable Lazy mode SPPT setup
  2019-08-14  7:04 ` [PATCH RESEND v4 8/9] KVM: MMU: Enable Lazy mode SPPT setup Yang Weijiang
@ 2019-08-19 14:46   ` Paolo Bonzini
  2019-08-20 13:12     ` Yang Weijiang
  0 siblings, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2019-08-19 14:46 UTC (permalink / raw)
  To: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar

On 14/08/19 09:04, Yang Weijiang wrote:
> +
> +	if (vcpu->kvm->arch.spp_active && level == PT_PAGE_TABLE_LEVEL)
> +		kvm_enable_spp_protection(vcpu->kvm, gfn);
> +

This would not enable SPP if the guest is backed by huge pages.
Instead, either the PT_PAGE_TABLE_LEVEL level must be forced for all
pages covered by SPP ranges, or (better) kvm_enable_spp_protection must
be able to cover multiple pages at once.
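
For example, something like the sketch below could be used to cover every
4K frame backing a large mapping (illustrative only, building on the
kvm_enable_spp_protection() helper from this patch):

	static int kvm_enable_spp_protection_range(struct kvm *kvm, u64 gfn,
						   int level)
	{
		u64 pages = KVM_PAGES_PER_HPAGE(level);
		u64 base = gfn & ~(pages - 1);
		u64 i;
		int ret;

		for (i = 0; i < pages; i++) {
			ret = kvm_enable_spp_protection(kvm, base + i);
			if (ret)
				return ret;
		}
		return 0;
	}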

Paolo

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 7/9] KVM: VMX: Handle SPP induced vmexit and page fault
  2019-08-19 14:43   ` Paolo Bonzini
@ 2019-08-19 15:04     ` Paolo Bonzini
  2019-08-20 13:44       ` Yang Weijiang
  0 siblings, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2019-08-19 15:04 UTC (permalink / raw)
  To: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar

On 19/08/19 16:43, Paolo Bonzini wrote:
>> +			/*
>> +			 * Record write protect fault caused by
>> +			 * Sub-page Protection, let VMI decide
>> +			 * the next step.
>> +			 */
>> +			if (spte & PT_SPP_MASK) {
> Should this be "if (spte & PT_WRITABLE_MASK)" instead?  That is, if the
> page is already writable, the fault must be an SPP fault.

Hmm, no I forgot how SPP works; still, this is *not* correct.  For
example, if SPP marks part of a page as read-write, but KVM wants to
write-protect the whole page for access or dirty tracking, that should
not cause an SPP exit.

So I think that when KVM wants to write-protect the whole page
(wrprot_ad_disabled_spte) it must also clear PT_SPP_MASK; for example it
could save it in bit 53 (PT64_SECOND_AVAIL_BITS_SHIFT + 1).  If the
saved bit is set, fast_page_fault must then set PT_SPP_MASK instead of
PT_WRITABLE_MASK.  On re-entry this will cause an SPP vmexit;
fast_page_fault should never trigger an SPP userspace exit on its own,
all the SPP handling should go through handle_spp.
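
A rough sketch of that scheme (the bit name and exact placement are
illustrative, not from the posted patches):

	#define SPTE_SPP_SAVED_MASK	(1ULL << (PT64_SECOND_AVAIL_BITS_SHIFT + 1))

	/* When write-protecting the whole page, stash and clear the SPP bit. */
	if (spte & PT_SPP_MASK)
		spte = (spte & ~PT_SPP_MASK) | SPTE_SPP_SAVED_MASK;

	/*
	 * In fast_page_fault(): restore SPP instead of making the page
	 * writable, so re-entry takes an SPP vmexit handled by handle_spp().
	 */
	if (spte & SPTE_SPP_SAVED_MASK)
		new_spte |= PT_SPP_MASK;
	else
		new_spte |= PT_WRITABLE_MASK;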

Paolo

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-14  7:03 ` [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP Yang Weijiang
  2019-08-14 12:43   ` Vitaly Kuznetsov
@ 2019-08-19 15:05   ` Paolo Bonzini
  2019-08-20 12:36     ` Yang Weijiang
  2019-08-19 15:13   ` Paolo Bonzini
  2 siblings, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2019-08-19 15:05 UTC (permalink / raw)
  To: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar

On 14/08/19 09:03, Yang Weijiang wrote:
> +
> +int kvm_mmu_get_subpages(struct kvm *kvm, struct kvm_subpage *spp_info,
> +			 bool mmu_locked)
> +{
> +	u32 *access = spp_info->access_map;
> +	gfn_t gfn = spp_info->base_gfn;
> +	int npages = spp_info->npages;
> +	struct kvm_memory_slot *slot;
> +	int i;
> +	int ret;
> +
> +	if (!kvm->arch.spp_active)
> +	      return -ENODEV;
> +
> +	if (!mmu_locked)
> +	      spin_lock(&kvm->mmu_lock);
> +

Do not add this argument.  Just lock mmu_lock in the callers.

Paolo

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-14  7:03 ` [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP Yang Weijiang
  2019-08-14 12:43   ` Vitaly Kuznetsov
  2019-08-19 15:05   ` Paolo Bonzini
@ 2019-08-19 15:13   ` Paolo Bonzini
  2019-08-20 13:09     ` Yang Weijiang
  2 siblings, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2019-08-19 15:13 UTC (permalink / raw)
  To: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson
  Cc: mst, rkrcmar, jmattson, yu.c.zhang, alazar

On 14/08/19 09:03, Yang Weijiang wrote:
> init_spp() must be called before {get, set}_subpage
> functions, it creates subpage access bitmaps for memory pages
> and issues a KVM request to setup SPPT root pages.
> 
> kvm_mmu_set_subpages() is to enable SPP bit in EPT leaf page
> and setup corresponding SPPT entries. The mmu_lock
> is held before above operation. If it's called in EPT fault and
> SPPT mis-config induced handler, mmu_lock is acquired outside
> the function, otherwise, it's acquired inside it.
> 
> kvm_mmu_get_subpages() is used to query access bitmap for
> protected page, it's also used in EPT fault handler to check
> whether the fault EPT page is SPP protected as well.
> 
> Co-developed-by: He Chen <he.chen@linux.intel.com>
> Signed-off-by: He Chen <he.chen@linux.intel.com>
> Co-developed-by: Zhang Yi <yi.z.zhang@linux.intel.com>
> Signed-off-by: Zhang Yi <yi.z.zhang@linux.intel.com>
> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h |  18 ++++
>  arch/x86/include/asm/vmx.h      |   2 +
>  arch/x86/kvm/mmu.c              | 160 ++++++++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/vmx.c          |  48 ++++++++++
>  arch/x86/kvm/x86.c              |  40 ++++++++
>  include/linux/kvm_host.h        |   4 +-
>  include/uapi/linux/kvm.h        |   9 ++
>  7 files changed, 280 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 44f6e1757861..5c4882015acc 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -398,8 +398,13 @@ struct kvm_mmu {
>  	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
>  	void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
>  			   u64 *spte, const void *pte);
> +	int (*get_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
> +	int (*set_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
> +	int (*init_spp)(struct kvm *kvm);
> +
>  	hpa_t root_hpa;
>  	gpa_t root_cr3;
> +	hpa_t sppt_root;
>  	union kvm_mmu_role mmu_role;
>  	u8 root_level;
>  	u8 shadow_root_level;
> @@ -927,6 +932,8 @@ struct kvm_arch {
>  
>  	bool guest_can_read_msr_platform_info;
>  	bool exception_payload_enabled;
> +
> +	bool spp_active;
>  };
>  
>  struct kvm_vm_stat {
> @@ -1199,6 +1206,11 @@ struct kvm_x86_ops {
>  	uint16_t (*nested_get_evmcs_version)(struct kvm_vcpu *vcpu);
>  
>  	bool (*need_emulation_on_page_fault)(struct kvm_vcpu *vcpu);
> +
> +	bool (*get_spp_status)(void);
> +	int (*get_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
> +	int (*set_subpages)(struct kvm *kvm, struct kvm_subpage *spp_info);
> +	int (*init_spp)(struct kvm *kvm);
>  };
>  
>  struct kvm_arch_async_pf {
> @@ -1417,6 +1429,12 @@ void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva);
>  void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid);
>  void kvm_mmu_new_cr3(struct kvm_vcpu *vcpu, gpa_t new_cr3, bool skip_tlb_flush);
>  
> +int kvm_mmu_get_subpages(struct kvm *kvm, struct kvm_subpage *spp_info,
> +			 bool mmu_locked);
> +int kvm_mmu_set_subpages(struct kvm *kvm, struct kvm_subpage *spp_info,
> +			 bool mmu_locked);
> +int kvm_mmu_init_spp(struct kvm *kvm);
> +
>  void kvm_enable_tdp(void);
>  void kvm_disable_tdp(void);
>  
> diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
> index a2c9e18e0ad7..6cb05ac07453 100644
> --- a/arch/x86/include/asm/vmx.h
> +++ b/arch/x86/include/asm/vmx.h
> @@ -224,6 +224,8 @@ enum vmcs_field {
>  	XSS_EXIT_BITMAP_HIGH            = 0x0000202D,
>  	ENCLS_EXITING_BITMAP		= 0x0000202E,
>  	ENCLS_EXITING_BITMAP_HIGH	= 0x0000202F,
> +	SPPT_POINTER			= 0x00002030,
> +	SPPT_POINTER_HIGH		= 0x00002031,
>  	TSC_MULTIPLIER                  = 0x00002032,
>  	TSC_MULTIPLIER_HIGH             = 0x00002033,
>  	GUEST_PHYSICAL_ADDRESS          = 0x00002400,
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 16e390fd6e58..16d3ca544b67 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3756,6 +3756,9 @@ void kvm_mmu_free_roots(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
>  		    (mmu->root_level >= PT64_ROOT_4LEVEL || mmu->direct_map)) {
>  			mmu_free_root_page(vcpu->kvm, &mmu->root_hpa,
>  					   &invalid_list);
> +			if (vcpu->kvm->arch.spp_active)
> +				mmu_free_root_page(vcpu->kvm, &mmu->sppt_root,
> +						   &invalid_list);
>  		} else {
>  			for (i = 0; i < 4; ++i)
>  				if (mmu->pae_root[i] != 0)
> @@ -4414,6 +4417,158 @@ int kvm_mmu_setup_spp_structure(struct kvm_vcpu *vcpu,
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(kvm_mmu_setup_spp_structure);
> +
> +int kvm_mmu_init_spp(struct kvm *kvm)
> +{
> +	int i, ret;
> +	struct kvm_vcpu *vcpu;
> +	int root_level;
> +	struct kvm_mmu_page *ssp_sp;
> +
> +
> +	if (!kvm_x86_ops->get_spp_status())
> +	      return -ENODEV;
> +
> +	if (kvm->arch.spp_active)
> +	      return 0;
> +
> +	ret = kvm_subpage_create_bitmaps(kvm);
> +
> +	if (ret)
> +	      return ret;
> +
> +	kvm_for_each_vcpu(i, vcpu, kvm) {
> +		/* prepare caches for SPP setup.*/
> +		mmu_topup_memory_caches(vcpu);
> +		root_level = vcpu->arch.mmu->shadow_root_level;
> +		ssp_sp = kvm_mmu_get_spp_page(vcpu, 0, root_level);
> +		++ssp_sp->root_count;
> +		vcpu->arch.mmu->sppt_root = __pa(ssp_sp->spt);
> +		kvm_make_request(KVM_REQ_LOAD_CR3, vcpu);
> +	}
> +
> +	kvm->arch.spp_active = true;
> +	return 0;
> +}
> +
> +int kvm_mmu_get_subpages(struct kvm *kvm, struct kvm_subpage *spp_info,
> +			 bool mmu_locked)
> +{
> +	u32 *access = spp_info->access_map;
> +	gfn_t gfn = spp_info->base_gfn;
> +	int npages = spp_info->npages;
> +	struct kvm_memory_slot *slot;
> +	int i;
> +	int ret;
> +
> +	if (!kvm->arch.spp_active)
> +	      return -ENODEV;
> +
> +	if (!mmu_locked)
> +	      spin_lock(&kvm->mmu_lock);
> +
> +	for (i = 0; i < npages; i++, gfn++) {
> +		slot = gfn_to_memslot(kvm, gfn);
> +		if (!slot) {
> +			ret = -EFAULT;
> +			goto out_unlock;
> +		}
> +		access[i] = *gfn_to_subpage_wp_info(slot, gfn);
> +	}
> +
> +	ret = i;
> +
> +out_unlock:
> +	if (!mmu_locked)
> +	      spin_unlock(&kvm->mmu_lock);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(kvm_mmu_get_subpages);
> +
> +int kvm_mmu_set_subpages(struct kvm *kvm, struct kvm_subpage *spp_info,
> +			 bool mmu_locked)
> +{
> +	u32 *access = spp_info->access_map;
> +	gfn_t gfn = spp_info->base_gfn;
> +	int npages = spp_info->npages;
> +	struct kvm_memory_slot *slot;
> +	struct kvm_vcpu *vcpu;
> +	struct kvm_rmap_head *rmap_head;
> +	int i, k;
> +	u32 *wp_map;
> +	int ret = -EFAULT;
> +
> +	if (!kvm->arch.spp_active)
> +		return -ENODEV;
> +
> +	if (!mmu_locked)
> +	      spin_lock(&kvm->mmu_lock);
> +
> +	for (i = 0; i < npages; i++, gfn++) {
> +		slot = gfn_to_memslot(kvm, gfn);
> +		if (!slot)
> +			goto out_unlock;
> +
> +		/*
> +		 * check whether the target 4KB page exists in EPT leaf
> +		 * entries.If it's there, we can setup SPP protection now,
> +		 * otherwise, need to defer it to EPT page fault handler.
> +		 */
> +		rmap_head = __gfn_to_rmap(gfn, PT_PAGE_TABLE_LEVEL, slot);
> +
> +		if (rmap_head->val) {
> +			/*
> +			 * if all subpages are not writable, open SPP bit in
> +			 * EPT leaf entry to enable SPP protection for
> +			 * corresponding page.
> +			 */
> +			if (access[i] != FULL_SPP_ACCESS) {
> +				ret = kvm_mmu_open_subpage_write_protect(kvm,
> +						slot, gfn);
> +
> +				if (ret)
> +					goto out_err;
> +
> +				kvm_for_each_vcpu(k, vcpu, kvm)
> +					kvm_mmu_setup_spp_structure(vcpu,
> +						access[i], gfn);
> +			} else {
> +				ret = kvm_mmu_clear_subpage_write_protect(kvm,
> +						slot, gfn);
> +				if (ret)
> +					goto out_err;
> +			}
> +
> +		} else
> +			pr_info("%s - No ETP entry, gfn = 0x%llx, access = 0x%x.\n", __func__, gfn, access[i]);
> +
> +		/* if this function is called in tdp_page_fault() or
> +		 * spp_handler(), mmu_locked = true, SPP access bitmap
> +		 * is being used, otherwise, it's being stored.
> +		 */
> +		if (!mmu_locked) {
> +			wp_map = gfn_to_subpage_wp_info(slot, gfn);
> +			*wp_map = access[i];
> +		}
> +	}
> +
> +	ret = i;
> +out_err:
> +	if (ret < 0)
> +	      pr_info("SPP-Error, didn't get the gfn:" \
> +		      "%llx from EPT leaf.\n"
> +		      "Current we don't support SPP on" \
> +		      "huge page.\n"
> +		      "Please disable huge page and have" \
> +		      "another try.\n", gfn);
> +out_unlock:
> +	if (!mmu_locked)
> +	      spin_unlock(&kvm->mmu_lock);
> +
> +	return ret;
> +}
> +
>  static void nonpaging_init_context(struct kvm_vcpu *vcpu,
>  				   struct kvm_mmu *context)
>  {
> @@ -5104,6 +5259,9 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
>  	context->get_cr3 = get_cr3;
>  	context->get_pdptr = kvm_pdptr_read;
>  	context->inject_page_fault = kvm_inject_page_fault;
> +	context->get_subpages = kvm_x86_ops->get_subpages;
> +	context->set_subpages = kvm_x86_ops->set_subpages;
> +	context->init_spp = kvm_x86_ops->init_spp;
>  
>  	if (!is_paging(vcpu)) {
>  		context->nx = false;
> @@ -5309,6 +5467,8 @@ void kvm_init_mmu(struct kvm_vcpu *vcpu, bool reset_roots)
>  		uint i;
>  
>  		vcpu->arch.mmu->root_hpa = INVALID_PAGE;
> +		if (!vcpu->kvm->arch.spp_active)
> +			vcpu->arch.mmu->sppt_root = INVALID_PAGE;
>  
>  		for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++)
>  			vcpu->arch.mmu->prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index a0753a36155d..855d02dd94c7 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2847,11 +2847,17 @@ u64 construct_eptp(struct kvm_vcpu *vcpu, unsigned long root_hpa)
>  	return eptp;
>  }
>  
> +static inline u64 construct_spptp(unsigned long root_hpa)
> +{
> +	return root_hpa & PAGE_MASK;
> +}
> +
>  void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
>  {
>  	struct kvm *kvm = vcpu->kvm;
>  	unsigned long guest_cr3;
>  	u64 eptp;
> +	u64 spptp;
>  
>  	guest_cr3 = cr3;
>  	if (enable_ept) {
> @@ -2874,6 +2880,12 @@ void vmx_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
>  		ept_load_pdptrs(vcpu);
>  	}
>  
> +	if (kvm->arch.spp_active && VALID_PAGE(vcpu->arch.mmu->sppt_root)) {
> +		spptp = construct_spptp(vcpu->arch.mmu->sppt_root);
> +		vmcs_write64(SPPT_POINTER, spptp);
> +		vmx_flush_tlb(vcpu, true);
> +	}
> +
>  	vmcs_writel(GUEST_CR3, guest_cr3);
>  }
>  
> @@ -5735,6 +5747,9 @@ void dump_vmcs(void)
>  		pr_err("PostedIntrVec = 0x%02x\n", vmcs_read16(POSTED_INTR_NV));
>  	if ((secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT))
>  		pr_err("EPT pointer = 0x%016llx\n", vmcs_read64(EPT_POINTER));
> +	if ((secondary_exec_control & SECONDARY_EXEC_ENABLE_SPP))
> +		pr_err("SPPT pointer = 0x%016llx\n", vmcs_read64(SPPT_POINTER));
> +
>  	n = vmcs_read32(CR3_TARGET_COUNT);
>  	for (i = 0; i + 1 < n; i += 4)
>  		pr_err("CR3 target%u=%016lx target%u=%016lx\n",
> @@ -7546,6 +7561,12 @@ static __init int hardware_setup(void)
>  		kvm_x86_ops->enable_log_dirty_pt_masked = NULL;
>  	}
>  
> +	if (!spp_supported) {
> +		kvm_x86_ops->get_subpages = NULL;
> +		kvm_x86_ops->set_subpages = NULL;
> +		kvm_x86_ops->init_spp = NULL;
> +	}
> +
>  	if (!cpu_has_vmx_preemption_timer())
>  		kvm_x86_ops->request_immediate_exit = __kvm_request_immediate_exit;
>  
> @@ -7592,6 +7613,28 @@ static __exit void hardware_unsetup(void)
>  	free_kvm_area();
>  }
>  
> +static bool vmx_get_spp_status(void)
> +{
> +	return spp_supported;
> +}
> +
> +static int vmx_get_subpages(struct kvm *kvm,
> +			    struct kvm_subpage *spp_info)
> +{
> +	return kvm_get_subpages(kvm, spp_info);
> +}
> +
> +static int vmx_set_subpages(struct kvm *kvm,
> +			    struct kvm_subpage *spp_info)
> +{
> +	return kvm_set_subpages(kvm, spp_info);
> +}
> +
> +static int vmx_init_spp(struct kvm *kvm)
> +{
> +	return kvm_init_spp(kvm);
> +}
> +
>  static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>  	.cpu_has_kvm_support = cpu_has_kvm_support,
>  	.disabled_by_bios = vmx_disabled_by_bios,
> @@ -7740,6 +7783,11 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>  	.get_vmcs12_pages = NULL,
>  	.nested_enable_evmcs = NULL,
>  	.need_emulation_on_page_fault = vmx_need_emulation_on_page_fault,
> +
> +	.get_spp_status = vmx_get_spp_status,
> +	.get_subpages = vmx_get_subpages,
> +	.set_subpages = vmx_set_subpages,
> +	.init_spp = vmx_init_spp,
>  };

There's no need for the get_subpages kvm_x86_ops.  You do need init_spp
of course, but you do not need get_spp_status either; instead you can
check if init_spp is NULL (if NULL, SPP is not supported).

In addition, kvm_mmu_set_subpages should not set up the SPP pages.  This
should be handled entirely by handle_spp when the processor reports an
SPPT miss, so you do not need that callback either.  You may need a
flush_subpages callback, called by kvm_mmu_set_subpages to clear the SPP
page tables for a given GPA range.  The next access then will cause an
SPPT miss.
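
A sketch of that flow, with placeholder names for anything not in the
posted code:

	int kvm_mmu_set_subpages(struct kvm *kvm, struct kvm_subpage *spp_info)
	{
		/* Only record the new access bitmap... */
		spp_store_access_bitmap(kvm, spp_info);	/* hypothetical helper */

		/*
		 * ...then drop stale SPPT mappings for the range; the next
		 * guest access takes an SPPT miss and handle_spp() rebuilds
		 * the entries lazily from the stored bitmap.
		 */
		if (kvm_x86_ops->flush_subpages)
			kvm_x86_ops->flush_subpages(kvm, spp_info->base_gfn,
						    spp_info->npages);
		return 0;
	}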

Finally, please move all SPP-related code to arch/x86/kvm/vmx/{spp.c,spp.h}.

Thanks,

Paolo



>  static void vmx_cleanup_l1d_flush(void)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 26e4c574731c..c7cb17941344 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4589,6 +4589,44 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
>  	return r;
>  }
>  
> +int kvm_get_subpages(struct kvm *kvm,
> +		     struct kvm_subpage *spp_info)
> +{
> +	int ret;
> +
> +	mutex_lock(&kvm->slots_lock);
> +	ret = kvm_mmu_get_subpages(kvm, spp_info, false);
> +	mutex_unlock(&kvm->slots_lock);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(kvm_get_subpages);
> +
> +int kvm_set_subpages(struct kvm *kvm,
> +		     struct kvm_subpage *spp_info)
> +{
> +	int ret;
> +
> +	mutex_lock(&kvm->slots_lock);
> +	ret = kvm_mmu_set_subpages(kvm, spp_info, false);
> +	mutex_unlock(&kvm->slots_lock);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(kvm_set_subpages);
> +
> +int kvm_init_spp(struct kvm *kvm)
> +{
> +	int ret;
> +
> +	mutex_lock(&kvm->slots_lock);
> +	ret = kvm_mmu_init_spp(kvm);
> +	mutex_unlock(&kvm->slots_lock);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(kvm_init_spp);
> +
>  long kvm_arch_vm_ioctl(struct file *filp,
>  		       unsigned int ioctl, unsigned long arg)
>  {
> @@ -9349,6 +9387,8 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
>  	}
>  
>  	kvm_page_track_free_memslot(free, dont);
> +	if (kvm->arch.spp_active)
> +	      kvm_subpage_free_memslot(free, dont);
>  }
>  
>  int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 7f904d4d788c..da30fcbb2727 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -849,7 +849,9 @@ int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu);
>  
>  struct kvm_mmu_page *kvm_mmu_get_spp_page(struct kvm_vcpu *vcpu,
>  			gfn_t gfn, unsigned int level);
> -
> +int kvm_get_subpages(struct kvm *kvm, struct kvm_subpage *spp_info);
> +int kvm_set_subpages(struct kvm *kvm, struct kvm_subpage *spp_info);
> +int kvm_init_spp(struct kvm *kvm);
>  #ifndef __KVM_HAVE_ARCH_VM_ALLOC
>  /*
>   * All architectures that want to use vzalloc currently also
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 6d4ea4b6c922..2c75a87ab3b5 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -102,6 +102,15 @@ struct kvm_userspace_memory_region {
>  	__u64 userspace_addr; /* start of the userspace allocated memory */
>  };
>  
> +/* for KVM_SUBPAGES_GET_ACCESS and KVM_SUBPAGES_SET_ACCESS */
> +#define SUBPAGE_MAX_BITMAP   64
> +struct kvm_subpage {
> +	__u64 base_gfn;
> +	__u64 npages;
> +	 /* sub-page write-access bitmap array */
> +	__u32 access_map[SUBPAGE_MAX_BITMAP];
> +};
> +
>  /*
>   * The bit 0 ~ bit 15 of kvm_memory_region::flags are visible for userspace,
>   * other bits are reserved for kvm internal use which are defined in
> 



* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-19  2:08               ` Yang Weijiang
@ 2019-08-19 15:15                 ` Paolo Bonzini
  2019-08-20 12:33                   ` Yang Weijiang
  0 siblings, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2019-08-19 15:15 UTC (permalink / raw)
  To: Yang Weijiang, Jim Mattson
  Cc: Sean Christopherson, Vitaly Kuznetsov, kvm list, LKML,
	Michael S. Tsirkin, Radim Krčmář,
	yu.c.zhang, alazar

On 19/08/19 04:08, Yang Weijiang wrote:
>> KVM_GET_NESTED_STATE has the requested information. If
>> data.vmx.vmxon_pa is anything other than -1, then the vCPU is in VMX
>> operation. If (flags & KVM_STATE_NESTED_GUEST_MODE), then L2 is
>> active.
> Thanks Jim, I'll reference the code and make necessary change in next
> SPP patch release.

Since SPP will not be used very much in the beginning, it would be
simplest to enable it only if nested==0.
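
For instance, the detection in hardware_setup() could be gated on the nested
module parameter (a sketch; cpu_has_vmx_ept_spp() stands in for whatever
capability check the series actually uses):

    /* Advertise SPP only when nested virtualization is off (sketch). */
    spp_supported = !nested && cpu_has_vmx_ept_spp();
    if (!spp_supported)
            kvm_x86_ops->init_spp = NULL;   /* NULL hook == SPP unsupported */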

Paolo


* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-19 15:15                 ` Paolo Bonzini
@ 2019-08-20 12:33                   ` Yang Weijiang
  0 siblings, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-20 12:33 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Yang Weijiang, Jim Mattson, Sean Christopherson,
	Vitaly Kuznetsov, kvm list, LKML, Michael S. Tsirkin,
	Radim Krčmář,
	yu.c.zhang, alazar

On Mon, Aug 19, 2019 at 05:15:01PM +0200, Paolo Bonzini wrote:
> On 19/08/19 04:08, Yang Weijiang wrote:
> >> KVM_GET_NESTED_STATE has the requested information. If
> >> data.vmx.vmxon_pa is anything other than -1, then the vCPU is in VMX
> >> operation. If (flags & KVM_STATE_NESTED_GUEST_MODE), then L2 is
> >> active.
> > Thanks Jim, I'll reference the code and make necessary change in next
> > SPP patch release.
> 
> Since SPP will not be used very much in the beginning, it would be
> simplest to enable it only if nested==0.
> 
> Paolo
Thanks Paolo! Sure, will make the change and document it.


* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-19 15:05   ` Paolo Bonzini
@ 2019-08-20 12:36     ` Yang Weijiang
  0 siblings, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-20 12:36 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson, mst,
	rkrcmar, jmattson, yu.c.zhang, alazar

On Mon, Aug 19, 2019 at 05:05:22PM +0200, Paolo Bonzini wrote:
> On 14/08/19 09:03, Yang Weijiang wrote:
> > +
> > +int kvm_mmu_get_subpages(struct kvm *kvm, struct kvm_subpage *spp_info,
> > +			 bool mmu_locked)
> > +{
> > +	u32 *access = spp_info->access_map;
> > +	gfn_t gfn = spp_info->base_gfn;
> > +	int npages = spp_info->npages;
> > +	struct kvm_memory_slot *slot;
> > +	int i;
> > +	int ret;
> > +
> > +	if (!kvm->arch.spp_active)
> > +	      return -ENODEV;
> > +
> > +	if (!mmu_locked)
> > +	      spin_lock(&kvm->mmu_lock);
> > +
> 
> Do not add this argument.  Just lock mmu_lock in the callers.
> 
> Paolo
OK, will remove it, thanks!
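
In other words, roughly this shape (sketch only, not code from the series):

    /* Callers take mmu_lock themselves; no mmu_locked parameter. */
    spin_lock(&kvm->mmu_lock);
    ret = kvm_mmu_get_subpages(kvm, spp_info);
    spin_unlock(&kvm->mmu_lock);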


* Re: [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP
  2019-08-19 15:13   ` Paolo Bonzini
@ 2019-08-20 13:09     ` Yang Weijiang
  0 siblings, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-20 13:09 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson, mst,
	rkrcmar, jmattson, yu.c.zhang, alazar

On Mon, Aug 19, 2019 at 05:13:47PM +0200, Paolo Bonzini wrote:
> On 14/08/19 09:03, Yang Weijiang wrote:
> >  static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
> >  	.cpu_has_kvm_support = cpu_has_kvm_support,
> >  	.disabled_by_bios = vmx_disabled_by_bios,
> > @@ -7740,6 +7783,11 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
> >  	.get_vmcs12_pages = NULL,
> >  	.nested_enable_evmcs = NULL,
> >  	.need_emulation_on_page_fault = vmx_need_emulation_on_page_fault,
> > +
> > +	.get_spp_status = vmx_get_spp_status,
> > +	.get_subpages = vmx_get_subpages,
> > +	.set_subpages = vmx_set_subpages,
> > +	.init_spp = vmx_init_spp,
> >  };
> 
> There's no need for the get_subpages kvm_x86_ops.  You do need init_spp
> of course, but you do not need get_spp_status either; instead you can
> check if init_spp is NULL (if NULL, SPP is not supported).
So first set .init_spp = NULL, then if all SPP preconditions are met, set
.init_spp = vmx_init_spp?

> In addition, kvm_mmu_set_subpages should not set up the SPP pages.  This
> should be handled entirely by handle_spp when the processor reports an
> SPPT miss, so you do not need that callback either.  You may need a
> flush_subpages callback, called by kvm_mmu_set_subpages to clear the SPP
> page tables for a given GPA range.  The next access then will cause an
> SPPT miss.
Good suggestion, thanks! Will make the change.

> Finally, please move all SPP-related code to arch/x86/kvm/vmx/{spp.c,spp.h}.
>
Sure.

> Thanks,
> 
> Paolo
> 
 


* Re: [PATCH RESEND v4 8/9] KVM: MMU: Enable Lazy mode SPPT setup
  2019-08-19 14:46   ` Paolo Bonzini
@ 2019-08-20 13:12     ` Yang Weijiang
  2019-09-04 13:49       ` Yang Weijiang
  0 siblings, 1 reply; 40+ messages in thread
From: Yang Weijiang @ 2019-08-20 13:12 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson, mst,
	rkrcmar, jmattson, yu.c.zhang, alazar

On Mon, Aug 19, 2019 at 04:46:54PM +0200, Paolo Bonzini wrote:
> On 14/08/19 09:04, Yang Weijiang wrote:
> > +
> > +	if (vcpu->kvm->arch.spp_active && level == PT_PAGE_TABLE_LEVEL)
> > +		kvm_enable_spp_protection(vcpu->kvm, gfn);
> > +
> 
> This would not enable SPP if the guest is backed by huge pages.
> Instead, either the PT_PAGE_TABLE_LEVEL level must be forced for all
> pages covered by SPP ranges, or (better) kvm_enable_spp_protection must
> be able to cover multiple pages at once.
> 
> Paolo
OK, I'll figure out how to make it, thanks!


* Re: [PATCH RESEND v4 7/9] KVM: VMX: Handle SPP induced vmexit and page fault
  2019-08-19 15:04     ` Paolo Bonzini
@ 2019-08-20 13:44       ` Yang Weijiang
  2019-08-22 13:17         ` Yang Weijiang
  0 siblings, 1 reply; 40+ messages in thread
From: Yang Weijiang @ 2019-08-20 13:44 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson, mst,
	rkrcmar, jmattson, yu.c.zhang, alazar

On Mon, Aug 19, 2019 at 05:04:23PM +0200, Paolo Bonzini wrote:
> On 19/08/19 16:43, Paolo Bonzini wrote:
> >> +			/*
> >> +			 * Record write protect fault caused by
> >> +			 * Sub-page Protection, let VMI decide
> >> +			 * the next step.
> >> +			 */
> >> +			if (spte & PT_SPP_MASK) {
> > Should this be "if (spte & PT_WRITABLE_MASK)" instead?  That is, if the
> > page is already writable, the fault must be an SPP fault.
> 
> Hmm, no I forgot how SPP works; still, this is *not* correct.  For
> example, if SPP marks part of a page as read-write, but KVM wants to
> write-protect the whole page for access or dirty tracking, that should
> not cause an SPP exit.
> 
> So I think that when KVM wants to write-protect the whole page
> (wrprot_ad_disabled_spte) it must also clear PT_SPP_MASK; for example it
> could save it in bit 53 (PT64_SECOND_AVAIL_BITS_SHIFT + 1).  If the
> saved bit is set, fast_page_fault must then set PT_SPP_MASK instead of
> PT_WRITABLE_MASK.
Sure, will change the processing flow.

> On re-entry this will cause an SPP vmexit;
> fast_page_fault should never trigger an SPP userspace exit on its own,
> all the SPP handling should go through handle_spp.
> 
> Paolo


* Re: [PATCH RESEND v4 7/9] KVM: VMX: Handle SPP induced vmexit and page fault
  2019-08-20 13:44       ` Yang Weijiang
@ 2019-08-22 13:17         ` Yang Weijiang
  2019-08-22 16:38           ` Paolo Bonzini
  0 siblings, 1 reply; 40+ messages in thread
From: Yang Weijiang @ 2019-08-22 13:17 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson, mst,
	rkrcmar, jmattson, yu.c.zhang, alazar

On Tue, Aug 20, 2019 at 09:44:35PM +0800, Yang Weijiang wrote:
> On Mon, Aug 19, 2019 at 05:04:23PM +0200, Paolo Bonzini wrote:
> > On 19/08/19 16:43, Paolo Bonzini wrote:
> > >> +			/*
> > >> +			 * Record write protect fault caused by
> > >> +			 * Sub-page Protection, let VMI decide
> > >> +			 * the next step.
> > >> +			 */
> > >> +			if (spte & PT_SPP_MASK) {
> > > Should this be "if (spte & PT_WRITABLE_MASK)" instead?  That is, if the
> > > page is already writable, the fault must be an SPP fault.
> > 
> > Hmm, no I forgot how SPP works; still, this is *not* correct.  For
> > example, if SPP marks part of a page as read-write, but KVM wants to
> > write-protect the whole page for access or dirty tracking, that should
> > not cause an SPP exit.
> > 
> > So I think that when KVM wants to write-protect the whole page
> > (wrprot_ad_disabled_spte) it must also clear PT_SPP_MASK; for example it
> > could save it in bit 53 (PT64_SECOND_AVAIL_BITS_SHIFT + 1).  If the
> > saved bit is set, fast_page_fault must then set PT_SPP_MASK instead of
> > PT_WRITABLE_MASK.
> Sure, will change the processing flow.
> 
> > On re-entry this will cause an SPP vmexit;
> > fast_page_fault should never trigger an SPP userspace exit on its own,
> > all the SPP handling should go through handle_spp.
 Hi, Paolo,
 According to the latest SDM (28.2.4), handle_spp only handles SPPT miss and SPPT
 misconfig (exit_reason == 66); a sub-page write-access violation causes an EPT
 violation instead, so the two cases have to be handled in their respective handlers.
> > Paolo
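
A rough sketch of that split (EXIT_REASON_SPP stands for whatever name is given
to exit reason 66; only the dispatch is shown):

    /* In kvm_vmx_exit_handlers[] (sketch): */
    [EXIT_REASON_SPP]           = handle_spp,            /* SPPT miss / misconfig  */
    [EXIT_REASON_EPT_VIOLATION] = handle_ept_violation,  /* incl. sub-page write
                                                            access violations      */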


* Re: [PATCH RESEND v4 7/9] KVM: VMX: Handle SPP induced vmexit and page fault
  2019-08-22 13:17         ` Yang Weijiang
@ 2019-08-22 16:38           ` Paolo Bonzini
  2019-08-23  0:26             ` Yang Weijiang
  0 siblings, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2019-08-22 16:38 UTC (permalink / raw)
  To: Yang Weijiang
  Cc: kvm, linux-kernel, sean.j.christopherson, mst, rkrcmar, jmattson,
	yu.c.zhang, alazar

On 22/08/19 15:17, Yang Weijiang wrote:
> On Tue, Aug 20, 2019 at 09:44:35PM +0800, Yang Weijiang wrote:
>> On Mon, Aug 19, 2019 at 05:04:23PM +0200, Paolo Bonzini wrote:
>>> fast_page_fault should never trigger an SPP userspace exit on its own,
>>> all the SPP handling should go through handle_spp.
>  Hi, Paolo,
>  According to the latest SDM(28.2.4), handle_spp only handles SPPT miss and SPPT
>  misconfig(exit_reason==66), subpage write access violation causes EPT violation,
>  so have to deal with the two cases into handlers.

Ok, so this part has to remain, though you do have to save/restore
PT_SPP_MASK according to the rest of the email.

Paolo

>>> So I think that when KVM wants to write-protect the whole page
>>> (wrprot_ad_disabled_spte) it must also clear PT_SPP_MASK; for example it
>>> could save it in bit 53 (PT64_SECOND_AVAIL_BITS_SHIFT + 1).  If the
>>> saved bit is set, fast_page_fault must then set PT_SPP_MASK instead of
>>> PT_WRITABLE_MASK.
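
For reference, a minimal sketch of that save/restore idea (the helper names are
made up here; bit 61 for SPP and bit 53 for the saved copy follow the thread
above):

    #define PT_SPP_MASK             (1ULL << 61)
    #define SPTE_SPP_SAVED_MASK     (1ULL << 53)  /* PT64_SECOND_AVAIL_BITS_SHIFT + 1 */

    /* When write-protecting the whole page, park the SPP bit aside... */
    static u64 spp_save_bit(u64 spte)
    {
            if (spte & PT_SPP_MASK)
                    spte = (spte & ~PT_SPP_MASK) | SPTE_SPP_SAVED_MASK;
            return spte;
    }

    /* ...and in fast_page_fault(), re-arm SPP instead of making the page writable. */
    static u64 spp_restore_bit(u64 spte)
    {
            if (spte & SPTE_SPP_SAVED_MASK)
                    spte = (spte & ~SPTE_SPP_SAVED_MASK) | PT_SPP_MASK;
            return spte;
    }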


* Re: [PATCH RESEND v4 7/9] KVM: VMX: Handle SPP induced vmexit and page fault
  2019-08-22 16:38           ` Paolo Bonzini
@ 2019-08-23  0:26             ` Yang Weijiang
  0 siblings, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-08-23  0:26 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson, mst,
	rkrcmar, jmattson, yu.c.zhang, alazar

On Thu, Aug 22, 2019 at 06:38:41PM +0200, Paolo Bonzini wrote:
> On 22/08/19 15:17, Yang Weijiang wrote:
> > On Tue, Aug 20, 2019 at 09:44:35PM +0800, Yang Weijiang wrote:
> >> On Mon, Aug 19, 2019 at 05:04:23PM +0200, Paolo Bonzini wrote:
> >>> fast_page_fault should never trigger an SPP userspace exit on its own,
> >>> all the SPP handling should go through handle_spp.
> >  Hi, Paolo,
> >  According to the latest SDM(28.2.4), handle_spp only handles SPPT miss and SPPT
> >  misconfig(exit_reason==66), subpage write access violation causes EPT violation,
> >  so have to deal with the two cases into handlers.
> 
> Ok, so this part has to remain, though you do have to save/restore
> PT_SPP_MASK according to the rest of the email.
> 
> Paolo
>
Got it, thanks!
> >>> So I think that when KVM wants to write-protect the whole page
> >>> (wrprot_ad_disabled_spte) it must also clear PT_SPP_MASK; for example it
> >>> could save it in bit 53 (PT64_SECOND_AVAIL_BITS_SHIFT + 1).  If the
> >>> saved bit is set, fast_page_fault must then set PT_SPP_MASK instead of
> >>> PT_WRITABLE_MASK.


* Re: [PATCH RESEND v4 8/9] KVM: MMU: Enable Lazy mode SPPT setup
  2019-08-20 13:12     ` Yang Weijiang
@ 2019-09-04 13:49       ` Yang Weijiang
  2019-09-09 17:10         ` Paolo Bonzini
  0 siblings, 1 reply; 40+ messages in thread
From: Yang Weijiang @ 2019-09-04 13:49 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson, mst,
	rkrcmar, jmattson, yu.c.zhang, alazar

On Tue, Aug 20, 2019 at 09:12:14PM +0800, Yang Weijiang wrote:
> On Mon, Aug 19, 2019 at 04:46:54PM +0200, Paolo Bonzini wrote:
> > On 14/08/19 09:04, Yang Weijiang wrote:
> > > +
> > > +	if (vcpu->kvm->arch.spp_active && level == PT_PAGE_TABLE_LEVEL)
> > > +		kvm_enable_spp_protection(vcpu->kvm, gfn);
> > > +
> > 
> > This would not enable SPP if the guest is backed by huge pages.
> > Instead, either the PT_PAGE_TABLE_LEVEL level must be forced for all
> > pages covered by SPP ranges, or (better) kvm_enable_spp_protection must
> > be able to cover multiple pages at once.
> > 
> > Paolo
> OK, I'll figure out how to make it, thanks!
Hi, Paolo,
Regarding this change, I have some concerns: splitting EPT huge-page
entries (e.g., a 1GB page) will take a long time compared with normal EPT page
fault processing, especially for multiple vCPUs/pages, so the in-flight time
increases while the HW keeps walking the EPT for translations. Would it bring
any side effect, or is there a way to mitigate it?

Thanks!



* Re: [PATCH RESEND v4 8/9] KVM: MMU: Enable Lazy mode SPPT setup
  2019-09-04 13:49       ` Yang Weijiang
@ 2019-09-09 17:10         ` Paolo Bonzini
  2019-09-11  0:23           ` Yang Weijiang
  0 siblings, 1 reply; 40+ messages in thread
From: Paolo Bonzini @ 2019-09-09 17:10 UTC (permalink / raw)
  To: Yang Weijiang
  Cc: kvm, linux-kernel, sean.j.christopherson, mst, rkrcmar, jmattson,
	yu.c.zhang, alazar

On 04/09/19 15:49, Yang Weijiang wrote:
>>> This would not enable SPP if the guest is backed by huge pages.
>>> Instead, either the PT_PAGE_TABLE_LEVEL level must be forced for all
>>> pages covered by SPP ranges, or (better) kvm_enable_spp_protection must
>>> be able to cover multiple pages at once.
>>>
>>> Paolo
>> OK, I'll figure out how to make it, thanks!
> Hi, Paolo,
> Regarding this change, I have some concerns, splitting EPT huge page
> entries(e.g., 1GB page)will take long time compared with normal EPT page
> fault processing, especially for multiple vcpus/pages,so the in-flight time increases,
> but HW walks EPT for translations in the meantime, would it bring any side effect? 
> or there's a way to mitigate it?

Sub-page permissions are only defined on EPT PTEs, not on large pages.
Therefore, in order to allow subpage permissions the EPT page tables
must already be split.
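
A sketch of the forced-split idea from earlier in the thread (is_spp_protected_gfn()
is a hypothetical helper, not code from the series):

    /* In the EPT page-fault path (sketch): never install a large mapping for a
     * gfn inside an SPP-protected range, so a 4K leaf PTE exists to carry the
     * SPP bit. */
    if (vcpu->kvm->arch.spp_active && is_spp_protected_gfn(vcpu->kvm, gfn))
            level = PT_PAGE_TABLE_LEVEL;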

Paolo


* Re: [PATCH RESEND v4 8/9] KVM: MMU: Enable Lazy mode SPPT setup
  2019-09-09 17:10         ` Paolo Bonzini
@ 2019-09-11  0:23           ` Yang Weijiang
  0 siblings, 0 replies; 40+ messages in thread
From: Yang Weijiang @ 2019-09-11  0:23 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Yang Weijiang, kvm, linux-kernel, sean.j.christopherson, mst,
	rkrcmar, jmattson, yu.c.zhang, alazar

On Mon, Sep 09, 2019 at 07:10:22PM +0200, Paolo Bonzini wrote:
> On 04/09/19 15:49, Yang Weijiang wrote:
> >>> This would not enable SPP if the guest is backed by huge pages.
> >>> Instead, either the PT_PAGE_TABLE_LEVEL level must be forced for all
> >>> pages covered by SPP ranges, or (better) kvm_enable_spp_protection must
> >>> be able to cover multiple pages at once.
> >>>
> >>> Paolo
> >> OK, I'll figure out how to make it, thanks!
> > Hi, Paolo,
> > Regarding this change, I have some concerns, splitting EPT huge page
> > entries(e.g., 1GB page)will take long time compared with normal EPT page
> > fault processing, especially for multiple vcpus/pages,so the in-flight time increases,
> > but HW walks EPT for translations in the meantime, would it bring any side effect? 
> > or there's a way to mitigate it?
> 
> Sub-page permissions are only defined on EPT PTEs, not on large pages.
> Therefore, in order to allow subpage permissions the EPT page tables
> must already be split.
> 
> Paolo
Thanks, I've added code to handle huge pages; it will be included in the
next version of the patch.



Thread overview: 40+ messages
2019-08-14  7:03 [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Yang Weijiang
2019-08-14  7:03 ` [PATCH RESEND v4 1/9] Documentation: Introduce EPT based Subpage Protection Yang Weijiang
2019-08-14  7:03 ` [PATCH RESEND v4 2/9] KVM: VMX: Add control flags for SPP enabling Yang Weijiang
2019-08-14  7:03 ` [PATCH RESEND v4 3/9] KVM: VMX: Implement functions for SPPT paging setup Yang Weijiang
2019-08-14  7:03 ` [PATCH RESEND v4 4/9] KVM: VMX: Introduce SPP access bitmap and operation functions Yang Weijiang
2019-08-14  7:03 ` [PATCH RESEND v4 5/9] KVM: VMX: Add init/set/get functions for SPP Yang Weijiang
2019-08-14 12:43   ` Vitaly Kuznetsov
2019-08-14 14:34     ` Yang Weijiang
2019-08-15 13:43     ` Yang Weijiang
2019-08-15 14:03       ` Vitaly Kuznetsov
2019-08-19 14:06         ` Yang Weijiang
2019-08-15 16:25       ` Jim Mattson
2019-08-15 16:38         ` Sean Christopherson
2019-08-16 13:31           ` Yang Weijiang
2019-08-16 18:19             ` Jim Mattson
2019-08-19  2:08               ` Yang Weijiang
2019-08-19 15:15                 ` Paolo Bonzini
2019-08-20 12:33                   ` Yang Weijiang
2019-08-19 15:05   ` Paolo Bonzini
2019-08-20 12:36     ` Yang Weijiang
2019-08-19 15:13   ` Paolo Bonzini
2019-08-20 13:09     ` Yang Weijiang
2019-08-14  7:04 ` [PATCH RESEND v4 6/9] KVM: VMX: Introduce SPP user-space IOCTLs Yang Weijiang
2019-08-14  7:04 ` [PATCH RESEND v4 7/9] KVM: VMX: Handle SPP induced vmexit and page fault Yang Weijiang
2019-08-19 14:43   ` Paolo Bonzini
2019-08-19 15:04     ` Paolo Bonzini
2019-08-20 13:44       ` Yang Weijiang
2019-08-22 13:17         ` Yang Weijiang
2019-08-22 16:38           ` Paolo Bonzini
2019-08-23  0:26             ` Yang Weijiang
2019-08-14  7:04 ` [PATCH RESEND v4 8/9] KVM: MMU: Enable Lazy mode SPPT setup Yang Weijiang
2019-08-19 14:46   ` Paolo Bonzini
2019-08-20 13:12     ` Yang Weijiang
2019-09-04 13:49       ` Yang Weijiang
2019-09-09 17:10         ` Paolo Bonzini
2019-09-11  0:23           ` Yang Weijiang
2019-08-14  7:04 ` [PATCH RESEND v4 9/9] KVM: MMU: Handle host memory remapping and reclaim Yang Weijiang
2019-08-14 12:36 ` [PATCH RESEND v4 0/9] Enable Sub-page Write Protection Support Paolo Bonzini
2019-08-14 14:02   ` Yang Weijiang
2019-08-14 14:06     ` Paolo Bonzini
