* [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling
@ 2023-07-19 14:41 Binbin Wu
  2023-07-19 14:41 ` [PATCH v10 1/9] KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK Binbin Wu
                   ` (10 more replies)
  0 siblings, 11 replies; 42+ messages in thread
From: Binbin Wu @ 2023-07-19 14:41 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: seanjc, pbonzini, chao.gao, kai.huang, David.Laight, robert.hu,
	guang.zeng, binbin.wu

===Feature Introduction===

Linear-address masking (LAM) [1] modifies the checking that is applied to
*64-bit* linear addresses, allowing software to use the untranslated address
bits for metadata; the metadata bits are masked off before the address is used
to access memory.

When the feature is virtualized and exposed to the guest, it can be used for
efficient address sanitizer (ASAN) implementations and for optimizations in JITs
and virtual machines.

Regarding which pointer bits are masked and can be used for metadata, LAM has 2
modes:
- LAM_48: metadata bits 62:48, i.e. LAM width of 15.
- LAM_57: metadata bits 62:57, i.e. LAM width of 6.

* For user pointers:
  CR3.LAM_U57 = CR3.LAM_U48 = 0, LAM is off;
  CR3.LAM_U57 = 1, LAM57 is active;
  CR3.LAM_U57 = 0 and CR3.LAM_U48 = 1, LAM48 is active.
* For supervisor pointers:
  CR4.LAM_SUP = 0, LAM is off;
  CR4.LAM_SUP = 1 with 5-level paging mode, LAM57 is active;
  CR4.LAM_SUP = 1 with 4-level paging mode, LAM48 is active.

The modified LAM canonicality check:
* LAM_S48                : [ 1 ][ metadata ][ 1 ]
                             63               47
* LAM_U48                : [ 0 ][ metadata ][ 0 ]
                             63               47
* LAM_S57                : [ 1 ][ metadata ][ 1 ]
                             63               56
* LAM_U57 + 5-lvl paging : [ 0 ][ metadata ][ 0 ]
                             63               56
* LAM_U57 + 4-lvl paging : [ 0 ][ metadata ][ 0...0 ]
                             63               56..47
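
The untagging these rules imply can be written compactly: replace the metadata
bits with copies of the bit just below them while preserving bit 63, then apply
the ordinary canonicality check. A minimal user-space C sketch (purely
illustrative; the helper names are made up, this is not the kernel code):

#include <stdint.h>
#include <stdbool.h>

/*
 * Replace metadata bits 62:lam_bit+1 with copies of bit lam_bit, preserving
 * bit 63.  lam_bit is 47 for LAM48 and 56 for LAM57.
 */
static uint64_t lam_untag(uint64_t va, int lam_bit)
{
	uint64_t sign = (uint64_t)((int64_t)(va << (63 - lam_bit)) >> (63 - lam_bit));

	return (sign & ~(1ULL << 63)) | (va & (1ULL << 63));
}

/*
 * The untagged address must then pass the ordinary canonicality check for
 * the paging mode (va_bits is 48 for 4-level, 57 for 5-level paging).
 */
static bool is_canonical(uint64_t va, int va_bits)
{
	return ((int64_t)va >> (va_bits - 1)) == ((int64_t)va >> 63);
}

A tagged address that violates the rules above (e.g. bit 63 != bit 47 under
LAM_S48) fails the canonicality check on the untagged result; that is the
"modified canonicality check".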

Note:
1. LAM applies only to data addresses, not to instruction fetches.
2. LAM identification of an address as user or supervisor is based solely on the
   value of pointer bit 63 and does not depend on the CPL.
3. LAM doesn't apply to writes to control registers or MSRs.
4. LAM masking applies before paging, so the faulting linear address in CR2
   doesn't contain the metadata.
5. The guest linear address saved in the VMCS doesn't contain metadata.
6. For user-mode addresses, it is possible that 5-level paging and LAM_U48 are
   both set; in this case, the effective usable linear address width is 48.
   (Currently, only LAM_U57 is enabled in the Linux kernel. [2])

===LAM KVM Design===
LAM KVM enabling includes the following parts:
- Feature Enumeration
  The LAM feature is enumerated by CPUID.7.1:EAX.LAM[bit 26] (see the sketch
  after this list).
  If hardware supports LAM and the host doesn't disable it explicitly (e.g. via
  clearcpuid), the LAM feature will be exposed to the userspace VMM.

- CR4 Virtualization
  LAM uses CR4.LAM_SUP (bit 28) to configure LAM on supervisor pointers.
  Add support to allow guests to set this new CR4 control bit to enable LAM on
  supervisor pointers.

- CR3 Virtualization
  LAM uses CR3.LAM_U48 (bit 62) and CR3.LAM_U57 (bit 61) to configure LAM on
  user pointers.
  Add support to allow guests to set these two new CR3 non-address control bits
  to enable LAM on user pointers.

- Modified Canonicality Check and Metadata Mask
  When LAM is enabled, 64-bit linear addresses may be tagged with metadata.
  Linear addresses must be checked for modified canonicality and untagged in
  instruction emulation and VMExit handlers when LAM is applicable.
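
For reference, the enumeration check itself is a single CPUID bit; a user-space
sketch using GCC/Clang's cpuid.h (illustrative only, not the KVM code):

#include <cpuid.h>
#include <stdbool.h>

/* LAM is enumerated by CPUID.(EAX=7,ECX=1):EAX[bit 26]. */
static bool cpu_has_lam(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx))
		return false;

	return !!(eax & (1U << 26));
}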

LAM support in SGX enclave mode needs additional enabling and is not
included in this patch series.

This patch series depends on "governed" X86_FEATURE framework from Sean.
https://lore.kernel.org/kvm/20230217231022.816138-2-seanjc@google.com/

This patch series depends on the patches that refactor the instruction emulation
flags, using flags to identify the access type of instructions, sent along with
the LASS patch series.
https://lore.kernel.org/kvm/20230719024558.8539-2-guang.zeng@intel.com/
https://lore.kernel.org/kvm/20230719024558.8539-3-guang.zeng@intel.com/
https://lore.kernel.org/kvm/20230719024558.8539-4-guang.zeng@intel.com/
https://lore.kernel.org/kvm/20230719024558.8539-5-guang.zeng@intel.com/


LAM QEMU patch:
https://lists.nongnu.org/archive/html/qemu-devel/2023-05/msg07843.html

LAM kvm-unit-tests patch:
https://lore.kernel.org/kvm/20230530024356.24870-1-binbin.wu@linux.intel.com/

===Test===
1. Add test cases in kvm-unit-tests [3] for LAM, including LAM_SUP and LAM_{U57,U48}.
   For supervisor pointers, the tests cover CR4.LAM_SUP bit toggling, memory/MMIO
   access with tagged pointers, and some special instructions (INVLPG, INVPCID,
   INVVPID); the INVVPID cases are also used to cover the VMX instruction VMExit path.
   For user pointers, the tests cover CR3 LAM bit toggling and memory/MMIO access
   with tagged pointers (sketched after this list).
   MMIO cases are used to trigger the instruction emulation path.
   Run the unit tests with the LAM feature both on and off (i.e. including negative cases).
   Run the unit tests in an L1 guest with the LAM feature both on and off.
2. Run the kernel LAM kselftests [2] in a guest, with both EPT=Y and EPT=N.
3. Launch a nested guest.
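
The core of the tagged-pointer cases can be sketched as follows (illustrative
only, not the actual kvm-unit-tests code): with CR3.LAM_U57 active, a user
pointer carrying metadata in bits 62:57 must access the same memory as the
untagged pointer.

#include <stdint.h>
#include <assert.h>

#define LAM_U57_MASK	(0x3fULL << 57)	/* metadata bits 62:57 */

static void test_lam_u57_tagged_access(int *var)
{
	/* Bit 63 stays 0, so the address is still treated as a user pointer. */
	int *tagged = (int *)((uintptr_t)var | LAM_U57_MASK);

	*var = 0x1234;
	/* With LAM_U57 active, the metadata bits are masked before translation. */
	assert(*tagged == 0x1234);
}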

All tests have passed in Simics environment.

[1] Intel ISE https://cdrdv2.intel.com/v1/dl/getContent/671368
    Chapter Linear Address Masking (LAM)
[2] https://lore.kernel.org/all/20230312112612.31869-9-kirill.shutemov@linux.intel.com/
[3] https://lore.kernel.org/kvm/20230530024356.24870-1-binbin.wu@linux.intel.com/

---
Changelog
v10:
- Split out the patch "Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK". [Sean]
- Split out the patch "Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality". [Sean]
- Use "KVM-governed feature framework" to track if guest can use LAM. [Sean]
- Use emulation flags to describe the access instead of making the flag a command. [Sean]
- Split the implementation of vmx_get_untagged_addr() for LAM from emulator and kvm_x86_ops definition. [Per Sean's comment for LASS]
- Some improvement of implementation in vmx_get_untagged_addr(). [Sean]

v9:
https://lore.kernel.org/kvm/20230606091842.13123-1-binbin.wu@linux.intel.com/

Binbin Wu (7):
  KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK
  KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
  KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
  KVM: x86: Virtualize CR3.LAM_{U48,U57}
  KVM: x86: Introduce get_untagged_addr() in kvm_x86_ops and call it in
    emulator
  KVM: VMX: Implement and wire get_untagged_addr() for LAM
  KVM: x86: Untag address for vmexit handlers when LAM applicable

Robert Hoo (2):
  KVM: x86: Virtualize CR4.LAM_SUP
  KVM: x86: Expose LAM feature to userspace VMM

 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  5 +++-
 arch/x86/kvm/cpuid.c               |  2 +-
 arch/x86/kvm/cpuid.h               |  8 ++++++
 arch/x86/kvm/emulate.c             |  2 +-
 arch/x86/kvm/governed_features.h   |  2 ++
 arch/x86/kvm/kvm_emulate.h         |  3 +++
 arch/x86/kvm/mmu.h                 |  8 ++++++
 arch/x86/kvm/mmu/mmu.c             |  2 +-
 arch/x86/kvm/mmu/mmu_internal.h    |  1 +
 arch/x86/kvm/mmu/paging_tmpl.h     |  2 +-
 arch/x86/kvm/svm/nested.c          |  4 +--
 arch/x86/kvm/vmx/nested.c          |  6 +++--
 arch/x86/kvm/vmx/sgx.c             |  1 +
 arch/x86/kvm/vmx/vmx.c             | 43 +++++++++++++++++++++++++++++-
 arch/x86/kvm/vmx/vmx.h             |  2 ++
 arch/x86/kvm/x86.c                 | 15 +++++++++--
 arch/x86/kvm/x86.h                 |  2 ++
 18 files changed, 97 insertions(+), 12 deletions(-)


base-commit: fdf0eaf11452d72945af31804e2a1048ee1b574c
prerequisite-patch-id: 3467bc611ce3774ba481ab72e187eba47000c01b
prerequisite-patch-id: 1bf4c9da384b39c92c21c467a5c6ed0d306ec266
prerequisite-patch-id: 226fd3d9a09ef80a5b8001a3bdc6fbf2c23d2a88
prerequisite-patch-id: 0c31cc0dec011d7e22efde1f7dde9847c86024d8
prerequisite-patch-id: f487db8bc77007679f4b0e670a9e487c1f63fcfe
-- 
2.25.1



* [PATCH v10 1/9] KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK
  2023-07-19 14:41 [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
@ 2023-07-19 14:41 ` Binbin Wu
  2023-08-16 21:00   ` Sean Christopherson
  2023-07-19 14:41 ` [PATCH v10 2/9] KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality Binbin Wu
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-07-19 14:41 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: seanjc, pbonzini, chao.gao, kai.huang, David.Laight, robert.hu,
	guang.zeng, binbin.wu

Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK.

No functional change intended.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
 arch/x86/kvm/mmu/mmu_internal.h | 1 +
 arch/x86/kvm/mmu/paging_tmpl.h  | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d39af5639ce9..7d2105432d66 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -21,6 +21,7 @@ extern bool dbg;
 #endif
 
 /* Page table builder macros common to shadow (host) PTEs and guest PTEs. */
+#define __PT_BASE_ADDR_MASK GENMASK_ULL(51, 12)
 #define __PT_LEVEL_SHIFT(level, bits_per_level)	\
 	(PAGE_SHIFT + ((level) - 1) * (bits_per_level))
 #define __PT_INDEX(address, level, bits_per_level) \
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 0662e0278e70..00c8193f5991 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -62,7 +62,7 @@
 #endif
 
 /* Common logic, but per-type values.  These also need to be undefined. */
-#define PT_BASE_ADDR_MASK	((pt_element_t)(((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1)))
+#define PT_BASE_ADDR_MASK	((pt_element_t)__PT_BASE_ADDR_MASK)
 #define PT_LVL_ADDR_MASK(lvl)	__PT_LVL_ADDR_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
 #define PT_LVL_OFFSET_MASK(lvl)	__PT_LVL_OFFSET_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
 #define PT_INDEX(addr, lvl)	__PT_INDEX(addr, lvl, PT_LEVEL_BITS)
-- 
2.25.1



* [PATCH v10 2/9] KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
  2023-07-19 14:41 [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
  2023-07-19 14:41 ` [PATCH v10 1/9] KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK Binbin Wu
@ 2023-07-19 14:41 ` Binbin Wu
  2023-07-20 23:53   ` Isaku Yamahata
  2023-07-19 14:41 ` [PATCH v10 3/9] KVM: x86: Use KVM-governed feature framework to track "LAM enabled" Binbin Wu
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-07-19 14:41 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: seanjc, pbonzini, chao.gao, kai.huang, David.Laight, robert.hu,
	guang.zeng, binbin.wu

Add and use kvm_vcpu_is_legal_cr3() to check CR3's legality, providing a clear
distinction between CR3 and GPA checks, so that kvm_vcpu_is_legal_cr3() can be
adjusted according to new feature(s).

No functional change intended.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
 arch/x86/kvm/cpuid.h      | 5 +++++
 arch/x86/kvm/svm/nested.c | 4 ++--
 arch/x86/kvm/vmx/nested.c | 4 ++--
 arch/x86/kvm/x86.c        | 4 ++--
 4 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index f61a2106ba90..8b26d946f3e3 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -283,4 +283,9 @@ static __always_inline bool guest_can_use(struct kvm_vcpu *vcpu,
 	return vcpu->arch.governed_features.enabled & kvm_governed_feature_bit(x86_feature);
 }
 
+static inline bool kvm_vcpu_is_legal_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
+{
+	return kvm_vcpu_is_legal_gpa(vcpu, cr3);
+}
+
 #endif
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 96936ddf1b3c..1df801a48451 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -311,7 +311,7 @@ static bool __nested_vmcb_check_save(struct kvm_vcpu *vcpu,
 	if ((save->efer & EFER_LME) && (save->cr0 & X86_CR0_PG)) {
 		if (CC(!(save->cr4 & X86_CR4_PAE)) ||
 		    CC(!(save->cr0 & X86_CR0_PE)) ||
-		    CC(kvm_vcpu_is_illegal_gpa(vcpu, save->cr3)))
+		    CC(!kvm_vcpu_is_legal_cr3(vcpu, save->cr3)))
 			return false;
 	}
 
@@ -520,7 +520,7 @@ static void nested_svm_transition_tlb_flush(struct kvm_vcpu *vcpu)
 static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 			       bool nested_npt, bool reload_pdptrs)
 {
-	if (CC(kvm_vcpu_is_illegal_gpa(vcpu, cr3)))
+	if (CC(!kvm_vcpu_is_legal_cr3(vcpu, cr3)))
 		return -EINVAL;
 
 	if (reload_pdptrs && !nested_npt && is_pae_paging(vcpu) &&
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 516391cc0d64..76c9904c6625 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1085,7 +1085,7 @@ static int nested_vmx_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 			       bool nested_ept, bool reload_pdptrs,
 			       enum vm_entry_failure_code *entry_failure_code)
 {
-	if (CC(kvm_vcpu_is_illegal_gpa(vcpu, cr3))) {
+	if (CC(!kvm_vcpu_is_legal_cr3(vcpu, cr3))) {
 		*entry_failure_code = ENTRY_FAIL_DEFAULT;
 		return -EINVAL;
 	}
@@ -2912,7 +2912,7 @@ static int nested_vmx_check_host_state(struct kvm_vcpu *vcpu,
 
 	if (CC(!nested_host_cr0_valid(vcpu, vmcs12->host_cr0)) ||
 	    CC(!nested_host_cr4_valid(vcpu, vmcs12->host_cr4)) ||
-	    CC(kvm_vcpu_is_illegal_gpa(vcpu, vmcs12->host_cr3)))
+	    CC(!kvm_vcpu_is_legal_cr3(vcpu, vmcs12->host_cr3)))
 		return -EINVAL;
 
 	if (CC(is_noncanonical_address(vmcs12->host_ia32_sysenter_esp, vcpu)) ||
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a6b9bea62fb8..6a6d08238e8d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1271,7 +1271,7 @@ int kvm_set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 	 * stuff CR3, e.g. for RSM emulation, and there is no guarantee that
 	 * the current vCPU mode is accurate.
 	 */
-	if (kvm_vcpu_is_illegal_gpa(vcpu, cr3))
+	if (!kvm_vcpu_is_legal_cr3(vcpu, cr3))
 		return 1;
 
 	if (is_pae_paging(vcpu) && !load_pdptrs(vcpu, cr3))
@@ -11449,7 +11449,7 @@ static bool kvm_is_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
 		 */
 		if (!(sregs->cr4 & X86_CR4_PAE) || !(sregs->efer & EFER_LMA))
 			return false;
-		if (kvm_vcpu_is_illegal_gpa(vcpu, sregs->cr3))
+		if (!kvm_vcpu_is_legal_cr3(vcpu, sregs->cr3))
 			return false;
 	} else {
 		/*
-- 
2.25.1



* [PATCH v10 3/9] KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
  2023-07-19 14:41 [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
  2023-07-19 14:41 ` [PATCH v10 1/9] KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK Binbin Wu
  2023-07-19 14:41 ` [PATCH v10 2/9] KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality Binbin Wu
@ 2023-07-19 14:41 ` Binbin Wu
  2023-08-16  3:46   ` Huang, Kai
  2023-07-19 14:41 ` [PATCH v10 4/9] KVM: x86: Virtualize CR4.LAM_SUP Binbin Wu
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-07-19 14:41 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: seanjc, pbonzini, chao.gao, kai.huang, David.Laight, robert.hu,
	guang.zeng, binbin.wu

Use the governed feature framework to track whether Linear Address Masking (LAM)
is "enabled", i.e. whether LAM can be used by the guest, so that guest_can_use()
can be used to support LAM virtualization.

LAM modifies the checking that is applied to 64-bit linear addresses, allowing
software to use the untranslated address bits for metadata; the metadata bits are
masked off before the address is used to access memory.
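
For context, once the governed bit is set from vmx_vcpu_after_set_cpuid() (see
the hunk below), later patches in this series simply query it; e.g. the CR3
legality check added in patch 5 does:

	if (guest_can_use(vcpu, X86_FEATURE_LAM))
		cr3 &= ~(X86_CR3_LAM_U48 | X86_CR3_LAM_U57);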

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
---
 arch/x86/kvm/governed_features.h | 2 ++
 arch/x86/kvm/vmx/vmx.c           | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
index 40ce8e6608cd..708578d60e6f 100644
--- a/arch/x86/kvm/governed_features.h
+++ b/arch/x86/kvm/governed_features.h
@@ -5,5 +5,7 @@ BUILD_BUG()
 
 #define KVM_GOVERNED_X86_FEATURE(x) KVM_GOVERNED_FEATURE(X86_FEATURE_##x)
 
+KVM_GOVERNED_X86_FEATURE(LAM)
+
 #undef KVM_GOVERNED_X86_FEATURE
 #undef KVM_GOVERNED_FEATURE
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0ecf4be2c6af..ae47303c88d7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7783,6 +7783,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 		vmx->msr_ia32_feature_control_valid_bits &=
 			~FEAT_CTL_SGX_LC_ENABLED;
 
+	if (boot_cpu_has(X86_FEATURE_LAM))
+		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_LAM);
+
 	/* Refresh #PF interception to account for MAXPHYADDR changes. */
 	vmx_update_exception_bitmap(vcpu);
 }
-- 
2.25.1



* [PATCH v10 4/9] KVM: x86: Virtualize CR4.LAM_SUP
  2023-07-19 14:41 [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
                   ` (2 preceding siblings ...)
  2023-07-19 14:41 ` [PATCH v10 3/9] KVM: x86: Use KVM-governed feature framework to track "LAM enabled" Binbin Wu
@ 2023-07-19 14:41 ` Binbin Wu
  2023-08-16 21:41   ` Sean Christopherson
  2023-07-19 14:41 ` [PATCH v10 5/9] KVM: x86: Virtualize CR3.LAM_{U48,U57} Binbin Wu
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-07-19 14:41 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: seanjc, pbonzini, chao.gao, kai.huang, David.Laight, robert.hu,
	guang.zeng, binbin.wu

From: Robert Hoo <robert.hu@linux.intel.com>

Add support to allow guests to set the new CR4 control bit to enable the new
Intel CPU feature Linear Address Masking (LAM) on supervisor pointers.

LAM modifies the checking that is applied to 64-bit linear addresses, allowing
software to use the untranslated address bits for metadata; the metadata bits are
masked off before the address is used to access memory. LAM uses CR4.LAM_SUP
(bit 28) to configure LAM for supervisor pointers. LAM also changes VMENTER to
allow the bit to be set in the VMCS's HOST_CR4 and GUEST_CR4 for virtualization.
Note that CR4.LAM_SUP is allowed to be set even outside 64-bit mode, but it will
not take effect since LAM only applies to 64-bit linear addresses.

Move CR4.LAM_SUP out of CR4_RESERVED_BITS so that its reservation depends on
whether the vCPU supports the LAM feature. Leave the bit intercepted to prevent
the guest from setting CR4.LAM_SUP if LAM is not exposed to the guest, and to
avoid a VMREAD every time KVM fetches its value, with the expectation that the
guest won't toggle the bit frequently.

Set the CR4.LAM_SUP bit in the emulated IA32_VMX_CR4_FIXED1 MSR to allow guests
to enable LAM for supervisor pointers in nested VMX operation.

Hardware is not required to do a TLB flush when CR4.LAM_SUP is toggled, so KVM
doesn't need to emulate a TLB flush for it.
There's no connection to other features or vmx_exec_controls, so no other code
is needed in {kvm,vmx}_set_cr4().

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 3 ++-
 arch/x86/kvm/vmx/vmx.c          | 3 +++
 arch/x86/kvm/x86.h              | 2 ++
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e8e1101a90c8..881a0be862e1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -125,7 +125,8 @@
 			  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
 			  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
 			  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
-			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
+			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
+			  | X86_CR4_LAM_SUP))
 
 #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ae47303c88d7..a0d6ea87a2d0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7646,6 +7646,9 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
 	cr4_fixed1_update(X86_CR4_UMIP,       ecx, feature_bit(UMIP));
 	cr4_fixed1_update(X86_CR4_LA57,       ecx, feature_bit(LA57));
 
+	entry = kvm_find_cpuid_entry_index(vcpu, 0x7, 1);
+	cr4_fixed1_update(X86_CR4_LAM_SUP,    eax, feature_bit(LAM));
+
 #undef cr4_fixed1_update
 }
 
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 82e3dafc5453..24e2b56356b8 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -528,6 +528,8 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
 		__reserved_bits |= X86_CR4_VMXE;        \
 	if (!__cpu_has(__c, X86_FEATURE_PCID))          \
 		__reserved_bits |= X86_CR4_PCIDE;       \
+	if (!__cpu_has(__c, X86_FEATURE_LAM))           \
+		__reserved_bits |= X86_CR4_LAM_SUP;     \
 	__reserved_bits;                                \
 })
 
-- 
2.25.1



* [PATCH v10 5/9] KVM: x86: Virtualize CR3.LAM_{U48,U57}
  2023-07-19 14:41 [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
                   ` (3 preceding siblings ...)
  2023-07-19 14:41 ` [PATCH v10 4/9] KVM: x86: Virtualize CR4.LAM_SUP Binbin Wu
@ 2023-07-19 14:41 ` Binbin Wu
  2023-08-16 21:44   ` Sean Christopherson
  2023-07-19 14:41 ` [PATCH v10 6/9] KVM: x86: Introduce get_untagged_addr() in kvm_x86_ops and call it in emulator Binbin Wu
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-07-19 14:41 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: seanjc, pbonzini, chao.gao, kai.huang, David.Laight, robert.hu,
	guang.zeng, binbin.wu

Add support to allow guests to set the two new CR3 non-address control bits to
enable the new Intel CPU feature Linear Address Masking (LAM) on user pointers.

LAM modifies the checking that is applied to 64-bit linear addresses, allowing
software to use the untranslated address bits for metadata; the metadata bits are
masked off before the address is used to access memory. LAM uses two new CR3
non-address bits, LAM_U48 (bit 62) and LAM_U57 (bit 61), to configure LAM for
user pointers. LAM also changes VMENTER to allow both bits to be set in the
VMCS's HOST_CR3 and GUEST_CR3 for virtualization.

When EPT is on, CR3 is not trapped by KVM and it's up to the guest to set either
of the two LAM control bits. However, when EPT is off, the actual CR3 used by the
guest is generated from the shadow MMU root, which is different from the CR3 that
is *set* by the guest, and KVM needs to manually apply any active control bits to
the VMCS's GUEST_CR3 based on the cached CR3 *seen* by the guest.

KVM manually checks the guest's CR3 to make sure it points to a valid guest
physical address (i.e. to support a smaller MAXPHYADDR in the guest). Extend this
check to allow the two LAM control bits to be set. After the check, the LAM bits
of the guest CR3 are stripped off to extract the guest physical address.

In the nested case, for a guest which supports LAM, both VMCS12's HOST_CR3 and
GUEST_CR3 are allowed to have the new LAM control bits set, i.e. when L0 enters
L1 to emulate a VMEXIT from L2 to L1 or when L0 enters L2 directly. KVM also
manually checks that VMCS12's HOST_CR3 and GUEST_CR3 are valid physical
addresses. Extend these checks to allow the new LAM control bits too.

Note, LAM doesn't have a global control bit to turn it on/off completely, but
purely depends on hardware's CPUID to determine whether it can be enabled. That
means, when EPT is on, even when KVM doesn't expose LAM to the guest, the guest
can still set LAM control bits in CR3 without causing problems. This is an
unfortunate virtualization hole. KVM could choose to intercept CR3 in this case
and inject a fault, but this would hurt performance when running a normal VM
without LAM support, which is undesirable. Just let the guest do such an illegal
thing, as the worst case is the guest being killed when KVM eventually finds out
about the illegal behaviour, and that is the guest's own fault.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 arch/x86/kvm/cpuid.h   | 3 +++
 arch/x86/kvm/mmu.h     | 8 ++++++++
 arch/x86/kvm/mmu/mmu.c | 2 +-
 arch/x86/kvm/vmx/vmx.c | 3 ++-
 4 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
index 8b26d946f3e3..274f41d2250b 100644
--- a/arch/x86/kvm/cpuid.h
+++ b/arch/x86/kvm/cpuid.h
@@ -285,6 +285,9 @@ static __always_inline bool guest_can_use(struct kvm_vcpu *vcpu,
 
 static inline bool kvm_vcpu_is_legal_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 {
+	if (guest_can_use(vcpu, X86_FEATURE_LAM))
+		cr3 &= ~(X86_CR3_LAM_U48 | X86_CR3_LAM_U57);
+
 	return kvm_vcpu_is_legal_gpa(vcpu, cr3);
 }
 
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 92d5a1924fc1..e92395e6b876 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -144,6 +144,14 @@ static inline unsigned long kvm_get_active_pcid(struct kvm_vcpu *vcpu)
 	return kvm_get_pcid(vcpu, kvm_read_cr3(vcpu));
 }
 
+static inline unsigned long kvm_get_active_cr3_lam_bits(struct kvm_vcpu *vcpu)
+{
+	if (!guest_can_use(vcpu, X86_FEATURE_LAM))
+		return 0;
+
+	return kvm_read_cr3(vcpu) & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57);
+}
+
 static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
 {
 	u64 root_hpa = vcpu->arch.mmu->root.hpa;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ec169f5c7dce..0285536346c1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3819,7 +3819,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	hpa_t root;
 
 	root_pgd = kvm_mmu_get_guest_pgd(vcpu, mmu);
-	root_gfn = root_pgd >> PAGE_SHIFT;
+	root_gfn = (root_pgd & __PT_BASE_ADDR_MASK) >> PAGE_SHIFT;
 
 	if (mmu_check_root(vcpu, root_gfn))
 		return 1;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a0d6ea87a2d0..bcee5dc3dd0b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3358,7 +3358,8 @@ static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 			update_guest_cr3 = false;
 		vmx_ept_load_pdptrs(vcpu);
 	} else {
-		guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu);
+		guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu) |
+		            kvm_get_active_cr3_lam_bits(vcpu);
 	}
 
 	if (update_guest_cr3)
-- 
2.25.1



* [PATCH v10 6/9] KVM: x86: Introduce get_untagged_addr() in kvm_x86_ops and call it in emulator
  2023-07-19 14:41 [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
                   ` (4 preceding siblings ...)
  2023-07-19 14:41 ` [PATCH v10 5/9] KVM: x86: Virtualize CR3.LAM_{U48,U57} Binbin Wu
@ 2023-07-19 14:41 ` Binbin Wu
  2023-07-19 14:41 ` [PATCH v10 7/9] KVM: VMX: Implement and wire get_untagged_addr() for LAM Binbin Wu
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 42+ messages in thread
From: Binbin Wu @ 2023-07-19 14:41 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: seanjc, pbonzini, chao.gao, kai.huang, David.Laight, robert.hu,
	guang.zeng, binbin.wu

Introduce a new interface, get_untagged_addr(), to kvm_x86_ops to untag the
metadata from a linear address.  Call the interface in the linearization path
of the instruction emulator for 64-bit mode.

When a feature like Intel Linear Address Masking or AMD Upper Address Ignore
is enabled, linear addresses may be tagged with metadata. Linear addresses
must be checked for modified canonicality and untagged in the instruction
emulator.

Introduce get_untagged_addr() in kvm_x86_ops to hide the vendor-specific code.
Pass 'flags' to allow the vendor-specific implementation to precisely identify
the access type.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    |  2 ++
 arch/x86/kvm/emulate.c             |  2 +-
 arch/x86/kvm/kvm_emulate.h         |  3 +++
 arch/x86/kvm/x86.c                 | 10 ++++++++++
 5 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 13bc212cd4bc..052652073a4b 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -133,6 +133,7 @@ KVM_X86_OP(msr_filter_changed)
 KVM_X86_OP(complete_emulated_msr)
 KVM_X86_OP(vcpu_deliver_sipi_vector)
 KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
+KVM_X86_OP(get_untagged_addr)
 
 #undef KVM_X86_OP
 #undef KVM_X86_OP_OPTIONAL
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 881a0be862e1..36267ee7806a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1743,6 +1743,8 @@ struct kvm_x86_ops {
 	 * Returns vCPU specific APICv inhibit reasons
 	 */
 	unsigned long (*vcpu_get_apicv_inhibit_reasons)(struct kvm_vcpu *vcpu);
+
+	gva_t (*get_untagged_addr)(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 9b4b3ce6d52a..25983ebd95fa 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -701,7 +701,7 @@ static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
 	*max_size = 0;
 	switch (mode) {
 	case X86EMUL_MODE_PROT64:
-		*linear = la;
+		*linear = la = ctxt->ops->get_untagged_addr(ctxt, la, flags);
 		va_bits = ctxt_virt_addr_bits(ctxt);
 		if (!__is_canonical_address(la, va_bits))
 			goto bad;
diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
index c944055091e1..790cf2dea75b 100644
--- a/arch/x86/kvm/kvm_emulate.h
+++ b/arch/x86/kvm/kvm_emulate.h
@@ -232,6 +232,9 @@ struct x86_emulate_ops {
 	int (*leave_smm)(struct x86_emulate_ctxt *ctxt);
 	void (*triple_fault)(struct x86_emulate_ctxt *ctxt);
 	int (*set_xcr)(struct x86_emulate_ctxt *ctxt, u32 index, u64 xcr);
+
+	gva_t (*get_untagged_addr)(struct x86_emulate_ctxt *ctxt, gva_t addr,
+				   unsigned int flags);
 };
 
 /* Type, address-of, and value of an instruction's operand. */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6a6d08238e8d..339a113b45af 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8300,6 +8300,15 @@ static void emulator_vm_bugged(struct x86_emulate_ctxt *ctxt)
 		kvm_vm_bugged(kvm);
 }
 
+static gva_t emulator_get_untagged_addr(struct x86_emulate_ctxt *ctxt,
+					gva_t addr, unsigned int flags)
+{
+	if (!kvm_x86_ops.get_untagged_addr)
+		return addr;
+
+	return static_call(kvm_x86_get_untagged_addr)(emul_to_vcpu(ctxt), addr, flags);
+}
+
 static const struct x86_emulate_ops emulate_ops = {
 	.vm_bugged           = emulator_vm_bugged,
 	.read_gpr            = emulator_read_gpr,
@@ -8345,6 +8354,7 @@ static const struct x86_emulate_ops emulate_ops = {
 	.leave_smm           = emulator_leave_smm,
 	.triple_fault        = emulator_triple_fault,
 	.set_xcr             = emulator_set_xcr,
+	.get_untagged_addr   = emulator_get_untagged_addr,
 };
 
 static void toggle_interruptibility(struct kvm_vcpu *vcpu, u32 mask)
-- 
2.25.1



* [PATCH v10 7/9] KVM: VMX: Implement and wire get_untagged_addr() for LAM
  2023-07-19 14:41 [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
                   ` (5 preceding siblings ...)
  2023-07-19 14:41 ` [PATCH v10 6/9] KVM: x86: Introduce get_untagged_addr() in kvm_x86_ops and call it in emulator Binbin Wu
@ 2023-07-19 14:41 ` Binbin Wu
  2023-08-16 22:01   ` Sean Christopherson
  2023-07-19 14:41 ` [PATCH v10 8/9] KVM: x86: Untag address for vmexit handlers when LAM applicable Binbin Wu
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-07-19 14:41 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: seanjc, pbonzini, chao.gao, kai.huang, David.Laight, robert.hu,
	guang.zeng, binbin.wu

Implement the LAM version of get_untagged_addr() in VMX.

Skip address untagging for instruction fetches, branch targets, and the operand
of INVLPG, to which LAM doesn't apply.  Also skip address untagging for implicit
system accesses, since LAM doesn't apply to the loading of base addresses of
memory management registers and segment registers; their values still need to be
canonical (for now, the get_untagged_addr() interface is not called for implicit
system accesses; this is just future-proofing).

Co-developed-by: Robert Hoo <robert.hu@linux.intel.com>
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bcee5dc3dd0b..abf6d42672cd 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8177,6 +8177,39 @@ static void vmx_vm_destroy(struct kvm *kvm)
 	free_pages((unsigned long)kvm_vmx->pid_table, vmx_get_pid_table_order(kvm));
 }
 
+static gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva,
+			    unsigned int flags)
+{
+	unsigned long cr3_bits;
+	int lam_bit;
+
+	if (flags & (X86EMUL_F_FETCH | X86EMUL_F_BRANCH | X86EMUL_F_IMPLICIT |
+	             X86EMUL_F_INVTLB))
+		return gva;
+
+	if (!is_64_bit_mode(vcpu))
+		return gva;
+
+	/*
+	 * Bit 63 determines if the address should be treated as user address
+	 * or a supervisor address.
+	 */
+	if (!(gva & BIT_ULL(63))) {
+		cr3_bits = kvm_get_active_cr3_lam_bits(vcpu);
+		if (!(cr3_bits & (X86_CR3_LAM_U57 | X86_CR3_LAM_U48)))
+			return gva;
+
+		/* LAM_U48 is ignored if LAM_U57 is set. */
+		lam_bit = cr3_bits & X86_CR3_LAM_U57 ? 56 : 47;
+	} else {
+		if (!kvm_is_cr4_bit_set(vcpu, X86_CR4_LAM_SUP))
+			return gva;
+
+		lam_bit = kvm_is_cr4_bit_set(vcpu, X86_CR4_LA57) ? 56 : 47;
+	}
+	return (sign_extend64(gva, lam_bit) & ~BIT_ULL(63)) | (gva & BIT_ULL(63));
+}
+
 static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.name = KBUILD_MODNAME,
 
@@ -8316,6 +8349,8 @@ static struct kvm_x86_ops vmx_x86_ops __initdata = {
 	.complete_emulated_msr = kvm_complete_insn_gp,
 
 	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
+
+	.get_untagged_addr = vmx_get_untagged_addr,
 };
 
 static unsigned int vmx_handle_intel_pt_intr(void)
-- 
2.25.1



* [PATCH v10 8/9] KVM: x86: Untag address for vmexit handlers when LAM applicable
  2023-07-19 14:41 [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
                   ` (6 preceding siblings ...)
  2023-07-19 14:41 ` [PATCH v10 7/9] KVM: VMX: Implement and wire get_untagged_addr() for LAM Binbin Wu
@ 2023-07-19 14:41 ` Binbin Wu
  2023-08-16 21:49   ` Sean Christopherson
  2023-08-16 22:10   ` Sean Christopherson
  2023-07-19 14:41 ` [PATCH v10 9/9] KVM: x86: Expose LAM feature to userspace VMM Binbin Wu
                   ` (2 subsequent siblings)
  10 siblings, 2 replies; 42+ messages in thread
From: Binbin Wu @ 2023-07-19 14:41 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: seanjc, pbonzini, chao.gao, kai.huang, David.Laight, robert.hu,
	guang.zeng, binbin.wu

Untag addresses of 64-bit memory operands in VMExit handlers when LAM is applicable.

For VMExit handlers related to 64-bit linear addresses:
- Cases that need address untagging (handled in get_vmx_mem_address()):
  Operand(s) of VMX instructions and INVPCID.
  Operand(s) of SGX ENCLS.
- Cases LAM doesn't apply to (no change needed):
  Operand of INVLPG.
  Linear address in the INVPCID descriptor.
  Linear address in the INVVPID descriptor.
  BASEADDR specified in the SECS of ECREATE.

Note:
LAM doesn't apply to writes to control registers or MSRs.
LAM masking applies before paging, so the faulting linear address in CR2
doesn't contain the metadata.
The guest linear address saved in the VMCS doesn't contain metadata.

Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
---
 arch/x86/kvm/vmx/nested.c | 2 ++
 arch/x86/kvm/vmx/sgx.c    | 1 +
 arch/x86/kvm/vmx/vmx.c    | 3 +--
 arch/x86/kvm/vmx/vmx.h    | 2 ++
 arch/x86/kvm/x86.c        | 1 +
 5 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 76c9904c6625..bd2c8936953a 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4980,6 +4980,7 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
 		else
 			*ret = off;
 
+		*ret = vmx_get_untagged_addr(vcpu, *ret, 0);
 		/* Long mode: #GP(0)/#SS(0) if the memory address is in a
 		 * non-canonical form. This is the only check on the memory
 		 * destination for long mode!
@@ -5797,6 +5798,7 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
 	vpid02 = nested_get_vpid02(vcpu);
 	switch (type) {
 	case VMX_VPID_EXTENT_INDIVIDUAL_ADDR:
+		/* LAM doesn't apply to the address in descriptor of invvpid */
 		if (!operand.vpid ||
 		    is_noncanonical_address(operand.gla, vcpu))
 			return nested_vmx_fail(vcpu,
diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
index 3e822e582497..6fef01e0536e 100644
--- a/arch/x86/kvm/vmx/sgx.c
+++ b/arch/x86/kvm/vmx/sgx.c
@@ -37,6 +37,7 @@ static int sgx_get_encls_gva(struct kvm_vcpu *vcpu, unsigned long offset,
 	if (!IS_ALIGNED(*gva, alignment)) {
 		fault = true;
 	} else if (likely(is_64_bit_mode(vcpu))) {
+		*gva = vmx_get_untagged_addr(vcpu, *gva, 0);
 		fault = is_noncanonical_address(*gva, vcpu);
 	} else {
 		*gva &= 0xffffffff;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index abf6d42672cd..f18e610c4363 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -8177,8 +8177,7 @@ static void vmx_vm_destroy(struct kvm *kvm)
 	free_pages((unsigned long)kvm_vmx->pid_table, vmx_get_pid_table_order(kvm));
 }
 
-static gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva,
-			    unsigned int flags)
+gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags)
 {
 	unsigned long cr3_bits;
 	int lam_bit;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 32384ba38499..6fb612355769 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -421,6 +421,8 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
 u64 vmx_get_l2_tsc_offset(struct kvm_vcpu *vcpu);
 u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);
 
+gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
+
 static inline void vmx_set_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr,
 					     int type, bool value)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 339a113b45af..d2a0cdfb77a5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13370,6 +13370,7 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
 
 	switch (type) {
 	case INVPCID_TYPE_INDIV_ADDR:
+		/* LAM doesn't apply to the address in descriptor of invpcid */
 		if ((!pcid_enabled && (operand.pcid != 0)) ||
 		    is_noncanonical_address(operand.gla, vcpu)) {
 			kvm_inject_gp(vcpu, 0);
-- 
2.25.1



* [PATCH v10 9/9] KVM: x86: Expose LAM feature to userspace VMM
  2023-07-19 14:41 [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
                   ` (7 preceding siblings ...)
  2023-07-19 14:41 ` [PATCH v10 8/9] KVM: x86: Untag address for vmexit handlers when LAM applicable Binbin Wu
@ 2023-07-19 14:41 ` Binbin Wu
  2023-08-16 21:53   ` Sean Christopherson
  2023-08-15  2:05 ` [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
  2023-08-16 22:25 ` Sean Christopherson
  10 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-07-19 14:41 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: seanjc, pbonzini, chao.gao, kai.huang, David.Laight, robert.hu,
	guang.zeng, binbin.wu

From: Robert Hoo <robert.hu@linux.intel.com>

The LAM feature is enumerated by CPUID.7.1:EAX.LAM[bit 26].
Expose the feature to userspace as the final step, after the following
support is in place:
- CR4.LAM_SUP virtualization
- CR3.LAM_U48 and CR3.LAM_U57 virtualization
- Checking and untagging of 64-bit linear addresses when LAM applies, in
  instruction emulation and VMExit handlers.

Exposing LAM support for SGX enclave mode is not done yet. SGX LAM support is
enumerated in SGX's own CPUID, and there's no hard requirement that it must be
supported when LAM is reported in CPUID leaf 0x7.

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---
 arch/x86/kvm/cpuid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 7ebf3ce1bb5f..21d525b01d45 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -645,7 +645,7 @@ void kvm_set_cpu_caps(void)
 	kvm_cpu_cap_mask(CPUID_7_1_EAX,
 		F(AVX_VNNI) | F(AVX512_BF16) | F(CMPCCXADD) |
 		F(FZRM) | F(FSRS) | F(FSRC) |
-		F(AMX_FP16) | F(AVX_IFMA)
+		F(AMX_FP16) | F(AVX_IFMA) | F(LAM)
 	);
 
 	kvm_cpu_cap_init_kvm_defined(CPUID_7_1_EDX,
-- 
2.25.1



* Re: [PATCH v10 2/9] KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
  2023-07-19 14:41 ` [PATCH v10 2/9] KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality Binbin Wu
@ 2023-07-20 23:53   ` Isaku Yamahata
  2023-07-21  2:20     ` Binbin Wu
  0 siblings, 1 reply; 42+ messages in thread
From: Isaku Yamahata @ 2023-07-20 23:53 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, seanjc, pbonzini, chao.gao, kai.huang,
	David.Laight, robert.hu, guang.zeng, isaku.yamahata

On Wed, Jul 19, 2023 at 10:41:24PM +0800,
Binbin Wu <binbin.wu@linux.intel.com> wrote:

> Add and use kvm_vcpu_is_legal_cr3() to check CR3's legality to provide
> a clear distinction b/t CR3 and GPA checks. So that kvm_vcpu_is_legal_cr3()
> can be adjusted according to new feature(s).
> 
> No functional change intended.
> 
> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> ---
>  arch/x86/kvm/cpuid.h      | 5 +++++
>  arch/x86/kvm/svm/nested.c | 4 ++--
>  arch/x86/kvm/vmx/nested.c | 4 ++--
>  arch/x86/kvm/x86.c        | 4 ++--
>  4 files changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index f61a2106ba90..8b26d946f3e3 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -283,4 +283,9 @@ static __always_inline bool guest_can_use(struct kvm_vcpu *vcpu,
>  	return vcpu->arch.governed_features.enabled & kvm_governed_feature_bit(x86_feature);
>  }
>  
> +static inline bool kvm_vcpu_is_legal_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
> +{
> +	return kvm_vcpu_is_legal_gpa(vcpu, cr3);
> +}
> +

Only one user of kvm_vcpu_is_illegal_gpa() remains.  Can we remove it by
replacing it with !kvm_vcpu_is_legal_gpa()?
-- 
Isaku Yamahata <isaku.yamahata@gmail.com>


* Re: [PATCH v10 2/9] KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
  2023-07-20 23:53   ` Isaku Yamahata
@ 2023-07-21  2:20     ` Binbin Wu
  2023-07-21 15:03       ` Sean Christopherson
  0 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-07-21  2:20 UTC (permalink / raw)
  To: Isaku Yamahata
  Cc: kvm, linux-kernel, seanjc, pbonzini, chao.gao, kai.huang,
	David.Laight, robert.hu, guang.zeng



On 7/21/2023 7:53 AM, Isaku Yamahata wrote:
> On Wed, Jul 19, 2023 at 10:41:24PM +0800,
> Binbin Wu <binbin.wu@linux.intel.com> wrote:
>
>> Add and use kvm_vcpu_is_legal_cr3() to check CR3's legality to provide
>> a clear distinction b/t CR3 and GPA checks. So that kvm_vcpu_is_legal_cr3()
>> can be adjusted according to new feature(s).
>>
>> No functional change intended.
>>
>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
>> ---
>>   arch/x86/kvm/cpuid.h      | 5 +++++
>>   arch/x86/kvm/svm/nested.c | 4 ++--
>>   arch/x86/kvm/vmx/nested.c | 4 ++--
>>   arch/x86/kvm/x86.c        | 4 ++--
>>   4 files changed, 11 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
>> index f61a2106ba90..8b26d946f3e3 100644
>> --- a/arch/x86/kvm/cpuid.h
>> +++ b/arch/x86/kvm/cpuid.h
>> @@ -283,4 +283,9 @@ static __always_inline bool guest_can_use(struct kvm_vcpu *vcpu,
>>   	return vcpu->arch.governed_features.enabled & kvm_governed_feature_bit(x86_feature);
>>   }
>>   
>> +static inline bool kvm_vcpu_is_legal_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
>> +{
>> +	return kvm_vcpu_is_legal_gpa(vcpu, cr3);
>> +}
>> +
> The remaining user of kvm_vcpu_is_illegal_gpa() is one left.  Can we remove it
> by replacing !kvm_vcpu_is_legal_gpa()?

There are still two callsites of kvm_vcpu_is_illegal_gpa() left (based
on Linux 6.5-rc2), in handle_ept_violation() and nested_vmx_check_eptp().
But they could be replaced by !kvm_vcpu_is_legal_gpa(), and then
kvm_vcpu_is_illegal_gpa() could be removed.
I am neutral on this.




* Re: [PATCH v10 2/9] KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
  2023-07-21  2:20     ` Binbin Wu
@ 2023-07-21 15:03       ` Sean Christopherson
  2023-07-24  2:07         ` Binbin Wu
  0 siblings, 1 reply; 42+ messages in thread
From: Sean Christopherson @ 2023-07-21 15:03 UTC (permalink / raw)
  To: Binbin Wu
  Cc: Isaku Yamahata, kvm, linux-kernel, pbonzini, chao.gao, kai.huang,
	David.Laight, robert.hu, guang.zeng

On Fri, Jul 21, 2023, Binbin Wu wrote:
> 
> 
> On 7/21/2023 7:53 AM, Isaku Yamahata wrote:
> > On Wed, Jul 19, 2023 at 10:41:24PM +0800,
> > Binbin Wu <binbin.wu@linux.intel.com> wrote:
> > 
> > > Add and use kvm_vcpu_is_legal_cr3() to check CR3's legality to provide
> > > a clear distinction b/t CR3 and GPA checks. So that kvm_vcpu_is_legal_cr3()
> > > can be adjusted according to new feature(s).
> > > 
> > > No functional change intended.
> > > 
> > > Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> > > ---
> > >   arch/x86/kvm/cpuid.h      | 5 +++++
> > >   arch/x86/kvm/svm/nested.c | 4 ++--
> > >   arch/x86/kvm/vmx/nested.c | 4 ++--
> > >   arch/x86/kvm/x86.c        | 4 ++--
> > >   4 files changed, 11 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> > > index f61a2106ba90..8b26d946f3e3 100644
> > > --- a/arch/x86/kvm/cpuid.h
> > > +++ b/arch/x86/kvm/cpuid.h
> > > @@ -283,4 +283,9 @@ static __always_inline bool guest_can_use(struct kvm_vcpu *vcpu,
> > >   	return vcpu->arch.governed_features.enabled & kvm_governed_feature_bit(x86_feature);
> > >   }
> > > +static inline bool kvm_vcpu_is_legal_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
> > > +{
> > > +	return kvm_vcpu_is_legal_gpa(vcpu, cr3);
> > > +}
> > > +
> > The remaining user of kvm_vcpu_is_illegal_gpa() is one left.  Can we remove it
> > by replacing !kvm_vcpu_is_legal_gpa()?
> 
> There are still two callsites of kvm_vcpu_is_illegal_gpa() left (basing on
> Linux 6.5-rc2), in handle_ept_violation() and nested_vmx_check_eptp().
> But they could be replaced by !kvm_vcpu_is_legal_gpa() and then remove
> kvm_vcpu_is_illegal_gpa().
> I am neutral to this.

I'm largely neutral on this as well, though I do like the idea of having only
"legal" APIs.  I think it makes sense to throw together a patch; we can always
ignore the patch if we end up deciding to keep kvm_vcpu_is_illegal_gpa().


* Re: [PATCH v10 2/9] KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
  2023-07-21 15:03       ` Sean Christopherson
@ 2023-07-24  2:07         ` Binbin Wu
  2023-07-25 16:05           ` Sean Christopherson
  0 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-07-24  2:07 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Isaku Yamahata, kvm, linux-kernel, pbonzini, chao.gao, kai.huang,
	David.Laight, robert.hu, guang.zeng



On 7/21/2023 11:03 PM, Sean Christopherson wrote:
> On Fri, Jul 21, 2023, Binbin Wu wrote:
>>
>> On 7/21/2023 7:53 AM, Isaku Yamahata wrote:
>>> On Wed, Jul 19, 2023 at 10:41:24PM +0800,
>>> Binbin Wu <binbin.wu@linux.intel.com> wrote:
>>>
>>>> Add and use kvm_vcpu_is_legal_cr3() to check CR3's legality to provide
>>>> a clear distinction b/t CR3 and GPA checks. So that kvm_vcpu_is_legal_cr3()
>>>> can be adjusted according to new feature(s).
>>>>
>>>> No functional change intended.
>>>>
>>>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
>>>> ---
>>>>    arch/x86/kvm/cpuid.h      | 5 +++++
>>>>    arch/x86/kvm/svm/nested.c | 4 ++--
>>>>    arch/x86/kvm/vmx/nested.c | 4 ++--
>>>>    arch/x86/kvm/x86.c        | 4 ++--
>>>>    4 files changed, 11 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
>>>> index f61a2106ba90..8b26d946f3e3 100644
>>>> --- a/arch/x86/kvm/cpuid.h
>>>> +++ b/arch/x86/kvm/cpuid.h
>>>> @@ -283,4 +283,9 @@ static __always_inline bool guest_can_use(struct kvm_vcpu *vcpu,
>>>>    	return vcpu->arch.governed_features.enabled & kvm_governed_feature_bit(x86_feature);
>>>>    }
>>>> +static inline bool kvm_vcpu_is_legal_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
>>>> +{
>>>> +	return kvm_vcpu_is_legal_gpa(vcpu, cr3);
>>>> +}
>>>> +
>>> The remaining user of kvm_vcpu_is_illegal_gpa() is one left.  Can we remove it
>>> by replacing !kvm_vcpu_is_legal_gpa()?
>> There are still two callsites of kvm_vcpu_is_illegal_gpa() left (basing on
>> Linux 6.5-rc2), in handle_ept_violation() and nested_vmx_check_eptp().
>> But they could be replaced by !kvm_vcpu_is_legal_gpa() and then remove
>> kvm_vcpu_is_illegal_gpa().
>> I am neutral to this.
> I'm largely neutral on this as well, though I do like the idea of having only
> "legal" APIs.  I think it makes sense to throw together a patch, we can always
> ignore the patch if end we up deciding to keep kvm_vcpu_is_illegal_gpa().
OK. Thanks for the advice.
Should I send a separate patch or add a patch to remove
kvm_vcpu_is_illegal_gpa() in the next version?




* Re: [PATCH v10 2/9] KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
  2023-07-24  2:07         ` Binbin Wu
@ 2023-07-25 16:05           ` Sean Christopherson
  0 siblings, 0 replies; 42+ messages in thread
From: Sean Christopherson @ 2023-07-25 16:05 UTC (permalink / raw)
  To: Binbin Wu
  Cc: Isaku Yamahata, kvm, linux-kernel, pbonzini, chao.gao, kai.huang,
	David.Laight, robert.hu, guang.zeng

On Mon, Jul 24, 2023, Binbin Wu wrote:
> 
> 
> On 7/21/2023 11:03 PM, Sean Christopherson wrote:
> > On Fri, Jul 21, 2023, Binbin Wu wrote:
> > > 
> > > On 7/21/2023 7:53 AM, Isaku Yamahata wrote:
> > > > On Wed, Jul 19, 2023 at 10:41:24PM +0800,
> > > > Binbin Wu <binbin.wu@linux.intel.com> wrote:
> > > > 
> > > > > Add and use kvm_vcpu_is_legal_cr3() to check CR3's legality to provide
> > > > > a clear distinction b/t CR3 and GPA checks. So that kvm_vcpu_is_legal_cr3()
> > > > > can be adjusted according to new feature(s).
> > > > > 
> > > > > No functional change intended.
> > > > > 
> > > > > Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> > > > > ---
> > > > >    arch/x86/kvm/cpuid.h      | 5 +++++
> > > > >    arch/x86/kvm/svm/nested.c | 4 ++--
> > > > >    arch/x86/kvm/vmx/nested.c | 4 ++--
> > > > >    arch/x86/kvm/x86.c        | 4 ++--
> > > > >    4 files changed, 11 insertions(+), 6 deletions(-)
> > > > > 
> > > > > diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> > > > > index f61a2106ba90..8b26d946f3e3 100644
> > > > > --- a/arch/x86/kvm/cpuid.h
> > > > > +++ b/arch/x86/kvm/cpuid.h
> > > > > @@ -283,4 +283,9 @@ static __always_inline bool guest_can_use(struct kvm_vcpu *vcpu,
> > > > >    	return vcpu->arch.governed_features.enabled & kvm_governed_feature_bit(x86_feature);
> > > > >    }
> > > > > +static inline bool kvm_vcpu_is_legal_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
> > > > > +{
> > > > > +	return kvm_vcpu_is_legal_gpa(vcpu, cr3);
> > > > > +}
> > > > > +
> > > > The remaining user of kvm_vcpu_is_illegal_gpa() is one left.  Can we remove it
> > > > by replacing !kvm_vcpu_is_legal_gpa()?
> > > There are still two callsites of kvm_vcpu_is_illegal_gpa() left (basing on
> > > Linux 6.5-rc2), in handle_ept_violation() and nested_vmx_check_eptp().
> > > But they could be replaced by !kvm_vcpu_is_legal_gpa() and then remove
> > > kvm_vcpu_is_illegal_gpa().
> > > I am neutral to this.
> > I'm largely neutral on this as well, though I do like the idea of having only
> > "legal" APIs.  I think it makes sense to throw together a patch, we can always
> > ignore the patch if end we up deciding to keep kvm_vcpu_is_illegal_gpa().
> OK. Thanks for the advice.
> Should I send a seperate patch or add a patch to remove
> kvm_vcpu_is_illegal_gpa() in next version?

Add a patch in the next version; eliminating kvm_vcpu_is_illegal_gpa() without
the context of this series probably isn't worth the churn.


* Re: [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling
  2023-07-19 14:41 [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
                   ` (8 preceding siblings ...)
  2023-07-19 14:41 ` [PATCH v10 9/9] KVM: x86: Expose LAM feature to userspace VMM Binbin Wu
@ 2023-08-15  2:05 ` Binbin Wu
  2023-08-15 23:49   ` Sean Christopherson
  2023-08-16 22:25 ` Sean Christopherson
  10 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-08-15  2:05 UTC (permalink / raw)
  To: kvm, linux-kernel
  Cc: seanjc, pbonzini, chao.gao, kai.huang, David.Laight, robert.hu,
	guang.zeng

Gentle ping.


On 7/19/2023 10:41 PM, Binbin Wu wrote:
> ===Feature Introduction===
>
> Linear-address masking (LAM) [1] modifies the checking that is applied to
> *64-bit* linear addresses, allowing software to use the untranslated address
> bits for metadata; the metadata bits are masked off before the address is used
> to access memory.
>
> When the feature is virtualized and exposed to the guest, it can be used for
> efficient address sanitizer (ASAN) implementations and for optimizations in JITs
> and virtual machines.
>
> Regarding which pointer bits are masked and can be used for metadata, LAM has 2
> modes:
> - LAM_48: metadata bits 62:48, i.e. LAM width of 15.
> - LAM_57: metadata bits 62:57, i.e. LAM width of 6.
>
> * For user pointers:
>    CR3.LAM_U57 = CR3.LAM_U48 = 0, LAM is off;
>    CR3.LAM_U57 = 1, LAM57 is active;
>    CR3.LAM_U57 = 0 and CR3.LAM_U48 = 1, LAM48 is active.
> * For supervisor pointers:
>    CR4.LAM_SUP = 0, LAM is off;
>    CR4.LAM_SUP = 1 with 5-level paging mode, LAM57 is active;
>    CR4.LAM_SUP = 1 with 4-level paging mode, LAM48 is active.
>
> The modified LAM canonicality check:
> * LAM_S48                : [ 1 ][ metadata ][ 1 ]
>                               63               47
> * LAM_U48                : [ 0 ][ metadata ][ 0 ]
>                               63               47
> * LAM_S57                : [ 1 ][ metadata ][ 1 ]
>                               63               56
> * LAM_U57 + 5-lvl paging : [ 0 ][ metadata ][ 0 ]
>                               63               56
> * LAM_U57 + 4-lvl paging : [ 0 ][ metadata ][ 0...0 ]
>                               63               56..47
>
> Note:
> 1. LAM applies only to data addresses, not to instruction fetches.
> 2. LAM identification of an address as user or supervisor is based solely on the
>     value of pointer bit 63 and does not depend on the CPL.
> 3. LAM doesn't apply to writes to control registers or MSRs.
> 4. LAM masking applies before paging, so the faulting linear address in CR2
>     doesn't contain the metadata.
> 5. The guest linear address saved in the VMCS doesn't contain metadata.
> 6. For user-mode addresses, it is possible that 5-level paging and LAM_U48 are
>     both set; in this case, the effective usable linear address width is 48.
>     (Currently, only LAM_U57 is enabled in the Linux kernel. [2])
>
> ===LAM KVM Design===
> LAM KVM enabling includes the following parts:
> - Feature Enumeration
>    LAM feature is enumerated by CPUID.7.1:EAX.LAM[bit 26].
>    If hardware supports LAM and host doesn't disable it explicitly (e.g. via
>    clearcpuid), LAM feature will be exposed to user VMM.
>
> - CR4 Virtualization
>    LAM uses CR4.LAM_SUP (bit 28) to configure LAM on supervisor pointers.
>    Add support to allow guests to set the new CR4 control bit for guests to enable
>    LAM on supervisor pointers.
>
> - CR3 Virtualization
>    LAM uses CR3.LAM_U48 (bit 62) and CR3.LAM_U57 (bit 61) to configure LAM on user
>    pointers.
>    Add support to allow guests to set two new CR3 non-address control bits for
>    guests to enable LAM on user pointers.
>
> - Modified Canonicality Check and Metadata Mask
>    When LAM is enabled, 64-bit linear address may be tagged with metadata. Linear
>    address should be checked for modified canonicality and untagged in instruction
>    emulation and VMExit handlers when LAM is applicable.
>
> LAM support in SGX enclave mode needs additional enabling and is not
> included in this patch series.
>
> This patch series depends on "governed" X86_FEATURE framework from Sean.
> https://lore.kernel.org/kvm/20230217231022.816138-2-seanjc@google.com/
>
> This patch series depends on the patches refactoring instruction emulation flags,
> which use flags to identify the access type of instructions, sent along with the LASS patch series.
> https://lore.kernel.org/kvm/20230719024558.8539-2-guang.zeng@intel.com/
> https://lore.kernel.org/kvm/20230719024558.8539-3-guang.zeng@intel.com/
> https://lore.kernel.org/kvm/20230719024558.8539-4-guang.zeng@intel.com/
> https://lore.kernel.org/kvm/20230719024558.8539-5-guang.zeng@intel.com/
>
>
> LAM QEMU patch:
> https://lists.nongnu.org/archive/html/qemu-devel/2023-05/msg07843.html
>
> LAM kvm-unit-tests patch:
> https://lore.kernel.org/kvm/20230530024356.24870-1-binbin.wu@linux.intel.com/
>
> ===Test===
> 1. Add test cases in kvm-unit-tests [3] for LAM, including LAM_SUP and LAM_{U57,U48}.
>     For supervisor pointers, the tests cover CR4.LAM_SUP bit toggling, memory/MMIO
>     access with tagged pointers, and some special instructions (INVLPG, INVPCID,
>     INVVPID); the INVVPID cases are also used to cover the VMX instruction VMExit path.
>     For user pointers, the tests cover CR3 LAM bit toggling and memory/MMIO access
>     with tagged pointers.
>     MMIO cases are used to trigger the instruction emulation path.
>     Run the unit tests with the LAM feature both on and off (i.e. including negative cases).
>     Run the unit tests in an L1 guest with the LAM feature both on and off.
> 2. Run the kernel LAM kselftests [2] in the guest, with both EPT=Y and EPT=N.
> 3. Launch a nested guest.
>
> All tests have passed in Simics environment.
>
> [1] Intel ISE https://cdrdv2.intel.com/v1/dl/getContent/671368
>      Chapter Linear Address Masking (LAM)
> [2] https://lore.kernel.org/all/20230312112612.31869-9-kirill.shutemov@linux.intel.com/
> [3] https://lore.kernel.org/kvm/20230530024356.24870-1-binbin.wu@linux.intel.com/
>
> ---
> Changelog
> v10:
> - Split out the patch "Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK". [Sean]
> - Split out the patch "Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality". [Sean]
> - Use "KVM-governed feature framework" to track if guest can use LAM. [Sean]
> - Use emulation flags to describe the access instead of making the flag a command. [Sean]
> - Split the implementation of vmx_get_untagged_addr() for LAM from emulator and kvm_x86_ops definition. [Per Sean's comment for LASS]
> - Some improvement of implementation in vmx_get_untagged_addr(). [Sean]
>
> v9:
> https://lore.kernel.org/kvm/20230606091842.13123-1-binbin.wu@linux.intel.com/
>
> Binbin Wu (7):
>    KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK
>    KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
>    KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
>    KVM: x86: Virtualize CR3.LAM_{U48,U57}
>    KVM: x86: Introduce get_untagged_addr() in kvm_x86_ops and call it in
>      emulator
>    KVM: VMX: Implement and wire get_untagged_addr() for LAM
>    KVM: x86: Untag address for vmexit handlers when LAM applicable
>
> Robert Hoo (2):
>    KVM: x86: Virtualize CR4.LAM_SUP
>    KVM: x86: Expose LAM feature to userspace VMM
>
>   arch/x86/include/asm/kvm-x86-ops.h |  1 +
>   arch/x86/include/asm/kvm_host.h    |  5 +++-
>   arch/x86/kvm/cpuid.c               |  2 +-
>   arch/x86/kvm/cpuid.h               |  8 ++++++
>   arch/x86/kvm/emulate.c             |  2 +-
>   arch/x86/kvm/governed_features.h   |  2 ++
>   arch/x86/kvm/kvm_emulate.h         |  3 +++
>   arch/x86/kvm/mmu.h                 |  8 ++++++
>   arch/x86/kvm/mmu/mmu.c             |  2 +-
>   arch/x86/kvm/mmu/mmu_internal.h    |  1 +
>   arch/x86/kvm/mmu/paging_tmpl.h     |  2 +-
>   arch/x86/kvm/svm/nested.c          |  4 +--
>   arch/x86/kvm/vmx/nested.c          |  6 +++--
>   arch/x86/kvm/vmx/sgx.c             |  1 +
>   arch/x86/kvm/vmx/vmx.c             | 43 +++++++++++++++++++++++++++++-
>   arch/x86/kvm/vmx/vmx.h             |  2 ++
>   arch/x86/kvm/x86.c                 | 15 +++++++++--
>   arch/x86/kvm/x86.h                 |  2 ++
>   18 files changed, 97 insertions(+), 12 deletions(-)
>
>
> base-commit: fdf0eaf11452d72945af31804e2a1048ee1b574c
> prerequisite-patch-id: 3467bc611ce3774ba481ab72e187eba47000c01b
> prerequisite-patch-id: 1bf4c9da384b39c92c21c467a5c6ed0d306ec266
> prerequisite-patch-id: 226fd3d9a09ef80a5b8001a3bdc6fbf2c23d2a88
> prerequisite-patch-id: 0c31cc0dec011d7e22efde1f7dde9847c86024d8
> prerequisite-patch-id: f487db8bc77007679f4b0e670a9e487c1f63fcfe


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling
  2023-08-15  2:05 ` [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
@ 2023-08-15 23:49   ` Sean Christopherson
  0 siblings, 0 replies; 42+ messages in thread
From: Sean Christopherson @ 2023-08-15 23:49 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng

On Tue, Aug 15, 2023, Binbin Wu wrote:
> Gentle ping.

I expect to get to this tomorrow.  Sorry for the delay.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 3/9] KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
  2023-07-19 14:41 ` [PATCH v10 3/9] KVM: x86: Use KVM-governed feature framework to track "LAM enabled" Binbin Wu
@ 2023-08-16  3:46   ` Huang, Kai
  2023-08-16  7:08     ` Binbin Wu
  0 siblings, 1 reply; 42+ messages in thread
From: Huang, Kai @ 2023-08-16  3:46 UTC (permalink / raw)
  To: kvm, linux-kernel, binbin.wu
  Cc: robert.hu, pbonzini, Zeng, Guang, Christopherson,,
	Sean, Gao, Chao, David.Laight

On Wed, 2023-07-19 at 22:41 +0800, Binbin Wu wrote:
> Use the governed feature framework to track whether Linear Address Masking (LAM)
> is "enabled", i.e. whether LAM can be used by the guest, so that guest_can_use()
> can be used to support LAM virtualization.

Better to explain why a governed feature is used for LAM?  Is it because there
are hot path(s) calling guest_cpuid_has()?  Either way, some context on the why
would help here.

> 
> LAM modifies the checking that is applied to 64-bit linear addresses, allowing
> software to use the untranslated address bits for metadata and masks the
> metadata bits before using them as linear addresses to access memory.
> 
> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> ---
>  arch/x86/kvm/governed_features.h | 2 ++
>  arch/x86/kvm/vmx/vmx.c           | 3 +++
>  2 files changed, 5 insertions(+)
> 
> diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
> index 40ce8e6608cd..708578d60e6f 100644
> --- a/arch/x86/kvm/governed_features.h
> +++ b/arch/x86/kvm/governed_features.h
> @@ -5,5 +5,7 @@ BUILD_BUG()
>  
>  #define KVM_GOVERNED_X86_FEATURE(x) KVM_GOVERNED_FEATURE(X86_FEATURE_##x)
>  
> +KVM_GOVERNED_X86_FEATURE(LAM)
> +
>  #undef KVM_GOVERNED_X86_FEATURE
>  #undef KVM_GOVERNED_FEATURE
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 0ecf4be2c6af..ae47303c88d7 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7783,6 +7783,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
>  		vmx->msr_ia32_feature_control_valid_bits &=
>  			~FEAT_CTL_SGX_LC_ENABLED;
>  
> +	if (boot_cpu_has(X86_FEATURE_LAM))
> +		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_LAM);
> +

If you want to use boot_cpu_has(), it's better done in your last patch, i.e.
only set the cap bit when boot_cpu_has() is true, I suppose.

>  	/* Refresh #PF interception to account for MAXPHYADDR changes. */
>  	vmx_update_exception_bitmap(vcpu);
>  }


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 3/9] KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
  2023-08-16  3:46   ` Huang, Kai
@ 2023-08-16  7:08     ` Binbin Wu
  2023-08-16  9:47       ` Huang, Kai
  0 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-08-16  7:08 UTC (permalink / raw)
  To: Huang, Kai
  Cc: kvm, linux-kernel, robert.hu, pbonzini, Zeng, Guang,
	Christopherson,,
	Sean, Gao, Chao, David.Laight



On 8/16/2023 11:46 AM, Huang, Kai wrote:
> On Wed, 2023-07-19 at 22:41 +0800, Binbin Wu wrote:
>> Use the governed feature framework to track if Linear Address Masking (LAM)
>> is "enabled", i.e. if LAM can be used by the guest. So that guest_can_use()
>> can be used to support LAM virtualization.
> Better to explain why to use governed feature for LAM?  Is it because there's
> hot path(s) calling guest_cpuid_has()?  Anyway some context of why can help
> here.
Yes, to avoid calling guest_cpuid_has() in CR3 handling and instruction 
emulation paths.
I will add the context in the next version.
Thanks!
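
For context, a sketch of what the hot path looks like with the cached check
(the call site is the CR3 legality helper from this series; the comment is
illustrative):

	/*
	 * guest_cpuid_has() walks the vCPU's CPUID entry array on every call,
	 * whereas guest_can_use() reduces to a single test_bit() on a bitmap
	 * that is populated once, when the vCPU's CPUID is set.
	 */
	if (guest_can_use(vcpu, X86_FEATURE_LAM))
		cr3 &= ~(X86_CR3_LAM_U48 | X86_CR3_LAM_U57);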

>
>> LAM modifies the checking that is applied to 64-bit linear addresses, allowing
>> software to use the untranslated address bits for metadata and masks the
>> metadata bits before using them as linear addresses to access memory.
>>
>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
>> ---
>>   arch/x86/kvm/governed_features.h | 2 ++
>>   arch/x86/kvm/vmx/vmx.c           | 3 +++
>>   2 files changed, 5 insertions(+)
>>
>> diff --git a/arch/x86/kvm/governed_features.h b/arch/x86/kvm/governed_features.h
>> index 40ce8e6608cd..708578d60e6f 100644
>> --- a/arch/x86/kvm/governed_features.h
>> +++ b/arch/x86/kvm/governed_features.h
>> @@ -5,5 +5,7 @@ BUILD_BUG()
>>   
>>   #define KVM_GOVERNED_X86_FEATURE(x) KVM_GOVERNED_FEATURE(X86_FEATURE_##x)
>>   
>> +KVM_GOVERNED_X86_FEATURE(LAM)
>> +
>>   #undef KVM_GOVERNED_X86_FEATURE
>>   #undef KVM_GOVERNED_FEATURE
>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> index 0ecf4be2c6af..ae47303c88d7 100644
>> --- a/arch/x86/kvm/vmx/vmx.c
>> +++ b/arch/x86/kvm/vmx/vmx.c
>> @@ -7783,6 +7783,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
>>   		vmx->msr_ia32_feature_control_valid_bits &=
>>   			~FEAT_CTL_SGX_LC_ENABLED;
>>   
>> +	if (boot_cpu_has(X86_FEATURE_LAM))
>> +		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_LAM);
>> +
> If you want to use boot_cpu_has(), it's better to be done at your last patch to
> only set the cap bit when boot_cpu_has() is true, I suppose.
Yes, but the new version of kvm_governed_feature_check_and_set() in the
KVM-governed feature framework will check against kvm_cpu_cap_has() as well.
I will remove the if statement and call
kvm_governed_feature_check_and_set() directly.
https://lore.kernel.org/kvm/20230815203653.519297-2-seanjc@google.com/


>
>>   	/* Refresh #PF interception to account for MAXPHYADDR changes. */
>>   	vmx_update_exception_bitmap(vcpu);
>>   }


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 3/9] KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
  2023-08-16  7:08     ` Binbin Wu
@ 2023-08-16  9:47       ` Huang, Kai
  2023-08-16 21:33         ` Sean Christopherson
  0 siblings, 1 reply; 42+ messages in thread
From: Huang, Kai @ 2023-08-16  9:47 UTC (permalink / raw)
  To: binbin.wu
  Cc: Gao, Chao, Christopherson,,
	Sean, David.Laight, Zeng, Guang, linux-kernel, pbonzini, kvm,
	robert.hu


> > > --- a/arch/x86/kvm/vmx/vmx.c
> > > +++ b/arch/x86/kvm/vmx/vmx.c
> > > @@ -7783,6 +7783,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
> > >   		vmx->msr_ia32_feature_control_valid_bits &=
> > >   			~FEAT_CTL_SGX_LC_ENABLED;
> > >   
> > > +	if (boot_cpu_has(X86_FEATURE_LAM))
> > > +		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_LAM);
> > > +
> > If you want to use boot_cpu_has(), it's better to be done at your last patch to
> > only set the cap bit when boot_cpu_has() is true, I suppose.
> Yes, but new version of kvm_governed_feature_check_and_set() of 
> KVM-governed feature framework will check against kvm_cpu_cap_has() as well.
> I will remove the if statement and call 
> kvm_governed_feature_check_and_set()  directly.
> https://lore.kernel.org/kvm/20230815203653.519297-2-seanjc@google.com/
> 

I mean kvm_cpu_cap_has() checks against the host CPUID directly, while here you
are using boot_cpu_has().  They are not the same.

If LAM should only be supported when boot_cpu_has() is true, then it seems you
can just set the LAM cap bit only when boot_cpu_has() is true.  As you also
mentioned above, kvm_governed_feature_check_and_set() here internally does
kvm_cpu_cap_has().

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 1/9] KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK
  2023-07-19 14:41 ` [PATCH v10 1/9] KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK Binbin Wu
@ 2023-08-16 21:00   ` Sean Christopherson
  2023-08-28  4:06     ` Binbin Wu
  0 siblings, 1 reply; 42+ messages in thread
From: Sean Christopherson @ 2023-08-16 21:00 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng

On Wed, Jul 19, 2023, Binbin Wu wrote:
> Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK.

Using GENMASK_ULL() is an opportunistic cleanup; it is not the main purpose of
this patch.  The main purpose is to extract the maximum theoretical mask for guest
MAXPHYADDR so that it can be used to strip bits from CR3.

And rather than bury the actual use in "KVM: x86: Virtualize CR3.LAM_{U48,U57}",
I think it makes sense to do the masking in this patch.  That change only becomes
_necessary_ when LAM comes along, but it's completely valid without LAM.

That will also provide a place to explain why we decided to unconditionally mask
the pgd (it's harmless for 32-bit guests, querying 64-bit mode would be more
expensive, and for EPT the mask isn't tied to guest mode).  And it should also
explain that using PT_BASE_ADDR_MASK would actually be wrong (PAE has 64-bit
elements _except_ for CR3).

E.g. end up with a shortlog for this patch along the lines of:

  KVM: x86/mmu: Drop non-PA bits when getting GFN for guest's PGD

and write the changelog accordingly.
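
As a minimal sketch of the end state being suggested, using the names from this
series (the mask is patch 1's definition; the masking hunk is lifted from the
CR3.LAM patch, where it currently lives):

	/* Maximum theoretical mask for guest MAXPHYADDR, i.e. bits 51:12. */
	#define __PT_BASE_ADDR_MASK	GENMASK_ULL(51, 12)

	root_pgd = kvm_mmu_get_guest_pgd(vcpu, mmu);
	/* Drop non-PA bits, e.g. CR3.LAM_U48/LAM_U57, to get the root GFN. */
	root_gfn = (root_pgd & __PT_BASE_ADDR_MASK) >> PAGE_SHIFT;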

> No functional change intended.
> 
> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> ---
>  arch/x86/kvm/mmu/mmu_internal.h | 1 +
>  arch/x86/kvm/mmu/paging_tmpl.h  | 2 +-
>  2 files changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index d39af5639ce9..7d2105432d66 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -21,6 +21,7 @@ extern bool dbg;
>  #endif
>  
>  /* Page table builder macros common to shadow (host) PTEs and guest PTEs. */
> +#define __PT_BASE_ADDR_MASK GENMASK_ULL(51, 12)
>  #define __PT_LEVEL_SHIFT(level, bits_per_level)	\
>  	(PAGE_SHIFT + ((level) - 1) * (bits_per_level))
>  #define __PT_INDEX(address, level, bits_per_level) \
> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
> index 0662e0278e70..00c8193f5991 100644
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -62,7 +62,7 @@
>  #endif
>  
>  /* Common logic, but per-type values.  These also need to be undefined. */
> -#define PT_BASE_ADDR_MASK	((pt_element_t)(((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1)))
> +#define PT_BASE_ADDR_MASK	((pt_element_t)__PT_BASE_ADDR_MASK)
>  #define PT_LVL_ADDR_MASK(lvl)	__PT_LVL_ADDR_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
>  #define PT_LVL_OFFSET_MASK(lvl)	__PT_LVL_OFFSET_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
>  #define PT_INDEX(addr, lvl)	__PT_INDEX(addr, lvl, PT_LEVEL_BITS)
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 3/9] KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
  2023-08-16  9:47       ` Huang, Kai
@ 2023-08-16 21:33         ` Sean Christopherson
  2023-08-16 23:03           ` Huang, Kai
  2023-08-17  1:28           ` Binbin Wu
  0 siblings, 2 replies; 42+ messages in thread
From: Sean Christopherson @ 2023-08-16 21:33 UTC (permalink / raw)
  To: Kai Huang
  Cc: binbin.wu, Chao Gao, David.Laight@ACULAB.COM, Guang Zeng,
	linux-kernel, pbonzini, kvm, robert.hu

On Wed, Aug 16, 2023, Kai Huang wrote:
> 
> > > > --- a/arch/x86/kvm/vmx/vmx.c
> > > > +++ b/arch/x86/kvm/vmx/vmx.c
> > > > @@ -7783,6 +7783,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
> > > >   		vmx->msr_ia32_feature_control_valid_bits &=
> > > >   			~FEAT_CTL_SGX_LC_ENABLED;
> > > >   
> > > > +	if (boot_cpu_has(X86_FEATURE_LAM))
> > > > +		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_LAM);
> > > > +
> > > If you want to use boot_cpu_has(), it's better to be done at your last patch to
> > > only set the cap bit when boot_cpu_has() is true, I suppose.
> > Yes, but new version of kvm_governed_feature_check_and_set() of 
> > KVM-governed feature framework will check against kvm_cpu_cap_has() as well.
> > I will remove the if statement and call 
> > kvm_governed_feature_check_and_set()  directly.
> > https://lore.kernel.org/kvm/20230815203653.519297-2-seanjc@google.com/
> > 
> 
> I mean kvm_cpu_cap_has() checks against the host CPUID directly while here you
> are using boot_cpu_has().  They are not the same.  
> 
> If LAM should be only supported when boot_cpu_has() is true then it seems you
> can just only set the LAM cap bit when boot_cpu_has() is true.  As you also
> mentioned above the kvm_governed_feature_check_and_set() here internally does
> kvm_cpu_cap_has().

That's covered by the last patch:

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index e961e9a05847..06061c11d74d 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -677,7 +677,7 @@ void kvm_set_cpu_caps(void)
        kvm_cpu_cap_mask(CPUID_7_1_EAX,
                F(AVX_VNNI) | F(AVX512_BF16) | F(CMPCCXADD) |
                F(FZRM) | F(FSRS) | F(FSRC) |
-               F(AMX_FP16) | F(AVX_IFMA)
+               F(AMX_FP16) | F(AVX_IFMA) | F(LAM)
        );
 
        kvm_cpu_cap_init_kvm_defined(CPUID_7_1_EDX,


Which highlights a problem with activating a governed feature before said feature
is actually supported by KVM: it's all kinds of confusing.

It'll generate more churn in git history, but I think we should first enable
LAM without a governed feature, and then activate a governed feature later on.
Using a governed feature is purely an optimization, i.e. the series needs to be
functional without using a governed feature.

That should yield an easier-to-review series on all fronts: the initial support
won't have any more hidden dependencies than absolutely necessary, switching to
a governed feature should be a very mechanical conversion (if it's not, that's
a red flag), and last but not least, it makes it super easy to make a judgment
call as to whether using a governed feature flag is justified, because all of the
users will be in scope.

TL;DR: Do the whole governed feature thing dead last.

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 4/9] KVM: x86: Virtualize CR4.LAM_SUP
  2023-07-19 14:41 ` [PATCH v10 4/9] KVM: x86: Virtualize CR4.LAM_SUP Binbin Wu
@ 2023-08-16 21:41   ` Sean Christopherson
  0 siblings, 0 replies; 42+ messages in thread
From: Sean Christopherson @ 2023-08-16 21:41 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng

This patch doesn't virtualize LAM_SUP, it simply allows the guest to enable
CR4.LAM_SUP (ignoring that that's not possible at this point because KVM will
reject CR4 values that *KVM* doesn't support).

Actually virtualizing LAM_SUP requires the bits from "KVM: VMX: Implement and wire
get_untagged_addr() for LAM".  You can still separate LAM_SUP from LAM_U*, but
these patches should come *after* the get_untagged_addr() hook is added.  The
first of LAM_SUP vs. LAM_U* can then implement vmx_get_untagged_addr(), and simply
return the raw gva in the "other" case.  E.g. if you add LAM_SUP first, the code
can be:

	if (!(gva & BIT_ULL(63))) {
		/* KVM doesn't yet virtualize LAM_U{48,57}. */
		return gva;
	} else {
		if (!kvm_is_cr4_bit_set(vcpu, X86_CR4_LAM_SUP))
			return gva;

		lam_bit = kvm_is_cr4_bit_set(vcpu, X86_CR4_LA57) ? 56 : 47;
	}

On Wed, Jul 19, 2023, Binbin Wu wrote:
> From: Robert Hoo <robert.hu@linux.intel.com>
> 
> Add support to allow guests to set the new CR4 control bit to enable the new
> Intel CPU feature Linear Address Masking (LAM) on supervisor pointers.
> 
> LAM modifies the checking that is applied to 64-bit linear addresses, allowing
> software to use the untranslated address bits for metadata and masks the
> metadata bits before using them as linear addresses to access memory. LAM uses
> CR4.LAM_SUP (bit 28) to configure LAM for supervisor pointers. LAM also changes
> VMENTER to allow the bit to be set in VMCS's HOST_CR4 and GUEST_CR4 for
> virtualization. Note CR4.LAM_SUP is allowed to be set even when not in 64-bit mode,
> but it will not take effect since LAM only applies to 64-bit linear addresses.
> 
> Move CR4.LAM_SUP out of CR4_RESERVED_BITS; its reservation now depends on whether
> the vCPU supports the LAM feature. Leave the bit intercepted to prevent the guest
> from setting CR4.LAM_SUP if LAM is not exposed to the guest, as well as to avoid a
> vmread every time KVM fetches its value, with the expectation that the guest won't
> toggle the bit frequently.
> 
> Set CR4.LAM_SUP bit in the emulated IA32_VMX_CR4_FIXED1 MSR for guests to allow
> guests to enable LAM for supervisor pointers in nested VMX operation.
> 
> Hardware is not required to do a TLB flush when CR4.LAM_SUP is toggled, so KVM
> doesn't need to emulate a TLB flush based on it.
> There's no other feature/vmx_exec_controls connection; no other code is needed in
> {kvm,vmx}_set_cr4().
> 
> Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
> Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> Reviewed-by: Chao Gao <chao.gao@intel.com>
> Reviewed-by: Kai Huang <kai.huang@intel.com>
> Tested-by: Xuelian Guo <xuelian.guo@intel.com>
> ---
>  arch/x86/include/asm/kvm_host.h | 3 ++-
>  arch/x86/kvm/vmx/vmx.c          | 3 +++
>  arch/x86/kvm/x86.h              | 2 ++
>  3 files changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index e8e1101a90c8..881a0be862e1 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -125,7 +125,8 @@
>  			  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
>  			  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
>  			  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
> -			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
> +			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
> +			  | X86_CR4_LAM_SUP))
>  
>  #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
>  
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index ae47303c88d7..a0d6ea87a2d0 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7646,6 +7646,9 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
>  	cr4_fixed1_update(X86_CR4_UMIP,       ecx, feature_bit(UMIP));
>  	cr4_fixed1_update(X86_CR4_LA57,       ecx, feature_bit(LA57));
>  
> +	entry = kvm_find_cpuid_entry_index(vcpu, 0x7, 1);
> +	cr4_fixed1_update(X86_CR4_LAM_SUP,    eax, feature_bit(LAM));
> +
>  #undef cr4_fixed1_update
>  }
>  
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 82e3dafc5453..24e2b56356b8 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -528,6 +528,8 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
>  		__reserved_bits |= X86_CR4_VMXE;        \
>  	if (!__cpu_has(__c, X86_FEATURE_PCID))          \
>  		__reserved_bits |= X86_CR4_PCIDE;       \
> +	if (!__cpu_has(__c, X86_FEATURE_LAM))           \
> +		__reserved_bits |= X86_CR4_LAM_SUP;     \
>  	__reserved_bits;                                \
>  })
>  
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 5/9] KVM: x86: Virtualize CR3.LAM_{U48,U57}
  2023-07-19 14:41 ` [PATCH v10 5/9] KVM: x86: Virtualize CR3.LAM_{U48,U57} Binbin Wu
@ 2023-08-16 21:44   ` Sean Christopherson
  0 siblings, 0 replies; 42+ messages in thread
From: Sean Christopherson @ 2023-08-16 21:44 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng

On Wed, Jul 19, 2023, Binbin Wu wrote:
> Add support to allow guests to set two new CR3 non-address control bits to
> enable the new Intel CPU feature Linear Address Masking (LAM) on user
> pointers.

Same feedback as the LAM_SUP patch.

> ---
>  arch/x86/kvm/cpuid.h   | 3 +++
>  arch/x86/kvm/mmu.h     | 8 ++++++++
>  arch/x86/kvm/mmu/mmu.c | 2 +-
>  arch/x86/kvm/vmx/vmx.c | 3 ++-
>  4 files changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
> index 8b26d946f3e3..274f41d2250b 100644
> --- a/arch/x86/kvm/cpuid.h
> +++ b/arch/x86/kvm/cpuid.h
> @@ -285,6 +285,9 @@ static __always_inline bool guest_can_use(struct kvm_vcpu *vcpu,
>  
>  static inline bool kvm_vcpu_is_legal_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
>  {
> +	if (guest_can_use(vcpu, X86_FEATURE_LAM))
> +		cr3 &= ~(X86_CR3_LAM_U48 | X86_CR3_LAM_U57);
> +
>  	return kvm_vcpu_is_legal_gpa(vcpu, cr3);
>  }
>  
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index 92d5a1924fc1..e92395e6b876 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -144,6 +144,14 @@ static inline unsigned long kvm_get_active_pcid(struct kvm_vcpu *vcpu)
>  	return kvm_get_pcid(vcpu, kvm_read_cr3(vcpu));
>  }
>  
> +static inline unsigned long kvm_get_active_cr3_lam_bits(struct kvm_vcpu *vcpu)
> +{
> +	if (!guest_can_use(vcpu, X86_FEATURE_LAM))
> +		return 0;
> +
> +	return kvm_read_cr3(vcpu) & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57);
> +}
> +
>  static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
>  {
>  	u64 root_hpa = vcpu->arch.mmu->root.hpa;
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index ec169f5c7dce..0285536346c1 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3819,7 +3819,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
>  	hpa_t root;
>  
>  	root_pgd = kvm_mmu_get_guest_pgd(vcpu, mmu);
> -	root_gfn = root_pgd >> PAGE_SHIFT;
> +	root_gfn = (root_pgd & __PT_BASE_ADDR_MASK) >> PAGE_SHIFT;

And as mentioned previously, this should be in the patch that adds __PT_BASE_ADDR_MASK.

>  	if (mmu_check_root(vcpu, root_gfn))
>  		return 1;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index a0d6ea87a2d0..bcee5dc3dd0b 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -3358,7 +3358,8 @@ static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
>  			update_guest_cr3 = false;
>  		vmx_ept_load_pdptrs(vcpu);
>  	} else {
> -		guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu);
> +		guest_cr3 = root_hpa | kvm_get_active_pcid(vcpu) |
> +		            kvm_get_active_cr3_lam_bits(vcpu);
>  	}
>  
>  	if (update_guest_cr3)
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 8/9] KVM: x86: Untag address for vmexit handlers when LAM applicable
  2023-07-19 14:41 ` [PATCH v10 8/9] KVM: x86: Untag address for vmexit handlers when LAM applicable Binbin Wu
@ 2023-08-16 21:49   ` Sean Christopherson
  2023-08-16 22:10   ` Sean Christopherson
  1 sibling, 0 replies; 42+ messages in thread
From: Sean Christopherson @ 2023-08-16 21:49 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng

On Wed, Jul 19, 2023, Binbin Wu wrote:
> index abf6d42672cd..f18e610c4363 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -8177,8 +8177,7 @@ static void vmx_vm_destroy(struct kvm *kvm)
>  	free_pages((unsigned long)kvm_vmx->pid_table, vmx_get_pid_table_order(kvm));
>  }
>  
> -static gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva,
> -			    unsigned int flags)
> +gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags)
>  {
>  	unsigned long cr3_bits;
>  	int lam_bit;
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 32384ba38499..6fb612355769 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -421,6 +421,8 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
>  u64 vmx_get_l2_tsc_offset(struct kvm_vcpu *vcpu);
>  u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);
>  
> +gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
> +

I think it makes sense to squash this with whatever patch first adds
vmx_get_untagged_addr().  It'll make that initial "virtual LAM_*" patch a fair
bit bigger, but overall I think the series/patches will be easier to review,
e.g. the rules for LAM_SUP will mostly be captured in a single patch.

One could even make an argument for squashing LAM_U* support with the LAM_SUP
patch, but my vote is to keep them separate.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 9/9] KVM: x86: Expose LAM feature to userspace VMM
  2023-07-19 14:41 ` [PATCH v10 9/9] KVM: x86: Expose LAM feature to userspace VMM Binbin Wu
@ 2023-08-16 21:53   ` Sean Christopherson
  2023-08-17  1:59     ` Binbin Wu
  0 siblings, 1 reply; 42+ messages in thread
From: Sean Christopherson @ 2023-08-16 21:53 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng

s/Expose/Advertise

And I would add an "enable" in there somewhere, because to Kai's point earlier in
the series about kvm_cpu_cap_has(), the guest can't actually use LAM until this
patch.  Sometimes we do just say "Advertise", but typically only for features
where there's no virtualization support, e.g. AVX instructions where the guest
can use them irrespective of what KVM says it supports.

This?

KVM: x86: Advertise and enable LAM (user and supervisor)

On Wed, Jul 19, 2023, Binbin Wu wrote:
> From: Robert Hoo <robert.hu@linux.intel.com>
> 
> The LAM feature is enumerated by CPUID.7.1:EAX.LAM[bit 26].
> Expose the feature to userspace as the final step, after the following
> support is in place:
> - CR4.LAM_SUP virtualization
> - CR3.LAM_U48 and CR3.LAM_U57 virtualization
> - Check and untag 64-bit linear addresses when LAM applies in instruction
>   emulation and VMExit handlers.
> 
> SGX LAM support is not exposed yet. SGX LAM support is enumerated
> in SGX's own CPUID and there's no hard requirement that it must be supported
> when LAM is reported in CPUID leaf 0x7.
> 
> Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
> Reviewed-by: Chao Gao <chao.gao@intel.com>
> Reviewed-by: Kai Huang <kai.huang@intel.com>
> Tested-by: Xuelian Guo <xuelian.guo@intel.com>
> ---
>  arch/x86/kvm/cpuid.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 7ebf3ce1bb5f..21d525b01d45 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -645,7 +645,7 @@ void kvm_set_cpu_caps(void)
>  	kvm_cpu_cap_mask(CPUID_7_1_EAX,
>  		F(AVX_VNNI) | F(AVX512_BF16) | F(CMPCCXADD) |
>  		F(FZRM) | F(FSRS) | F(FSRC) |
> -		F(AMX_FP16) | F(AVX_IFMA)
> +		F(AMX_FP16) | F(AVX_IFMA) | F(LAM)
>  	);
>  
>  	kvm_cpu_cap_init_kvm_defined(CPUID_7_1_EDX,
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 7/9] KVM: VMX: Implement and wire get_untagged_addr() for LAM
  2023-07-19 14:41 ` [PATCH v10 7/9] KVM: VMX: Implement and wire get_untagged_addr() for LAM Binbin Wu
@ 2023-08-16 22:01   ` Sean Christopherson
  2023-08-17  9:51     ` Binbin Wu
  0 siblings, 1 reply; 42+ messages in thread
From: Sean Christopherson @ 2023-08-16 22:01 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng

On Wed, Jul 19, 2023, Binbin Wu wrote:
> +	return (sign_extend64(gva, lam_bit) & ~BIT_ULL(63)) | (gva & BIT_ULL(63));

Almost forgot.  Please add a comment explaining how LAM untags the address,
specifically the whole bit 63 preservation.  The logic is actually straightforward,
but the above looks way more complex than it actually is.  This?

	/*
	 * Untag the address by sign-extending the LAM bit, but NOT to bit 63.
	 * Bit 63 is retained from the raw virtual address so that untagging
	 * doesn't change a user access to a supervisor access, and vice versa.
	 */
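
For illustration, a self-contained sketch of that expression with a worked value
(lam_untag() is a hypothetical stand-in for the LAM untagging done in
vmx_get_untagged_addr(); sign_extend64() mirrors the kernel helper):

	#include <stdint.h>
	#include <stdio.h>

	#define BIT_ULL(nr)	(1ULL << (nr))

	/* Sign-extend @value from bit @index, like the kernel's sign_extend64(). */
	static inline int64_t sign_extend64(uint64_t value, int index)
	{
		int shift = 63 - index;

		return (int64_t)(value << shift) >> shift;
	}

	static uint64_t lam_untag(uint64_t gva, int lam_bit)
	{
		/*
		 * Sign-extend the LAM bit through bits 62:lam_bit, but keep the
		 * raw bit 63 so that untagging never turns a user access into a
		 * supervisor access, or vice versa.
		 */
		return (sign_extend64(gva, lam_bit) & ~BIT_ULL(63)) | (gva & BIT_ULL(63));
	}

	int main(void)
	{
		/* LAM_U57: user pointer (bit 63 = 0), metadata in bits 62:57. */
		uint64_t tagged = 0x7e00000012345678ULL;

		/* Prints 0x12345678: bits 62:57 are masked to match bit 56 (0). */
		printf("%#llx\n", (unsigned long long)lam_untag(tagged, 56));
		return 0;
	}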

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 8/9] KVM: x86: Untag address for vmexit handlers when LAM applicable
  2023-07-19 14:41 ` [PATCH v10 8/9] KVM: x86: Untag address for vmexit handlers when LAM applicable Binbin Wu
  2023-08-16 21:49   ` Sean Christopherson
@ 2023-08-16 22:10   ` Sean Christopherson
  1 sibling, 0 replies; 42+ messages in thread
From: Sean Christopherson @ 2023-08-16 22:10 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng

On Wed, Jul 19, 2023, Binbin Wu wrote:
> Untag address for 64-bit memory operand in VMExit handlers when LAM is applicable.
> 
> For VMExit handlers related to 64-bit linear address:
> - Cases need to untag address (handled in get_vmx_mem_address())
>   Operand(s) of VMX instructions and INVPCID.
>   Operand(s) of SGX ENCLS.
> - Cases LAM doesn't apply to (no change needed)
>   Operand of INVLPG.
>   Linear address in INVPCID descriptor.
>   Linear address in INVVPID descriptor.
>   BASEADDR specified in SESC of ECREATE.
> 
> Note:
> LAM doesn't apply to the writes to control registers or MSRs.
> LAM masking applies before paging, so the faulting linear address in CR2
> doesn't contain the metadata.
> The guest linear address saved in VMCS doesn't contain metadata.
> 
> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> Reviewed-by: Chao Gao <chao.gao@intel.com>
> ---
>  arch/x86/kvm/vmx/nested.c | 2 ++
>  arch/x86/kvm/vmx/sgx.c    | 1 +
>  arch/x86/kvm/vmx/vmx.c    | 3 +--
>  arch/x86/kvm/vmx/vmx.h    | 2 ++
>  arch/x86/kvm/x86.c        | 1 +
>  5 files changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 76c9904c6625..bd2c8936953a 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -4980,6 +4980,7 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
>  		else
>  			*ret = off;
>  
> +		*ret = vmx_get_untagged_addr(vcpu, *ret, 0);
>  		/* Long mode: #GP(0)/#SS(0) if the memory address is in a
>  		 * non-canonical form. This is the only check on the memory
>  		 * destination for long mode!
> @@ -5797,6 +5798,7 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
>  	vpid02 = nested_get_vpid02(vcpu);
>  	switch (type) {
>  	case VMX_VPID_EXTENT_INDIVIDUAL_ADDR:
> +		/* LAM doesn't apply to the address in descriptor of invvpid */

Nit: if we're going to bother with a comment, I think it makes sense to explain
that LAM doesn't apply to any TLB invalidation input, i.e. as opposed to just
saying that INVVPID is special.

		/*
		 * LAM doesn't apply to addresses that are inputs to TLB
		 * invalidation.
		 */

And then when LAM and LASS collide:

		/*
		 * LAM and LASS don't apply to ...
		 */

>  		if (!operand.vpid ||
>  		    is_noncanonical_address(operand.gla, vcpu))
>  			return nested_vmx_fail(vcpu,
> diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
> index 3e822e582497..6fef01e0536e 100644
> --- a/arch/x86/kvm/vmx/sgx.c
> +++ b/arch/x86/kvm/vmx/sgx.c
> @@ -37,6 +37,7 @@ static int sgx_get_encls_gva(struct kvm_vcpu *vcpu, unsigned long offset,
>  	if (!IS_ALIGNED(*gva, alignment)) {
>  		fault = true;
>  	} else if (likely(is_64_bit_mode(vcpu))) {
> +		*gva = vmx_get_untagged_addr(vcpu, *gva, 0);
>  		fault = is_noncanonical_address(*gva, vcpu);
>  	} else {
>  		*gva &= 0xffffffff;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index abf6d42672cd..f18e610c4363 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -8177,8 +8177,7 @@ static void vmx_vm_destroy(struct kvm *kvm)
>  	free_pages((unsigned long)kvm_vmx->pid_table, vmx_get_pid_table_order(kvm));
>  }
>  
> -static gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva,
> -			    unsigned int flags)
> +gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags)
>  {
>  	unsigned long cr3_bits;
>  	int lam_bit;
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 32384ba38499..6fb612355769 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -421,6 +421,8 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
>  u64 vmx_get_l2_tsc_offset(struct kvm_vcpu *vcpu);
>  u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);
>  
> +gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
> +
>  static inline void vmx_set_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr,
>  					     int type, bool value)
>  {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 339a113b45af..d2a0cdfb77a5 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -13370,6 +13370,7 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
>  
>  	switch (type) {
>  	case INVPCID_TYPE_INDIV_ADDR:
> +		/* LAM doesn't apply to the address in descriptor of invpcid */

Same thing here.

>  		if ((!pcid_enabled && (operand.pcid != 0)) ||
>  		    is_noncanonical_address(operand.gla, vcpu)) {
>  			kvm_inject_gp(vcpu, 0);
> -- 
> 2.25.1
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling
  2023-07-19 14:41 [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
                   ` (9 preceding siblings ...)
  2023-08-15  2:05 ` [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
@ 2023-08-16 22:25 ` Sean Christopherson
  2023-08-17  9:17   ` Binbin Wu
  10 siblings, 1 reply; 42+ messages in thread
From: Sean Christopherson @ 2023-08-16 22:25 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng

On Wed, Jul 19, 2023, Binbin Wu wrote:
> Binbin Wu (7):
>   KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK
>   KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
>   KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
>   KVM: x86: Virtualize CR3.LAM_{U48,U57}
>   KVM: x86: Introduce get_untagged_addr() in kvm_x86_ops and call it in
>     emulator
>   KVM: VMX: Implement and wire get_untagged_addr() for LAM
>   KVM: x86: Untag address for vmexit handlers when LAM applicable
> 
> Robert Hoo (2):
>   KVM: x86: Virtualize CR4.LAM_SUP
>   KVM: x86: Expose LAM feature to userspace VMM

Looks good, just needs a bit of re-organization.  Same goes for the LASS series.

For the next version, can you (or Zeng) send a single series for LAM and LASS?
They're both pretty much ready to go, i.e. I don't expect one to hold up the other
at this point, and posting a single series will reduce the probability of me
screwing up a conflict resolution or missing a dependency when applying.

Lastly, a question: is there a pressing need to get LAM/LASS support merged _now_?
E.g. are there any publicly available CPUs that support LAM and/or LASS?

If not, I'll wait until v6.7 to grab these, e.g. so that you don't have to rush
madly to turn around the next version, and so that I'm not trying to squeeze too
much stuff in just before the merge window.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 3/9] KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
  2023-08-16 21:33         ` Sean Christopherson
@ 2023-08-16 23:03           ` Huang, Kai
  2023-08-17  1:28           ` Binbin Wu
  1 sibling, 0 replies; 42+ messages in thread
From: Huang, Kai @ 2023-08-16 23:03 UTC (permalink / raw)
  To: Christopherson,, Sean
  Cc: Gao, Chao, Zeng, Guang, David.Laight, binbin.wu, linux-kernel,
	robert.hu, kvm, pbonzini

On Wed, 2023-08-16 at 14:33 -0700, Sean Christopherson wrote:
> On Wed, Aug 16, 2023, Kai Huang wrote:
> > 
> > > > > --- a/arch/x86/kvm/vmx/vmx.c
> > > > > +++ b/arch/x86/kvm/vmx/vmx.c
> > > > > @@ -7783,6 +7783,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
> > > > >   		vmx->msr_ia32_feature_control_valid_bits &=
> > > > >   			~FEAT_CTL_SGX_LC_ENABLED;
> > > > >   
> > > > > +	if (boot_cpu_has(X86_FEATURE_LAM))
> > > > > +		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_LAM);
> > > > > +
> > > > If you want to use boot_cpu_has(), it's better to be done at your last patch to
> > > > only set the cap bit when boot_cpu_has() is true, I suppose.
> > > Yes, but new version of kvm_governed_feature_check_and_set() of 
> > > KVM-governed feature framework will check against kvm_cpu_cap_has() as well.
> > > I will remove the if statement and call 
> > > kvm_governed_feature_check_and_set()  directly.
> > > https://lore.kernel.org/kvm/20230815203653.519297-2-seanjc@google.com/
> > > 
> > 
> > I mean kvm_cpu_cap_has() checks against the host CPUID directly while here you
> > are using boot_cpu_has().  They are not the same.  
> > 
> > If LAM should be only supported when boot_cpu_has() is true then it seems you
> > can just only set the LAM cap bit when boot_cpu_has() is true.  As you also
> > mentioned above the kvm_governed_feature_check_and_set() here internally does
> > kvm_cpu_cap_has().
> 
> That's covered by the last patch:
> 
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index e961e9a05847..06061c11d74d 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -677,7 +677,7 @@ void kvm_set_cpu_caps(void)
>         kvm_cpu_cap_mask(CPUID_7_1_EAX,
>                 F(AVX_VNNI) | F(AVX512_BF16) | F(CMPCCXADD) |
>                 F(FZRM) | F(FSRS) | F(FSRC) |
> -               F(AMX_FP16) | F(AVX_IFMA)
> +               F(AMX_FP16) | F(AVX_IFMA) | F(LAM)
>         );
>  
>         kvm_cpu_cap_init_kvm_defined(CPUID_7_1_EDX,
> 

Ah I missed this piece of code in kvm_set_cpu_caps():

        memcpy(&kvm_cpu_caps, &boot_cpu_data.x86_capability,
               sizeof(kvm_cpu_caps) - (NKVMCAPINTS * sizeof(*kvm_cpu_caps)));

which makes sure KVM only reports a feature when it is set in boot_cpu_data,
which is obviously more reasonable.

Sorry for the noise :-)

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 3/9] KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
  2023-08-16 21:33         ` Sean Christopherson
  2023-08-16 23:03           ` Huang, Kai
@ 2023-08-17  1:28           ` Binbin Wu
  2023-08-17 19:46             ` Sean Christopherson
  1 sibling, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-08-17  1:28 UTC (permalink / raw)
  To: Sean Christopherson, Kai Huang
  Cc: Chao Gao, David.Laight@ACULAB.COM, Guang Zeng, linux-kernel,
	pbonzini, kvm, robert.hu



On 8/17/2023 5:33 AM, Sean Christopherson wrote:
> On Wed, Aug 16, 2023, Kai Huang wrote:
>>>>> --- a/arch/x86/kvm/vmx/vmx.c
>>>>> +++ b/arch/x86/kvm/vmx/vmx.c
>>>>> @@ -7783,6 +7783,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
>>>>>    		vmx->msr_ia32_feature_control_valid_bits &=
>>>>>    			~FEAT_CTL_SGX_LC_ENABLED;
>>>>>    
>>>>> +	if (boot_cpu_has(X86_FEATURE_LAM))
>>>>> +		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_LAM);
>>>>> +
>>>> If you want to use boot_cpu_has(), it's better to be done at your last patch to
>>>> only set the cap bit when boot_cpu_has() is true, I suppose.
>>> Yes, but new version of kvm_governed_feature_check_and_set() of
>>> KVM-governed feature framework will check against kvm_cpu_cap_has() as well.
>>> I will remove the if statement and call
>>> kvm_governed_feature_check_and_set()  directly.
>>> https://lore.kernel.org/kvm/20230815203653.519297-2-seanjc@google.com/
>>>
>> I mean kvm_cpu_cap_has() checks against the host CPUID directly while here you
>> are using boot_cpu_has().  They are not the same.
>>
>> If LAM should be only supported when boot_cpu_has() is true then it seems you
>> can just only set the LAM cap bit when boot_cpu_has() is true.  As you also
>> mentioned above the kvm_governed_feature_check_and_set() here internally does
>> kvm_cpu_cap_has().
> That's covered by the last patch:
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index e961e9a05847..06061c11d74d 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -677,7 +677,7 @@ void kvm_set_cpu_caps(void)
>          kvm_cpu_cap_mask(CPUID_7_1_EAX,
>                  F(AVX_VNNI) | F(AVX512_BF16) | F(CMPCCXADD) |
>                  F(FZRM) | F(FSRS) | F(FSRC) |
> -               F(AMX_FP16) | F(AVX_IFMA)
> +               F(AMX_FP16) | F(AVX_IFMA) | F(LAM)
>          );
>   
>          kvm_cpu_cap_init_kvm_defined(CPUID_7_1_EDX,
>
>
> Which highlights a problem with activating a governed feature before said feature
> is actually supported by KVM: it's all kinds of confusing.
>
> It'll generate more churn in git history, but I think we should first enable
> LAM without a governed feature, and then activate a governed feature later on.
> Using a governed feature is purely an optimization, i.e. the series needs to be
> functional without using a governed feature.
OK, then how about the second option, which was listed in your v9
patch series discussion?
https://lore.kernel.org/kvm/20230606091842.13123-1-binbin.wu@linux.intel.com/T/#m16ee5cec4a46954f985cb6afedb5f5a3435373a1

Temporarily add a bool can_use_lam to kvm_vcpu_arch and use "can_use_lam"
instead of guest_can_use(vcpu, X86_FEATURE_LAM), and then put the patch
adopting the "KVM-governed feature framework" last.
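
i.e., roughly the following as the interim state (a sketch of the option being
described; the field name is as proposed, the placement is illustrative):

	/* Interim: cache the result at CPUID update time, then convert to the
	 * governed-feature framework in the final patch of the series. */
	struct kvm_vcpu_arch {
		/* ... existing fields ... */
		bool can_use_lam;
	};

	vcpu->arch.can_use_lam = kvm_cpu_cap_has(X86_FEATURE_LAM) &&
				 guest_cpuid_has(vcpu, X86_FEATURE_LAM);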


>
> That should yield an easier-to-review series on all fronts: the initial support
> won't have any more hidden dependencies than absolutely necessary, switching to
> a governed feature should be a very mechanical conversion (if it's not, that's
> a red flag), and last but not least, it makes it super easy to make a judgment
> call as to whether using a governed feature flag is justified, because all of the
> users will be in scope.
>
> TL;DR: Do the whole governed feature thing dead last.


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 9/9] KVM: x86: Expose LAM feature to userspace VMM
  2023-08-16 21:53   ` Sean Christopherson
@ 2023-08-17  1:59     ` Binbin Wu
  0 siblings, 0 replies; 42+ messages in thread
From: Binbin Wu @ 2023-08-17  1:59 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng



On 8/17/2023 5:53 AM, Sean Christopherson wrote:
> s/Expose/Advertise
>
> And I would add an "enable" in there somewhere, because to Kai's point earlier in
> the series about kvm_cpu_cap_has(), the guest can't actually use LAM until this
> patch.  Sometimes we do just say "Advertise", but typically only for features
> where there's no virtualization support, e.g. AVX instructions where the guest
> can use them irrespective of what KVM says it supports.
>
> This?
>
> KVM: x86: Advertise and enable LAM (user and supervisor)
It looks good to me. Thanks.

>
> On Wed, Jul 19, 2023, Binbin Wu wrote:
>> From: Robert Hoo <robert.hu@linux.intel.com>
>>
>> LAM feature is enumerated by CPUID.7.1:EAX.LAM[bit 26].
>> Expose the feature to userspace as the final step after the following
>> supports:
>> - CR4.LAM_SUP virtualization
>> - CR3.LAM_U48 and CR3.LAM_U57 virtualization
>> - Check and untag 64-bit linear address when LAM applies in instruction
>>    emulations and VMExit handlers.
>>
>> Exposing SGX LAM support is not supported yet. SGX LAM support is enumerated
>> in SGX's own CPUID and there's no hard requirement that it must be supported
>> when LAM is reported in CPUID leaf 0x7.
>>
>> Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
>> Reviewed-by: Jingqi Liu <jingqi.liu@intel.com>
>> Reviewed-by: Chao Gao <chao.gao@intel.com>
>> Reviewed-by: Kai Huang <kai.huang@intel.com>
>> Tested-by: Xuelian Guo <xuelian.guo@intel.com>
>> ---
>>   arch/x86/kvm/cpuid.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
>> index 7ebf3ce1bb5f..21d525b01d45 100644
>> --- a/arch/x86/kvm/cpuid.c
>> +++ b/arch/x86/kvm/cpuid.c
>> @@ -645,7 +645,7 @@ void kvm_set_cpu_caps(void)
>>   	kvm_cpu_cap_mask(CPUID_7_1_EAX,
>>   		F(AVX_VNNI) | F(AVX512_BF16) | F(CMPCCXADD) |
>>   		F(FZRM) | F(FSRS) | F(FSRC) |
>> -		F(AMX_FP16) | F(AVX_IFMA)
>> +		F(AMX_FP16) | F(AVX_IFMA) | F(LAM)
>>   	);
>>   
>>   	kvm_cpu_cap_init_kvm_defined(CPUID_7_1_EDX,
>> -- 
>> 2.25.1
>>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling
  2023-08-16 22:25 ` Sean Christopherson
@ 2023-08-17  9:17   ` Binbin Wu
  2023-08-18  4:31     ` Binbin Wu
  0 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-08-17  9:17 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng



On 8/17/2023 6:25 AM, Sean Christopherson wrote:
> On Wed, Jul 19, 2023, Binbin Wu wrote:
>> Binbin Wu (7):
>>    KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK
>>    KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
>>    KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
>>    KVM: x86: Virtualize CR3.LAM_{U48,U57}
>>    KVM: x86: Introduce get_untagged_addr() in kvm_x86_ops and call it in
>>      emulator
>>    KVM: VMX: Implement and wire get_untagged_addr() for LAM
>>    KVM: x86: Untag address for vmexit handlers when LAM applicable
>>
>> Robert Hoo (2):
>>    KVM: x86: Virtualize CR4.LAM_SUP
>>    KVM: x86: Expose LAM feature to userspace VMM
> Looks good, just needs a bit of re-organination.  Same goes for the LASS series.
>
> For the next version, can you (or Zeng) send a single series for LAM and LASS?
> They're both pretty much ready to go, i.e. I don't expect one to hold up the other
> at this point, and posting a single series will reduce the probability of me
> screwing up a conflict resolution or missing a dependency when applying.
>
> Lastly, a question: is there a pressing need to get LAM/LASS support merged _now_?
> E.g. are there any publicly available CPUs that support LAM and/or LASS?
AFAIK, there is no publicly available CPU supporting LAM and LASS yet.

>
> If not, I'll wait until v6.7 to grab these, e.g. so that you don't have to rush
> madly to turn around the next version, and so that I'm not trying to squeeze too
> much stuff in just before the merge window.


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 7/9] KVM: VMX: Implement and wire get_untagged_addr() for LAM
  2023-08-16 22:01   ` Sean Christopherson
@ 2023-08-17  9:51     ` Binbin Wu
  2023-08-17 14:44       ` Sean Christopherson
  0 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-08-17  9:51 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng



On 8/17/2023 6:01 AM, Sean Christopherson wrote:
> On Wed, Jul 19, 2023, Binbin Wu wrote:
>> +	return (sign_extend64(gva, lam_bit) & ~BIT_ULL(63)) | (gva & BIT_ULL(63));
> Almost forgot.  Please add a comment explaining how LAM untags the address,
> specifically the whole bit 63 preservation.  The logic is actually straightforward,
> but the above looks way more complex than it actually is.  This?
>
> 	/*
> 	 * Untag the address by sign-extending the LAM bit, but NOT to bit 63.
> 	 * Bit 63 is retained from the raw virtual address so that untagging
> 	 * doesn't change a user access to a supervisor access, and vice versa.
> 	 */
OK.

Besides that, I find I forgot to add the comment for the function. I will
add it back if you don't object.

+/*
+ * Only called in 64-bit mode.
+ *
+ * LAM has a modified canonical check when applicable:
+ * LAM_S48                : [ 1 ][ metadata ][ 1 ]
+ *                            63               47
+ * LAM_U48                : [ 0 ][ metadata ][ 0 ]
+ *                            63               47
+ * LAM_S57                : [ 1 ][ metadata ][ 1 ]
+ *                            63               56
+ * LAM_U57 + 5-lvl paging : [ 0 ][ metadata ][ 0 ]
+ *                            63               56
+ * LAM_U57 + 4-lvl paging : [ 0 ][ metadata ][ 0...0 ]
+ *                            63               56..47
+ *
+ * Note that KVM masks the metadata in addresses, performs the (original)
+ * canonicality checking and then walks the page table. This is slightly
+ * different from hardware behavior but achieves the same effect.
+ * Specifically, if LAM is enabled, the processor performs a modified
+ * canonicality checking where the metadata are ignored instead of
+ * masked. After the modified canonicality checking, the processor masks
+ * the metadata before passing addresses for paging translation.
+ */

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 7/9] KVM: VMX: Implement and wire get_untagged_addr() for LAM
  2023-08-17  9:51     ` Binbin Wu
@ 2023-08-17 14:44       ` Sean Christopherson
  0 siblings, 0 replies; 42+ messages in thread
From: Sean Christopherson @ 2023-08-17 14:44 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng

On Thu, Aug 17, 2023, Binbin Wu wrote:
> 
> 
> On 8/17/2023 6:01 AM, Sean Christopherson wrote:
> > On Wed, Jul 19, 2023, Binbin Wu wrote:
> > > +	return (sign_extend64(gva, lam_bit) & ~BIT_ULL(63)) | (gva & BIT_ULL(63));
> > Almost forgot.  Please add a comment explaining how LAM untags the address,
> > specifically the whole bit 63 preservation.  The logic is actually straightforward,
> > but the above looks way more complex than it actually is.  This?
> > 
> > 	/*
> > 	 * Untag the address by sign-extending the LAM bit, but NOT to bit 63.
> > 	 * Bit 63 is retained from the raw virtual address so that untagging
> > 	 * doesn't change a user access to a supervisor access, and vice versa.
> > 	 */
> OK.
> 
> Besides it, I find I forgot adding the comments for the function. I will add
> it back if you don't object.
> 
> +/*
> + * Only called in 64-bit mode.

This is no longer true.

> + *
> + * LAM has a modified canonical check when applicable:
> + * LAM_S48                : [ 1 ][ metadata ][ 1 ]
> + *                            63               47
> + * LAM_U48                : [ 0 ][ metadata ][ 0 ]
> + *                            63               47
> + * LAM_S57                : [ 1 ][ metadata ][ 1 ]
> + *                            63               56
> + * LAM_U57 + 5-lvl paging : [ 0 ][ metadata ][ 0 ]
> + *                            63               56
> + * LAM_U57 + 4-lvl paging : [ 0 ][ metadata ][ 0...0 ]
> + *                            63               56..47

I vote to not include the table, IMO it does more harm than good, e.g. I only
understood what the last U57+4-lvl entry is conveying after reading the same
figure in the ISE.  Again, the concept of masking bits 62:{56,47} is quite
straightforward, and that's what this function handles.  The gory details of
userspace not 


> + * Note that KVM masks the metadata in addresses, performs the (original)
> + * canonicality check and then walks the page tables. This is slightly
> + * different from the hardware behavior but achieves the same effect.
> + * Specifically, if LAM is enabled, the processor performs a modified
> + * canonicality check in which the metadata bits are ignored instead of
> + * masked. After the modified canonicality check, the processor masks
> + * the metadata before passing the address on for paging translation.

Please drop this.  I don't think we can extrapolate exact hardware behavior from
the ISE blurbs that say the masking is applied after the modified canonicality
check.  Hardware/ucode could very well take the exact same approach as KVM, all
that matters is that the behavior is architecturally correct.

If we're concerned about the blurbs saying the masking is performed *after* the
canonicality checks, e.g. this

  After this modified canonicality check is performed, bits 62:48 are masked by
  sign-extending the value of bit 47 (1)

then the comment should focus on whether or not KVM adheres to the architecture
(SDM), e.g.

/*
 * Note, the SDM states that the linear address is masked *after* the modified
 * canonicality check, whereas KVM masks (untags) the address and then performs
 * a "normal" canonicality check.  Functionally, the two methods are identical,
 * and when the masking occurs relative to the canonicality check isn't visible
 * to software, i.e. KVM's behavior doesn't violate the SDM.
 */
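
FWIW, a quick worked example of that equivalence, using a made-up LAM_S57
pointer (the value below is purely illustrative):

	raw gva:    0xa5ff800012345678   (bit 63 = 1, metadata in bits 62:57)
	KVM:        untag first => 0xffff800012345678, then the normal 5-level
	            canonicality check (bits 63:57 == bit 56) passes.
	hardware:   modified check ignores bits 62:57, sees bit 63 == bit 56 == 1,
	            passes, then masks => 0xffff800012345678.

Either ordering yields the same untagged address and the same pass/fail result.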

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 3/9] KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
  2023-08-17  1:28           ` Binbin Wu
@ 2023-08-17 19:46             ` Sean Christopherson
  0 siblings, 0 replies; 42+ messages in thread
From: Sean Christopherson @ 2023-08-17 19:46 UTC (permalink / raw)
  To: Binbin Wu
  Cc: Kai Huang, Chao Gao, David.Laight@ACULAB.COM, Guang Zeng,
	linux-kernel, pbonzini, kvm, robert.hu

On Thu, Aug 17, 2023, Binbin Wu wrote:
> 
> 
> On 8/17/2023 5:33 AM, Sean Christopherson wrote:
> > On Wed, Aug 16, 2023, Kai Huang wrote:
> > > > > > --- a/arch/x86/kvm/vmx/vmx.c
> > > > > > +++ b/arch/x86/kvm/vmx/vmx.c
> > > > > > @@ -7783,6 +7783,9 @@ static void vmx_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
> > > > > >    		vmx->msr_ia32_feature_control_valid_bits &=
> > > > > >    			~FEAT_CTL_SGX_LC_ENABLED;
> > > > > > +	if (boot_cpu_has(X86_FEATURE_LAM))
> > > > > > +		kvm_governed_feature_check_and_set(vcpu, X86_FEATURE_LAM);
> > > > > > +
> > > > > If you want to use boot_cpu_has(), it's better to do that in your last patch to
> > > > > only set the cap bit when boot_cpu_has() is true, I suppose.
> > > > Yes, but the new version of kvm_governed_feature_check_and_set() in the
> > > > KVM-governed feature framework will check against kvm_cpu_cap_has() as well.
> > > > I will remove the if statement and call
> > > > kvm_governed_feature_check_and_set() directly.
> > > > https://lore.kernel.org/kvm/20230815203653.519297-2-seanjc@google.com/
> > > > 
> > > I mean kvm_cpu_cap_has() checks against the host CPUID directly while here you
> > > are using boot_cpu_has().  They are not the same.
> > > 
> > > If LAM should only be supported when boot_cpu_has() is true, then it seems you
> > > can just set the LAM cap bit only when boot_cpu_has() is true.  As you also
> > > mentioned above, kvm_governed_feature_check_and_set() here internally does
> > > kvm_cpu_cap_has().
> > That's covered by the last patch:
> > 
> > diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> > index e961e9a05847..06061c11d74d 100644
> > --- a/arch/x86/kvm/cpuid.c
> > +++ b/arch/x86/kvm/cpuid.c
> > @@ -677,7 +677,7 @@ void kvm_set_cpu_caps(void)
> >          kvm_cpu_cap_mask(CPUID_7_1_EAX,
> >                  F(AVX_VNNI) | F(AVX512_BF16) | F(CMPCCXADD) |
> >                  F(FZRM) | F(FSRS) | F(FSRC) |
> > -               F(AMX_FP16) | F(AVX_IFMA)
> > +               F(AMX_FP16) | F(AVX_IFMA) | F(LAM)
> >          );
> >          kvm_cpu_cap_init_kvm_defined(CPUID_7_1_EDX,
> > 
> > 
> > Which highlights a problem with activating a governed feature before said feature
> > is actually supported by KVM: it's all kinds of confusing.
> > 
> > It'll generate more churn in the git history, but I think we should first enable
> > LAM without a governed feature, and then activate the governed feature later on.
> > Using a governed feature is purely an optimization, i.e. the series needs to be
> > functional without using a governed feature.
> OK, then how about the second option, which was listed in your v9 patch
> series discussion?
> https://lore.kernel.org/kvm/20230606091842.13123-1-binbin.wu@linux.intel.com/T/#m16ee5cec4a46954f985cb6afedb5f5a3435373a1
> 
> Temporarily add a bool can_use_lam to kvm_vcpu_arch, use "can_use_lam"
> instead of guest_can_use(vcpu, X86_FEATURE_LAM), and then move the patch
> adopting the "KVM-governed feature framework" to the end of the series.

No, just do the completely unoptimized, but functionally obvious thing:

	if (kvm_cpu_cap_has(X86_FEATURE_LAM) &&
	    guest_cpuid_has(vcpu, X86_FEATURE_LAM))
		...

I don't expect anyone to push back on using a governed feature, i.e. I don't expect
to ever see a kernel release with the unoptimized code.  If someone is bisecting
or doing something *really* weird with their kernel management, then yes, they
might see suboptimal performance.

Again, the goal is to separate the addition of functionality from the optimization
of that functionality, e.g. to make it easier to review and understand each change.
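
E.g. a self-contained sketch of the unoptimized approach (the wrapper name
below is made up, use whatever fits):

	/*
	 * Unoptimized: query LAM support on every call instead of caching
	 * the result at CPUID update time via the governed feature framework.
	 */
	static bool guest_can_use_lam(struct kvm_vcpu *vcpu)
	{
		return kvm_cpu_cap_has(X86_FEATURE_LAM) &&
		       guest_cpuid_has(vcpu, X86_FEATURE_LAM);
	}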

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling
  2023-08-17  9:17   ` Binbin Wu
@ 2023-08-18  4:31     ` Binbin Wu
  2023-08-18 13:53       ` Sean Christopherson
  0 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-08-18  4:31 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng



On 8/17/2023 5:17 PM, Binbin Wu wrote:
>
>
> On 8/17/2023 6:25 AM, Sean Christopherson wrote:
>> On Wed, Jul 19, 2023, Binbin Wu wrote:
>>> Binbin Wu (7):
>>>    KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK
>>>    KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
>>>    KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
>>>    KVM: x86: Virtualize CR3.LAM_{U48,U57}
>>>    KVM: x86: Introduce get_untagged_addr() in kvm_x86_ops and call it in
>>>      emulator
>>>    KVM: VMX: Implement and wire get_untagged_addr() for LAM
>>>    KVM: x86: Untag address for vmexit handlers when LAM applicable
>>>
>>> Robert Hoo (2):
>>>    KVM: x86: Virtualize CR4.LAM_SUP
>>>    KVM: x86: Expose LAM feature to userspace VMM
>> Looks good, just needs a bit of re-organization.  Same goes for the LASS
>> series.
>>
>> For the next version, can you (or Zeng) send a single series for LAM and
>> LASS?  They're both pretty much ready to go, i.e. I don't expect one to
>> hold up the other at this point, and posting a single series will reduce
>> the probability of me screwing up a conflict resolution or missing a
>> dependency when applying.
>>
Hi Sean,
Do you still prefer a single series for LAM and LASS for the next version
now that we don't need to rush for v6.6?

>> Lastly, a question: is there a pressing need to get LAM/LASS support 
>> merged _now_?
>> E.g. are there any publicly available CPUs that support LAM 
>> and/or LASS?
> AFAIK, there is no publicly available CPU supporting LAM and LASS yet.
>
>>
>> If not, I'll wait until v6.7 to grab these, e.g. so that you don't 
>> have to rush
>> madly to turn around the next version, and so that I'm not trying to 
>> squeeze too
>> much stuff in just before the merge window.
>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling
  2023-08-18  4:31     ` Binbin Wu
@ 2023-08-18 13:53       ` Sean Christopherson
  2023-08-25 14:18         ` Zeng Guang
  0 siblings, 1 reply; 42+ messages in thread
From: Sean Christopherson @ 2023-08-18 13:53 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng

On Fri, Aug 18, 2023, Binbin Wu wrote:
> 
> On 8/17/2023 5:17 PM, Binbin Wu wrote:
> > 
> > On 8/17/2023 6:25 AM, Sean Christopherson wrote:
> > > On Wed, Jul 19, 2023, Binbin Wu wrote:
> > > > Binbin Wu (7):
> > > >    KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK
> > > >    KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
> > > >    KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
> > > >    KVM: x86: Virtualize CR3.LAM_{U48,U57}
> > > >    KVM: x86: Introduce get_untagged_addr() in kvm_x86_ops and call it in
> > > >      emulator
> > > >    KVM: VMX: Implement and wire get_untagged_addr() for LAM
> > > >    KVM: x86: Untag address for vmexit handlers when LAM applicable
> > > > 
> > > > Robert Hoo (2):
> > > >    KVM: x86: Virtualize CR4.LAM_SUP
> > > >    KVM: x86: Expose LAM feature to userspace VMM
> > > Looks good, just needs a bit of re-organization.  Same goes for the
> > > LASS series.
> > > 
> > > For the next version, can you (or Zeng) send a single series for LAM
> > > and LASS?
> > > They're both pretty much ready to go, i.e. I don't expect one to
> > > hold up the other
> > > at this point, and posting a single series will reduce the
> > > probability of me
> > > screwing up a conflict resolution or missing a dependency when applying.
> > > 
> Hi Sean,
> Do you still prefer a single series for LAM and LASS for the next version
> now that we don't need to rush for v6.6?

Yes, if it's not too much trouble on your end.  Since the two have overlapping
prep work and concepts, and both series are in good shape, my strong preference
is to grab them at the same time.  I would much rather apply what you've tested
and reduce the probability of messing up any conflicts.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling
  2023-08-18 13:53       ` Sean Christopherson
@ 2023-08-25 14:18         ` Zeng Guang
  2023-08-31 20:24           ` Sean Christopherson
  0 siblings, 1 reply; 42+ messages in thread
From: Zeng Guang @ 2023-08-25 14:18 UTC (permalink / raw)
  To: Sean Christopherson, Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, Gao, Chao, Huang, Kai, David.Laight,
	robert.hu


On 8/18/2023 9:53 PM, Sean Christopherson wrote:
> On Fri, Aug 18, 2023, Binbin Wu wrote:
>> On 8/17/2023 5:17 PM, Binbin Wu wrote:
>>> On 8/17/2023 6:25 AM, Sean Christopherson wrote:
>>>> On Wed, Jul 19, 2023, Binbin Wu wrote:
>>>>> Binbin Wu (7):
>>>>>     KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK
>>>>>     KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality
>>>>>     KVM: x86: Use KVM-governed feature framework to track "LAM enabled"
>>>>>     KVM: x86: Virtualize CR3.LAM_{U48,U57}
>>>>>     KVM: x86: Introduce get_untagged_addr() in kvm_x86_ops and call it in
>>>>>       emulator
>>>>>     KVM: VMX: Implement and wire get_untagged_addr() for LAM
>>>>>     KVM: x86: Untag address for vmexit handlers when LAM applicable
>>>>>
>>>>> Robert Hoo (2):
>>>>>     KVM: x86: Virtualize CR4.LAM_SUP
>>>>>     KVM: x86: Expose LAM feature to userspace VMM
>>>> Looks good, just needs a bit of re-organization.  Same goes for the
>>>> LASS series.
>>>>
>>>> For the next version, can you (or Zeng) send a single series for LAM
>>>> and LASS?
>>>> They're both pretty much ready to go, i.e. I don't expect one to
>>>> hold up the other
>>>> at this point, and posting a single series will reduce the
>>>> probability of me
>>>> screwing up a conflict resolution or missing a dependency when applying.
>>>>
>> Hi Sean,
>> Do you still prefer a single series for LAM and LASS for the next version
>> now that we don't need to rush for v6.6?
> Yes, if it's not too much trouble on your end.  Since the two have overlapping
> prep work and concepts, and both series are in good shape, my strong preference
> is to grab them at the same time.  I would much rather apply what you've tested
> and reduce the probability of messing up any conflicts.
>
>
>
Hi Sean,
One more concern: the KVM LASS patches have an extra dependency on the kernel
LASS series, which enumerates the LASS feature bits
(X86_FEATURE_LASS/X86_CR4_LASS).  So far the kernel LASS patches are still
under review.  As an alternative, we could extract the relevant patches and
carry them along with the KVM LASS patch set as a single series, in case
kernel LASS cannot be merged before v6.7.
Do you think that would be OK?


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 1/9] KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK
  2023-08-16 21:00   ` Sean Christopherson
@ 2023-08-28  4:06     ` Binbin Wu
  2023-08-31 19:26       ` Sean Christopherson
  0 siblings, 1 reply; 42+ messages in thread
From: Binbin Wu @ 2023-08-28  4:06 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng



On 8/17/2023 5:00 AM, Sean Christopherson wrote:
> On Wed, Jul 19, 2023, Binbin Wu wrote:
>> Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK.
> Using GENMASK_ULL() is an opportunistic cleanup; it is not the main purpose of
> this patch.  The main purpose is to extract the maximum theoretical mask for guest
> MAXPHYADDR so that it can be used to strip bits from CR3.
>
> And rather than bury the actual use in "KVM: x86: Virtualize CR3.LAM_{U48,U57}",
> I think it makes sense to do the masking in this patch.  That change only becomes
> _necessary_ when LAM comes along, but it's completely valid without LAM.
>
> That will also provide a place to explain why we decided to unconditionally mask
> the pgd (it's harmless for 32-bit guests, querying 64-bit mode would be more
> expensive, and for EPT the mask isn't tied to guest mode).
OK.

> And it should also
> explain that using PT_BASE_ADDR_MASK would actually be wrong (PAE has 64-bit
> elements _except_ for CR3).
Hi Sean, I am not sure if I understand it correctly.
Do you mean that when KVM shadows a guest page table that uses 32-bit paging
or PAE paging, the guest CR3 is (or can be) a 32-bit value, so that applying
the mask to a 32-bit CR3 "would actually be wrong"?


>
> E.g. end up with a shortlog for this patch along the lines of:
>
>    KVM: x86/mmu: Drop non-PA bits when getting GFN for guest's PGD
>
> and write the changelog accordingly.
>
>> No functional change intended.
>>
>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
>> ---
>>   arch/x86/kvm/mmu/mmu_internal.h | 1 +
>>   arch/x86/kvm/mmu/paging_tmpl.h  | 2 +-
>>   2 files changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
>> index d39af5639ce9..7d2105432d66 100644
>> --- a/arch/x86/kvm/mmu/mmu_internal.h
>> +++ b/arch/x86/kvm/mmu/mmu_internal.h
>> @@ -21,6 +21,7 @@ extern bool dbg;
>>   #endif
>>   
>>   /* Page table builder macros common to shadow (host) PTEs and guest PTEs. */
>> +#define __PT_BASE_ADDR_MASK GENMASK_ULL(51, 12)
>>   #define __PT_LEVEL_SHIFT(level, bits_per_level)	\
>>   	(PAGE_SHIFT + ((level) - 1) * (bits_per_level))
>>   #define __PT_INDEX(address, level, bits_per_level) \
>> diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
>> index 0662e0278e70..00c8193f5991 100644
>> --- a/arch/x86/kvm/mmu/paging_tmpl.h
>> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
>> @@ -62,7 +62,7 @@
>>   #endif
>>   
>>   /* Common logic, but per-type values.  These also need to be undefined. */
>> -#define PT_BASE_ADDR_MASK	((pt_element_t)(((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1)))
>> +#define PT_BASE_ADDR_MASK	((pt_element_t)__PT_BASE_ADDR_MASK)
>>   #define PT_LVL_ADDR_MASK(lvl)	__PT_LVL_ADDR_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
>>   #define PT_LVL_OFFSET_MASK(lvl)	__PT_LVL_OFFSET_MASK(PT_BASE_ADDR_MASK, lvl, PT_LEVEL_BITS)
>>   #define PT_INDEX(addr, lvl)	__PT_INDEX(addr, lvl, PT_LEVEL_BITS)
>> -- 
>> 2.25.1
>>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 1/9] KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK
  2023-08-28  4:06     ` Binbin Wu
@ 2023-08-31 19:26       ` Sean Christopherson
  0 siblings, 0 replies; 42+ messages in thread
From: Sean Christopherson @ 2023-08-31 19:26 UTC (permalink / raw)
  To: Binbin Wu
  Cc: kvm, linux-kernel, pbonzini, chao.gao, kai.huang, David.Laight,
	robert.hu, guang.zeng

On Mon, Aug 28, 2023, Binbin Wu wrote:
> 
> 
> On 8/17/2023 5:00 AM, Sean Christopherson wrote:
> > On Wed, Jul 19, 2023, Binbin Wu wrote:
> > > Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK.
> > Using GENMASK_ULL() is an opportunistic cleanup; it is not the main purpose of
> > this patch.  The main purpose is to extract the maximum theoretical mask for guest
> > MAXPHYADDR so that it can be used to strip bits from CR3.
> > 
> > And rather than bury the actual use in "KVM: x86: Virtualize CR3.LAM_{U48,U57}",
> > I think it makes sense to do the masking in this patch.  That change only becomes
> > _necessary_ when LAM comes along, but it's completely valid without LAM.
> > 
> > That will also provide a place to explain why we decided to unconditionally mask
> > the pgd (it's harmless for 32-bit guests, querying 64-bit mode would be more
> > expensive, and for EPT the mask isn't tied to guest mode).
> OK.
> 
> > And it should also
> > explain that using PT_BASE_ADDR_MASK would actually be wrong (PAE has 64-bit
> > elements _except_ for CR3).
> Hi Sean, I am not sure if I understand it correctly.  Do you mean that when
> KVM shadows a guest page table that uses 32-bit paging or PAE paging, the
> guest CR3 is (or can be) a 32-bit value, so that applying the mask to a
> 32-bit CR3 "would actually be wrong"?

It would be technically wrong for PAE paging, as the PTEs themselves are 64 bits,
i.e. PT_BASE_ADDR_MASK would be 51:12, but CR3 is still only 32 bits.  Wouldn't
matter in practice, but I think it's worth calling out that going out of our way
to use PT_BASE_ADDR_MASK wouldn't actually "fix" anything.
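
In other words, the change I have in mind is something like this in
mmu_alloc_shadow_roots() (illustrative; the exact placement is up to you):

	/*
	 * Unconditionally strip non-address bits using the maximum theoretical
	 * guest physical address mask (bits 51:12).  Harmless for 32-bit
	 * guests, whose CR3 never has bits set above bit 31, and avoids having
	 * to query whether the guest is in 64-bit mode.
	 */
	root_pgd = kvm_mmu_get_guest_pgd(vcpu, mmu);
	root_gfn = (root_pgd & __PT_BASE_ADDR_MASK) >> PAGE_SHIFT;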

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling
  2023-08-25 14:18         ` Zeng Guang
@ 2023-08-31 20:24           ` Sean Christopherson
  0 siblings, 0 replies; 42+ messages in thread
From: Sean Christopherson @ 2023-08-31 20:24 UTC (permalink / raw)
  To: Zeng Guang
  Cc: Binbin Wu, kvm, linux-kernel, pbonzini, Chao Gao, Kai Huang,
	David.Laight, robert.hu

On Fri, Aug 25, 2023, Zeng Guang wrote:
> 
> On 8/18/2023 9:53 PM, Sean Christopherson wrote:
> > On Fri, Aug 18, 2023, Binbin Wu wrote:
> > > On 8/17/2023 5:17 PM, Binbin Wu wrote:
> > > > On 8/17/2023 6:25 AM, Sean Christopherson wrote:
> > > > > On Wed, Jul 19, 2023, Binbin Wu wrote:
> > > > > For the next version, can you (or Zeng) send a single series for LAM
> > > > > and LASS?  They're both pretty much ready to go, i.e. I don't expect
> > > > > one to hold up the other at this point, and posting a single series
> > > > > will reduce the probability of me screwing up a conflict resolution
> > > > > or missing a dependency when applying.
> > > > > 
> > > Hi Sean,
> > > Do you still prefer a single series for LAM and LASS for the next version
> > > now that we don't need to rush for v6.6?
> > Yes, if it's not too much trouble on your end.  Since the two have overlapping
> > prep work and concepts, and both series are in good shape, my strong preference
> > is to grab them at the same time.  I would much rather apply what you've tested
> > and reduce the probability of messing up any conflicts.
> > 
> > 
> > 
> Hi Sean,
> One more concern: the KVM LASS patches have an extra dependency on the kernel
> LASS series, which enumerates the LASS feature bits (X86_FEATURE_LASS/X86_CR4_LASS).
> So far the kernel LASS patches are still under review.  As an alternative, we
> could extract the relevant patches and carry them along with the KVM LASS patch
> set as a single series, in case kernel LASS cannot be merged before v6.7.  Do
> you think that would be OK?

Hmm, since getting LASS support in KVM isn't urgent, I think it makes sense to
wait for kernel support, no reason to complicate things.

To avoid delaying LAM, just put all the LAM patches first; it's trivially easy for
me to grab a partial series.

Speaking of kernel support, one thing we should explicitly discuss is whether or
not KVM should require kernel support for LASS, i.e. if KVM should support LASS
if it's present in hardware, even if it's not enabled in the host.

Looking at the kernel patches, LASS will be disabled if vsyscall is in emulate
mode.  Ah, digging deeper, that's an incredibly esoteric and deprecated mode,
see commit bf00745e7791 ("x86/vsyscall: Remove CONFIG_LEGACY_VSYSCALL_EMULATE").

So scratch that, let's again keep things simple.  Might be worth a call out in
the changelog that adds F(LASS), though.
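
I.e. the eventual KVM-side enablement would presumably be a one-liner along
these lines (illustrative; the exact leaf placement depends on the kernel's
enumeration of X86_FEATURE_LASS):

	kvm_cpu_cap_mask(CPUID_7_1_EAX,
		F(AVX_VNNI) | F(AVX512_BF16) | F(CMPCCXADD) |
		F(FZRM) | F(FSRS) | F(FSRC) | F(LASS) |
		F(AMX_FP16) | F(AVX_IFMA) | F(LAM)
	);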

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2023-08-31 20:24 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-07-19 14:41 [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
2023-07-19 14:41 ` [PATCH v10 1/9] KVM: x86/mmu: Use GENMASK_ULL() to define __PT_BASE_ADDR_MASK Binbin Wu
2023-08-16 21:00   ` Sean Christopherson
2023-08-28  4:06     ` Binbin Wu
2023-08-31 19:26       ` Sean Christopherson
2023-07-19 14:41 ` [PATCH v10 2/9] KVM: x86: Add & use kvm_vcpu_is_legal_cr3() to check CR3's legality Binbin Wu
2023-07-20 23:53   ` Isaku Yamahata
2023-07-21  2:20     ` Binbin Wu
2023-07-21 15:03       ` Sean Christopherson
2023-07-24  2:07         ` Binbin Wu
2023-07-25 16:05           ` Sean Christopherson
2023-07-19 14:41 ` [PATCH v10 3/9] KVM: x86: Use KVM-governed feature framework to track "LAM enabled" Binbin Wu
2023-08-16  3:46   ` Huang, Kai
2023-08-16  7:08     ` Binbin Wu
2023-08-16  9:47       ` Huang, Kai
2023-08-16 21:33         ` Sean Christopherson
2023-08-16 23:03           ` Huang, Kai
2023-08-17  1:28           ` Binbin Wu
2023-08-17 19:46             ` Sean Christopherson
2023-07-19 14:41 ` [PATCH v10 4/9] KVM: x86: Virtualize CR4.LAM_SUP Binbin Wu
2023-08-16 21:41   ` Sean Christopherson
2023-07-19 14:41 ` [PATCH v10 5/9] KVM: x86: Virtualize CR3.LAM_{U48,U57} Binbin Wu
2023-08-16 21:44   ` Sean Christopherson
2023-07-19 14:41 ` [PATCH v10 6/9] KVM: x86: Introduce get_untagged_addr() in kvm_x86_ops and call it in emulator Binbin Wu
2023-07-19 14:41 ` [PATCH v10 7/9] KVM: VMX: Implement and wire get_untagged_addr() for LAM Binbin Wu
2023-08-16 22:01   ` Sean Christopherson
2023-08-17  9:51     ` Binbin Wu
2023-08-17 14:44       ` Sean Christopherson
2023-07-19 14:41 ` [PATCH v10 8/9] KVM: x86: Untag address for vmexit handlers when LAM applicable Binbin Wu
2023-08-16 21:49   ` Sean Christopherson
2023-08-16 22:10   ` Sean Christopherson
2023-07-19 14:41 ` [PATCH v10 9/9] KVM: x86: Expose LAM feature to userspace VMM Binbin Wu
2023-08-16 21:53   ` Sean Christopherson
2023-08-17  1:59     ` Binbin Wu
2023-08-15  2:05 ` [PATCH v10 0/9] Linear Address Masking (LAM) KVM Enabling Binbin Wu
2023-08-15 23:49   ` Sean Christopherson
2023-08-16 22:25 ` Sean Christopherson
2023-08-17  9:17   ` Binbin Wu
2023-08-18  4:31     ` Binbin Wu
2023-08-18 13:53       ` Sean Christopherson
2023-08-25 14:18         ` Zeng Guang
2023-08-31 20:24           ` Sean Christopherson
