From: Binbin Wu <binbin.wu@linux.intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: seanjc@google.com, pbonzini@redhat.com, chao.gao@intel.com,
	kai.huang@intel.com, David.Laight@ACULAB.COM,
	robert.hu@linux.intel.com, guang.zeng@intel.com,
	binbin.wu@linux.intel.com
Subject: [PATCH v10 4/9] KVM: x86: Virtualize CR4.LAM_SUP
Date: Wed, 19 Jul 2023 22:41:26 +0800
Message-ID: <20230719144131.29052-5-binbin.wu@linux.intel.com>
In-Reply-To: <20230719144131.29052-1-binbin.wu@linux.intel.com>

From: Robert Hoo <robert.hu@linux.intel.com>

Add support to allow guests to set the new CR4 control bit that enables the new
Intel CPU feature Linear Address Masking (LAM) on supervisor pointers.

LAM modifies the checking applied to 64-bit linear addresses, allowing software
to use the untranslated address bits for metadata; the metadata bits are masked
off before the address is used to access memory. LAM uses CR4.LAM_SUP (bit 28)
to configure LAM for supervisor pointers. LAM also changes VM entry to allow
the bit to be set in the VMCS's HOST_CR4 and GUEST_CR4 fields for
virtualization. Note that CR4.LAM_SUP is allowed to be set even outside 64-bit
mode, but it has no effect there since LAM only applies to 64-bit linear
addresses.
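
For background only (not part of this patch; KVM's untagging hook is wired up
later in this series), a standalone sketch of how supervisor-pointer metadata
could be stripped under LAM. The helper name lam_sup_untag() is made up for
illustration:

#include <stdint.h>

/*
 * Illustrative model of LAM untagging for supervisor pointers: with
 * CR4.LAM_SUP=1, bits 62:48 (4-level paging) or 62:57 (5-level paging,
 * CR4.LA57=1) hold metadata and are replaced with copies of the sign
 * bit (bit 47 or bit 56); bit 63 is left untouched so the canonicality
 * check on the untagged address still works.
 */
static uint64_t lam_sup_untag(uint64_t la, int la57)
{
	int sign_bit = la57 ? 56 : 47;
	/* Metadata bits 62:(sign_bit + 1). */
	uint64_t meta = (((uint64_t)1 << 63) - 1) &
			~(((uint64_t)1 << (sign_bit + 1)) - 1);

	if ((la >> sign_bit) & 1)
		return la | meta;	/* sign bit set: fill metadata with 1s */
	return la & ~meta;		/* sign bit clear: fill metadata with 0s */
}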

Move CR4.LAM_SUP out of CR4_RESERVED_BITS so that whether the bit is reserved
depends on whether the vCPU supports the LAM feature. Leave the bit intercepted
to prevent the guest from setting CR4.LAM_SUP when LAM is not exposed to the
guest, and to avoid a VMREAD every time KVM fetches the bit's value, with the
expectation that the guest won't toggle the bit frequently.
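
For reference, __cr4_reserved_bits() is evaluated against guest CPUID when
CPUID is set, and a guest CR4 value that sets any resulting reserved bit is
rejected. A condensed standalone sketch of that pattern (the function names
here are made up; only the clause added by this patch is shown):

#include <stdbool.h>
#include <stdint.h>

#define X86_CR4_LAM_SUP_BIT	28
#define X86_CR4_LAM_SUP		(1ULL << X86_CR4_LAM_SUP_BIT)

/*
 * Condensed model of the __cr4_reserved_bits() pattern: CR4 bits backed
 * by features the guest doesn't have are treated as reserved.  Real KVM
 * handles many more feature bits.
 */
static uint64_t guest_cr4_reserved_bits(bool guest_has_lam)
{
	uint64_t reserved = 0;

	if (!guest_has_lam)
		reserved |= X86_CR4_LAM_SUP;
	return reserved;
}

/* A guest CR4 write is rejected if it sets any reserved bit. */
static bool cr4_write_allowed(uint64_t cr4, bool guest_has_lam)
{
	return !(cr4 & guest_cr4_reserved_bits(guest_has_lam));
}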

Set the CR4.LAM_SUP bit in the emulated IA32_VMX_CR4_FIXED1 MSR so that guests
can enable LAM for supervisor pointers in nested VMX operation.
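
For context on the vmx.c hunk below: nested_vmx_cr_fixed1_bits_update() already
defines a local helper macro that the new lines rely on; it looks roughly like
this (existing code, not modified by this patch):

#define cr4_fixed1_update(_cr4_mask, _reg, _cpuid_mask) do {		\
	if (entry && (entry->_reg & (_cpuid_mask)))			\
		vmx->nested.msrs.cr4_fixed1 |= (_cr4_mask);		\
} while (0)

So the hunk first re-fetches the CPUID 0x7 sub-leaf 1 entry (where the LAM
feature bit is enumerated in EAX) and sets CR4.LAM_SUP in cr4_fixed1 only when
that bit is present.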

Hardware is not required to do a TLB flush when CR4.LAM_SUP is toggled, so KVM
doesn't need to emulate a TLB flush based on it.
There's no connection to other features or vmx_exec_controls, so no other code
is needed in {kvm,vmx}_set_cr4().

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Tested-by: Xuelian Guo <xuelian.guo@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 3 ++-
 arch/x86/kvm/vmx/vmx.c          | 3 +++
 arch/x86/kvm/x86.h              | 2 ++
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index e8e1101a90c8..881a0be862e1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -125,7 +125,8 @@
 			  | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
 			  | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
 			  | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
-			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
+			  | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
+			  | X86_CR4_LAM_SUP))
 
 #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ae47303c88d7..a0d6ea87a2d0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7646,6 +7646,9 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
 	cr4_fixed1_update(X86_CR4_UMIP,       ecx, feature_bit(UMIP));
 	cr4_fixed1_update(X86_CR4_LA57,       ecx, feature_bit(LA57));
 
+	entry = kvm_find_cpuid_entry_index(vcpu, 0x7, 1);
+	cr4_fixed1_update(X86_CR4_LAM_SUP,    eax, feature_bit(LAM));
+
 #undef cr4_fixed1_update
 }
 
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 82e3dafc5453..24e2b56356b8 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -528,6 +528,8 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
 		__reserved_bits |= X86_CR4_VMXE;        \
 	if (!__cpu_has(__c, X86_FEATURE_PCID))          \
 		__reserved_bits |= X86_CR4_PCIDE;       \
+	if (!__cpu_has(__c, X86_FEATURE_LAM))           \
+		__reserved_bits |= X86_CR4_LAM_SUP;     \
 	__reserved_bits;                                \
 })
 
-- 
2.25.1

