kvm.vger.kernel.org archive mirror
* [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling
@ 2021-05-10  8:15 Like Xu
  2021-05-10  8:15 ` [RESEND PATCH v4 01/10] perf/x86/intel: Fix the comment about guest LBR support on KVM Like Xu
                   ` (11 more replies)
  0 siblings, 12 replies; 18+ messages in thread
From: Like Xu @ 2021-05-10  8:15 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, weijiang.yang, wei.w.wang, kvm, linux-kernel

Hi geniuses,

A new kernel cycle has begun, and this version looks promising. 

From the end user's point of view, the usage of Arch LBR is the same as
the legacy LBR we have merged in the mainline, but it is much faster.

The Architectural Last Branch Records (Arch LBR) feature is published
in the 319433-040 release of the Intel Architecture Instruction
Set Extensions and Future Features Programming Reference [0].

The main advantages for the Arch LBR users are [1]:
- Faster context switching due to XSAVES support and faster reset of
  LBR MSRs via the new DEPTH MSR
- Faster LBR read for a non-PEBS event due to XSAVES support, which
  lowers the overhead of the NMI handler.
- Linux kernel can support the LBR features without knowing the model
  number of the current CPU.

Please check more details in each commit and feel free to comment.

[0] https://software.intel.com/content/www/us/en/develop/download/
intel-architecture-instruction-set-extensions-and-future-features-programming-reference.html
[1] https://lore.kernel.org/lkml/1593780569-62993-1-git-send-email-kan.liang@linux.intel.com/

---
v4->v4 RESEND Changelog:
- Rebase to kvm/queue tree tag: kvm-5.13-2;
- Includes two XSS dependency patches from kvm/intel tree;

v3->v4 Changelog:
- Add one more host patch to reuse ARCH_LBR_CTL_MASK;
- Add reserve_lbr_buffers() instead of using GFP_ATOMIC;
- Fix a bug in arch_lbr_depth_is_valid();
- Add LBR_CTL_EN to unify DEBUGCTLMSR_LBR and ARCH_LBR_CTL_LBREN;
- Add vmx->host_lbrctlmsr to save/restore host values;
- Add KVM_SUPPORTED_XSS to refactoring supported_xss;
- Clear ARCH_LBR and its XSS bit if it's not supported;
- Add negative testing to the related kvm-unit-tests;
- Refine code and commit messages;

Previous:
v4: https://lore.kernel.org/kvm/20210314155225.206661-1-like.xu@linux.intel.com/
v3: https://lore.kernel.org/kvm/20210303135756.1546253-1-like.xu@linux.intel.com/

Like Xu (8):
  perf/x86/intel: Fix the comment about guest LBR support on KVM
  perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers
  KVM: vmx/pmu: Add MSR_ARCH_LBR_DEPTH emulation for Arch LBR
  KVM: vmx/pmu: Add MSR_ARCH_LBR_CTL emulation for Arch LBR
  KVM: vmx/pmu: Add Arch LBR emulation and its VMCS field
  KVM: x86: Expose Architectural LBR CPUID leaf
  KVM: x86: Refine the matching and clearing logic for supported_xss
  KVM: x86: Add XSAVE Support for Architectural LBRs

Sean Christopherson (1):
  KVM: x86: Report XSS as an MSR to be saved if there are supported
    features

Yang Weijiang (1):
  KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS

 arch/x86/events/intel/core.c     |   3 +-
 arch/x86/events/intel/lbr.c      |   6 +-
 arch/x86/include/asm/kvm_host.h  |   1 +
 arch/x86/include/asm/msr-index.h |   1 +
 arch/x86/include/asm/vmx.h       |   4 ++
 arch/x86/kvm/cpuid.c             |  46 ++++++++++++--
 arch/x86/kvm/vmx/capabilities.h  |  25 +++++---
 arch/x86/kvm/vmx/pmu_intel.c     | 103 ++++++++++++++++++++++++++++---
 arch/x86/kvm/vmx/vmx.c           |  50 +++++++++++++--
 arch/x86/kvm/vmx/vmx.h           |   4 ++
 arch/x86/kvm/x86.c               |  19 +++++-
 11 files changed, 226 insertions(+), 36 deletions(-)

-- 
2.31.1



* [RESEND PATCH v4 01/10] perf/x86/intel: Fix the comment about guest LBR support on KVM
  2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
@ 2021-05-10  8:15 ` Like Xu
  2021-05-10  8:15 ` [RESEND PATCH v4 02/10] perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers Like Xu
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Like Xu @ 2021-05-10  8:15 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, weijiang.yang, wei.w.wang, kvm, linux-kernel

Starting with v5.12, KVM reports guest LBR and extra_regs support
when the host has the relevant support. Just delete this part of the
comment and incidentally fix the "LBT" typo.

Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
---
I personally recommend that this patch go to the mainline through the KVM tree.

 arch/x86/events/intel/core.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 2521d03de5e0..612b4bcf0634 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -6249,8 +6249,7 @@ __init int intel_pmu_init(void)
 					  x86_pmu.intel_ctrl);
 	/*
 	 * Access LBR MSR may cause #GP under certain circumstances.
-	 * E.g. KVM doesn't support LBR MSR
-	 * Check all LBT MSR here.
+	 * Check all LBR MSR here.
 	 * Disable LBR access if any LBR MSRs can not be accessed.
 	 */
 	if (x86_pmu.lbr_nr && !check_msr(x86_pmu.lbr_tos, 0x3UL))
-- 
2.31.1



* [RESEND PATCH v4 02/10] perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers
  2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
  2021-05-10  8:15 ` [RESEND PATCH v4 01/10] perf/x86/intel: Fix the comment about guest LBR support on KVM Like Xu
@ 2021-05-10  8:15 ` Like Xu
  2021-05-10  8:15 ` [RESEND PATCH v4 03/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_DEPTH emulation for Arch LBR Like Xu
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Like Xu @ 2021-05-10  8:15 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, weijiang.yang, wei.w.wang, kvm, linux-kernel

The x86_pmu.lbr_info is 0 unless explicitly initialized, so there's
no point checking x86_pmu.intel_cap.lbr_format.

Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
---
I personally recommend that this patch go to the mainline through the KVM tree.

 arch/x86/events/intel/lbr.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
index 76dbab6ac9fb..f72276f4a5ce 100644
--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -1833,12 +1833,10 @@ void __init intel_pmu_arch_lbr_init(void)
  */
 int x86_perf_get_lbr(struct x86_pmu_lbr *lbr)
 {
-	int lbr_fmt = x86_pmu.intel_cap.lbr_format;
-
 	lbr->nr = x86_pmu.lbr_nr;
 	lbr->from = x86_pmu.lbr_from;
 	lbr->to = x86_pmu.lbr_to;
-	lbr->info = (lbr_fmt == LBR_FORMAT_INFO) ? x86_pmu.lbr_info : 0;
+	lbr->info = x86_pmu.lbr_info;
 
 	return 0;
 }
-- 
2.31.1



* [RESEND PATCH v4 03/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_DEPTH emulation for Arch LBR
  2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
  2021-05-10  8:15 ` [RESEND PATCH v4 01/10] perf/x86/intel: Fix the comment about guest LBR support on KVM Like Xu
  2021-05-10  8:15 ` [RESEND PATCH v4 02/10] perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers Like Xu
@ 2021-05-10  8:15 ` Like Xu
  2021-06-23 18:03   ` Jim Mattson
  2021-05-10  8:15 ` [RESEND PATCH v4 04/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_CTL " Like Xu
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 18+ messages in thread
From: Like Xu @ 2021-05-10  8:15 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, weijiang.yang, wei.w.wang, kvm, linux-kernel

The number of Arch LBR entries available for recording operations
is dictated by the value in MSR_ARCH_LBR_DEPTH.DEPTH. The supported
LBR depth values can be found in CPUID.(EAX=01CH, ECX=0):EAX[7:0];
for each bit "n" set in this field, a MSR_ARCH_LBR_DEPTH.DEPTH
value of "8*(n+1)" is supported.

On a guest write to MSR_ARCH_LBR_DEPTH, all LBR entries are reset to 0.
KVM emulates the reset behavior by introducing lbr_desc->arch_lbr_reset.
KVM writes the guest-requested value to the native ARCH_LBR_DEPTH MSR
(this is safe because the two values will be the same) when the Arch LBR
record MSRs are passed through to the guest.
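
Seen from the guest, the expected behavior can be sketched as below
(illustrative only, using kvm-unit-tests-style rdmsr()/wrmsr() helpers
and assuming CPUID.(EAX=01CH, ECX=0):EAX[7:0] = 0x80, i.e. only a
depth of 64 is supported):

	wrmsr(MSR_ARCH_LBR_DEPTH, 64);	/* valid: also zeroes all LBR entries */
	/* All FROM_IP/TO_IP entries now read as 0. */
	wrmsr(MSR_ARCH_LBR_DEPTH, 32);	/* invalid (bit 3 is clear): #GP */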

Signed-off-by: Like Xu <like.xu@linux.intel.com>
---
 arch/x86/kvm/vmx/pmu_intel.c | 43 ++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.h       |  3 +++
 2 files changed, 46 insertions(+)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 9efc1a6b8693..d9c9cb6c9a4b 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -220,6 +220,9 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
 		ret = pmu->version > 1;
 		break;
+	case MSR_ARCH_LBR_DEPTH:
+		ret = guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR);
+		break;
 	default:
 		ret = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0) ||
 			get_gp_pmc(pmu, msr, MSR_P6_EVNTSEL0) ||
@@ -250,6 +253,7 @@ static inline void intel_pmu_release_guest_lbr_event(struct kvm_vcpu *vcpu)
 	if (lbr_desc->event) {
 		perf_event_release_kernel(lbr_desc->event);
 		lbr_desc->event = NULL;
+		lbr_desc->arch_lbr_reset = false;
 		vcpu_to_pmu(vcpu)->event_count--;
 	}
 }
@@ -348,10 +352,26 @@ static bool intel_pmu_handle_lbr_msrs_access(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+/*
+ * Check if the requested depth value is supported
+ * based on bits [7:0] of the guest cpuid.1c.eax.
+ */
+static bool arch_lbr_depth_is_valid(struct kvm_vcpu *vcpu, u64 depth)
+{
+	struct kvm_cpuid_entry2 *best;
+
+	best = kvm_find_cpuid_entry(vcpu, 0x1c, 0);
+	if (best && depth && (depth < 65) && !(depth & 7))
+		return best->eax & BIT_ULL(depth / 8 - 1);
+
+	return false;
+}
+
 static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
+	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
 	u32 msr = msr_info->index;
 
 	switch (msr) {
@@ -367,6 +387,9 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
 		msr_info->data = pmu->global_ovf_ctrl;
 		return 0;
+	case MSR_ARCH_LBR_DEPTH:
+		msr_info->data = lbr_desc->records.nr;
+		return 0;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -393,6 +416,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
+	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
 	u32 msr = msr_info->index;
 	u64 data = msr_info->data;
 
@@ -427,6 +451,12 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 0;
 		}
 		break;
+	case MSR_ARCH_LBR_DEPTH:
+		if (!arch_lbr_depth_is_valid(vcpu, data))
+			return 1;
+		lbr_desc->records.nr = data;
+		lbr_desc->arch_lbr_reset = true;
+		return 0;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -566,6 +596,7 @@ static void intel_pmu_init(struct kvm_vcpu *vcpu)
 	lbr_desc->records.nr = 0;
 	lbr_desc->event = NULL;
 	lbr_desc->msr_passthrough = false;
+	lbr_desc->arch_lbr_reset = false;
 }
 
 static void intel_pmu_reset(struct kvm_vcpu *vcpu)
@@ -623,6 +654,15 @@ static void intel_pmu_deliver_pmi(struct kvm_vcpu *vcpu)
 		intel_pmu_legacy_freezing_lbrs_on_pmi(vcpu);
 }
 
+static void intel_pmu_arch_lbr_reset(struct kvm_vcpu *vcpu)
+{
+	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
+
+	/* On a software write to IA32_LBR_DEPTH, all LBR entries are reset to 0. */
+	wrmsrl(MSR_ARCH_LBR_DEPTH, lbr_desc->records.nr);
+	lbr_desc->arch_lbr_reset = false;
+}
+
 static void vmx_update_intercept_for_lbr_msrs(struct kvm_vcpu *vcpu, bool set)
 {
 	struct x86_pmu_lbr *lbr = vcpu_to_lbr_records(vcpu);
@@ -654,6 +694,9 @@ static inline void vmx_enable_lbr_msrs_passthrough(struct kvm_vcpu *vcpu)
 {
 	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
 
+	if (unlikely(lbr_desc->arch_lbr_reset))
+		intel_pmu_arch_lbr_reset(vcpu);
+
 	if (lbr_desc->msr_passthrough)
 		return;
 
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 16e4e457ba23..cc362e2d3eaa 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -116,6 +116,9 @@ struct lbr_desc {
 
 	/* True if LBRs are marked as not intercepted in the MSR bitmap */
 	bool msr_passthrough;
+
+	/* Reset all LBR entries on a guest write to MSR_ARCH_LBR_DEPTH */
+	bool arch_lbr_reset;
 };
 
 /*
-- 
2.31.1



* [RESEND PATCH v4 04/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_CTL emulation for Arch LBR
  2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
                   ` (2 preceding siblings ...)
  2021-05-10  8:15 ` [RESEND PATCH v4 03/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_DEPTH emulation for Arch LBR Like Xu
@ 2021-05-10  8:15 ` Like Xu
  2021-06-23 18:29   ` Jim Mattson
  2021-05-10  8:15 ` [RESEND PATCH v4 05/10] KVM: vmx/pmu: Add Arch LBR emulation and its VMCS field Like Xu
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 18+ messages in thread
From: Like Xu @ 2021-05-10  8:15 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, weijiang.yang, wei.w.wang, kvm, linux-kernel

Arch LBRs are enabled by setting MSR_ARCH_LBR_CTL.LBREn to 1. A new guest
state field named "Guest IA32_LBR_CTL" is added to enhance guest LBR usage.
When guest Arch LBR is enabled, a guest LBR event will be created like the
model-specific LBR does.

On processors that support Arch LBR, MSR_IA32_DEBUGCTLMSR[bit 0] has no
meaning. It can be written to 0 or 1, but reads will always return 0.
Like IA32_DEBUGCTL, the IA32_ARCH_LBR_CTL MSR is also preserved on INIT.
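
A guest-visible consequence of this, as a sketch (kvm-unit-tests-style
wrmsr()/rdmsr()/report() helpers assumed; illustrative only):

	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);	/* accepted, no #GP */
	/* Bit 0 is masked before it reaches the VMCS, so it reads back as 0: */
	report(!(rdmsr(MSR_IA32_DEBUGCTLMSR) & DEBUGCTLMSR_LBR),
	       "DEBUGCTL.LBR reads as 0 when Arch LBR is enumerated");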

Signed-off-by: Like Xu <like.xu@linux.intel.com>
---
 arch/x86/events/intel/lbr.c      |  2 --
 arch/x86/include/asm/msr-index.h |  1 +
 arch/x86/include/asm/vmx.h       |  2 ++
 arch/x86/kvm/vmx/pmu_intel.c     | 31 ++++++++++++++++++++++++++-----
 arch/x86/kvm/vmx/vmx.c           |  9 +++++++++
 5 files changed, 38 insertions(+), 7 deletions(-)

diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
index f72276f4a5ce..df965fee5988 100644
--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -168,8 +168,6 @@ enum {
 	 ARCH_LBR_RETURN		|\
 	 ARCH_LBR_OTHER_BRANCH)
 
-#define ARCH_LBR_CTL_MASK			0x7f000e
-
 static void intel_pmu_lbr_filter(struct cpu_hw_events *cpuc);
 
 static __always_inline bool is_lbr_call_stack_bit_set(u64 config)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 742d89a00721..5d84e8e21330 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -169,6 +169,7 @@
 #define LBR_INFO_BR_TYPE		(0xfull << LBR_INFO_BR_TYPE_OFFSET)
 
 #define MSR_ARCH_LBR_CTL		0x000014ce
+#define ARCH_LBR_CTL_MASK		0x7f000e
 #define ARCH_LBR_CTL_LBREN		BIT(0)
 #define ARCH_LBR_CTL_CPL_OFFSET		1
 #define ARCH_LBR_CTL_CPL		(0x3ull << ARCH_LBR_CTL_CPL_OFFSET)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 0ffaa3156a4e..ea3be961cc8e 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -245,6 +245,8 @@ enum vmcs_field {
 	GUEST_BNDCFGS_HIGH              = 0x00002813,
 	GUEST_IA32_RTIT_CTL		= 0x00002814,
 	GUEST_IA32_RTIT_CTL_HIGH	= 0x00002815,
+	GUEST_IA32_LBR_CTL		= 0x00002816,
+	GUEST_IA32_LBR_CTL_HIGH		= 0x00002817,
 	HOST_IA32_PAT			= 0x00002c00,
 	HOST_IA32_PAT_HIGH		= 0x00002c01,
 	HOST_IA32_EFER			= 0x00002c02,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index d9c9cb6c9a4b..15490d31b828 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -19,6 +19,12 @@
 #include "pmu.h"
 
 #define MSR_PMC_FULL_WIDTH_BIT      (MSR_IA32_PMC0 - MSR_IA32_PERFCTR0)
+/*
+ * For both the Arch LBR and the legacy LBR, LBR recording is enabled
+ * when bit 0 (LBREn) of the corresponding control MSR is set to 1.
+ */
+#define LBR_CTL_EN	BIT(0)
+#define KVM_ARCH_LBR_CTL_MASK	(ARCH_LBR_CTL_MASK | LBR_CTL_EN)
 
 static struct kvm_event_hw_type_mapping intel_arch_events[] = {
 	/* Index must match CPUID 0x0A.EBX bit vector */
@@ -221,6 +227,7 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 		ret = pmu->version > 1;
 		break;
 	case MSR_ARCH_LBR_DEPTH:
+	case MSR_ARCH_LBR_CTL:
 		ret = guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR);
 		break;
 	default:
@@ -390,6 +397,9 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_ARCH_LBR_DEPTH:
 		msr_info->data = lbr_desc->records.nr;
 		return 0;
+	case MSR_ARCH_LBR_CTL:
+		msr_info->data = vmcs_read64(GUEST_IA32_LBR_CTL);
+		return 0;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -457,6 +467,14 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		lbr_desc->records.nr = data;
 		lbr_desc->arch_lbr_reset = true;
 		return 0;
+	case MSR_ARCH_LBR_CTL:
+		if (data & ~KVM_ARCH_LBR_CTL_MASK)
+			break;
+		vmcs_write64(GUEST_IA32_LBR_CTL, data);
+		if (intel_pmu_lbr_is_enabled(vcpu) && !lbr_desc->event &&
+		    (data & ARCH_LBR_CTL_LBREN))
+			intel_pmu_create_guest_lbr_event(vcpu);
+		return 0;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -635,12 +653,15 @@ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
  */
 static void intel_pmu_legacy_freezing_lbrs_on_pmi(struct kvm_vcpu *vcpu)
 {
-	u64 data = vmcs_read64(GUEST_IA32_DEBUGCTL);
+	u32 lbr_ctl_field = GUEST_IA32_DEBUGCTL;
 
-	if (data & DEBUGCTLMSR_FREEZE_LBRS_ON_PMI) {
-		data &= ~DEBUGCTLMSR_LBR;
-		vmcs_write64(GUEST_IA32_DEBUGCTL, data);
-	}
+	if (!(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_FREEZE_LBRS_ON_PMI))
+		return;
+
+	if (guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
+		lbr_ctl_field = GUEST_IA32_LBR_CTL;
+
+	vmcs_write64(lbr_ctl_field, vmcs_read64(lbr_ctl_field) & ~LBR_CTL_EN);
 }
 
 static void intel_pmu_deliver_pmi(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f2fd447eed45..458d84672104 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2087,6 +2087,13 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 						VM_EXIT_SAVE_DEBUG_CONTROLS)
 			get_vmcs12(vcpu)->guest_ia32_debugctl = data;
 
+		/*
+		 * For Arch LBR, IA32_DEBUGCTL[bit 0] has no meaning.
+		 * It can be written to 0 or 1, but reads will always return 0.
+		 */
+		if (guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
+			data &= ~DEBUGCTLMSR_LBR;
+
 		vmcs_write64(GUEST_IA32_DEBUGCTL, data);
 		if (intel_pmu_lbr_is_enabled(vcpu) && !to_vmx(vcpu)->lbr_desc.event &&
 		    (data & DEBUGCTLMSR_LBR))
@@ -4526,6 +4533,8 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 		vmcs_writel(GUEST_SYSENTER_ESP, 0);
 		vmcs_writel(GUEST_SYSENTER_EIP, 0);
 		vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
+		if (cpu_has_vmx_arch_lbr())
+			vmcs_write64(GUEST_IA32_LBR_CTL, 0);
 	}
 
 	kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
-- 
2.31.1



* [RESEND PATCH v4 05/10] KVM: vmx/pmu: Add Arch LBR emulation and its VMCS field
  2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
                   ` (3 preceding siblings ...)
  2021-05-10  8:15 ` [RESEND PATCH v4 04/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_CTL " Like Xu
@ 2021-05-10  8:15 ` Like Xu
  2021-05-10  8:15 ` [RESEND PATCH v4 06/10] KVM: x86: Expose Architectural LBR CPUID leaf Like Xu
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Like Xu @ 2021-05-10  8:15 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, weijiang.yang, wei.w.wang, kvm, linux-kernel

New VMX control bits for Arch LBR are added. When bit 21 in vmentry_ctrl
is set, VM entry will write the value from the "Guest IA32_LBR_CTL" guest
state field to IA32_LBR_CTL. When bit 26 in vmexit_ctrl is set, VM exit
will clear IA32_LBR_CTL after the value has been saved to the "Guest
IA32_LBR_CTL" guest state field. The host value is saved before
VM-entry and restored after VM-exit, just like the legacy host_debugctlmsr.

To enable guest Arch LBR, KVM should set both the "Load Guest IA32_LBR_CTL"
entry control and the "Clear IA32_LBR_CTL" exit control bits. If these two
conditions cannot be met, KVM will clear the LBR_FMT bits and will not
expose the Arch LBR feature.

If Arch LBR is exposed on KVM, the guest should set both the ARCH_LBR CPUID
and the same LBR_FMT value as the host via MSR_IA32_PERF_CAPABILITIES to
enable guest Arch LBR.

KVM will bypass the host/guest x86 CPU model check, and the LBR record
MSRs can still be passed through to the guest as usual and work like a
model-specific LBR. KVM is consistent with the host and does not support
the LER entry.
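
The resulting handling of the host's IA32_LBR_CTL value mirrors the
existing host_debugctlmsr logic; a sketch of the flow, using the
helpers introduced by this patch:

	/* vmx_vcpu_load(): snapshot the host value. */
	vmx->host_lbrctlmsr = get_lbrctlmsr();

	/* VM-entry loads GUEST_IA32_LBR_CTL; VM-exit clears IA32_LBR_CTL. */

	/* vmx_vcpu_run() tail: restore the host value if it was non-zero. */
	if (vmx->host_lbrctlmsr)
		update_lbrctlmsr(vmx->host_lbrctlmsr);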

Signed-off-by: Like Xu <like.xu@linux.intel.com>
---
 arch/x86/include/asm/vmx.h      |  2 ++
 arch/x86/kvm/vmx/capabilities.h | 25 +++++++++++++++++--------
 arch/x86/kvm/vmx/pmu_intel.c    | 27 ++++++++++++++++++++++-----
 arch/x86/kvm/vmx/vmx.c          | 32 ++++++++++++++++++++++++++++++--
 arch/x86/kvm/vmx/vmx.h          |  1 +
 5 files changed, 72 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index ea3be961cc8e..d9b1dffc4638 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -95,6 +95,7 @@
 #define VM_EXIT_CLEAR_BNDCFGS                   0x00800000
 #define VM_EXIT_PT_CONCEAL_PIP			0x01000000
 #define VM_EXIT_CLEAR_IA32_RTIT_CTL		0x02000000
+#define VM_EXIT_CLEAR_IA32_LBR_CTL		0x04000000
 
 #define VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR	0x00036dff
 
@@ -108,6 +109,7 @@
 #define VM_ENTRY_LOAD_BNDCFGS                   0x00010000
 #define VM_ENTRY_PT_CONCEAL_PIP			0x00020000
 #define VM_ENTRY_LOAD_IA32_RTIT_CTL		0x00040000
+#define VM_ENTRY_LOAD_IA32_LBR_CTL		0x00200000
 
 #define VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR	0x000011ff
 
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index 8dee8a5fbc17..ff2904950bb2 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -378,20 +378,29 @@ static inline bool vmx_pt_mode_is_host_guest(void)
 	return pt_mode == PT_MODE_HOST_GUEST;
 }
 
-static inline u64 vmx_get_perf_capabilities(void)
+static inline bool cpu_has_vmx_arch_lbr(void)
 {
-	u64 perf_cap = 0;
-
-	if (boot_cpu_has(X86_FEATURE_PDCM))
-		rdmsrl(MSR_IA32_PERF_CAPABILITIES, perf_cap);
-
-	perf_cap &= PMU_CAP_LBR_FMT;
+	return (vmcs_config.vmexit_ctrl & VM_EXIT_CLEAR_IA32_LBR_CTL) &&
+		(vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_LBR_CTL);
+}
 
+static inline u64 vmx_get_perf_capabilities(void)
+{
 	/*
 	 * Since counters are virtualized, KVM would support full
 	 * width counting unconditionally, even if the host lacks it.
 	 */
-	return PMU_CAP_FW_WRITES | perf_cap;
+	u64 perf_cap = PMU_CAP_FW_WRITES;
+	u64 host_perf_cap = 0;
+
+	if (boot_cpu_has(X86_FEATURE_PDCM))
+		rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);
+
+	perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
+	if (boot_cpu_has(X86_FEATURE_ARCH_LBR) && !cpu_has_vmx_arch_lbr())
+		perf_cap &= ~PMU_CAP_LBR_FMT;
+
+	return perf_cap;
 }
 
 static inline u64 vmx_supported_debugctl(void)
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 15490d31b828..9199d3974d57 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -181,12 +181,16 @@ static inline struct kvm_pmc *get_fw_gp_pmc(struct kvm_pmu *pmu, u32 msr)
 
 bool intel_pmu_lbr_is_compatible(struct kvm_vcpu *vcpu)
 {
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
+		return guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR);
+
 	/*
 	 * As a first step, a guest could only enable LBR feature if its
 	 * cpu model is the same as the host because the LBR registers
 	 * would be pass-through to the guest and they're model specific.
 	 */
-	return boot_cpu_data.x86_model == guest_cpuid_model(vcpu);
+	return !boot_cpu_has(X86_FEATURE_ARCH_LBR) &&
+		boot_cpu_data.x86_model == guest_cpuid_model(vcpu);
 }
 
 bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu)
@@ -204,8 +208,11 @@ static bool intel_pmu_is_valid_lbr_msr(struct kvm_vcpu *vcpu, u32 index)
 	if (!intel_pmu_lbr_is_enabled(vcpu))
 		return ret;
 
-	ret = (index == MSR_LBR_SELECT) || (index == MSR_LBR_TOS) ||
-		(index >= records->from && index < records->from + records->nr) ||
+	if (!guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
+		ret = (index == MSR_LBR_SELECT) || (index == MSR_LBR_TOS);
+
+	if (!ret)
+		ret = (index >= records->from && index < records->from + records->nr) ||
 		(index >= records->to && index < records->to + records->nr);
 
 	if (!ret && records->info)
@@ -696,6 +703,9 @@ static void vmx_update_intercept_for_lbr_msrs(struct kvm_vcpu *vcpu, bool set)
 			vmx_set_intercept_for_msr(vcpu, lbr->info + i, MSR_TYPE_RW, set);
 	}
 
+	if (guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
+		return;
+
 	vmx_set_intercept_for_msr(vcpu, MSR_LBR_SELECT, MSR_TYPE_RW, set);
 	vmx_set_intercept_for_msr(vcpu, MSR_LBR_TOS, MSR_TYPE_RW, set);
 }
@@ -739,10 +749,13 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
+	bool lbr_enable = guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) ?
+		(vmcs_read64(GUEST_IA32_LBR_CTL) & ARCH_LBR_CTL_LBREN) :
+		(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR);
 
 	if (!lbr_desc->event) {
 		vmx_disable_lbr_msrs_passthrough(vcpu);
-		if (vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR)
+		if (lbr_enable)
 			goto warn;
 		if (test_bit(INTEL_PMC_IDX_FIXED_VLBR, pmu->pmc_in_use))
 			goto warn;
@@ -765,7 +778,11 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
 
 static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 {
-	if (!(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR))
+	bool lbr_enable = guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) ?
+		(vmcs_read64(GUEST_IA32_LBR_CTL) & ARCH_LBR_CTL_LBREN) :
+		(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR);
+
+	if (!lbr_enable)
 		intel_pmu_release_guest_lbr_event(vcpu);
 }
 
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 458d84672104..74f0b302f4a2 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -670,6 +670,9 @@ static bool is_valid_passthrough_msr(u32 msr)
 	case MSR_LBR_NHM_TO ... MSR_LBR_NHM_TO + 31:
 	case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
 	case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
+	case MSR_ARCH_LBR_FROM_0 ... MSR_ARCH_LBR_FROM_0 + 31:
+	case MSR_ARCH_LBR_TO_0 ... MSR_ARCH_LBR_TO_0 + 31:
+	case MSR_ARCH_LBR_INFO_0 ... MSR_ARCH_LBR_INFO_0 + 31:
 		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
 		return true;
 	}
@@ -1396,6 +1399,26 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
 		decache_tsc_multiplier(vmx);
 }
 
+static inline unsigned long get_lbrctlmsr(void)
+{
+	unsigned long lbrctlmsr = 0;
+
+	if (!static_cpu_has(X86_FEATURE_ARCH_LBR))
+		return 0;
+
+	rdmsrl(MSR_ARCH_LBR_CTL, lbrctlmsr);
+
+	return lbrctlmsr;
+}
+
+static inline void update_lbrctlmsr(unsigned long lbrctlmsr)
+{
+	if (!static_cpu_has(X86_FEATURE_ARCH_LBR))
+		return;
+
+	wrmsrl(MSR_ARCH_LBR_CTL, lbrctlmsr);
+}
+
 /*
  * Switches to specified vcpu, until a matching vcpu_put(), but assumes
  * vcpu mutex is already taken.
@@ -1409,6 +1432,7 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	vmx_vcpu_pi_load(vcpu, cpu);
 
 	vmx->host_debugctlmsr = get_debugctlmsr();
+	vmx->host_lbrctlmsr = get_lbrctlmsr();
 }
 
 static void vmx_vcpu_put(struct kvm_vcpu *vcpu)
@@ -2595,7 +2619,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 	      VM_EXIT_LOAD_IA32_EFER |
 	      VM_EXIT_CLEAR_BNDCFGS |
 	      VM_EXIT_PT_CONCEAL_PIP |
-	      VM_EXIT_CLEAR_IA32_RTIT_CTL;
+	      VM_EXIT_CLEAR_IA32_RTIT_CTL |
+	      VM_EXIT_CLEAR_IA32_LBR_CTL;
 	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_EXIT_CTLS,
 				&_vmexit_control) < 0)
 		return -EIO;
@@ -2619,7 +2644,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 	      VM_ENTRY_LOAD_IA32_EFER |
 	      VM_ENTRY_LOAD_BNDCFGS |
 	      VM_ENTRY_PT_CONCEAL_PIP |
-	      VM_ENTRY_LOAD_IA32_RTIT_CTL;
+	      VM_ENTRY_LOAD_IA32_RTIT_CTL |
+	      VM_ENTRY_LOAD_IA32_LBR_CTL;
 	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_ENTRY_CTLS,
 				&_vmentry_control) < 0)
 		return -EIO;
@@ -6828,6 +6854,8 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
 	if (vmx->host_debugctlmsr)
 		update_debugctlmsr(vmx->host_debugctlmsr);
+	if (vmx->host_lbrctlmsr)
+		update_lbrctlmsr(vmx->host_lbrctlmsr);
 
 #ifndef CONFIG_X86_64
 	/*
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index cc362e2d3eaa..69e243fea23d 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -328,6 +328,7 @@ struct vcpu_vmx {
 	u64 current_tsc_ratio;
 
 	unsigned long host_debugctlmsr;
+	unsigned long host_lbrctlmsr;
 
 	/*
 	 * Only bits masked by msr_ia32_feature_control_valid_bits can be set in
-- 
2.31.1



* [RESEND PATCH v4 06/10] KVM: x86: Expose Architectural LBR CPUID leaf
  2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
                   ` (4 preceding siblings ...)
  2021-05-10  8:15 ` [RESEND PATCH v4 05/10] KVM: vmx/pmu: Add Arch LBR emulation and its VMCS field Like Xu
@ 2021-05-10  8:15 ` Like Xu
  2021-05-10  8:15 ` [RESEND PATCH v4 07/10] KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS Like Xu
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Like Xu @ 2021-05-10  8:15 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, weijiang.yang, wei.w.wang, kvm, linux-kernel

If CPUID.(EAX=07H, ECX=0):EDX[19] is set to 1, then KVM supports Arch
LBRs, and CPUID leaf 01CH enumerates the details of the Arch LBR
capabilities. Currently, KVM only supports the current host LBR depth
for guests, which is also the maximum supported depth on the host.
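
For example, if the host reports CPUID.1C:EAX[7:0] = 0x88 (depths 32
and 64 supported), KVM keeps only bit 7, so the guest sees a single
supported depth of 64. A guest could derive that depth as follows
(hypothetical helper; cpuid_eax() and fls() as in the kernel):

	static unsigned int guest_arch_lbr_depth(void)
	{
		unsigned int mask = cpuid_eax(0x1c) & 0xff;

		/* Bit "n" set means a depth of 8 * (n + 1) is supported. */
		return mask ? 8 * fls(mask) : 0;
	}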

Signed-off-by: Like Xu <like.xu@linux.intel.com>
---
 arch/x86/kvm/cpuid.c   | 25 ++++++++++++++++++++++++-
 arch/x86/kvm/vmx/vmx.c |  2 ++
 2 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 9a48f138832d..e7527b6cadb4 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -475,7 +475,7 @@ void kvm_set_cpu_caps(void)
 		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
 		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
 		F(MD_CLEAR) | F(AVX512_VP2INTERSECT) | F(FSRM) |
-		F(SERIALIZE) | F(TSXLDTRK) | F(AVX512_FP16)
+		F(SERIALIZE) | F(TSXLDTRK) | F(AVX512_FP16) | F(ARCH_LBR)
 	);
 
 	/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
@@ -886,6 +886,29 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 				goto out;
 		}
 		break;
+	/* Architectural LBR */
+	case 0x1c:
+	{
+		u64 lbr_depth_mask = entry->eax & 0xff;
+
+		if (!lbr_depth_mask || !kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR)) {
+			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
+			break;
+		}
+
+		/*
+		 * KVM only exposes the maximum supported depth,
+		 * which is also the fixed value used on the host.
+		 *
+		 * KVM doesn't allow VMM user space to adjust depth
+		 * per guest, because the guest LBR emulation depends
+		 * on the implementation of the host LBR driver.
+		 */
+		lbr_depth_mask = 1UL << (fls(lbr_depth_mask) - 1);
+		entry->eax &= ~0xff;
+		entry->eax |= lbr_depth_mask;
+		break;
+	}
 	case KVM_CPUID_SIGNATURE: {
 		static const char signature[12] = "KVMKVMKVM\0\0";
 		const u32 *sigptr = (const u32 *)signature;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 74f0b302f4a2..f88c6e8f7a3a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7370,6 +7370,8 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_clear(X86_FEATURE_INVPCID);
 	if (vmx_pt_mode_is_host_guest())
 		kvm_cpu_cap_check_and_set(X86_FEATURE_INTEL_PT);
+	if (!cpu_has_vmx_arch_lbr())
+		kvm_cpu_cap_clear(X86_FEATURE_ARCH_LBR);
 
 	if (!enable_sgx) {
 		kvm_cpu_cap_clear(X86_FEATURE_SGX);
-- 
2.31.1



* [RESEND PATCH v4 07/10] KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS
  2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
                   ` (5 preceding siblings ...)
  2021-05-10  8:15 ` [RESEND PATCH v4 06/10] KVM: x86: Expose Architectural LBR CPUID leaf Like Xu
@ 2021-05-10  8:15 ` Like Xu
  2021-05-10  8:15 ` [RESEND PATCH v4 08/10] KVM: x86: Report XSS as an MSR to be saved if there are supported features Like Xu
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Like Xu @ 2021-05-10  8:15 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, weijiang.yang, wei.w.wang, kvm, linux-kernel

From: Yang Weijiang <weijiang.yang@intel.com>

Update CPUID.0xD.0x1, which reports the currently required storage size
of all features enabled via XCR0 | XSS, whenever the guest's XSS is
modified.

Note, KVM does not yet support any XSS-based features, i.e. supported_xss
is guaranteed to be zero at this time.
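
Once an XSS-based feature is supported (e.g. Arch LBR, enabled later
in this series), the guest-visible effect would be along these lines
(sketch, illustrative values only):

	wrmsrl(MSR_IA32_XSS, XFEATURE_MASK_LBR);	/* guest sets bit 15 */
	/*
	 * CPUID.(EAX=0DH, ECX=1):EBX now reports the storage size of all
	 * states enabled via XCR0 | XSS, including the Arch LBR state.
	 */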

Co-developed-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
Signed-off-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20210203113421.5759-3-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/cpuid.c            | 21 ++++++++++++++++++---
 arch/x86/kvm/x86.c              |  7 +++++--
 3 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 55efbacfc244..cac41bae5f4e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -651,6 +651,7 @@ struct kvm_vcpu_arch {
 
 	u64 xcr0;
 	u64 guest_supported_xcr0;
+	u64 guest_supported_xss;
 
 	struct kvm_pio_request pio;
 	void *pio_data;
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index e7527b6cadb4..8a1855760e43 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -131,9 +131,24 @@ void kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu)
 		best->ebx = xstate_required_size(vcpu->arch.xcr0, false);
 
 	best = kvm_find_cpuid_entry(vcpu, 0xD, 1);
-	if (best && (cpuid_entry_has(best, X86_FEATURE_XSAVES) ||
-		     cpuid_entry_has(best, X86_FEATURE_XSAVEC)))
-		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
+	if (best) {
+		if (cpuid_entry_has(best, X86_FEATURE_XSAVES) ||
+		    cpuid_entry_has(best, X86_FEATURE_XSAVEC))  {
+			u64 xstate = vcpu->arch.xcr0 | vcpu->arch.ia32_xss;
+
+			best->ebx = xstate_required_size(xstate, true);
+		}
+
+		if (!cpuid_entry_has(best, X86_FEATURE_XSAVES)) {
+			best->ecx = 0;
+			best->edx = 0;
+		}
+		vcpu->arch.guest_supported_xss =
+			(((u64)best->edx << 32) | best->ecx) & supported_xss;
+
+	} else {
+		vcpu->arch.guest_supported_xss = 0;
+	}
 
 	best = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0);
 	if (kvm_hlt_in_guest(vcpu->kvm) && best &&
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5bd550eaf683..3a67ad29fa9d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3247,9 +3247,12 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		 * IA32_XSS[bit 8]. Guests have to use RDMSR/WRMSR rather than
 		 * XSAVES/XRSTORS to save/restore PT MSRs.
 		 */
-		if (data & ~supported_xss)
+		if (data & ~vcpu->arch.guest_supported_xss)
 			return 1;
-		vcpu->arch.ia32_xss = data;
+		if (vcpu->arch.ia32_xss != data) {
+			vcpu->arch.ia32_xss = data;
+			kvm_update_cpuid_runtime(vcpu);
+		}
 		break;
 	case MSR_SMI_COUNT:
 		if (!msr_info->host_initiated)
-- 
2.31.1



* [RESEND PATCH v4 08/10] KVM: x86: Report XSS as an MSR to be saved if there are supported features
  2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
                   ` (6 preceding siblings ...)
  2021-05-10  8:15 ` [RESEND PATCH v4 07/10] KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS Like Xu
@ 2021-05-10  8:15 ` Like Xu
  2021-05-10  8:15 ` [RESEND PATCH v4 09/10] KVM: x86: Refine the matching and clearing logic for supported_xss Like Xu
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Like Xu @ 2021-05-10  8:15 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, weijiang.yang, wei.w.wang, kvm, linux-kernel

From: Sean Christopherson <sean.j.christopherson@intel.com>

Add MSR_IA32_XSS to the list of MSRs reported to userspace if
supported_xss is non-zero, i.e. KVM supports at least one XSS-based
feature.

Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20210203113421.5759-2-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3a67ad29fa9d..b699f7a37b1a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1298,6 +1298,8 @@ static const u32 msrs_to_save_all[] = {
 	MSR_ARCH_PERFMON_EVENTSEL0 + 12, MSR_ARCH_PERFMON_EVENTSEL0 + 13,
 	MSR_ARCH_PERFMON_EVENTSEL0 + 14, MSR_ARCH_PERFMON_EVENTSEL0 + 15,
 	MSR_ARCH_PERFMON_EVENTSEL0 + 16, MSR_ARCH_PERFMON_EVENTSEL0 + 17,
+
+	MSR_IA32_XSS,
 };
 
 static u32 msrs_to_save[ARRAY_SIZE(msrs_to_save_all)];
@@ -6035,6 +6037,10 @@ static void kvm_init_msr_list(void)
 			    min(INTEL_PMC_MAX_GENERIC, x86_pmu.num_counters_gp))
 				continue;
 			break;
+		case MSR_IA32_XSS:
+			if (!supported_xss)
+				continue;
+			break;
 		default:
 			break;
 		}
-- 
2.31.1



* [RESEND PATCH v4 09/10] KVM: x86: Refine the matching and clearing logic for supported_xss
  2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
                   ` (7 preceding siblings ...)
  2021-05-10  8:15 ` [RESEND PATCH v4 08/10] KVM: x86: Report XSS as an MSR to be saved if there are supported features Like Xu
@ 2021-05-10  8:15 ` Like Xu
  2021-05-10  8:15 ` [RESEND PATCH v4 10/10] KVM: x86: Add XSAVE Support for Architectural LBRs Like Xu
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Like Xu @ 2021-05-10  8:15 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, weijiang.yang, wei.w.wang, kvm, linux-kernel

Refine the code path for the existing clearing of supported_xss:
initialize supported_xss by filtering host_xss with the KVM_SUPPORTED_XSS
mask, and update its value by clearing bits (rather than setting them).
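
The net effect on the supported_xss life cycle becomes (a sketch
combining the hunks below; feature bits such as XFEATURE_MASK_LBR are
only added to KVM_SUPPORTED_XSS in a later patch of this series):

	/* kvm_arch_hardware_setup(): start from the filtered host value. */
	supported_xss = host_xss & KVM_SUPPORTED_XSS;

	/* vmx_set_cpu_caps(): drop it if a dependency is missing. */
	if (!cpu_has_vmx_xsaves())
		supported_xss = 0;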

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
---
 arch/x86/kvm/vmx/vmx.c | 5 +++--
 arch/x86/kvm/x86.c     | 6 +++++-
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f88c6e8f7a3a..d080bf163565 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7384,9 +7384,10 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_set(X86_FEATURE_UMIP);
 
 	/* CPUID 0xD.1 */
-	supported_xss = 0;
-	if (!cpu_has_vmx_xsaves())
+	if (!cpu_has_vmx_xsaves()) {
 		kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
+		supported_xss = 0;
+	}
 
 	/* CPUID 0x80000001 and 0x7 (RDPID) */
 	if (!cpu_has_vmx_rdtscp()) {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b699f7a37b1a..3a32bea2277e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -203,6 +203,8 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
 				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
 				| XFEATURE_MASK_PKRU)
 
+#define KVM_SUPPORTED_XSS     0
+
 u64 __read_mostly host_efer;
 EXPORT_SYMBOL_GPL(host_efer);
 
@@ -10619,8 +10621,10 @@ int kvm_arch_hardware_setup(void *opaque)
 
 	rdmsrl_safe(MSR_EFER, &host_efer);
 
-	if (boot_cpu_has(X86_FEATURE_XSAVES))
+	if (boot_cpu_has(X86_FEATURE_XSAVES)) {
 		rdmsrl(MSR_IA32_XSS, host_xss);
+		supported_xss = host_xss & KVM_SUPPORTED_XSS;
+	}
 
 	r = ops->hardware_setup();
 	if (r != 0)
-- 
2.31.1



* [RESEND PATCH v4 10/10] KVM: x86: Add XSAVE Support for Architectural LBRs
  2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
                   ` (8 preceding siblings ...)
  2021-05-10  8:15 ` [RESEND PATCH v4 09/10] KVM: x86: Refine the matching and clearing logic for supported_xss Like Xu
@ 2021-05-10  8:15 ` Like Xu
  2021-05-10  8:15 ` [RESEND kvm-unit-tests PATCH v2] x86: Update guest LBR tests for Architectural LBR Like Xu
  2021-06-22  9:01 ` [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Yang Weijiang
  11 siblings, 0 replies; 18+ messages in thread
From: Like Xu @ 2021-05-10  8:15 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, weijiang.yang, wei.w.wang, kvm, linux-kernel

On processors whose XSAVE feature set supports XSAVES and XRSTORS,
the availability of support for Architectural LBR configuration state save
and restore can be determined from CPUID.(EAX=0DH, ECX=1):EDX:ECX[bit 15].
The detailed leaf for Arch LBRs is enumerated in CPUID.(EAX=0DH, ECX=0FH).

XSAVES provides a faster means than RDMSR for the guest to read all LBRs.
When guest IA32_XSS[bit 15] is set, the Arch LBR state can be saved using
XSAVES and restored by XRSTORS with the appropriate RFBM.
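
A guest kernel could then snapshot its LBR state in a single
instruction, roughly as sketched below (xsave_area is an assumed
64-byte-aligned buffer; a real kernel would preserve the other
IA32_XSS bits and use its existing XSAVES wrappers):

	u64 rfbm = BIT_ULL(15);		/* XFEATURE_MASK_LBR */

	wrmsrl(MSR_IA32_XSS, rfbm);
	asm volatile("xsaves %0"	/* EDX:EAX carry the RFBM */
		     : "+m" (*xsave_area)
		     : "a" ((u32)rfbm), "d" ((u32)(rfbm >> 32)));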

If KVM fails to pass through the LBR MSRs to the guest, the LBR MSRs
will be reset to prevent the leakage of host records via XSAVES. In
this case, the guest's results may be inaccurate, as with the legacy LBR.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
---
 arch/x86/kvm/vmx/pmu_intel.c | 2 ++
 arch/x86/kvm/vmx/vmx.c       | 4 +++-
 arch/x86/kvm/x86.c           | 2 +-
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 9199d3974d57..7666292094ec 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -772,6 +772,8 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
 	return;
 
 warn:
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
+		wrmsrl(MSR_ARCH_LBR_DEPTH, lbr_desc->records.nr);
 	pr_warn_ratelimited("kvm: vcpu-%d: fail to passthrough LBR.\n",
 		vcpu->vcpu_id);
 }
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d080bf163565..9f610da71649 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7370,8 +7370,10 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_clear(X86_FEATURE_INVPCID);
 	if (vmx_pt_mode_is_host_guest())
 		kvm_cpu_cap_check_and_set(X86_FEATURE_INTEL_PT);
-	if (!cpu_has_vmx_arch_lbr())
+	if (!cpu_has_vmx_arch_lbr()) {
 		kvm_cpu_cap_clear(X86_FEATURE_ARCH_LBR);
+		supported_xss &= ~XFEATURE_MASK_LBR;
+	}
 
 	if (!enable_sgx) {
 		kvm_cpu_cap_clear(X86_FEATURE_SGX);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3a32bea2277e..7db24f287268 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -203,7 +203,7 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
 				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
 				| XFEATURE_MASK_PKRU)
 
-#define KVM_SUPPORTED_XSS     0
+#define KVM_SUPPORTED_XSS     XFEATURE_MASK_LBR
 
 u64 __read_mostly host_efer;
 EXPORT_SYMBOL_GPL(host_efer);
-- 
2.31.1



* [RESEND kvm-unit-tests PATCH v2] x86: Update guest LBR tests for Architectural LBR
  2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
                   ` (9 preceding siblings ...)
  2021-05-10  8:15 ` [RESEND PATCH v4 10/10] KVM: x86: Add XSAVE Support for Architectural LBRs Like Xu
@ 2021-05-10  8:15 ` Like Xu
  2021-06-22  9:01 ` [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Yang Weijiang
  11 siblings, 0 replies; 18+ messages in thread
From: Like Xu @ 2021-05-10  8:15 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, weijiang.yang, wei.w.wang, kvm, linux-kernel

This unit test is intended to exercise KVM's basic support for
Architectural LBRs, which is an architectural performance monitoring
unit (PMU) feature on Intel processors, including negative testing
of the MSR_ARCH_LBR_DEPTH values.

If the LBREn bit of MSR_ARCH_LBR_CTL is set to 1, the processor
will record a running trace of the most recent branches taken by
the guest in the LBR entries for the guest to read.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Andrew Jones <drjones@redhat.com>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
---
 x86/pmu_lbr.c | 88 +++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 79 insertions(+), 9 deletions(-)

diff --git a/x86/pmu_lbr.c b/x86/pmu_lbr.c
index 3bd9e9f..5257f76 100644
--- a/x86/pmu_lbr.c
+++ b/x86/pmu_lbr.c
@@ -6,6 +6,7 @@
 #define MAX_NUM_LBR_ENTRY	  32
 #define DEBUGCTLMSR_LBR	  (1UL <<  0)
 #define PMU_CAP_LBR_FMT	  0x3f
+#define KVM_ARCH_LBR_CTL_MASK	  0x7f000f
 
 #define MSR_LBR_NHM_FROM	0x00000680
 #define MSR_LBR_NHM_TO		0x000006c0
@@ -13,6 +14,10 @@
 #define MSR_LBR_CORE_TO	0x00000060
 #define MSR_LBR_TOS		0x000001c9
 #define MSR_LBR_SELECT		0x000001c8
+#define MSR_ARCH_LBR_CTL	0x000014ce
+#define MSR_ARCH_LBR_DEPTH	0x000014cf
+#define MSR_ARCH_LBR_FROM_0	0x00001500
+#define MSR_ARCH_LBR_TO_0	0x00001600
 
 volatile int count;
 
@@ -61,11 +66,26 @@ static bool test_init_lbr_from_exception(u64 index)
 	return test_for_exception(GP_VECTOR, init_lbr, &index);
 }
 
+static void change_archlbr_depth(void *depth)
+{
+	wrmsr(MSR_ARCH_LBR_DEPTH, *(u64 *)depth);
+}
+
+static bool test_change_archlbr_depth_from_exception(u64 depth)
+{
+	return test_for_exception(GP_VECTOR, change_archlbr_depth, &depth);
+}
+
 int main(int ac, char **av)
 {
 	struct cpuid id = cpuid(10);
+	struct cpuid id_7 = cpuid(7);
+	struct cpuid id_1c;
 	u64 perf_cap;
 	int max, i;
+	bool arch_lbr = false;
+	u32 ctl_msr = MSR_IA32_DEBUGCTLMSR;
+	u64 ctl_value = DEBUGCTLMSR_LBR;
 
 	setup_vm();
 	perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
@@ -80,8 +100,19 @@ int main(int ac, char **av)
 		return report_summary();
 	}
 
+	if (id_7.d & (1UL << 19)) {
+		arch_lbr = true;
+		ctl_msr = MSR_ARCH_LBR_CTL;
+		/* DEPTH defaults to the maximum number of LBR entries. */
+		max = rdmsr(MSR_ARCH_LBR_DEPTH) - 1;
+		ctl_value = KVM_ARCH_LBR_CTL_MASK;
+	}
+
 	printf("PMU version:		 %d\n", eax.split.version_id);
-	printf("LBR version:		 %ld\n", perf_cap & PMU_CAP_LBR_FMT);
+	if (!arch_lbr)
+		printf("LBR version:		 %ld\n", perf_cap & PMU_CAP_LBR_FMT);
+	else
+		printf("Architectural LBR depth:		 %d\n", max + 1);
 
 	/* Look for LBR from and to MSRs */
 	lbr_from = MSR_LBR_CORE_FROM;
@@ -90,32 +121,71 @@ int main(int ac, char **av)
 		lbr_from = MSR_LBR_NHM_FROM;
 		lbr_to = MSR_LBR_NHM_TO;
 	}
+	if (test_init_lbr_from_exception(0)) {
+		lbr_from = MSR_ARCH_LBR_FROM_0;
+		lbr_to = MSR_ARCH_LBR_TO_0;
+	}
 
 	if (test_init_lbr_from_exception(0)) {
 		printf("LBR on this platform is not supported!\n");
 		return report_summary();
 	}
 
-	wrmsr(MSR_LBR_SELECT, 0);
-	wrmsr(MSR_LBR_TOS, 0);
-	for (max = 0; max < MAX_NUM_LBR_ENTRY; max++) {
-		if (test_init_lbr_from_exception(max))
-			break;
+	if (arch_lbr) {
+		/*
+		 * On processors that support Architectural LBRs,
+		 * IA32_PERF_CAPABILITIES.LBR_FMT will have the value 03FH.
+		 */
+		report(0x3f == (perf_cap & PMU_CAP_LBR_FMT), "The guest LBR_FMT value is good.");
 	}
 
+	/* Reset the guest LBR entries. */
+	if (arch_lbr) {
+		/* On a software write to IA32_LBR_DEPTH, all LBR entries are reset to 0.*/
+		wrmsr(MSR_ARCH_LBR_DEPTH, max + 1);
+	} else {
+		wrmsr(MSR_LBR_SELECT, 0);
+		wrmsr(MSR_LBR_TOS, 0);
+		for (max = 0; max < MAX_NUM_LBR_ENTRY; max++) {
+			if (test_init_lbr_from_exception(max))
+				break;
+		}
+	}
 	report(max > 0, "The number of guest LBR entries is good.");
 
+	/* Check the guest LBR entries are initialized. */
+	for (i = 0; i < max; ++i) {
+		if (rdmsr(lbr_to + i) || rdmsr(lbr_from + i))
+			break;
+	}
+	report(i == max, "The guest LBR initialized FROM_IP/TO_IP values are good.");
+
 	/* Do some branch instructions. */
-	wrmsr(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR);
+	wrmsr(ctl_msr, ctl_value);
 	lbr_test();
-	wrmsr(MSR_IA32_DEBUGCTLMSR, 0);
+	wrmsr(ctl_msr, 0);
 
-	report(rdmsr(MSR_LBR_TOS) != 0, "The guest LBR MSR_LBR_TOS value is good.");
+	/* Check if the guest LBR has recorded some branches. */
+	if (!arch_lbr) {
+		report(rdmsr(MSR_LBR_TOS) != 0, "The guest LBR MSR_LBR_TOS value is good.");
+	}
 	for (i = 0; i < max; ++i) {
 		if (!rdmsr(lbr_to + i) || !rdmsr(lbr_from + i))
 			break;
 	}
 	report(i == max, "The guest LBR FROM_IP/TO_IP values are good.");
 
+	if (!arch_lbr)
+		return report_summary();
+
+	/* Negative testing on the LBR_DEPTH MSR values */
+	id_1c = cpuid(0x1c);
+	for (i = 0; i < 8; i++) {
+		if (id_1c.a & (1UL << i))
+			continue;
+		report(test_change_archlbr_depth_from_exception(8*(i+1)) == 1,
+			"Negative test: guest LBR depth %d is unsupported.", 8*(i+1));
+	}
+
 	return report_summary();
 }
-- 
2.31.1



* Re: [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling
  2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
                   ` (10 preceding siblings ...)
  2021-05-10  8:15 ` [RESEND kvm-unit-tests PATCH v2] x86: Update guest LBR tests for Architectural LBR Like Xu
@ 2021-06-22  9:01 ` Yang Weijiang
  2021-06-23 13:32   ` Like Xu
  11 siblings, 1 reply; 18+ messages in thread
From: Yang Weijiang @ 2021-06-22  9:01 UTC (permalink / raw)
  To: pbonzini
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, weijiang.yang, wei.w.wang, kvm,
	linux-kernel


Hello, maintainers,

I took over the work from Like and will carry it forward. Here I'd like to
get your valuable comments on this patch series before I post the next version.

Thanks a lot!

> Hi geniuses,
> 
> A new kernel cycle has begun, and this version looks promising. 
> 
> From the end user's point of view, the usage of Arch LBR is the same as
> the legacy LBR we have merged in the mainline, but it is much faster.
> 
> The Architectural Last Branch Records (Arch LBR) feature is published
> in the 319433-040 release of the Intel Architecture Instruction
> Set Extensions and Future Features Programming Reference [0].
> 
> The main advantages for the Arch LBR users are [1]:
> - Faster context switching due to XSAVES support and faster reset of
>   LBR MSRs via the new DEPTH MSR
> - Faster LBR read for a non-PEBS event due to XSAVES support, which
>   lowers the overhead of the NMI handler.
> - Linux kernel can support the LBR features without knowing the model
>   number of the current CPU.
> 
> Please check more details in each commit and feel free to comment.
> 
> [0] https://software.intel.com/content/www/us/en/develop/download/
> intel-architecture-instruction-set-extensions-and-future-features-programming-reference.html
> [1] https://lore.kernel.org/lkml/1593780569-62993-1-git-send-email-kan.liang@linux.intel.com/
> 
> ---
> v4->v4 RESEND Changelog:
> - Rebase to kvm/queue tree tag: kvm-5.13-2;
> - Includes two XSS dependency patches from kvm/intel tree;
> 
> v3->v4 Changelog:
> - Add one more host patch to reuse ARCH_LBR_CTL_MASK;
> - Add reserve_lbr_buffers() instead of using GFP_ATOMIC;
> - Fix a bug in arch_lbr_depth_is_valid();
> - Add LBR_CTL_EN to unify DEBUGCTLMSR_LBR and ARCH_LBR_CTL_LBREN;
> - Add vmx->host_lbrctlmsr to save/restore host values;
> - Add KVM_SUPPORTED_XSS to refactoring supported_xss;
> - Clear ARCH_LBR and its XSS bit if it's not supported;
> - Add negative testing to the related kvm-unit-tests;
> - Refine code and commit messages;
> 
> Previous:
> v4: https://lore.kernel.org/kvm/20210314155225.206661-1-like.xu@linux.intel.com/
> v3: https://lore.kernel.org/kvm/20210303135756.1546253-1-like.xu@linux.intel.com/
> 
> Like Xu (8):
>   perf/x86/intel: Fix the comment about guest LBR support on KVM
>   perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers
>   KVM: vmx/pmu: Add MSR_ARCH_LBR_DEPTH emulation for Arch LBR
>   KVM: vmx/pmu: Add MSR_ARCH_LBR_CTL emulation for Arch LBR
>   KVM: vmx/pmu: Add Arch LBR emulation and its VMCS field
>   KVM: x86: Expose Architectural LBR CPUID leaf
>   KVM: x86: Refine the matching and clearing logic for supported_xss
>   KVM: x86: Add XSAVE Support for Architectural LBRs
> 
> Sean Christopherson (1):
>   KVM: x86: Report XSS as an MSR to be saved if there are supported
>     features
> 
> Yang Weijiang (1):
>   KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS
> 
>  arch/x86/events/intel/core.c     |   3 +-
>  arch/x86/events/intel/lbr.c      |   6 +-
>  arch/x86/include/asm/kvm_host.h  |   1 +
>  arch/x86/include/asm/msr-index.h |   1 +
>  arch/x86/include/asm/vmx.h       |   4 ++
>  arch/x86/kvm/cpuid.c             |  46 ++++++++++++--
>  arch/x86/kvm/vmx/capabilities.h  |  25 +++++---
>  arch/x86/kvm/vmx/pmu_intel.c     | 103 ++++++++++++++++++++++++++++---
>  arch/x86/kvm/vmx/vmx.c           |  50 +++++++++++++--
>  arch/x86/kvm/vmx/vmx.h           |   4 ++
>  arch/x86/kvm/x86.c               |  19 +++++-
>  11 files changed, 226 insertions(+), 36 deletions(-)
> 
> -- 
> 2.31.1


* Re: [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling
  2021-06-22  9:01 ` [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Yang Weijiang
@ 2021-06-23 13:32   ` Like Xu
  0 siblings, 0 replies; 18+ messages in thread
From: Like Xu @ 2021-06-23 13:32 UTC (permalink / raw)
  To: pbonzini, Sean Christopherson
  Cc: Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	wei.w.wang, kvm, linux-kernel, Yang Weijiang

Hi Paolo,

It looks like we don't have a perf subsystem dependency
to enable guest Arch LBR here, and the series also has tests to cover it.

Do you have any concerns about accepting them?

Thanks,
Like Xu

On 22/6/2021 5:01 pm, Yang Weijiang wrote:
> 
> Hello, maintainers,
> 
> I took over the work from Like and will carry it forward. Here I'd like to
> get your valuable comments on this patch series before I post next version.
> 
> Thanks a lot!
> 
>> Hi geniuses,
>>
>> A new kernel cycle has begun, and this version looks promising.
>>
>> From the end user's point of view, the usage of Arch LBR is the same as
>> the legacy LBR we have merged in the mainline, but it is much faster.
>>
>> The Architectural Last Branch Records (Arch LBR) feature is published
>> in the 319433-040 release of the Intel Architecture Instruction
>> Set Extensions and Future Features Programming Reference [0].
>>
>> The main advantages for the Arch LBR users are [1]:
>> - Faster context switching due to XSAVES support and faster reset of
>>    LBR MSRs via the new DEPTH MSR
>> - Faster LBR read for a non-PEBS event due to XSAVES support, which
>>    lowers the overhead of the NMI handler.
>> - Linux kernel can support the LBR features without knowing the model
>>    number of the current CPU.
>>
>> Please check more details in each commit and feel free to comment.
>>
>> [0] https://software.intel.com/content/www/us/en/develop/download/
>> intel-architecture-instruction-set-extensions-and-future-features-programming-reference.html
>> [1] https://lore.kernel.org/lkml/1593780569-62993-1-git-send-email-kan.liang@linux.intel.com/
>>
>> ---
>> v13->v13 RESEND Changelog:
>> - Rebase to kvm/queue tree tag: kvm-5.13-2;
>> - Includes two XSS dependency patches from kvm/intel tree;
>>
>> v3->v4 Changelog:
>> - Add one more host patch to reuse ARCH_LBR_CTL_MASK;
>> - Add reserve_lbr_buffers() instead of using GFP_ATOMIC;
>> - Fia a bug in the arch_lbr_depth_is_valid();
>> - Add LBR_CTL_EN to unify DEBUGCTLMSR_LBR and ARCH_LBR_CTL_LBREN;
>> - Add vmx->host_lbrctlmsr to save/restore host values;
>> - Add KVM_SUPPORTED_XSS to refactoring supported_xss;
>> - Clear Arch_LBR ans its XSS bit if it's not supported;
>> - Add negative testing to the related kvm-unit-tests;
>> - Refine code and commit messages;
>>
>> Previous:
>> v4: https://lore.kernel.org/kvm/20210314155225.206661-1-like.xu@linux.intel.com/
>> v3: https://lore.kernel.org/kvm/20210303135756.1546253-1-like.xu@linux.intel.com/
>>
>> Like Xu (8):
>>    perf/x86/intel: Fix the comment about guest LBR support on KVM
>>    perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers
>>    KVM: vmx/pmu: Add MSR_ARCH_LBR_DEPTH emulation for Arch LBR
>>    KVM: vmx/pmu: Add MSR_ARCH_LBR_CTL emulation for Arch LBR
>>    KVM: vmx/pmu: Add Arch LBR emulation and its VMCS field
>>    KVM: x86: Expose Architectural LBR CPUID leaf
>>    KVM: x86: Refine the matching and clearing logic for supported_xss
>>    KVM: x86: Add XSAVE Support for Architectural LBRs
>>
>> Sean Christopherson (1):
>>    KVM: x86: Report XSS as an MSR to be saved if there are supported
>>      features
>>
>> Yang Weijiang (1):
>>    KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS
>>
>>   arch/x86/events/intel/core.c     |   3 +-
>>   arch/x86/events/intel/lbr.c      |   6 +-
>>   arch/x86/include/asm/kvm_host.h  |   1 +
>>   arch/x86/include/asm/msr-index.h |   1 +
>>   arch/x86/include/asm/vmx.h       |   4 ++
>>   arch/x86/kvm/cpuid.c             |  46 ++++++++++++--
>>   arch/x86/kvm/vmx/capabilities.h  |  25 +++++---
>>   arch/x86/kvm/vmx/pmu_intel.c     | 103 ++++++++++++++++++++++++++++---
>>   arch/x86/kvm/vmx/vmx.c           |  50 +++++++++++++--
>>   arch/x86/kvm/vmx/vmx.h           |   4 ++
>>   arch/x86/kvm/x86.c               |  19 +++++-
>>   11 files changed, 226 insertions(+), 36 deletions(-)
>>
>> -- 
>> 2.31.1
> 


* Re: [RESEND PATCH v4 03/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_DEPTH emulation for Arch LBR
  2021-05-10  8:15 ` [RESEND PATCH v4 03/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_DEPTH emulation for Arch LBR Like Xu
@ 2021-06-23 18:03   ` Jim Mattson
  2021-06-24  1:31     ` Yang Weijiang
  0 siblings, 1 reply; 18+ messages in thread
From: Jim Mattson @ 2021-06-23 18:03 UTC (permalink / raw)
  To: Like Xu
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Joerg Roedel, Yang Weijiang, Wei Wang, kvm list, LKML

On Mon, May 10, 2021 at 1:16 AM Like Xu <like.xu@linux.intel.com> wrote:
>
> The number of Arch LBR entries available for recording operations
> is dictated by the value in MSR_ARCH_LBR_DEPTH.DEPTH. The supported
> LBR depth values can be found in CPUID.(EAX=01CH, ECX=0):EAX[7:0]
> and for each bit "n" set in this field, the MSR_ARCH_LBR_DEPTH.DEPTH
> value of "8*(n+1)" is supported.
>
> On a guest write to MSR_ARCH_LBR_DEPTH, all LBR entries are reset to 0.
> KVM emulates the reset behavior by introducing lbr_desc->arch_lbr_reset.
> KVM writes the guest-requested value to the native ARCH_LBR_DEPTH MSR
> (this is safe because the two values will be the same) when the Arch LBR
> record MSRs are passed through to the guest.
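
The depth rule above reduces to one small check. Below is a minimal
sketch: arch_lbr_depth_is_valid() is named in this series' changelog,
but the body here is illustrative rather than the actual patch
(kvm_find_cpuid_entry() is used in its contemporary three-argument form):

	static bool arch_lbr_depth_is_valid(struct kvm_vcpu *vcpu, u64 depth)
	{
		struct kvm_cpuid_entry2 *best;

		/* CPUID.(EAX=01CH, ECX=0):EAX[7:0] enumerates supported depths. */
		best = kvm_find_cpuid_entry(vcpu, 0x1c, 0);
		if (!best || !depth || (depth % 8))
			return false;

		/* Bit "n" set means a DEPTH value of 8*(n+1) is supported. */
		return best->eax & BIT_ULL(depth / 8 - 1);
	}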
>
> Signed-off-by: Like Xu <like.xu@linux.intel.com>
> ---
>  arch/x86/kvm/vmx/pmu_intel.c | 43 ++++++++++++++++++++++++++++++++++++
>  arch/x86/kvm/vmx/vmx.h       |  3 +++
>  2 files changed, 46 insertions(+)
>
> diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
> index 9efc1a6b8693..d9c9cb6c9a4b 100644
> --- a/arch/x86/kvm/vmx/pmu_intel.c
> +++ b/arch/x86/kvm/vmx/pmu_intel.c
> @@ -220,6 +220,9 @@ static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
>         case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
>                 ret = pmu->version > 1;
>                 break;
> +       case MSR_ARCH_LBR_DEPTH:
> +               ret = guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR);
> +               break;

This doesn't seem like a very safe test, since userspace can provide
whatever CPUID tables it likes. You should definitely think about
hardening this code against a malicious userspace.

When you add a new guest MSR, it should be enumerated by
KVM_GET_MSR_INDEX_LIST. Otherwise, userspace will not save/restore the
MSR value on suspend/resume.
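
A sketch of the enumeration Jim is asking for, in arch/x86/kvm/x86.c:
list the new MSRs in the msrs_to_save_all[] table so that
KVM_GET_MSR_INDEX_LIST reports them, and drop them in kvm_init_msr_list()
when the host lacks Arch LBR. The exact placement is illustrative, not
the actual patch:

	static const u32 msrs_to_save_all[] = {
		/* ... existing entries elided ... */
		MSR_ARCH_LBR_CTL, MSR_ARCH_LBR_DEPTH,
	};

	/* In kvm_init_msr_list()'s filtering switch: */
		case MSR_ARCH_LBR_CTL:
		case MSR_ARCH_LBR_DEPTH:
			if (!kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
				continue;
			break;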


* Re: [RESEND PATCH v4 04/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_CTL emulation for Arch LBR
  2021-05-10  8:15 ` [RESEND PATCH v4 04/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_CTL " Like Xu
@ 2021-06-23 18:29   ` Jim Mattson
  2021-06-24  1:35     ` Yang Weijiang
  0 siblings, 1 reply; 18+ messages in thread
From: Jim Mattson @ 2021-06-23 18:29 UTC (permalink / raw)
  To: Like Xu
  Cc: Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Joerg Roedel, Yang Weijiang, Wei Wang, kvm list, LKML

On Mon, May 10, 2021 at 1:16 AM Like Xu <like.xu@linux.intel.com> wrote:
>
> Arch LBRs are enabled by setting MSR_ARCH_LBR_CTL.LBREn to 1. A new guest
> state field named "Guest IA32_LBR_CTL" is added to enhance guest LBR usage.
> When guest Arch LBR is enabled, a guest LBR event will be created, just as
> one is for the model-specific LBR.
>
> On processors that support Arch LBR, MSR_IA32_DEBUGCTLMSR[bit 0] has no
> meaning. It can be written to 0 or 1, but reads will always return 0.
> Like IA32_DEBUGCTL, the IA32_ARCH_LBR_CTL MSR is also preserved on INIT.
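
The DEBUGCTL[0] behavior described above can be emulated by masking the
bit on the write path. A minimal sketch, assuming it lives in the
MSR_IA32_DEBUGCTLMSR case of vmx_set_msr(); per the changelog, the series
itself unifies DEBUGCTLMSR_LBR and ARCH_LBR_CTL_LBREN behind LBR_CTL_EN:

	case MSR_IA32_DEBUGCTLMSR:
		/*
		 * On Arch LBR parts, DEBUGCTL[0] may be written as 0 or 1
		 * but always reads as 0, so drop it before storing the value.
		 */
		if (guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
			data &= ~DEBUGCTLMSR_LBR;
		vmcs_write64(GUEST_IA32_DEBUGCTL, data);
		break;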
>
> Signed-off-by: Like Xu <like.xu@linux.intel.com>
> ---
>  arch/x86/events/intel/lbr.c      |  2 --
>  arch/x86/include/asm/msr-index.h |  1 +
>  arch/x86/include/asm/vmx.h       |  2 ++
>  arch/x86/kvm/vmx/pmu_intel.c     | 31 ++++++++++++++++++++++++++-----
>  arch/x86/kvm/vmx/vmx.c           |  9 +++++++++
>  5 files changed, 38 insertions(+), 7 deletions(-)
>
Same comments as on the previous patch. Your guard for ensuring that
the new VMCS fields exist can be spoofed by a malicious userspace, and
the new MSR has to be enumerated by KVM_GET_MSR_INDEX_LIST.
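
One way to address the first point is to gate on what the host VMX
capability MSRs advertise rather than on guest CPUID alone. A sketch;
the helper name and the exit control name are assumptions modeled on
this series' VMCS additions:

	static inline bool cpu_has_vmx_arch_lbr(void)
	{
		/* Both controls must exist for the Guest IA32_LBR_CTL field. */
		return (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_LBR_CTL) &&
		       (vmcs_config.vmexit_ctrl & VM_EXIT_CLEAR_IA32_LBR_CTL);
	}

The MSR and VMCS emulation paths can then require both guest_cpuid_has()
and this host-side check before touching the new state.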


* Re: [RESEND PATCH v4 03/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_DEPTH emulation for Arch LBR
  2021-06-23 18:03   ` Jim Mattson
@ 2021-06-24  1:31     ` Yang Weijiang
  0 siblings, 0 replies; 18+ messages in thread
From: Yang Weijiang @ 2021-06-24  1:31 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Like Xu, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov,
	Wanpeng Li, Joerg Roedel, Yang Weijiang, Wei Wang, kvm list,
	LKML

On Wed, Jun 23, 2021 at 11:03:39AM -0700, Jim Mattson wrote:
> On Mon, May 10, 2021 at 1:16 AM Like Xu <like.xu@linux.intel.com> wrote:
> > [...]
> 
> This doesn't seem like a very safe test, since userspace can provide
> whatever CPUID tables it likes. You should definitely think about
> hardening this code against a malicious userspace.
> 
> When you add a new guest MSR, it should be enumerated by
> KVM_GET_MSR_INDEX_LIST. Otherwise, userspace will not save/restore the
> MSR value on suspend/resume.

Thanks, Jim! Will improve this part in the next version.


* Re: [RESEND PATCH v4 04/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_CTL emulation for Arch LBR
  2021-06-23 18:29   ` Jim Mattson
@ 2021-06-24  1:35     ` Yang Weijiang
  0 siblings, 0 replies; 18+ messages in thread
From: Yang Weijiang @ 2021-06-24  1:35 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Like Xu, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov,
	Wanpeng Li, Joerg Roedel, Yang Weijiang, Wei Wang, kvm list,
	LKML

On Wed, Jun 23, 2021 at 11:29:08AM -0700, Jim Mattson wrote:
> On Mon, May 10, 2021 at 1:16 AM Like Xu <like.xu@linux.intel.com> wrote:
> > [...]
> Same comments as on the previous patch. Your guard for ensuring that
> the new VMCS fields exist can be spoofed by a malicious userspace, and
> the new MSR has to be enumerated by KVM_GET_MSR_INDEX_LIST.

OK, will modify the code, thanks!



Thread overview: 18+ messages
2021-05-10  8:15 [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Like Xu
2021-05-10  8:15 ` [RESEND PATCH v4 01/10] perf/x86/intel: Fix the comment about guest LBR support on KVM Like Xu
2021-05-10  8:15 ` [RESEND PATCH v4 02/10] perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers Like Xu
2021-05-10  8:15 ` [RESEND PATCH v4 03/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_DEPTH emulation for Arch LBR Like Xu
2021-06-23 18:03   ` Jim Mattson
2021-06-24  1:31     ` Yang Weijiang
2021-05-10  8:15 ` [RESEND PATCH v4 04/10] KVM: vmx/pmu: Add MSR_ARCH_LBR_CTL " Like Xu
2021-06-23 18:29   ` Jim Mattson
2021-06-24  1:35     ` Yang Weijiang
2021-05-10  8:15 ` [RESEND PATCH v4 05/10] KVM: vmx/pmu: Add Arch LBR emulation and its VMCS field Like Xu
2021-05-10  8:15 ` [RESEND PATCH v4 06/10] KVM: x86: Expose Architectural LBR CPUID leaf Like Xu
2021-05-10  8:15 ` [RESEND PATCH v4 07/10] KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS Like Xu
2021-05-10  8:15 ` [RESEND PATCH v4 08/10] KVM: x86: Report XSS as an MSR to be saved if there are supported features Like Xu
2021-05-10  8:15 ` [RESEND PATCH v4 09/10] KVM: x86: Refine the matching and clearing logic for supported_xss Like Xu
2021-05-10  8:15 ` [RESEND PATCH v4 10/10] KVM: x86: Add XSAVE Support for Architectural LBRs Like Xu
2021-05-10  8:15 ` [RESEND kvm-unit-tests PATCH v2] x86: Update guest LBR tests for Architectural LBR Like Xu
2021-06-22  9:01 ` [RESEND PATCH v4 00/10] KVM: x86/pmu: Guest Architectural LBR Enabling Yang Weijiang
2021-06-23 13:32   ` Like Xu
