* [PATCH 00/15] Introduce Architectural LBR for vPMU
@ 2022-08-31 22:34 Yang Weijiang
  2022-08-31 22:34 ` [PATCH 01/15] perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers Yang Weijiang
                   ` (15 more replies)
  0 siblings, 16 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

Intel's CPU model-specific LBR (legacy LBR) has evolved into Architectural
LBR (Arch LBR [0]), its replacement on new platforms. The native support
patches were merged into the 5.9 kernel tree, and this patch series enables
Arch LBR in the vPMU so that guests can benefit from the feature.

The main advantages of Arch LBR are [1]:
- Faster context switching due to XSAVES support and faster reset of
  LBR MSRs via the new DEPTH MSR.
- Faster LBR reads for a non-PEBS event due to XSAVES support, which
  lowers the overhead of the NMI handler.
- The Linux kernel can support the LBR features without knowing the
  model number of the current CPU.

From the end user's point of view, Arch LBR is used in the same way as
the legacy LBR support that has already been merged in the mainline.

Note, this series imposes one restriction on guest Arch LBR: the guest
can only set the same LBR record depth as the host. This is due to the
special behavior of MSR_ARCH_LBR_DEPTH: 1) a write to the MSR resets all
Arch LBR record MSRs to 0; 2) XRSTORS resets all record MSRs to 0 if the
saved depth mismatches MSR_ARCH_LBR_DEPTH. Enforcing the restriction
keeps the KVM enabling patches simple and straightforward.
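
Concretely, the enforcement amounts to a single check in the new
MSR_ARCH_LBR_DEPTH write handler; a sketch of what patch 5 does, not the
exact hunk:

    /* Reject any guest depth that differs from the host's max depth. */
    if (data != pmu->kvm_arch_lbr_depth)
            return 1;       /* a guest WRMSR then raises #GP */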

The old patch series was queued in KVM/queue for a while and finally
moved to the branch below after Paolo's refactor. This new patch set is
built on top of Paolo's work plus some fixes; it has been tested on a
legacy platform (non-Arch LBR) and on an SPR platform (Arch LBR capable).

[0] https://software.intel.com/sites/default/files/managed/c5/15/architecture-instruction-set-extensions-programming-reference.pdf
[1] https://lore.kernel.org/lkml/1593780569-62993-1-git-send-email-kan.liang@linux.intel.com/

Original patch set:
https://git.kernel.org/pub/scm/virt/kvm/kvm.git/log/?h=lbr-for-weijiang

Changes in this version:
1. Fixed some minor issues in the refactored patch set.
2. Added a few minor fixes due to recent vPMU code cleanup.
3. Removed Paolo's SOBs from some modified patches.
4. Rebased to queue:kvm/kvm.git


Like Xu (3):
  perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers
  KVM: vmx/pmu: Emulate MSR_ARCH_LBR_DEPTH for guest Arch LBR
  KVM: x86: Add XSAVE Support for Architectural LBR

Paolo Bonzini (4):
  KVM: PMU: disable LBR handling if architectural LBR is available
  KVM: vmx/pmu: Emulate MSR_ARCH_LBR_CTL for guest Arch LBR
  KVM: VMX: Support passthrough of architectural LBRs
  KVM: x86: Refine the matching and clearing logic for supported_xss

Sean Christopherson (1):
  KVM: x86: Report XSS as an MSR to be saved if there are supported
    features

Yang Weijiang (7):
  KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS
  KVM: x86: Add Arch LBR MSRs to msrs_to_save_all list
  KVM: x86/vmx: Check Arch LBR config when return perf capabilities
  KVM: x86/vmx: Clear Arch LBREn bit before inject #DB to guest
  KVM: x86/vmx: Flip Arch LBREn bit on guest state change
  KVM: x86: Add Arch LBR data MSR access interface
  KVM: x86/cpuid: Advertise Arch LBR feature in CPUID

 arch/x86/events/intel/lbr.c      |   6 +-
 arch/x86/include/asm/kvm_host.h  |   3 +
 arch/x86/include/asm/msr-index.h |   1 +
 arch/x86/include/asm/vmx.h       |   4 +
 arch/x86/kvm/cpuid.c             |  52 +++++++++-
 arch/x86/kvm/vmx/capabilities.h  |   8 ++
 arch/x86/kvm/vmx/nested.c        |   8 ++
 arch/x86/kvm/vmx/pmu_intel.c     | 160 +++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/vmx.c           |  81 +++++++++++++++-
 arch/x86/kvm/x86.c               |  27 +++++-
 10 files changed, 317 insertions(+), 33 deletions(-)


base-commit: 372d07084593dc7a399bf9bee815711b1fb1bcf2
-- 
2.27.0



* [PATCH 01/15] perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-09-01 14:19   ` Sean Christopherson
  2022-08-31 22:34 ` [PATCH 02/15] KVM: x86: Report XSS as an MSR to be saved if there are supported features Yang Weijiang
                   ` (14 subsequent siblings)
  15 siblings, 1 reply; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

From: Like Xu <like.xu@linux.intel.com>

The x86_pmu.lbr_info is 0 unless explicitly initialized, so there's
no point checking x86_pmu.intel_cap.lbr_format.

Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220517154100.29983-3-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/events/intel/lbr.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
index 4f70fb6c2c1e..4ed6d3691e10 100644
--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -1873,12 +1873,10 @@ void __init intel_pmu_arch_lbr_init(void)
  */
 int x86_perf_get_lbr(struct x86_pmu_lbr *lbr)
 {
-	int lbr_fmt = x86_pmu.intel_cap.lbr_format;
-
 	lbr->nr = x86_pmu.lbr_nr;
 	lbr->from = x86_pmu.lbr_from;
 	lbr->to = x86_pmu.lbr_to;
-	lbr->info = (lbr_fmt == LBR_FORMAT_INFO) ? x86_pmu.lbr_info : 0;
+	lbr->info = x86_pmu.lbr_info;
 
 	return 0;
 }
-- 
2.27.0



* [PATCH 02/15] KVM: x86: Report XSS as an MSR to be saved if there are supported features
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
  2022-08-31 22:34 ` [PATCH 01/15] perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-08-31 22:34 ` [PATCH 03/15] KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS Yang Weijiang
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

From: Sean Christopherson <sean.j.christopherson@intel.com>

Add MSR_IA32_XSS to the list of MSRs reported to userspace if
supported_xss is non-zero, i.e. KVM supports at least one XSS based
feature.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220517154100.29983-4-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/x86.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d7374d768296..17663cade3fa 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1451,6 +1451,7 @@ static const u32 msrs_to_save_all[] = {
 	MSR_F15H_PERF_CTR0, MSR_F15H_PERF_CTR1, MSR_F15H_PERF_CTR2,
 	MSR_F15H_PERF_CTR3, MSR_F15H_PERF_CTR4, MSR_F15H_PERF_CTR5,
 	MSR_IA32_XFD, MSR_IA32_XFD_ERR,
+	MSR_IA32_XSS,
 };
 
 static u32 msrs_to_save[ARRAY_SIZE(msrs_to_save_all)];
@@ -6941,6 +6942,10 @@ static void kvm_init_msr_list(void)
 			if (!kvm_cpu_cap_has(X86_FEATURE_XFD))
 				continue;
 			break;
+		case MSR_IA32_XSS:
+			if (!kvm_caps.supported_xss)
+				continue;
+			break;
 		default:
 			break;
 		}
-- 
2.27.0



* [PATCH 03/15] KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
  2022-08-31 22:34 ` [PATCH 01/15] perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers Yang Weijiang
  2022-08-31 22:34 ` [PATCH 02/15] KVM: x86: Report XSS as an MSR to be saved if there are supported features Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-08-31 22:34 ` [PATCH 04/15] KVM: PMU: disable LBR handling if architectural LBR is available Yang Weijiang
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

Update CPUID.0xD.0x1, which reports the currently required storage size
of all features enabled via XCR0 | XSS, when the guest's XSS is modified.

Note, KVM does not yet support any XSS based features, i.e. supported_xss
is guaranteed to be zero at this time.

Co-developed-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
Signed-off-by: Zhang Yi Z <yi.z.zhang@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220517154100.29983-5-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/cpuid.c | 16 +++++++++++++---
 arch/x86/kvm/x86.c   |  6 ++++--
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 75dcf7a72605..9ca592e969e3 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -272,9 +272,19 @@ static void __kvm_update_cpuid_runtime(struct kvm_vcpu *vcpu, struct kvm_cpuid_e
 		best->ebx = xstate_required_size(vcpu->arch.xcr0, false);
 
 	best = cpuid_entry2_find(entries, nent, 0xD, 1);
-	if (best && (cpuid_entry_has(best, X86_FEATURE_XSAVES) ||
-		     cpuid_entry_has(best, X86_FEATURE_XSAVEC)))
-		best->ebx = xstate_required_size(vcpu->arch.xcr0, true);
+	if (best) {
+		if (cpuid_entry_has(best, X86_FEATURE_XSAVES) ||
+		    cpuid_entry_has(best, X86_FEATURE_XSAVEC))  {
+			u64 xstate = vcpu->arch.xcr0 | vcpu->arch.ia32_xss;
+
+			best->ebx = xstate_required_size(xstate, true);
+		}
+
+		if (!cpuid_entry_has(best, X86_FEATURE_XSAVES)) {
+			best->ecx = 0;
+			best->edx = 0;
+		}
+	}
 
 	best = __kvm_find_kvm_cpuid_features(vcpu, entries, nent);
 	if (kvm_hlt_in_guest(vcpu->kvm) && best &&
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 17663cade3fa..92ff5f7d944b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3647,8 +3647,10 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		 */
 		if (data & ~kvm_caps.supported_xss)
 			return 1;
-		vcpu->arch.ia32_xss = data;
-		kvm_update_cpuid_runtime(vcpu);
+		if (vcpu->arch.ia32_xss != data) {
+			vcpu->arch.ia32_xss = data;
+			kvm_update_cpuid_runtime(vcpu);
+		}
 		break;
 	case MSR_SMI_COUNT:
 		if (!msr_info->host_initiated)
-- 
2.27.0



* [PATCH 04/15] KVM: PMU: disable LBR handling if architectural LBR is available
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (2 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 03/15] KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-08-31 22:34 ` [PATCH 05/15] KVM: vmx/pmu: Emulate MSR_ARCH_LBR_DEPTH for guest Arch LBR Yang Weijiang
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

From: Paolo Bonzini <pbonzini@redhat.com>

Traditional LBR is absent on CPU models that have architectural LBR, so
disable all processing of traditional LBR MSRs if they are not there.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/pmu_intel.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index c399637a3a79..89cb75bb0280 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -174,19 +174,23 @@ static inline struct kvm_pmc *get_fw_gp_pmc(struct kvm_pmu *pmu, u32 msr)
 static bool intel_pmu_is_valid_lbr_msr(struct kvm_vcpu *vcpu, u32 index)
 {
 	struct x86_pmu_lbr *records = vcpu_to_lbr_records(vcpu);
-	bool ret = false;
 
 	if (!intel_pmu_lbr_is_enabled(vcpu))
-		return ret;
+		return false;
 
-	ret = (index == MSR_LBR_SELECT) || (index == MSR_LBR_TOS) ||
-		(index >= records->from && index < records->from + records->nr) ||
-		(index >= records->to && index < records->to + records->nr);
+	if (!guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) &&
+	    (index == MSR_LBR_SELECT || index == MSR_LBR_TOS))
+		return true;
 
-	if (!ret && records->info)
-		ret = (index >= records->info && index < records->info + records->nr);
+	if ((index >= records->from && index < records->from + records->nr) ||
+	    (index >= records->to && index < records->to + records->nr))
+		return true;
 
-	return ret;
+	if (records->info && index >= records->info &&
+	    index < records->info + records->nr)
+		return true;
+
+	return false;
 }
 
 static bool intel_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
@@ -703,6 +707,9 @@ static void vmx_update_intercept_for_lbr_msrs(struct kvm_vcpu *vcpu, bool set)
 			vmx_set_intercept_for_msr(vcpu, lbr->info + i, MSR_TYPE_RW, set);
 	}
 
+	if (guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
+		return;
+
 	vmx_set_intercept_for_msr(vcpu, MSR_LBR_SELECT, MSR_TYPE_RW, set);
 	vmx_set_intercept_for_msr(vcpu, MSR_LBR_TOS, MSR_TYPE_RW, set);
 }
@@ -743,10 +750,12 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
+	bool lbr_enable = !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) &&
+		(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR);
 
 	if (!lbr_desc->event) {
 		vmx_disable_lbr_msrs_passthrough(vcpu);
-		if (vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR)
+		if (lbr_enable)
 			goto warn;
 		if (test_bit(INTEL_PMC_IDX_FIXED_VLBR, pmu->pmc_in_use))
 			goto warn;
@@ -769,7 +778,10 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
 
 static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 {
-	if (!(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR))
+	bool lbr_enable = !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) &&
+		(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR);
+
+	if (!lbr_enable)
 		intel_pmu_release_guest_lbr_event(vcpu);
 }
 
-- 
2.27.0



* [PATCH 05/15] KVM: vmx/pmu: Emulate MSR_ARCH_LBR_DEPTH for guest Arch LBR
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (3 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 04/15] KVM: PMU: disable LBR handling if architectural LBR is available Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-08-31 22:34 ` [PATCH 06/15] KVM: vmx/pmu: Emulate MSR_ARCH_LBR_CTL " Yang Weijiang
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

From: Like Xu <like.xu@linux.intel.com>

The number of Arch LBR entries available is determined by the value in
host MSR_ARCH_LBR_DEPTH.DEPTH. The supported LBR depth values are
enumerated in CPUID.(EAX=01CH, ECX=0):EAX[7:0]. For each bit "n" set
in this field, the MSR_ARCH_LBR_DEPTH.DEPTH value of "8*(n+1)" is
supported. In the first generation of Arch LBR, the maximum depth is 32;
the host configures the maximum size and the guest always honors that
setting.

A write to MSR_ARCH_LBR_DEPTH has a side effect: all LBR entries are
reset to 0. The kernel PMU driver leverages this effect to do a fast
reset of the LBR record MSRs, and KVM allows the guest to achieve the
same when the Arch LBR record MSRs are passed through to the guest.
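
As an aside, the depth enumeration above can be decoded with a few lines
of C. This stand-alone sketch is illustrative only and is not part of
the series (the series computes the same value via fls()):

    #include <stdio.h>

    /*
     * Decode CPUID.(EAX=1CH, ECX=0):EAX[7:0]: bit "n" set means a
     * depth of 8*(n+1) is supported; KVM exposes only the largest.
     */
    static unsigned int max_arch_lbr_depth(unsigned int cpuid_1c_eax)
    {
            unsigned int mask = cpuid_1c_eax & 0xff;
            int n;

            for (n = 7; n >= 0; n--)
                    if (mask & (1u << n))
                            return 8 * (n + 1);
            return 0;       /* Arch LBR not enumerated */
    }

    int main(void)
    {
            printf("%u\n", max_arch_lbr_depth(0x08)); /* bit 3 -> 32 */
            return 0;
    }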

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Co-developed-by: Yang Weijiang <weijiang.yang@intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
---
 arch/x86/include/asm/kvm_host.h |  3 ++
 arch/x86/kvm/vmx/pmu_intel.c    | 57 +++++++++++++++++++++++++++++++--
 2 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 2c96c43c313a..bcc1dca08a17 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -549,6 +549,9 @@ struct kvm_pmu {
 	 * redundant check before cleanup if guest don't use vPMU at all.
 	 */
 	u8 event_count;
+
+	/* Guest arch lbr depth supported by KVM. */
+	u64 kvm_arch_lbr_depth;
 };
 
 struct kvm_pmu_ops;
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 89cb75bb0280..eb35cf2845ca 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -182,6 +182,10 @@ static bool intel_pmu_is_valid_lbr_msr(struct kvm_vcpu *vcpu, u32 index)
 	    (index == MSR_LBR_SELECT || index == MSR_LBR_TOS))
 		return true;
 
+	if (index == MSR_ARCH_LBR_DEPTH)
+		return kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) &&
+		       guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR);
+
 	if ((index >= records->from && index < records->from + records->nr) ||
 	    (index >= records->to && index < records->to + records->nr))
 		return true;
@@ -349,6 +353,7 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
+	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
 	u32 msr = msr_info->index;
 
 	switch (msr) {
@@ -373,6 +378,9 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_PEBS_DATA_CFG:
 		msr_info->data = pmu->pebs_data_cfg;
 		return 0;
+	case MSR_ARCH_LBR_DEPTH:
+		msr_info->data = lbr_desc->records.nr;
+		return 0;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -399,6 +407,7 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
+	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
 	u32 msr = msr_info->index;
 	u64 data = msr_info->data;
 	u64 reserved_bits;
@@ -456,6 +465,24 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 0;
 		}
 		break;
+	case MSR_ARCH_LBR_DEPTH:
+		if (!pmu->kvm_arch_lbr_depth && !msr_info->host_initiated)
+			return 1;
+		/*
+		 * When the guest and host depths differ, the handling would be
+		 * tricky, so only the max depth is supported for both host and guest.
+		 */
+		if (data != pmu->kvm_arch_lbr_depth)
+			return 1;
+
+		lbr_desc->records.nr = data;
+		/*
+		 * A guest write to the depth MSR either sets the MSR or
+		 * resets the LBR records via the side effect.
+		 */
+		if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
+			wrmsrl(MSR_ARCH_LBR_DEPTH, lbr_desc->records.nr);
+		return 0;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -506,6 +533,32 @@ static void setup_fixed_pmc_eventsel(struct kvm_pmu *pmu)
 	}
 }
 
+static bool cpuid_enable_lbr(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+	struct kvm_cpuid_entry2 *entry;
+	int depth_bit;
+
+	if (!kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
+		return !static_cpu_has(X86_FEATURE_ARCH_LBR) &&
+			cpuid_model_is_consistent(vcpu);
+
+	pmu->kvm_arch_lbr_depth = 0;
+	if (!guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
+		return false;
+
+	entry = kvm_find_cpuid_entry(vcpu, 0x1C);
+	if (!entry)
+		return false;
+
+	depth_bit = fls(cpuid_eax(0x1C) & 0xff);
+	if ((entry->eax & 0xff) != (1 << (depth_bit - 1)))
+		return false;
+
+	pmu->kvm_arch_lbr_depth = depth_bit * 8;
+	return true;
+}
+
 static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -590,8 +643,8 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 		INTEL_PMC_MAX_GENERIC, pmu->nr_arch_fixed_counters);
 
 	perf_capabilities = vcpu_get_perf_capabilities(vcpu);
-	if (cpuid_model_is_consistent(vcpu) &&
-	    (perf_capabilities & PMU_CAP_LBR_FMT))
+	if ((perf_capabilities & PMU_CAP_LBR_FMT) &&
+	    cpuid_enable_lbr(vcpu))
 		x86_perf_get_lbr(&lbr_desc->records);
 	else
 		lbr_desc->records.nr = 0;
-- 
2.27.0



* [PATCH 06/15] KVM: vmx/pmu: Emulate MSR_ARCH_LBR_CTL for guest Arch LBR
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (4 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 05/15] KVM: vmx/pmu: Emulate MSR_ARCH_LBR_DEPTH for guest Arch LBR Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-08-31 22:34 ` [PATCH 07/15] KVM: VMX: Support passthrough of architectural LBRs Yang Weijiang
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

From: Paolo Bonzini <pbonzini@redhat.com>

Arch LBR is enabled by setting MSR_ARCH_LBR_CTL.LBREn to 1. A new guest
state field named "Guest IA32_LBR_CTL" is added to enhance guest LBR usage.
When guest Arch LBR is enabled, a guest LBR event is created just as it is
for the model-specific LBR. Clear the guest LBR enable bit during host PMI
handling so that the guest sees the expected configuration.

On processors that support Arch LBR, MSR_IA32_DEBUGCTLMSR[bit 0] has no
meaning. It can be written to 0 or 1, but reads will always return 0.
Like IA32_DEBUGCTL, the IA32_ARCH_LBR_CTL MSR is preserved on INIT.

Regardless of whether Arch LBR or legacy LBR is in use, LBR recording is
enabled when bit 0 (LBREn) of the corresponding control MSR is set to 1.
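
The reserved-bit part of the resulting WRMSR check can be summarized in
a stand-alone sketch; the mask values are taken from this patch, which
additionally validates the CPUID.(EAX=1CH):EBX capability bits:

    #include <stdbool.h>
    #include <stdint.h>

    #define ARCH_LBR_CTL_MASK      0x7f000eULL  /* filter/CPL/call-stack bits */
    #define ARCH_LBR_CTL_LBREN     (1ULL << 0)
    #define KVM_ARCH_LBR_CTL_MASK  (ARCH_LBR_CTL_MASK | ARCH_LBR_CTL_LBREN)

    /* A write to MSR_ARCH_LBR_CTL must not set any reserved bits. */
    static bool arch_lbr_ctl_bits_valid(uint64_t ctl)
    {
            return !(ctl & ~KVM_ARCH_LBR_CTL_MASK);
    }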

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Co-developed-by: Yang Weijiang <weijiang.yang@intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Message-Id: <20220517154100.29983-8-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/events/intel/lbr.c      |  2 -
 arch/x86/include/asm/msr-index.h |  1 +
 arch/x86/include/asm/vmx.h       |  2 +
 arch/x86/kvm/vmx/pmu_intel.c     | 67 ++++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/vmx.c           |  7 ++++
 5 files changed, 69 insertions(+), 10 deletions(-)

diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
index 4ed6d3691e10..1d2c83c3644f 100644
--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -160,8 +160,6 @@ enum {
 	 ARCH_LBR_RETURN		|\
 	 ARCH_LBR_OTHER_BRANCH)
 
-#define ARCH_LBR_CTL_MASK			0x7f000e
-
 static void intel_pmu_lbr_filter(struct cpu_hw_events *cpuc);
 
 static __always_inline bool is_lbr_call_stack_bit_set(u64 config)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 6674bdb096f3..5508ff3f1bd6 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -215,6 +215,7 @@
 #define LBR_INFO_BR_TYPE		(0xfull << LBR_INFO_BR_TYPE_OFFSET)
 
 #define MSR_ARCH_LBR_CTL		0x000014ce
+#define ARCH_LBR_CTL_MASK		0x7f000e
 #define ARCH_LBR_CTL_LBREN		BIT(0)
 #define ARCH_LBR_CTL_CPL_OFFSET		1
 #define ARCH_LBR_CTL_CPL		(0x3ull << ARCH_LBR_CTL_CPL_OFFSET)
diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index c371ef695fcc..50c6f36daaea 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -257,6 +257,8 @@ enum vmcs_field {
 	GUEST_BNDCFGS_HIGH              = 0x00002813,
 	GUEST_IA32_RTIT_CTL		= 0x00002814,
 	GUEST_IA32_RTIT_CTL_HIGH	= 0x00002815,
+	GUEST_IA32_LBR_CTL		= 0x00002816,
+	GUEST_IA32_LBR_CTL_HIGH		= 0x00002817,
 	HOST_IA32_PAT			= 0x00002c00,
 	HOST_IA32_PAT_HIGH		= 0x00002c01,
 	HOST_IA32_EFER			= 0x00002c02,
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index eb35cf2845ca..e06de1f29fe7 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -19,6 +19,7 @@
 #include "pmu.h"
 
 #define MSR_PMC_FULL_WIDTH_BIT      (MSR_IA32_PMC0 - MSR_IA32_PERFCTR0)
+#define KVM_ARCH_LBR_CTL_MASK  (ARCH_LBR_CTL_MASK | ARCH_LBR_CTL_LBREN)
 
 static struct kvm_event_hw_type_mapping intel_arch_events[] = {
 	[0] = { 0x3c, 0x00, PERF_COUNT_HW_CPU_CYCLES },
@@ -182,7 +183,7 @@ static bool intel_pmu_is_valid_lbr_msr(struct kvm_vcpu *vcpu, u32 index)
 	    (index == MSR_LBR_SELECT || index == MSR_LBR_TOS))
 		return true;
 
-	if (index == MSR_ARCH_LBR_DEPTH)
+	if (index == MSR_ARCH_LBR_DEPTH || index == MSR_ARCH_LBR_CTL)
 		return kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) &&
 		       guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR);
 
@@ -349,6 +350,30 @@ static bool intel_pmu_handle_lbr_msrs_access(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static bool arch_lbr_ctl_is_valid(struct kvm_vcpu *vcpu, u64 ctl)
+{
+	struct kvm_cpuid_entry2 *entry;
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	if (!pmu->kvm_arch_lbr_depth)
+		return false;
+
+	if (ctl & ~KVM_ARCH_LBR_CTL_MASK)
+		return false;
+
+	entry = kvm_find_cpuid_entry(vcpu, 0x1c);
+	if (!entry)
+		return false;
+
+	if (!(entry->ebx & BIT(0)) && (ctl & ARCH_LBR_CTL_CPL))
+		return false;
+	if (!(entry->ebx & BIT(2)) && (ctl & ARCH_LBR_CTL_STACK))
+		return false;
+	if (!(entry->ebx & BIT(1)) && (ctl & ARCH_LBR_CTL_FILTER))
+		return false;
+	return true;
+}
+
 static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
@@ -381,6 +406,14 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_ARCH_LBR_DEPTH:
 		msr_info->data = lbr_desc->records.nr;
 		return 0;
+	case MSR_ARCH_LBR_CTL:
+		if (!kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR)) {
+			WARN_ON_ONCE(!msr_info->host_initiated);
+			msr_info->data = 0;
+		} else {
+			msr_info->data = vmcs_read64(GUEST_IA32_LBR_CTL);
+		}
+		return 0;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -483,6 +516,18 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
 			wrmsrl(MSR_ARCH_LBR_DEPTH, lbr_desc->records.nr);
 		return 0;
+	case MSR_ARCH_LBR_CTL:
+		if (msr_info->host_initiated && !pmu->kvm_arch_lbr_depth)
+			return data != 0;
+
+		if (!arch_lbr_ctl_is_valid(vcpu, data))
+			break;
+
+		vmcs_write64(GUEST_IA32_LBR_CTL, data);
+		if (intel_pmu_lbr_is_enabled(vcpu) && !lbr_desc->event &&
+		    (data & ARCH_LBR_CTL_LBREN))
+			intel_pmu_create_guest_lbr_event(vcpu);
+		return 0;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -729,12 +774,16 @@ static void intel_pmu_reset(struct kvm_vcpu *vcpu)
  */
 static void intel_pmu_legacy_freezing_lbrs_on_pmi(struct kvm_vcpu *vcpu)
 {
-	u64 data = vmcs_read64(GUEST_IA32_DEBUGCTL);
+	u32 lbr_ctl_field = GUEST_IA32_DEBUGCTL;
 
-	if (data & DEBUGCTLMSR_FREEZE_LBRS_ON_PMI) {
-		data &= ~DEBUGCTLMSR_LBR;
-		vmcs_write64(GUEST_IA32_DEBUGCTL, data);
-	}
+	if (!(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_FREEZE_LBRS_ON_PMI))
+		return;
+
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) &&
+	    guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
+		lbr_ctl_field = GUEST_IA32_LBR_CTL;
+
+	vmcs_write64(lbr_ctl_field, vmcs_read64(lbr_ctl_field) & ~0x1ULL);
 }
 
 static void intel_pmu_deliver_pmi(struct kvm_vcpu *vcpu)
@@ -803,7 +852,8 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
-	bool lbr_enable = !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) &&
+	bool lbr_enable = guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) ?
+		(vmcs_read64(GUEST_IA32_LBR_CTL) & ARCH_LBR_CTL_LBREN) :
 		(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR);
 
 	if (!lbr_desc->event) {
@@ -831,7 +881,8 @@ void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu)
 
 static void intel_pmu_cleanup(struct kvm_vcpu *vcpu)
 {
-	bool lbr_enable = !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) &&
+	bool lbr_enable = guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR) ?
+		(vmcs_read64(GUEST_IA32_LBR_CTL) & ARCH_LBR_CTL_LBREN) :
 		(vmcs_read64(GUEST_IA32_DEBUGCTL) & DEBUGCTLMSR_LBR);
 
 	if (!lbr_enable)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c9b49a09e6b5..020db207215b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2104,6 +2104,13 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 						VM_EXIT_SAVE_DEBUG_CONTROLS)
 			get_vmcs12(vcpu)->guest_ia32_debugctl = data;
 
+		/*
+		 * For Arch LBR, IA32_DEBUGCTL[bit 0] has no meaning.
+		 * It can be written to 0 or 1, but reads will always return 0.
+		 */
+		if (guest_cpuid_has(vcpu, X86_FEATURE_ARCH_LBR))
+			data &= ~DEBUGCTLMSR_LBR;
+
 		vmcs_write64(GUEST_IA32_DEBUGCTL, data);
 		if (intel_pmu_lbr_is_enabled(vcpu) && !to_vmx(vcpu)->lbr_desc.event &&
 		    (data & DEBUGCTLMSR_LBR))
-- 
2.27.0



* [PATCH 07/15] KVM: VMX: Support passthrough of architectural LBRs
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (5 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 06/15] KVM: vmx/pmu: Emulate MSR_ARCH_LBR_CTL " Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-09-01 14:20   ` Sean Christopherson
  2022-08-31 22:34 ` [PATCH 08/15] KVM: x86: Add Arch LBR MSRs to msrs_to_save_all list Yang Weijiang
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

From: Paolo Bonzini <pbonzini@redhat.com>

MSR_ARCH_LBR_* can be pointed to by records->from, records->to and
records->info, so list them in is_valid_passthrough_msr.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 020db207215b..77bad663d804 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -638,6 +638,9 @@ static bool is_valid_passthrough_msr(u32 msr)
 	case MSR_LBR_NHM_TO ... MSR_LBR_NHM_TO + 31:
 	case MSR_LBR_CORE_FROM ... MSR_LBR_CORE_FROM + 8:
 	case MSR_LBR_CORE_TO ... MSR_LBR_CORE_TO + 8:
+	case MSR_ARCH_LBR_FROM_0 ... MSR_ARCH_LBR_FROM_0 + 31:
+	case MSR_ARCH_LBR_TO_0 ... MSR_ARCH_LBR_TO_0 + 31:
+	case MSR_ARCH_LBR_INFO_0 ... MSR_ARCH_LBR_INFO_0 + 31:
 		/* LBR MSRs. These are handled in vmx_update_intercept_for_lbr_msrs() */
 		return true;
 	}
-- 
2.27.0



* [PATCH 08/15] KVM: x86: Add Arch LBR MSRs to msrs_to_save_all list
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (6 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 07/15] KVM: VMX: Support passthrough of architectural LBRs Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-08-31 22:34 ` [PATCH 09/15] KVM: x86: Refine the matching and clearing logic for supported_xss Yang Weijiang
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

The Arch LBR MSRs MSR_ARCH_LBR_DEPTH and MSR_ARCH_LBR_CTL are queried by
a userspace application before it saves or restores the Arch LBR data.
The other LBR-related data MSRs are intentionally omitted here because
the list is lengthy (32*3 MSRs). Userspace can still use
KVM_{GET|SET}_MSRS to access them if necessary, as the sketch below
shows.
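
A minimal userspace sketch of that KVM_GET_MSRS path, illustrative only
(error handling trimmed; 0x14ce/0x14cf are the msr-index.h values for
the two MSRs):

    #include <linux/kvm.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>

    #define MSR_ARCH_LBR_CTL    0x000014ce
    #define MSR_ARCH_LBR_DEPTH  0x000014cf

    /* Read both Arch LBR config MSRs from an already-created vCPU fd. */
    static void read_arch_lbr_cfg(int vcpu_fd)
    {
            struct { struct kvm_msrs hdr; struct kvm_msr_entry e[2]; } m;

            memset(&m, 0, sizeof(m));
            m.hdr.nmsrs = 2;
            m.e[0].index = MSR_ARCH_LBR_CTL;
            m.e[1].index = MSR_ARCH_LBR_DEPTH;

            /* Returns the number of MSRs successfully read. */
            if (ioctl(vcpu_fd, KVM_GET_MSRS, &m) != 2) {
                    perror("KVM_GET_MSRS");
                    exit(1);
            }
            printf("CTL=0x%llx DEPTH=%llu\n",
                   (unsigned long long)m.e[0].data,
                   (unsigned long long)m.e[1].data);
    }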

Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/x86.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 92ff5f7d944b..b3d2a7ad1d18 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1452,6 +1452,7 @@ static const u32 msrs_to_save_all[] = {
 	MSR_F15H_PERF_CTR3, MSR_F15H_PERF_CTR4, MSR_F15H_PERF_CTR5,
 	MSR_IA32_XFD, MSR_IA32_XFD_ERR,
 	MSR_IA32_XSS,
+	MSR_ARCH_LBR_CTL, MSR_ARCH_LBR_DEPTH,
 };
 
 static u32 msrs_to_save[ARRAY_SIZE(msrs_to_save_all)];
@@ -3839,6 +3840,8 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_PEBS_ENABLE:
 	case MSR_IA32_DS_AREA:
 	case MSR_PEBS_DATA_CFG:
+	case MSR_ARCH_LBR_CTL:
+	case MSR_ARCH_LBR_DEPTH:
 	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
 		if (kvm_pmu_is_valid_msr(vcpu, msr))
 			return kvm_pmu_set_msr(vcpu, msr_info);
@@ -3942,6 +3945,8 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_IA32_PEBS_ENABLE:
 	case MSR_IA32_DS_AREA:
 	case MSR_PEBS_DATA_CFG:
+	case MSR_ARCH_LBR_CTL:
+	case MSR_ARCH_LBR_DEPTH:
 	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
 		if (kvm_pmu_is_valid_msr(vcpu, msr_info->index))
 			return kvm_pmu_get_msr(vcpu, msr_info);
@@ -6948,6 +6953,11 @@ static void kvm_init_msr_list(void)
 			if (!kvm_caps.supported_xss)
 				continue;
 			break;
+		case MSR_ARCH_LBR_DEPTH:
+		case MSR_ARCH_LBR_CTL:
+			if (!kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR))
+				continue;
+			break;
 		default:
 			break;
 		}
-- 
2.27.0



* [PATCH 09/15] KVM: x86: Refine the matching and clearing logic for supported_xss
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (7 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 08/15] KVM: x86: Add Arch LBR MSRs to msrs_to_save_all list Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-08-31 22:34 ` [PATCH 10/15] KVM: x86/vmx: Check Arch LBR config when return perf capabilities Yang Weijiang
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

From: Paolo Bonzini <pbonzini@redhat.com>

Refine the existing clearing of supported_xss as follows: initialize
supported_xss by filtering host XSS with the KVM_SUPPORTED_XSS mask, and
update its value by clearing bits (rather than setting them).

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220517154100.29983-10-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 5 +++--
 arch/x86/kvm/x86.c     | 6 +++++-
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 77bad663d804..32e41bd217e3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7717,9 +7717,10 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_set(X86_FEATURE_UMIP);
 
 	/* CPUID 0xD.1 */
-	kvm_caps.supported_xss = 0;
-	if (!cpu_has_vmx_xsaves())
+	if (!cpu_has_vmx_xsaves()) {
 		kvm_cpu_cap_clear(X86_FEATURE_XSAVES);
+		kvm_caps.supported_xss = 0;
+	}
 
 	/* CPUID 0x80000001 and 0x7 (RDPID) */
 	if (!cpu_has_vmx_rdtscp()) {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b3d2a7ad1d18..19cb5840300b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -213,6 +213,8 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
 				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
 				| XFEATURE_MASK_PKRU | XFEATURE_MASK_XTILE)
 
+#define KVM_SUPPORTED_XSS     0
+
 u64 __read_mostly host_efer;
 EXPORT_SYMBOL_GPL(host_efer);
 
@@ -11965,8 +11967,10 @@ int kvm_arch_hardware_setup(void *opaque)
 
 	rdmsrl_safe(MSR_EFER, &host_efer);
 
-	if (boot_cpu_has(X86_FEATURE_XSAVES))
+	if (boot_cpu_has(X86_FEATURE_XSAVES)) {
 		rdmsrl(MSR_IA32_XSS, host_xss);
+		kvm_caps.supported_xss = host_xss & KVM_SUPPORTED_XSS;
+	}
 
 	kvm_init_pmu_capability();
 
-- 
2.27.0



* [PATCH 10/15] KVM: x86/vmx: Check Arch LBR config when return perf capabilities
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (8 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 09/15] KVM: x86: Refine the matching and clearing logic for supported_xss Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-08-31 22:34 ` [PATCH 11/15] KVM: x86: Add XSAVE Support for Architectural LBR Yang Weijiang
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

Two new control bits (VM_EXIT_CLEAR_IA32_LBR_CTL and
VM_ENTRY_LOAD_IA32_LBR_CTL) are added to support guest Arch LBR. Both
bits must be set to make Arch LBR work in both guest and host. Since
Arch LBR is not supported in nested guests, clear the two bits before
running an L2 VM.

Co-developed-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
---
 arch/x86/include/asm/vmx.h      |  2 ++
 arch/x86/kvm/vmx/capabilities.h |  8 ++++++++
 arch/x86/kvm/vmx/nested.c       |  8 ++++++++
 arch/x86/kvm/vmx/vmx.c          | 16 ++++++++++++++--
 4 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index 50c6f36daaea..e8781df8cc2e 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -102,6 +102,7 @@
 #define VM_EXIT_CLEAR_BNDCFGS                   0x00800000
 #define VM_EXIT_PT_CONCEAL_PIP			0x01000000
 #define VM_EXIT_CLEAR_IA32_RTIT_CTL		0x02000000
+#define VM_EXIT_CLEAR_IA32_LBR_CTL		0x04000000
 
 #define VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR	0x00036dff
 
@@ -115,6 +116,7 @@
 #define VM_ENTRY_LOAD_BNDCFGS                   0x00010000
 #define VM_ENTRY_PT_CONCEAL_PIP			0x00020000
 #define VM_ENTRY_LOAD_IA32_RTIT_CTL		0x00040000
+#define VM_ENTRY_LOAD_IA32_LBR_CTL		0x00200000
 
 #define VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR	0x000011ff
 
diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index c5e5dfef69c7..b7147698ad82 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -401,6 +401,11 @@ static inline bool vmx_pebs_supported(void)
 	return boot_cpu_has(X86_FEATURE_PEBS) && kvm_pmu_cap.pebs_ept;
 }
 
+static inline bool cpu_has_vmx_arch_lbr(void)
+{
+	return vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_LBR_CTL;
+}
+
 static inline u64 vmx_get_perf_capabilities(void)
 {
 	u64 perf_cap = PMU_CAP_FW_WRITES;
@@ -420,6 +425,9 @@ static inline u64 vmx_get_perf_capabilities(void)
 			perf_cap &= ~PERF_CAP_PEBS_BASELINE;
 	}
 
+	if (boot_cpu_has(X86_FEATURE_ARCH_LBR) && !cpu_has_vmx_arch_lbr())
+		perf_cap &= ~PMU_CAP_LBR_FMT;
+
 	return perf_cap;
 }
 
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index ddd4367d4826..d4b05354e7ab 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2338,6 +2338,10 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
 		if (guest_efer != host_efer)
 			exec_control |= VM_ENTRY_LOAD_IA32_EFER;
 	}
+
+	if (cpu_has_vmx_arch_lbr())
+		exec_control &= ~VM_ENTRY_LOAD_IA32_LBR_CTL;
+
 	vm_entry_controls_set(vmx, exec_control);
 
 	/*
@@ -2352,6 +2356,10 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
 		exec_control |= VM_EXIT_LOAD_IA32_EFER;
 	else
 		exec_control &= ~VM_EXIT_LOAD_IA32_EFER;
+
+	if (cpu_has_vmx_arch_lbr())
+		exec_control &= ~VM_EXIT_CLEAR_IA32_LBR_CTL;
+
 	vm_exit_controls_set(vmx, exec_control);
 
 	/*
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 32e41bd217e3..cdf65cdcb45a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2559,6 +2559,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 		{ VM_ENTRY_LOAD_IA32_EFER,		VM_EXIT_LOAD_IA32_EFER },
 		{ VM_ENTRY_LOAD_BNDCFGS,		VM_EXIT_CLEAR_BNDCFGS },
 		{ VM_ENTRY_LOAD_IA32_RTIT_CTL,		VM_EXIT_CLEAR_IA32_RTIT_CTL },
+		{ VM_ENTRY_LOAD_IA32_LBR_CTL,		VM_EXIT_CLEAR_IA32_LBR_CTL },
 	};
 
 	memset(vmcs_conf, 0, sizeof(*vmcs_conf));
@@ -2679,7 +2680,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 	      VM_EXIT_LOAD_IA32_EFER |
 	      VM_EXIT_CLEAR_BNDCFGS |
 	      VM_EXIT_PT_CONCEAL_PIP |
-	      VM_EXIT_CLEAR_IA32_RTIT_CTL;
+	      VM_EXIT_CLEAR_IA32_RTIT_CTL |
+	      VM_EXIT_CLEAR_IA32_LBR_CTL;
 	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_EXIT_CTLS,
 				&_vmexit_control) < 0)
 		return -EIO;
@@ -2703,7 +2705,8 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 	      VM_ENTRY_LOAD_IA32_EFER |
 	      VM_ENTRY_LOAD_BNDCFGS |
 	      VM_ENTRY_PT_CONCEAL_PIP |
-	      VM_ENTRY_LOAD_IA32_RTIT_CTL;
+	      VM_ENTRY_LOAD_IA32_RTIT_CTL |
+	      VM_ENTRY_LOAD_IA32_LBR_CTL;
 	if (adjust_vmx_controls(min, opt, MSR_IA32_VMX_ENTRY_CTLS,
 				&_vmentry_control) < 0)
 		return -EIO;
@@ -4803,6 +4806,11 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vpid_sync_context(vmx->vpid);
 
 	vmx_update_fb_clear_dis(vcpu, vmx);
+
+	if (!init_event) {
+		if (cpu_has_vmx_arch_lbr())
+			vmcs_write64(GUEST_IA32_LBR_CTL, 0);
+	}
 }
 
 static void vmx_enable_irq_window(struct kvm_vcpu *vcpu)
@@ -6198,6 +6206,10 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
 	    vmentry_ctl & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
 		pr_err("PerfGlobCtl = 0x%016llx\n",
 		       vmcs_read64(GUEST_IA32_PERF_GLOBAL_CTRL));
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) &&
+	    vmentry_ctl & VM_ENTRY_LOAD_IA32_LBR_CTL)
+		pr_err("ArchLBRCtl = 0x%016llx\n",
+		       vmcs_read64(GUEST_IA32_LBR_CTL));
 	if (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS)
 		pr_err("BndCfgS = 0x%016llx\n", vmcs_read64(GUEST_BNDCFGS));
 	pr_err("Interruptibility = %08x  ActivityState = %08x\n",
-- 
2.27.0



* [PATCH 11/15] KVM: x86: Add XSAVE Support for Architectural LBR
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (9 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 10/15] KVM: x86/vmx: Check Arch LBR config when return perf capabilities Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-08-31 22:34 ` [PATCH 12/15] KVM: x86/vmx: Clear Arch LBREn bit before inject #DB to guest Yang Weijiang
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

From: Like Xu <like.xu@linux.intel.com>

On processors supporting XSAVES and XRSTORS, Architectural LBR XSAVE
support is enumerated from CPUID.(EAX=0DH, ECX=1):ECX[bit 15]. The
detailed sub-leaf for Arch LBR is enumerated in CPUID.(EAX=0DH, ECX=0FH).

XSAVES provides a faster means than RDMSR for the guest to read all
LBRs. When guest IA32_XSS[bit 15] is set, the Arch LBR state can be
saved using XSAVES and restored by XRSTORS with the appropriate RFBM.
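
A guest-side illustration of that flow (a sketch, not code from this
series; it assumes kernel context with a pre-allocated, 64-byte-aligned
"xsave_area" large enough for the LBR state):

    u64 xss;

    /* XFEATURE_MASK_LBR is bit 15, matching CPUID.(0DH, 1):ECX[15]. */
    rdmsrl(MSR_IA32_XSS, xss);
    wrmsrl(MSR_IA32_XSS, xss | XFEATURE_MASK_LBR);

    /* XSAVES takes the RFBM in EDX:EAX; select only the LBR state. */
    asm volatile("xsaves %0"
                 : "+m" (*xsave_area)
                 : "a" ((u32)XFEATURE_MASK_LBR),
                   "d" ((u32)(XFEATURE_MASK_LBR >> 32)));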

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220517154100.29983-12-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 4 ++++
 arch/x86/kvm/x86.c     | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index cdf65cdcb45a..9d50e3703ea2 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7714,6 +7714,10 @@ static __init void vmx_set_cpu_caps(void)
 		kvm_cpu_cap_check_and_set(X86_FEATURE_DS);
 		kvm_cpu_cap_check_and_set(X86_FEATURE_DTES64);
 	}
+	if (!cpu_has_vmx_arch_lbr()) {
+		kvm_cpu_cap_clear(X86_FEATURE_ARCH_LBR);
+		kvm_caps.supported_xss &= ~XFEATURE_MASK_LBR;
+	}
 
 	if (!enable_pmu)
 		kvm_cpu_cap_clear(X86_FEATURE_PDCM);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 19cb5840300b..e9f0f97014de 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -213,7 +213,7 @@ static struct kvm_user_return_msrs __percpu *user_return_msrs;
 				| XFEATURE_MASK_BNDCSR | XFEATURE_MASK_AVX512 \
 				| XFEATURE_MASK_PKRU | XFEATURE_MASK_XTILE)
 
-#define KVM_SUPPORTED_XSS     0
+#define KVM_SUPPORTED_XSS     XFEATURE_MASK_LBR
 
 u64 __read_mostly host_efer;
 EXPORT_SYMBOL_GPL(host_efer);
-- 
2.27.0



* [PATCH 12/15] KVM: x86/vmx: Clear Arch LBREn bit before inject #DB to guest
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (10 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 11/15] KVM: x86: Add XSAVE Support for Architectural LBR Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-08-31 22:34 ` [PATCH 13/15] KVM: x86/vmx: Flip Arch LBREn bit on guest state change Yang Weijiang
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

On a debug breakpoint event (#DB), IA32_LBR_CTL.LBREn is cleared, so
the bit needs to be cleared manually before injecting a #DB.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Message-Id: <20220517154100.29983-14-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9d50e3703ea2..dddba2a48542 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1687,6 +1687,20 @@ static void vmx_clear_hlt(struct kvm_vcpu *vcpu)
 		vmcs_write32(GUEST_ACTIVITY_STATE, GUEST_ACTIVITY_ACTIVE);
 }
 
+static void disable_arch_lbr_ctl(struct kvm_vcpu *vcpu)
+{
+	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
+
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) &&
+	    test_bit(INTEL_PMC_IDX_FIXED_VLBR, pmu->pmc_in_use) &&
+	    lbr_desc->event) {
+		u64 ctl = vmcs_read64(GUEST_IA32_LBR_CTL);
+
+		vmcs_write64(GUEST_IA32_LBR_CTL, ctl & ~ARCH_LBR_CTL_LBREN);
+	}
+}
+
 static void vmx_queue_exception(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -1722,6 +1736,9 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu)
 	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, intr_info);
 
 	vmx_clear_hlt(vcpu);
+
+	if (nr == DB_VECTOR)
+		disable_arch_lbr_ctl(vcpu);
 }
 
 static void vmx_setup_uret_msr(struct vcpu_vmx *vmx, unsigned int msr,
@@ -4886,6 +4903,9 @@ static void vmx_inject_nmi(struct kvm_vcpu *vcpu)
 			INTR_TYPE_NMI_INTR | INTR_INFO_VALID_MASK | NMI_VECTOR);
 
 	vmx_clear_hlt(vcpu);
+
+	if (vcpu->arch.exception.nr == DB_VECTOR)
+		disable_arch_lbr_ctl(vcpu);
 }
 
 bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu)
-- 
2.27.0



* [PATCH 13/15] KVM: x86/vmx: Flip Arch LBREn bit on guest state change
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (11 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 12/15] KVM: x86/vmx: Clear Arch LBREn bit before inject #DB to guest Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-08-31 22:34 ` [PATCH 14/15] KVM: x86: Add Arch LBR data MSR access interface Yang Weijiang
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

Per the spec: "IA32_LBR_CTL.LBREn is saved and cleared on #SMI, and
restored on RSM. On a warm reset, all LBR MSRs, including IA32_LBR_DEPTH,
have their values preserved. However, IA32_LBR_CTL.LBREn is cleared to 0,
disabling LBRs." At guest SMM entry, store the guest IA32_LBR_CTL in SMRAM
and clear LBREn in the VMCS; do the reverse at SMM exit. Also clear LBREn
on warm reset.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Message-Id: <20220517154100.29983-15-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/vmx.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index dddba2a48542..82b1bde382bb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4827,6 +4827,8 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	if (!init_event) {
 		if (cpu_has_vmx_arch_lbr())
 			vmcs_write64(GUEST_IA32_LBR_CTL, 0);
+	} else {
+		disable_arch_lbr_ctl(vcpu);
 	}
 }
 
@@ -7967,6 +7969,8 @@ static int vmx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
 
 static int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 {
+	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
+	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 
 	/*
@@ -7983,11 +7987,22 @@ static int vmx_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
 	vmx->nested.smm.vmxon = vmx->nested.vmxon;
 	vmx->nested.vmxon = false;
 	vmx_clear_hlt(vcpu);
+
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) &&
+	    test_bit(INTEL_PMC_IDX_FIXED_VLBR, pmu->pmc_in_use) &&
+	    lbr_desc->event && guest_cpuid_has(vcpu, X86_FEATURE_LM)) {
+		u64 ctl = vmcs_read64(GUEST_IA32_LBR_CTL);
+
+		put_smstate(u64, smstate, 0x7f10, ctl);
+		vmcs_write64(GUEST_IA32_LBR_CTL, ctl & ~ARCH_LBR_CTL_LBREN);
+	}
+
 	return 0;
 }
 
 static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 {
+	struct lbr_desc *lbr_desc = vcpu_to_lbr_desc(vcpu);
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	int ret;
 
@@ -8004,6 +8019,17 @@ static int vmx_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 		vmx->nested.nested_run_pending = 1;
 		vmx->nested.smm.guest_mode = false;
 	}
+
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) &&
+	    guest_cpuid_has(vcpu, X86_FEATURE_LM)) {
+		u64 ctl = GET_SMSTATE(u64, smstate, 0x7f10);
+
+		vmcs_write64(GUEST_IA32_LBR_CTL, ctl | ARCH_LBR_CTL_LBREN);
+
+		if (intel_pmu_lbr_is_enabled(vcpu) && !lbr_desc->event)
+			intel_pmu_create_guest_lbr_event(vcpu);
+	}
+
 	return 0;
 }
 
-- 
2.27.0



* [PATCH 14/15] KVM: x86: Add Arch LBR data MSR access interface
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (12 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 13/15] KVM: x86/vmx: Flip Arch LBREn bit on guest state change Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-08-31 22:34 ` [PATCH 15/15] KVM: x86/cpuid: Advertise Arch LBR feature in CPUID Yang Weijiang
  2022-09-01 14:23 ` [PATCH 00/15] Introduce Architectural LBR for vPMU Sean Christopherson
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

Arch LBR MSRs are XSAVE-supported, but the PMU code manages them as an
"independent" XSAVE feature: during thread/process context switches, the
MSRs are saved/restored by perf_event_task_sched_{in|out} instead of by
the generic kernel FPU switch code, i.e., save_fpregs_to_fpstate() and
restore_fpregs_from_fpstate(). When the vCPU's guest/host FPU state is
swapped, the Arch LBR MSRs are retained, so they can be accessed
directly.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Message-Id: <20220517154100.29983-16-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/vmx/pmu_intel.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index e06de1f29fe7..006969bd00fe 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -414,6 +414,11 @@ static int intel_pmu_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			msr_info->data = vmcs_read64(GUEST_IA32_LBR_CTL);
 		}
 		return 0;
+	case MSR_ARCH_LBR_FROM_0 ... MSR_ARCH_LBR_FROM_0 + 31:
+	case MSR_ARCH_LBR_TO_0 ... MSR_ARCH_LBR_TO_0 + 31:
+	case MSR_ARCH_LBR_INFO_0 ... MSR_ARCH_LBR_INFO_0 + 31:
+		rdmsrl(msr_info->index, msr_info->data);
+		return 0;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
@@ -528,6 +533,11 @@ static int intel_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		    (data & ARCH_LBR_CTL_LBREN))
 			intel_pmu_create_guest_lbr_event(vcpu);
 		return 0;
+	case MSR_ARCH_LBR_FROM_0 ... MSR_ARCH_LBR_FROM_0 + 31:
+	case MSR_ARCH_LBR_TO_0 ... MSR_ARCH_LBR_TO_0 + 31:
+	case MSR_ARCH_LBR_INFO_0 ... MSR_ARCH_LBR_INFO_0 + 31:
+		wrmsrl(msr_info->index, msr_info->data);
+		return 0;
 	default:
 		if ((pmc = get_gp_pmc(pmu, msr, MSR_IA32_PERFCTR0)) ||
 		    (pmc = get_gp_pmc(pmu, msr, MSR_IA32_PMC0))) {
-- 
2.27.0



* [PATCH 15/15] KVM: x86/cpuid: Advertise Arch LBR feature in CPUID
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (13 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 14/15] KVM: x86: Add Arch LBR data MSR access interface Yang Weijiang
@ 2022-08-31 22:34 ` Yang Weijiang
  2022-09-01 14:23 ` [PATCH 00/15] Introduce Architectural LBR for vPMU Sean Christopherson
  15 siblings, 0 replies; 24+ messages in thread
From: Yang Weijiang @ 2022-08-31 22:34 UTC (permalink / raw)
  To: pbonzini, seanjc, kvm; +Cc: like.xu.linux, kan.liang, wei.w.wang, linux-kernel

Add the Arch LBR feature bit to the CPU cap-mask to expose the feature.
Only the maximum LBR depth is supported for the guest, and it is
consistent with the host's Arch LBR settings.

Co-developed-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Message-Id: <20220517154100.29983-17-weijiang.yang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/cpuid.c | 36 +++++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 9ca592e969e3..cf2a0b28c239 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -134,6 +134,19 @@ static int kvm_check_cpuid(struct kvm_vcpu *vcpu,
 		if (vaddr_bits != 48 && vaddr_bits != 57 && vaddr_bits != 0)
 			return -EINVAL;
 	}
+	if (kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR)) {
+		best = cpuid_entry2_find(entries, nent, 0x1c, 0);
+		if (best) {
+			unsigned int eax, ebx, ecx, edx;
+
+			/* Reject userspace CPUID if the depth differs from the host's. */
+			cpuid_count(0x1c, 0, &eax, &ebx, &ecx, &edx);
+
+			if ((eax & 0xff) &&
+			    (best->eax & 0xff) != BIT(fls(eax & 0xff) - 1))
+				return -EINVAL;
+		}
+	}
 
 	/*
 	 * Exposing dynamic xfeatures to the guest requires additional
@@ -631,7 +644,7 @@ void kvm_set_cpu_caps(void)
 		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
 		F(MD_CLEAR) | F(AVX512_VP2INTERSECT) | F(FSRM) |
 		F(SERIALIZE) | F(TSXLDTRK) | F(AVX512_FP16) |
-		F(AMX_TILE) | F(AMX_INT8) | F(AMX_BF16)
+		F(AMX_TILE) | F(AMX_INT8) | F(AMX_BF16) | F(ARCH_LBR)
 	);
 
 	/* TSC_ADJUST and ARCH_CAPABILITIES are emulated in software. */
@@ -1055,6 +1068,27 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 				goto out;
 		}
 		break;
+	/* Architectural LBR */
+	case 0x1c: {
+		u32 lbr_depth_mask = entry->eax & 0xff;
+
+		if (!lbr_depth_mask ||
+		    !kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR)) {
+			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
+			break;
+		}
+		/*
+		 * KVM only exposes the maximum supported depth, which is the
+		 * fixed value used on the host side.
+		 * KVM doesn't allow VMM userspace to adjust the LBR depth
+		 * because guest LBR emulation depends on the configuration of
+		 * the host LBR driver.
+		 */
+		lbr_depth_mask = BIT((fls(lbr_depth_mask) - 1));
+		entry->eax &= ~0xff;
+		entry->eax |= lbr_depth_mask;
+		break;
+	}
 	/* Intel AMX TILE */
 	case 0x1d:
 		if (!kvm_cpu_cap_has(X86_FEATURE_AMX_TILE)) {
-- 
2.27.0
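
To make the depth encoding concrete: in CPUID.(EAX=1CH, ECX=0):EAX[7:0],
bit n set means an LBR depth of 8 * (n + 1) records is supported, and the
hunk above keeps only the highest supported bit. A standalone sketch of
that computation (the host EAX value is a made-up example, and fls32()
stands in for the kernel's fls()):

#include <stdint.h>
#include <stdio.h>

/* fls()-style helper: 1-based index of the highest set bit, 0 if none. */
static int fls32(uint32_t x)
{
	int i = 0;

	while (x) {
		i++;
		x >>= 1;
	}
	return i;
}

int main(void)
{
	uint32_t eax = 0x0f;	/* example host value: depths 8/16/24/32 */
	uint32_t lbr_depth_mask = eax & 0xff;

	/* Keep only the highest depth bit, as in __do_cpuid_func() above. */
	lbr_depth_mask = 1u << (fls32(lbr_depth_mask) - 1);

	/* Prints "mask 0x08 -> depth 32". */
	printf("mask 0x%02x -> depth %d\n",
	       lbr_depth_mask, 8 * fls32(lbr_depth_mask));
	return 0;
}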



* Re: [PATCH 01/15] perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers
  2022-08-31 22:34 ` [PATCH 01/15] perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers Yang Weijiang
@ 2022-09-01 14:19   ` Sean Christopherson
  2022-09-02  3:05     ` Yang, Weijiang
  0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2022-09-01 14:19 UTC (permalink / raw)
  To: Yang Weijiang
  Cc: pbonzini, kvm, like.xu.linux, kan.liang, wei.w.wang, linux-kernel

On Wed, Aug 31, 2022, Yang Weijiang wrote:
> From: Like Xu <like.xu@linux.intel.com>
> 
> The x86_pmu.lbr_info is 0 unless explicitly initialized, so there's
> no point checking x86_pmu.intel_cap.lbr_format.
> 
> Cc: Peter Zijlstra <peterz@infradead.org>
> Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
> Reviewed-by: Andi Kleen <ak@linux.intel.com>
> Signed-off-by: Like Xu <like.xu@linux.intel.com>
> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
> Message-Id: <20220517154100.29983-3-weijiang.yang@intel.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---

No need to carry Paolo's SOB for patches that Paolo temporarily queued.  And please
delete the "Message-Id" entries as well.


* Re: [PATCH 07/15] KVM: VMX: Support passthrough of architectural LBRs
  2022-08-31 22:34 ` [PATCH 07/15] KVM: VMX: Support passthrough of architectural LBRs Yang Weijiang
@ 2022-09-01 14:20   ` Sean Christopherson
  2022-09-02  3:04     ` Yang, Weijiang
  0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2022-09-01 14:20 UTC (permalink / raw)
  To: Yang Weijiang
  Cc: pbonzini, kvm, like.xu.linux, kan.liang, wei.w.wang, linux-kernel

On Wed, Aug 31, 2022, Yang Weijiang wrote:
> From: Paolo Bonzini <pbonzini@redhat.com>
> 
> MSR_ARCH_LBR_* can be pointed to by records->from, records->to and
> records->info, so list them in is_valid_passthrough_msr.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

As the person sending the patches, these need your SOB.


* Re: [PATCH 00/15] Introduce Architectural LBR for vPMU
  2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
                   ` (14 preceding siblings ...)
  2022-08-31 22:34 ` [PATCH 15/15] KVM: x86/cpuid: Advertise Arch LBR feature in CPUID Yang Weijiang
@ 2022-09-01 14:23 ` Sean Christopherson
  2022-09-02  3:44   ` Yang, Weijiang
  15 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2022-09-01 14:23 UTC (permalink / raw)
  To: Yang Weijiang
  Cc: pbonzini, kvm, like.xu.linux, kan.liang, wei.w.wang, linux-kernel

On Wed, Aug 31, 2022, Yang Weijiang wrote:
> The old patch series was queued in KVM/queue for a while and finally
> moved to below branch after Paolo's refactor. This new patch set is 
> built on top of Paolo's work + some fixes, it's tested on legacy platform
> (non-ArchLBR) and SPR platform(ArchLBR capable).

...

> Changes in this version:
> 1. Fixed some minor issues in the refactored patch set.
> 2. Added a few minor fixes due to recent vPMU code cleanup.

Please elaborate on what was broken, i.e. why this was de-queued, as well as on
what was fixed and how.  That will help bring me up to speed and expedite review.


* Re: [PATCH 07/15] KVM: VMX: Support passthrough of architectural LBRs
  2022-09-01 14:20   ` Sean Christopherson
@ 2022-09-02  3:04     ` Yang, Weijiang
  0 siblings, 0 replies; 24+ messages in thread
From: Yang, Weijiang @ 2022-09-02  3:04 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: pbonzini, kvm, like.xu.linux, kan.liang, Wang, Wei W, linux-kernel


On 9/1/2022 10:20 PM, Sean Christopherson wrote:
> On Wed, Aug 31, 2022, Yang Weijiang wrote:
>> From: Paolo Bonzini <pbonzini@redhat.com>
>>
>> MSR_ARCH_LBR_* can be pointed to by records->from, records->to and
>> records->info, so list them in is_valid_passthrough_msr.
>>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> As the person sending the patches, these need your SOB.
Oops,  will add.


* Re: [PATCH 01/15] perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers
  2022-09-01 14:19   ` Sean Christopherson
@ 2022-09-02  3:05     ` Yang, Weijiang
  0 siblings, 0 replies; 24+ messages in thread
From: Yang, Weijiang @ 2022-09-02  3:05 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: pbonzini, kvm, like.xu.linux, kan.liang, Wang, Wei W, linux-kernel


On 9/1/2022 10:19 PM, Sean Christopherson wrote:
> On Wed, Aug 31, 2022, Yang Weijiang wrote:
>> From: Like Xu <like.xu@linux.intel.com>
>>
>> The x86_pmu.lbr_info is 0 unless explicitly initialized, so there's
>> no point checking x86_pmu.intel_cap.lbr_format.
>>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
>> Reviewed-by: Andi Kleen <ak@linux.intel.com>
>> Signed-off-by: Like Xu <like.xu@linux.intel.com>
>> Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
>> Message-Id: <20220517154100.29983-3-weijiang.yang@intel.com>
>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
> No need to carry Paolo's SOB for patches that Paolo temporarily queued.  And please
> delete the "Message-Id" entries as well.
Sure, will remove them, thanks!


* Re: [PATCH 00/15] Introduce Architectural LBR for vPMU
  2022-09-01 14:23 ` [PATCH 00/15] Introduce Architectural LBR for vPMU Sean Christopherson
@ 2022-09-02  3:44   ` Yang, Weijiang
  2022-10-21  2:14     ` Yang, Weijiang
  0 siblings, 1 reply; 24+ messages in thread
From: Yang, Weijiang @ 2022-09-02  3:44 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: pbonzini, kvm, like.xu.linux, kan.liang, Wang, Wei W, linux-kernel


On 9/1/2022 10:23 PM, Sean Christopherson wrote:
> On Wed, Aug 31, 2022, Yang Weijiang wrote:
>> The old patch series was queued in KVM/queue for a while and finally
>> moved to below branch after Paolo's refactor. This new patch set is
>> built on top of Paolo's work + some fixes, it's tested on legacy platform
> Please elaborate on what was broken, i.e. why this was de-queued, as well as on
> what was fixed and how.  That will help bring me up to speed and expedite review.
Thanks Sean!
The reason for the de-queue, as I read it on the list, is that the PEBS
and Arch LBR patches broke the KUT/selftests due to host-initiated
writes of 0 to PMU MSRs. Paolo tried to fix it, but you didn't agree
with the solution. Plus your comments below:


On 6/1/2022 4:54 PM, Paolo Bonzini wrote:
 > On 5/31/22 20:37, Sean Christopherson wrote:
 >> Can we just punt this out of kvm/queue until it's been properly reviewed?
 > Yes, I agree.  I have started making some changes and pushed the
 > result to kvm/arch-lbr-for-weijiang.

What is fixed in this series:

1.  A missing "- 1" in the depth check (see the sketch after this list):
    if ((entry->eax & 0xff) != (1 << (depth_bit - 1)))

2.  Removed the exit bit check in cpu_has_vmx_arch_lbr() and moved it to
    setup_vmcs_config().

3.  Dropped a redundant kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) check in
    kvm_check_cpuid().

4.  KUT/selftest failures caused by the lack of MSR_ARCH_LBR_CTL and
    MSR_ARCH_LBR_DEPTH handling in kvm_set_msr_common() before the PMU
    MSRs are validated.

5.  A call trace in L1 when L1 tried vmcs_write64(GUEST_IA32_LBR_CTL, 0)
    in vmx_vcpu_reset(); it now uses cpu_has_vmx_arch_lbr() instead.

6.  Removed VM_ENTRY_LOAD_IA32_LBR_CTL and VM_EXIT_CLEAR_IA32_LBR_CTL
    from exec_control in the nested case.
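
A quick standalone illustration of the off-by-one behind item 1 (the
values are made up for the example, and fls32() stands in for the
kernel's fls()):

#include <stdint.h>
#include <stdio.h>

/* fls()-style helper: 1-based index of the highest set bit, 0 if none. */
static int fls32(uint32_t x)
{
	int i = 0;

	while (x) {
		i++;
		x >>= 1;
	}
	return i;
}

int main(void)
{
	uint32_t host_eax = 0x08;	/* host CPUID.1Ch EAX[7:0]: depth-32 bit */
	uint32_t guest_eax = 0x08;	/* userspace asks for the same depth */
	int depth_bit = fls32(host_eax & 0xff);	/* 4 */

	/* Without the "- 1", 1 << 4 == 0x10 and a matching depth is rejected. */
	printf("no -1:   %s\n",
	       (guest_eax & 0xff) != (1u << depth_bit) ? "reject" : "accept");
	/* With the "- 1", 1 << 3 == 0x08 matches the host's depth bit. */
	printf("with -1: %s\n",
	       (guest_eax & 0xff) != (1u << (depth_bit - 1)) ? "reject" : "accept");
	return 0;
}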



* Re: [PATCH 00/15] Introduce Architectural LBR for vPMU
  2022-09-02  3:44   ` Yang, Weijiang
@ 2022-10-21  2:14     ` Yang, Weijiang
  2022-10-30  6:06       ` Yang, Weijiang
  0 siblings, 1 reply; 24+ messages in thread
From: Yang, Weijiang @ 2022-10-21  2:14 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: pbonzini, kvm, like.xu.linux, kan.liang, Wang, Wei W, linux-kernel


On 9/2/2022 11:44 AM, Yang, Weijiang wrote:
>
> On 9/1/2022 10:23 PM, Sean Christopherson wrote:
>> On Wed, Aug 31, 2022, Yang Weijiang wrote:
>>> The old patch series was queued in KVM/queue for a while and finally
>>> moved to below branch after Paolo's refactor. This new patch set is
>>> built on top of Paolo's work + some fixes, it's tested on legacy 
>>> platform
>> Please elaborate on what was broken, i.e. why this was de-queued, as 
>> well as on
>> what was fixed and how.  That will help bring me up to speed and
>> expedite review.
> Thanks Sean!
> The reason for the de-queue, as I read it on the list, is that the PEBS
> and Arch LBR patches broke the KUT/selftests due to host-initiated
> writes of 0 to PMU MSRs. Paolo tried to fix it, but you didn't agree
> with the solution. Plus your comments below:
>
>
> On 6/1/2022 4:54 PM, Paolo Bonzini wrote:
> > On 5/31/22 20:37, Sean Christopherson wrote:
> >> Can we just punt this out of kvm/queue until it's been properly
> >> reviewed?
> > Yes, I agree.  I have started making some changes and pushed the
> > result to kvm/arch-lbr-for-weijiang.
>
> What is fixed in this series:
>
> 1.  A missing "- 1" in the depth check:
>     if ((entry->eax & 0xff) != (1 << (depth_bit - 1)))
>
> 2.  Removed the exit bit check in cpu_has_vmx_arch_lbr() and moved it to
>     setup_vmcs_config().
>
> 3.  Dropped a redundant kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) check in
>     kvm_check_cpuid().
>
> 4.  KUT/selftest failures caused by the lack of MSR_ARCH_LBR_CTL and
>     MSR_ARCH_LBR_DEPTH handling in kvm_set_msr_common() before the PMU
>     MSRs are validated.
>
> 5.  A call trace in L1 when L1 tried vmcs_write64(GUEST_IA32_LBR_CTL, 0)
>     in vmx_vcpu_reset(); it now uses cpu_has_vmx_arch_lbr() instead.
>
> 6.  Removed VM_ENTRY_LOAD_IA32_LBR_CTL and VM_EXIT_CLEAR_IA32_LBR_CTL
>     from exec_control in the nested case.
>
Hi, Sean,

Could you kindly review this post and give some comments on the series 
so that I can prepare the next version?

Thanks!



* Re: [PATCH 00/15] Introduce Architectural LBR for vPMU
  2022-10-21  2:14     ` Yang, Weijiang
@ 2022-10-30  6:06       ` Yang, Weijiang
  0 siblings, 0 replies; 24+ messages in thread
From: Yang, Weijiang @ 2022-10-30  6:06 UTC (permalink / raw)
  To: Christopherson, Sean
  Cc: pbonzini, kvm, like.xu.linux, kan.liang, Wang, Wei W, linux-kernel


On 10/21/2022 10:14 AM, Yang, Weijiang wrote:
> On 9/2/2022 11:44 AM, Yang, Weijiang wrote:
>> On 9/1/2022 10:23 PM, Sean Christopherson wrote:
>>> On Wed, Aug 31, 2022, Yang Weijiang wrote:
>>>> The old patch series was queued in KVM/queue for a while and finally
>>>> moved to below branch after Paolo's refactor. This new patch set is
>>>> built on top of Paolo's work + some fixes, it's tested on legacy
>>>> platform
[...]
>> What is fixed in this series:
>>
>> 1.  A missing "- 1" in the depth check:
>>     if ((entry->eax & 0xff) != (1 << (depth_bit - 1)))
>>
>> 2.  Removed the exit bit check in cpu_has_vmx_arch_lbr() and moved it to
>>     setup_vmcs_config().
>>
>> 3.  Dropped a redundant kvm_cpu_cap_has(X86_FEATURE_ARCH_LBR) check in
>>     kvm_check_cpuid().
>>
>> 4.  KUT/selftest failures caused by the lack of MSR_ARCH_LBR_CTL and
>>     MSR_ARCH_LBR_DEPTH handling in kvm_set_msr_common() before the PMU
>>     MSRs are validated.
>>
>> 5.  A call trace in L1 when L1 tried vmcs_write64(GUEST_IA32_LBR_CTL, 0)
>>     in vmx_vcpu_reset(); it now uses cpu_has_vmx_arch_lbr() instead.
>>
>> 6.  Removed VM_ENTRY_LOAD_IA32_LBR_CTL and VM_EXIT_CLEAR_IA32_LBR_CTL
>>     from exec_control in the nested case.
>>
> Hi, Sean,
>
> Could you kindly review this post and give some comments on the series
> so that I can prepare the next version?
>
> Thanks!

Ping...

In case you missed the previous email...


>


end of thread

Thread overview: 24+ messages
2022-08-31 22:34 [PATCH 00/15] Introduce Architectural LBR for vPMU Yang Weijiang
2022-08-31 22:34 ` [PATCH 01/15] perf/x86/lbr: Simplify the exposure check for the LBR_INFO registers Yang Weijiang
2022-09-01 14:19   ` Sean Christopherson
2022-09-02  3:05     ` Yang, Weijiang
2022-08-31 22:34 ` [PATCH 02/15] KVM: x86: Report XSS as an MSR to be saved if there are supported features Yang Weijiang
2022-08-31 22:34 ` [PATCH 03/15] KVM: x86: Refresh CPUID on writes to MSR_IA32_XSS Yang Weijiang
2022-08-31 22:34 ` [PATCH 04/15] KVM: PMU: disable LBR handling if architectural LBR is available Yang Weijiang
2022-08-31 22:34 ` [PATCH 05/15] KVM: vmx/pmu: Emulate MSR_ARCH_LBR_DEPTH for guest Arch LBR Yang Weijiang
2022-08-31 22:34 ` [PATCH 06/15] KVM: vmx/pmu: Emulate MSR_ARCH_LBR_CTL " Yang Weijiang
2022-08-31 22:34 ` [PATCH 07/15] KVM: VMX: Support passthrough of architectural LBRs Yang Weijiang
2022-09-01 14:20   ` Sean Christopherson
2022-09-02  3:04     ` Yang, Weijiang
2022-08-31 22:34 ` [PATCH 08/15] KVM: x86: Add Arch LBR MSRs to msrs_to_save_all list Yang Weijiang
2022-08-31 22:34 ` [PATCH 09/15] KVM: x86: Refine the matching and clearing logic for supported_xss Yang Weijiang
2022-08-31 22:34 ` [PATCH 10/15] KVM: x86/vmx: Check Arch LBR config when return perf capabilities Yang Weijiang
2022-08-31 22:34 ` [PATCH 11/15] KVM: x86: Add XSAVE Support for Architectural LBR Yang Weijiang
2022-08-31 22:34 ` [PATCH 12/15] KVM: x86/vmx: Clear Arch LBREn bit before inject #DB to guest Yang Weijiang
2022-08-31 22:34 ` [PATCH 13/15] KVM: x86/vmx: Flip Arch LBREn bit on guest state change Yang Weijiang
2022-08-31 22:34 ` [PATCH 14/15] KVM: x86: Add Arch LBR data MSR access interface Yang Weijiang
2022-08-31 22:34 ` [PATCH 15/15] KVM: x86/cpuid: Advertise Arch LBR feature in CPUID Yang Weijiang
2022-09-01 14:23 ` [PATCH 00/15] Introduce Architectural LBR for vPMU Sean Christopherson
2022-09-02  3:44   ` Yang, Weijiang
2022-10-21  2:14     ` Yang, Weijiang
2022-10-30  6:06       ` Yang, Weijiang
