* [PATCH v2 0/7] KVM: x86: Intel PERF_CAPABILITIES fixes and cleanups
@ 2022-08-03 19:26 Sean Christopherson
  2022-08-03 19:26 ` [PATCH v2 1/7] KVM: x86: Refresh PMU after writes to MSR_IA32_PERF_CAPABILITIES Sean Christopherson
                   ` (6 more replies)
  0 siblings, 7 replies; 12+ messages in thread
From: Sean Christopherson @ 2022-08-03 19:26 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Sean Christopherson, Paolo Bonzini
  Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	linux-perf-users, linux-kernel, kvm, Like Xu

Bug fixes and cleanups related to KVM's handling of PERF_CAPABILITIES.

Bug #1 - Refresh KVM's vPMU after userspace writes PERF_CAPABILITIES, and
then leverage that fix to avoid checking guest_cpuid_has(X86_FEATURE_PDCM)
during VM-Enter, which is slow enough that it shows up in perf traces[*].

Bug #2 - Don't advertise PMU_CAP_LBR_FMT to userspace if perf has disabled
LBRs, e.g. because probing one or more LBR MSRs during setup hit a #GP.

The non-KVM patches remove unnecessary stubs and unreachable error paths,
which allows for a cleaner fix for bug #2.

[*] https://gist.github.com/avagin/f50a6d569440c9ae382281448c187f4e

v2:
 - Add patches to fix bug #2. [Like]
 - Add a patch to clean up the capability check.
 - Tweak the changelog for the PMU refresh bug fix to call out that
   KVM should disallow changing feature MSRs after KVM_RUN. [Like]

v1: https://lore.kernel.org/all/20220727233424.2968356-1-seanjc@google.com

Sean Christopherson (7):
  KVM: x86: Refresh PMU after writes to MSR_IA32_PERF_CAPABILITIES
  perf/x86/core: Remove unnecessary stubs provided for KVM-only helpers
  perf/x86/core: Drop the unnecessary return value from
    x86_perf_get_lbr()
  KVM: VMX: Advertise PMU LBRs if and only if perf supports LBRs
  KVM: VMX: Use proper type-safe functions for vCPU => LBRs helpers
  KVM: VMX: Adjust number of LBR records for PERF_CAPABILITIES at
    refresh
  KVM: VMX: Simplify capability check when handling PERF_CAPABILITIES
    write

 arch/x86/events/intel/lbr.c       |  6 +---
 arch/x86/include/asm/perf_event.h | 55 ++++++++-----------------------
 arch/x86/kvm/vmx/capabilities.h   |  5 ++-
 arch/x86/kvm/vmx/pmu_intel.c      | 12 ++-----
 arch/x86/kvm/vmx/vmx.c            |  6 ++--
 arch/x86/kvm/vmx/vmx.h            | 53 +++++++++++++++++------------
 arch/x86/kvm/x86.c                |  4 +--
 7 files changed, 58 insertions(+), 83 deletions(-)


base-commit: 93472b79715378a2386598d6632c654a2223267b
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v2 1/7] KVM: x86: Refresh PMU after writes to MSR_IA32_PERF_CAPABILITIES
  2022-08-03 19:26 [PATCH v2 0/7] KVM: x86: Intel PERF_CAPABILITIES fixes and cleanups Sean Christopherson
@ 2022-08-03 19:26 ` Sean Christopherson
  2022-08-03 19:26 ` [PATCH v2 2/7] perf/x86/core: Remove unnecessary stubs provided for KVM-only helpers Sean Christopherson
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2022-08-03 19:26 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Sean Christopherson, Paolo Bonzini
  Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	linux-perf-users, linux-kernel, kvm, Like Xu

Refresh the PMU if userspace modifies MSR_IA32_PERF_CAPABILITIES.  KVM
consumes the vCPU's PERF_CAPABILITIES when enumerating PEBS support, but
relies on CPUID updates to refresh the PMU.  I.e. KVM will do the wrong
thing if userspace stuffs PERF_CAPABILITIES _after_ setting guest CPUID.
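
For illustration, a minimal sketch of the problematic ordering from the
userspace side (vCPU setup is assumed and the written value is hypothetical;
this is not a working reproducer):

	/* KVM refreshes the vPMU in response to KVM_SET_CPUID2... */
	ioctl(vcpu_fd, KVM_SET_CPUID2, &cpuid2);

	/*
	 * ...but, without this patch, not in response to a later write to
	 * PERF_CAPABILITIES, so PEBS enumeration is left based on the stale
	 * value.
	 */
	struct {
		struct kvm_msrs hdr;
		struct kvm_msr_entry entry;
	} msrs = {
		.hdr.nmsrs   = 1,
		.entry.index = MSR_IA32_PERF_CAPABILITIES,
		.entry.data  = PERF_CAP_PEBS_FORMAT,	/* hypothetical */
	};
	ioctl(vcpu_fd, KVM_SET_MSRS, &msrs);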

Note, KVM may do the "wrong" thing if userspace changes PERF_CAPABILITIES
after running the vCPU, i.e. after KVM_RUN.  Similar to disallowing CPUID
changes after KVM_RUN, KVM should also disallow changing feature MSRs
after KVM_RUN to prevent unexpected behavior.  That problem will be
addressed separately as it affects MSRs other than PERF_CAPABILITIES.

Opportunistically fix the indentation of the closing curly brace.

Fixes: c59a1f106f5c ("KVM: x86/pmu: Add IA32_PEBS_ENABLE MSR emulation for extended PEBS")
Cc: Like Xu <like.xu.linux@gmail.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/x86.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 33560bfa0cac..dc19298e7150 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3546,9 +3546,9 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 
 		vcpu->arch.perf_capabilities = data;
-
+		kvm_pmu_refresh(vcpu);
 		return 0;
-		}
+	}
 	case MSR_EFER:
 		return set_efer(vcpu, msr_info);
 	case MSR_K7_HWCR:
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v2 2/7] perf/x86/core: Remove unnecessary stubs provided for KVM-only helpers
  2022-08-03 19:26 [PATCH v2 0/7] KVM: x86: Intel PERF_CAPABILITIES fixes and cleanups Sean Christopherson
  2022-08-03 19:26 ` [PATCH v2 1/7] KVM: x86: Refresh PMU after writes to MSR_IA32_PERF_CAPABILITIES Sean Christopherson
@ 2022-08-03 19:26 ` Sean Christopherson
  2022-08-04  8:49   ` Like Xu
  2022-08-03 19:26 ` [PATCH v2 3/7] perf/x86/core: Drop the unnecessary return value from x86_perf_get_lbr() Sean Christopherson
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 12+ messages in thread
From: Sean Christopherson @ 2022-08-03 19:26 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Sean Christopherson, Paolo Bonzini
  Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	linux-perf-users, linux-kernel, kvm, Like Xu

Remove CONFIG_PERF_EVENTS=n stubs for functions that are effectively
KVM-only.  KVM selects PERF_EVENTS and will never consume the stubs.
Dropping the unnecessary stubs will allow simplifying x86_perf_get_lbr()
by getting rid of the impossible-to-hit error path (which KVM doesn't
even check).

Opportunistically reorganize the declarations to collapse multiple
CONFIG_PERF_EVENTS #ifdefs.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/include/asm/perf_event.h | 53 ++++++++-----------------------
 1 file changed, 13 insertions(+), 40 deletions(-)

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index cc47044401ff..aba196172500 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -509,46 +509,18 @@ extern u64 perf_get_hw_event_config(int hw_event);
 extern void perf_check_microcode(void);
 extern void perf_clear_dirty_counters(void);
 extern int x86_perf_rdpmc_index(struct perf_event *event);
-#else
-static inline void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
-{
-	memset(cap, 0, sizeof(*cap));
-}
 
-static inline u64 perf_get_hw_event_config(int hw_event)
-{
-	return 0;
-}
-
-static inline void perf_events_lapic_init(void)	{ }
-static inline void perf_check_microcode(void) { }
-#endif
-
-#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
+#ifdef CONFIG_CPU_SUP_INTEL
 extern struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr, void *data);
 extern int x86_perf_get_lbr(struct x86_pmu_lbr *lbr);
-#else
-struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr, void *data);
-static inline int x86_perf_get_lbr(struct x86_pmu_lbr *lbr)
-{
-	return -1;
-}
-#endif
+extern void intel_pt_handle_vmx(int on);
+#endif /* CONFIG_CPU_SUP_INTEL */
 
-#ifdef CONFIG_CPU_SUP_INTEL
- extern void intel_pt_handle_vmx(int on);
-#else
-static inline void intel_pt_handle_vmx(int on)
-{
+#ifdef CONFIG_CPU_SUP_AMD
+extern void amd_pmu_enable_virt(void);
+extern void amd_pmu_disable_virt(void);
 
-}
-#endif
-
-#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD)
- extern void amd_pmu_enable_virt(void);
- extern void amd_pmu_disable_virt(void);
-
-#if defined(CONFIG_PERF_EVENTS_AMD_BRS)
+#ifdef CONFIG_PERF_EVENTS_AMD_BRS
 
 #define PERF_NEEDS_LOPWR_CB 1
 
@@ -566,12 +538,13 @@ static inline void perf_lopwr_cb(bool lopwr_in)
 	static_call_mod(perf_lopwr_cb)(lopwr_in);
 }
 
-#endif /* PERF_NEEDS_LOPWR_CB */
+#endif /* CONFIG_PERF_EVENTS_AMD_BRS */
+#endif /* CONFIG_CPU_SUP_AMD */
 
-#else
- static inline void amd_pmu_enable_virt(void) { }
- static inline void amd_pmu_disable_virt(void) { }
-#endif
+#else  /* !CONFIG_PERF_EVENTS */
+static inline void perf_events_lapic_init(void)	{ }
+static inline void perf_check_microcode(void) { }
+#endif /* CONFIG_PERF_EVENTS */
 
 #define arch_perf_out_copy_user copy_from_user_nmi
 
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v2 3/7] perf/x86/core: Drop the unnecessary return value from x86_perf_get_lbr()
  2022-08-03 19:26 [PATCH v2 0/7] KVM: x86: Intel PERF_CAPABILITIES fixes and cleanups Sean Christopherson
  2022-08-03 19:26 ` [PATCH v2 1/7] KVM: x86: Refresh PMU after writes to MSR_IA32_PERF_CAPABILITIES Sean Christopherson
  2022-08-03 19:26 ` [PATCH v2 2/7] perf/x86/core: Remove unnecessary stubs provided for KVM-only helpers Sean Christopherson
@ 2022-08-03 19:26 ` Sean Christopherson
  2022-08-03 19:26 ` [PATCH v2 4/7] KVM: VMX: Advertise PMU LBRs if and only if perf supports LBRs Sean Christopherson
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2022-08-03 19:26 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Sean Christopherson, Paolo Bonzini
  Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	linux-perf-users, linux-kernel, kvm, Like Xu

Drop the return value from x86_perf_get_lbr() now that there's no stub,
i.e. now that success is guaranteed (which is a bit of a lie since
success was always guaranteed, it's just more obvious now).

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/events/intel/lbr.c       | 6 +-----
 arch/x86/include/asm/perf_event.h | 2 +-
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
index 4f70fb6c2c1e..b8ad31c52cf0 100644
--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -1868,10 +1868,8 @@ void __init intel_pmu_arch_lbr_init(void)
  * x86_perf_get_lbr - get the LBR records information
  *
  * @lbr: the caller's memory to store the LBR records information
- *
- * Returns: 0 indicates the LBR info has been successfully obtained
  */
-int x86_perf_get_lbr(struct x86_pmu_lbr *lbr)
+void x86_perf_get_lbr(struct x86_pmu_lbr *lbr)
 {
 	int lbr_fmt = x86_pmu.intel_cap.lbr_format;
 
@@ -1879,8 +1877,6 @@ int x86_perf_get_lbr(struct x86_pmu_lbr *lbr)
 	lbr->from = x86_pmu.lbr_from;
 	lbr->to = x86_pmu.lbr_to;
 	lbr->info = (lbr_fmt == LBR_FORMAT_INFO) ? x86_pmu.lbr_info : 0;
-
-	return 0;
 }
 EXPORT_SYMBOL_GPL(x86_perf_get_lbr);
 
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index aba196172500..102fd3ad4605 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -512,7 +512,7 @@ extern int x86_perf_rdpmc_index(struct perf_event *event);
 
 #ifdef CONFIG_CPU_SUP_INTEL
 extern struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr, void *data);
-extern int x86_perf_get_lbr(struct x86_pmu_lbr *lbr);
+extern void x86_perf_get_lbr(struct x86_pmu_lbr *lbr);
 extern void intel_pt_handle_vmx(int on);
 #endif /* CONFIG_CPU_SUP_INTEL */
 
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v2 4/7] KVM: VMX: Advertise PMU LBRs if and only if perf supports LBRs
  2022-08-03 19:26 [PATCH v2 0/7] KVM: x86: Intel PERF_CAPABILITIES fixes and cleanups Sean Christopherson
                   ` (2 preceding siblings ...)
  2022-08-03 19:26 ` [PATCH v2 3/7] perf/x86/core: Drop the unnecessary return value from x86_perf_get_lbr() Sean Christopherson
@ 2022-08-03 19:26 ` Sean Christopherson
  2022-08-03 19:26 ` [PATCH v2 5/7] KVM: VMX: Use proper type-safe functions for vCPU => LBRs helpers Sean Christopherson
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2022-08-03 19:26 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Sean Christopherson, Paolo Bonzini
  Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	linux-perf-users, linux-kernel, kvm, Like Xu

Advertise LBR support to userspace via MSR_IA32_PERF_CAPABILITIES if and
only if perf fully supports LBRs.  Perf may disable LBRs (by zeroing the
number of LBRs) even on platforms that allegedly support LBRs, e.g. if
probing any LBR MSRs during setup fails.

Fixes: be635e34c284 ("KVM: vmx/pmu: Expose LBR_FMT in the MSR_IA32_PERF_CAPABILITIES")
Reported-by: Like Xu <like.xu.linux@gmail.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/capabilities.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/capabilities.h b/arch/x86/kvm/vmx/capabilities.h
index c5e5dfef69c7..d2fdaf888d33 100644
--- a/arch/x86/kvm/vmx/capabilities.h
+++ b/arch/x86/kvm/vmx/capabilities.h
@@ -404,6 +404,7 @@ static inline bool vmx_pebs_supported(void)
 static inline u64 vmx_get_perf_capabilities(void)
 {
 	u64 perf_cap = PMU_CAP_FW_WRITES;
+	struct x86_pmu_lbr lbr;
 	u64 host_perf_cap = 0;
 
 	if (!enable_pmu)
@@ -412,7 +413,9 @@ static inline u64 vmx_get_perf_capabilities(void)
 	if (boot_cpu_has(X86_FEATURE_PDCM))
 		rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap);
 
-	perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
+	x86_perf_get_lbr(&lbr);
+	if (lbr.nr)
+		perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT;
 
 	if (vmx_pebs_supported()) {
 		perf_cap |= host_perf_cap & PERF_CAP_PEBS_MASK;
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v2 5/7] KVM: VMX: Use proper type-safe functions for vCPU => LBRs helpers
  2022-08-03 19:26 [PATCH v2 0/7] KVM: x86: Intel PERF_CAPABILITIES fixes and cleanups Sean Christopherson
                   ` (3 preceding siblings ...)
  2022-08-03 19:26 ` [PATCH v2 4/7] KVM: VMX: Advertise PMU LBRs if and only if perf supports LBRs Sean Christopherson
@ 2022-08-03 19:26 ` Sean Christopherson
  2022-08-03 19:26 ` [PATCH v2 6/7] KVM: VMX: Adjust number of LBR records for PERF_CAPABILITIES at refresh Sean Christopherson
  2022-08-03 19:26 ` [PATCH v2 7/7] KVM: VMX: Simplify capability check when handling PERF_CAPABILITIES write Sean Christopherson
  6 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2022-08-03 19:26 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Sean Christopherson, Paolo Bonzini
  Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	linux-perf-users, linux-kernel, kvm, Like Xu

Turn vcpu_to_lbr_desc() and vcpu_to_lbr_records() into functions in order
to provide type safety, to document exactly what they return, and to
allow consuming the helpers in vmx.h.  Move the definitions as necessary
(the macros "reference" to_vmx() before its definition).

Opportunistically move the other PMU definitions/declarations to keep the
PMU stuff bundled together.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/vmx.h | 50 ++++++++++++++++++++++++------------------
 1 file changed, 29 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index fb8e3480a9d7..35b39dab175d 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -6,6 +6,7 @@
 
 #include <asm/kvm.h>
 #include <asm/intel_pt.h>
+#include <asm/perf_event.h>
 
 #include "capabilities.h"
 #include "../kvm_cache_regs.h"
@@ -92,27 +93,6 @@ union vmx_exit_reason {
 	u32 full;
 };
 
-static inline bool intel_pmu_has_perf_global_ctrl(struct kvm_pmu *pmu)
-{
-	/*
-	 * Architecturally, Intel's SDM states that IA32_PERF_GLOBAL_CTRL is
-	 * supported if "CPUID.0AH: EAX[7:0] > 0", i.e. if the PMU version is
-	 * greater than zero.  However, KVM only exposes and emulates the MSR
-	 * to/for the guest if the guest PMU supports at least "Architectural
-	 * Performance Monitoring Version 2".
-	 */
-	return pmu->version > 1;
-}
-
-#define vcpu_to_lbr_desc(vcpu) (&to_vmx(vcpu)->lbr_desc)
-#define vcpu_to_lbr_records(vcpu) (&to_vmx(vcpu)->lbr_desc.records)
-
-void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu);
-bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu);
-
-int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu);
-void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu);
-
 struct lbr_desc {
 	/* Basic info about guest LBR records. */
 	struct x86_pmu_lbr records;
@@ -542,6 +522,34 @@ static inline struct vcpu_vmx *to_vmx(struct kvm_vcpu *vcpu)
 	return container_of(vcpu, struct vcpu_vmx, vcpu);
 }
 
+static inline struct lbr_desc *vcpu_to_lbr_desc(struct kvm_vcpu *vcpu)
+{
+	return &to_vmx(vcpu)->lbr_desc;
+}
+
+static inline struct x86_pmu_lbr *vcpu_to_lbr_records(struct kvm_vcpu *vcpu)
+{
+	return &vcpu_to_lbr_desc(vcpu)->records;
+}
+
+static inline bool intel_pmu_has_perf_global_ctrl(struct kvm_pmu *pmu)
+{
+	/*
+	 * Architecturally, Intel's SDM states that IA32_PERF_GLOBAL_CTRL is
+	 * supported if "CPUID.0AH: EAX[7:0] > 0", i.e. if the PMU version is
+	 * greater than zero.  However, KVM only exposes and emulates the MSR
+	 * to/for the guest if the guest PMU supports at least "Architectural
+	 * Performance Monitoring Version 2".
+	 */
+	return pmu->version > 1;
+}
+
+void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu);
+bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu);
+
+int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu);
+void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu);
+
 static inline unsigned long vmx_get_exit_qual(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v2 6/7] KVM: VMX: Adjust number of LBR records for PERF_CAPABILITIES at refresh
  2022-08-03 19:26 [PATCH v2 0/7] KVM: x86: Intel PERF_CAPABILITIES fixes and cleanups Sean Christopherson
                   ` (4 preceding siblings ...)
  2022-08-03 19:26 ` [PATCH v2 5/7] KVM: VMX: Use proper type-safe functions for vCPU => LBRs helpers Sean Christopherson
@ 2022-08-03 19:26 ` Sean Christopherson
  2022-08-03 19:26 ` [PATCH v2 7/7] KVM: VMX: Simplify capability check when handling PERF_CAPABILITIES write Sean Christopherson
  6 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2022-08-03 19:26 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Sean Christopherson, Paolo Bonzini
  Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	linux-perf-users, linux-kernel, kvm, Like Xu

Now that the PMU is refreshed when MSR_IA32_PERF_CAPABILITIES is written
by host userspace, zero out the number of LBR records for a vCPU during
PMU refresh if PMU_CAP_LBR_FMT is not set in PERF_CAPABILITIES instead of
handling the check at run-time.

guest_cpuid_has() is expensive due to the linear search of guest CPUID
entries, intel_pmu_lbr_is_enabled() is checked on every VM-Enter, _and_
simply enumerating the same "Model" as the host causes KVM to set the
number of LBR records to a non-zero value.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/pmu_intel.c | 12 +++---------
 arch/x86/kvm/vmx/vmx.h       |  7 +++++--
 2 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index 862c1a4d971b..c399637a3a79 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -171,13 +171,6 @@ static inline struct kvm_pmc *get_fw_gp_pmc(struct kvm_pmu *pmu, u32 msr)
 	return get_gp_pmc(pmu, msr, MSR_IA32_PMC0);
 }
 
-bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu)
-{
-	struct x86_pmu_lbr *lbr = vcpu_to_lbr_records(vcpu);
-
-	return lbr->nr && (vcpu_get_perf_capabilities(vcpu) & PMU_CAP_LBR_FMT);
-}
-
 static bool intel_pmu_is_valid_lbr_msr(struct kvm_vcpu *vcpu, u32 index)
 {
 	struct x86_pmu_lbr *records = vcpu_to_lbr_records(vcpu);
@@ -592,7 +585,9 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	bitmap_set(pmu->all_valid_pmc_idx,
 		INTEL_PMC_MAX_GENERIC, pmu->nr_arch_fixed_counters);
 
-	if (cpuid_model_is_consistent(vcpu))
+	perf_capabilities = vcpu_get_perf_capabilities(vcpu);
+	if (cpuid_model_is_consistent(vcpu) &&
+	    (perf_capabilities & PMU_CAP_LBR_FMT))
 		x86_perf_get_lbr(&lbr_desc->records);
 	else
 		lbr_desc->records.nr = 0;
@@ -600,7 +595,6 @@ static void intel_pmu_refresh(struct kvm_vcpu *vcpu)
 	if (lbr_desc->records.nr)
 		bitmap_set(pmu->all_valid_pmc_idx, INTEL_PMC_IDX_FIXED_VLBR, 1);
 
-	perf_capabilities = vcpu_get_perf_capabilities(vcpu);
 	if (perf_capabilities & PERF_CAP_PEBS_FORMAT) {
 		if (perf_capabilities & PERF_CAP_PEBS_BASELINE) {
 			pmu->pebs_enable_mask = counter_mask;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index 35b39dab175d..413702dc1315 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -532,6 +532,11 @@ static inline struct x86_pmu_lbr *vcpu_to_lbr_records(struct kvm_vcpu *vcpu)
 	return &vcpu_to_lbr_desc(vcpu)->records;
 }
 
+static inline bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu)
+{
+	return !!vcpu_to_lbr_records(vcpu)->nr;
+}
+
 static inline bool intel_pmu_has_perf_global_ctrl(struct kvm_pmu *pmu)
 {
 	/*
@@ -545,8 +550,6 @@ static inline bool intel_pmu_has_perf_global_ctrl(struct kvm_pmu *pmu)
 }
 
 void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu);
-bool intel_pmu_lbr_is_enabled(struct kvm_vcpu *vcpu);
-
 int intel_pmu_create_guest_lbr_event(struct kvm_vcpu *vcpu);
 void vmx_passthrough_lbr_msrs(struct kvm_vcpu *vcpu);
 
-- 
2.37.1.559.g78731f0fdb-goog



* [PATCH v2 7/7] KVM: VMX: Simplify capability check when handling PERF_CAPABILITIES write
  2022-08-03 19:26 [PATCH v2 0/7] KVM: x86: Intel PERF_CAPABILITIES fixes and cleanups Sean Christopherson
                   ` (5 preceding siblings ...)
  2022-08-03 19:26 ` [PATCH v2 6/7] KVM: VMX: Adjust number of LBR records for PERF_CAPABILITIES at refresh Sean Christopherson
@ 2022-08-03 19:26 ` Sean Christopherson
  2022-08-04  7:52   ` Like Xu
  6 siblings, 1 reply; 12+ messages in thread
From: Sean Christopherson @ 2022-08-03 19:26 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Sean Christopherson, Paolo Bonzini
  Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	linux-perf-users, linux-kernel, kvm, Like Xu

Explicitly check for the absence of host support for LBRs or PEBS when
userspace attempts to enable said features by writing PERF_CAPABILITIES.
Comparing host support against the incoming value is unnecessary and
weird since the checks are buried inside an if-statement that verifies
userspace wants to enable the feature.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/vmx/vmx.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d7f8331d6f7e..0ada0ee234b7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2323,15 +2323,13 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		if (data && !vcpu_to_pmu(vcpu)->version)
 			return 1;
 		if (data & PMU_CAP_LBR_FMT) {
-			if ((data & PMU_CAP_LBR_FMT) !=
-			    (vmx_get_perf_capabilities() & PMU_CAP_LBR_FMT))
+			if (!(vmx_get_perf_capabilities() & PMU_CAP_LBR_FMT))
 				return 1;
 			if (!cpuid_model_is_consistent(vcpu))
 				return 1;
 		}
 		if (data & PERF_CAP_PEBS_FORMAT) {
-			if ((data & PERF_CAP_PEBS_MASK) !=
-			    (vmx_get_perf_capabilities() & PERF_CAP_PEBS_MASK))
+			if (!(vmx_get_perf_capabilities() & PERF_CAP_PEBS_MASK))
 				return 1;
 			if (!guest_cpuid_has(vcpu, X86_FEATURE_DS))
 				return 1;
-- 
2.37.1.559.g78731f0fdb-goog



* Re: [PATCH v2 7/7] KVM: VMX: Simplify capability check when handling PERF_CAPABILITIES write
  2022-08-03 19:26 ` [PATCH v2 7/7] KVM: VMX: Simplify capability check when handling PERF_CAPABILITIES write Sean Christopherson
@ 2022-08-04  7:52   ` Like Xu
  2022-08-04 15:00     ` Sean Christopherson
  0 siblings, 1 reply; 12+ messages in thread
From: Like Xu @ 2022-08-04  7:52 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	linux-perf-users, linux-kernel, kvm, Paolo Bonzini,
	Peter Zijlstra, Arnaldo Carvalho de Melo, Ingo Molnar

On 4/8/2022 3:26 am, Sean Christopherson wrote:
> Explicitly check for the absence of host support for LBRs or PEBS when
> userspace attempts to enable said features by writing PERF_CAPABILITIES.
> Comparing host support against the incoming value is unnecessary and
> weird since the checks are buried inside an if-statement that verifies
> userspace wants to enable the feature.

If you mean this part in the KVM:

	case MSR_IA32_PERF_CAPABILITIES: {
		...
		if (data & ~msr_ent.data)
			return 1;
		...

then this patch brings a flaw, for example:

userspace can successfully set 0x1 on a host that reports a value of 0x5,
which should not happen since the semantics of 0x1 and 0x5 for LBR_FMT
may be completely different from the guest LBR driver's perspective.

For such a model-specific feature, userspace needs to write to PERF_CAPABILITIES
the exact value reported by the host/kvm.
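
To make that concrete, a minimal sketch with hypothetical values
(PMU_CAP_LBR_FMT is the LBR format field, bits 5:0 of PERF_CAPABILITIES):

	u64 host_cap = 0x5;	/* host LBR format, hypothetical */
	u64 data     = 0x1;	/* value written by userspace    */

	/* Old check: the mismatch is caught. */
	if ((data & PMU_CAP_LBR_FMT) != (host_cap & PMU_CAP_LBR_FMT))
		return 1;	/* 0x1 != 0x5 -> write rejected */

	/*
	 * New check: only verifies that the host has *some* LBR format,
	 * so the mismatched 0x1 is accepted.
	 */
	if (!(host_cap & PMU_CAP_LBR_FMT))
		return 1;	/* not taken -> write succeeds */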

A selftest is proposed in the hope of guarding this contract.

> 
> No functional change intended.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>   arch/x86/kvm/vmx/vmx.c | 6 ++----
>   1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index d7f8331d6f7e..0ada0ee234b7 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2323,15 +2323,13 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>   		if (data && !vcpu_to_pmu(vcpu)->version)
>   			return 1;
>   		if (data & PMU_CAP_LBR_FMT) {
> -			if ((data & PMU_CAP_LBR_FMT) !=
> -			    (vmx_get_perf_capabilities() & PMU_CAP_LBR_FMT))
> +			if (!(vmx_get_perf_capabilities() & PMU_CAP_LBR_FMT))
>   				return 1;
>   			if (!cpuid_model_is_consistent(vcpu))
>   				return 1;
>   		}
>   		if (data & PERF_CAP_PEBS_FORMAT) {
> -			if ((data & PERF_CAP_PEBS_MASK) !=
> -			    (vmx_get_perf_capabilities() & PERF_CAP_PEBS_MASK))
> +			if (!(vmx_get_perf_capabilities() & PERF_CAP_PEBS_MASK))
>   				return 1;
>   			if (!guest_cpuid_has(vcpu, X86_FEATURE_DS))
>   				return 1;


* Re: [PATCH v2 2/7] perf/x86/core: Remove unnecessary stubs provided for KVM-only helpers
  2022-08-03 19:26 ` [PATCH v2 2/7] perf/x86/core: Remove unnecessary stubs provided for KVM-only helpers Sean Christopherson
@ 2022-08-04  8:49   ` Like Xu
  2022-08-04 15:07     ` Sean Christopherson
  0 siblings, 1 reply; 12+ messages in thread
From: Like Xu @ 2022-08-04  8:49 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	linux-perf-users, linux-kernel, kvm, Paolo Bonzini, Ingo Molnar,
	Peter Zijlstra, Arnaldo Carvalho de Melo

On 4/8/2022 3:26 am, Sean Christopherson wrote:
> Remove CONFIG_PERF_EVENTS=n stubs for functions that are effectively
> KVM-only.  KVM selects PERF_EVENTS and will never consume the stubs.
> Dropping the unnecessary stubs will allow simplifying x86_perf_get_lbr()

Giggling, I used to have a similar cleanup patch sitting in a corner somewhere.

> by getting rid of the impossible-to-hit error path (which KVM doesn't
> even check).
> 
> Opportunistically reorganize the declarations to collapse multiple
> CONFIG_PERF_EVENTS #ifdefs.
> 
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
>   arch/x86/include/asm/perf_event.h | 53 ++++++++-----------------------
>   1 file changed, 13 insertions(+), 40 deletions(-)
> 
> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> index cc47044401ff..aba196172500 100644
> --- a/arch/x86/include/asm/perf_event.h
> +++ b/arch/x86/include/asm/perf_event.h
> @@ -509,46 +509,18 @@ extern u64 perf_get_hw_event_config(int hw_event);
>   extern void perf_check_microcode(void);
>   extern void perf_clear_dirty_counters(void);
>   extern int x86_perf_rdpmc_index(struct perf_event *event);
> -#else
> -static inline void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
> -{
> -	memset(cap, 0, sizeof(*cap));
> -}
>   
> -static inline u64 perf_get_hw_event_config(int hw_event)
> -{
> -	return 0;
> -}
> -
> -static inline void perf_events_lapic_init(void)	{ }
> -static inline void perf_check_microcode(void) { }
> -#endif
> -
> -#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_INTEL)
> +#ifdef CONFIG_CPU_SUP_INTEL
>   extern struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr, void *data);
>   extern int x86_perf_get_lbr(struct x86_pmu_lbr *lbr);
> -#else
> -struct perf_guest_switch_msr *perf_guest_get_msrs(int *nr, void *data);
> -static inline int x86_perf_get_lbr(struct x86_pmu_lbr *lbr)
> -{
> -	return -1;
> -}
> -#endif
> +extern void intel_pt_handle_vmx(int on);
> +#endif /* CONFIG_CPU_SUP_INTEL */
>   
> -#ifdef CONFIG_CPU_SUP_INTEL
> - extern void intel_pt_handle_vmx(int on);
> -#else
> -static inline void intel_pt_handle_vmx(int on)
> -{
> +#ifdef CONFIG_CPU_SUP_AMD
> +extern void amd_pmu_enable_virt(void);
> +extern void amd_pmu_disable_virt(void);
>   
> -}
> -#endif
> -
> -#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD)
> - extern void amd_pmu_enable_virt(void);
> - extern void amd_pmu_disable_virt(void);
> -
> -#if defined(CONFIG_PERF_EVENTS_AMD_BRS)
> +#ifdef CONFIG_PERF_EVENTS_AMD_BRS
>   
>   #define PERF_NEEDS_LOPWR_CB 1
>   
> @@ -566,12 +538,13 @@ static inline void perf_lopwr_cb(bool lopwr_in)
>   	static_call_mod(perf_lopwr_cb)(lopwr_in);
>   }
>   
> -#endif /* PERF_NEEDS_LOPWR_CB */

Oops, now the definition of PERF_NEEDS_LOPWR_CB will not be unset.
This is not mentioned in the commit message and may cause trouble.

> +#endif /* CONFIG_PERF_EVENTS_AMD_BRS */
> +#endif /* CONFIG_CPU_SUP_AMD */
>   
> -#else
> - static inline void amd_pmu_enable_virt(void) { }
> - static inline void amd_pmu_disable_virt(void) { }
> -#endif
> +#else  /* !CONFIG_PERF_EVENTS */
> +static inline void perf_events_lapic_init(void)	{ }
> +static inline void perf_check_microcode(void) { }
> +#endif /* CONFIG_PERF_EVENTS */
>   
>   #define arch_perf_out_copy_user copy_from_user_nmi
>   


* Re: [PATCH v2 7/7] KVM: VMX: Simplify capability check when handling PERF_CAPABILITIES write
  2022-08-04  7:52   ` Like Xu
@ 2022-08-04 15:00     ` Sean Christopherson
  0 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2022-08-04 15:00 UTC (permalink / raw)
  To: Like Xu
  Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	linux-perf-users, linux-kernel, kvm, Paolo Bonzini,
	Peter Zijlstra, Arnaldo Carvalho de Melo, Ingo Molnar

On Thu, Aug 04, 2022, Like Xu wrote:
> On 4/8/2022 3:26 am, Sean Christopherson wrote:
> > Explicitly check for the absence of host support for LBRs or PEBS when
> > userspace attempts to enable said features by writing PERF_CAPABILITIES.
> > Comparing host support against the incoming value is unnecessary and
> > weird since the checks are buried inside an if-statement that verifies
> > userspace wants to enable the feature.
> 
> If you mean this part in the KVM:
> 
> 	case MSR_IA32_PERF_CAPABILITIES: {
> 		...
> 		if (data & ~msr_ent.data)
> 			return 1;
> 		...
> 
> then this patch brings a flaw, for example:
> 
> userspace can successfully set 0x1 on a host that reports a value of 0x5,
> which should not happen since the semantics of 0x1 and 0x5 for LBR_FMT
> may be completely different from the guest LBR driver's perspective.

/facepalm

I keep forgetting the caps need to match the host exactly.  Thanks!


* Re: [PATCH v2 2/7] perf/x86/core: Remove unnecessary stubs provided for KVM-only helpers
  2022-08-04  8:49   ` Like Xu
@ 2022-08-04 15:07     ` Sean Christopherson
  0 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2022-08-04 15:07 UTC (permalink / raw)
  To: Like Xu
  Cc: Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	linux-perf-users, linux-kernel, kvm, Paolo Bonzini, Ingo Molnar,
	Peter Zijlstra, Arnaldo Carvalho de Melo

On Thu, Aug 04, 2022, Like Xu wrote:
> On 4/8/2022 3:26 am, Sean Christopherson wrote:
> > -#if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD)
> > - extern void amd_pmu_enable_virt(void);
> > - extern void amd_pmu_disable_virt(void);
> > -
> > -#if defined(CONFIG_PERF_EVENTS_AMD_BRS)
> > +#ifdef CONFIG_PERF_EVENTS_AMD_BRS
> >   #define PERF_NEEDS_LOPWR_CB 1
> > @@ -566,12 +538,13 @@ static inline void perf_lopwr_cb(bool lopwr_in)
> >   	static_call_mod(perf_lopwr_cb)(lopwr_in);
> >   }
> > -#endif /* PERF_NEEDS_LOPWR_CB */
> 
> Oops, now the definition of PERF_NEEDS_LOPWR_CB will not be unset.
> This is not mentioned in the commit message and may cause trouble.

PERF_NEEDS_LOPWR_CB isn't being "unset" in the existing code; the comment is simply
wrong.  The #endif pairs with CONFIG_PERF_EVENTS_AMD_BRS.

  #if defined(CONFIG_PERF_EVENTS_AMD_BRS)

  #define PERF_NEEDS_LOPWR_CB 1

  /*
   * architectural low power callback impacts
   * drivers/acpi/processor_idle.c
   * drivers/acpi/acpi_pad.c
   */
  extern void perf_amd_brs_lopwr_cb(bool lopwr_in);

  DECLARE_STATIC_CALL(perf_lopwr_cb, perf_amd_brs_lopwr_cb);

  static inline void perf_lopwr_cb(bool lopwr_in)
  {
  	static_call_mod(perf_lopwr_cb)(lopwr_in);
  }

  #endif /* PERF_NEEDS_LOPWR_CB */ <=== should be /* CONFIG_PERF_EVENTS_AMD_BRS */

