* [PATCH v3 0/5] Fix "Instructions Retired" from incorrectly counting
@ 2023-03-07 14:13 Aaron Lewis
  2023-03-07 14:13 ` [PATCH v3 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events Aaron Lewis
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Aaron Lewis @ 2023-03-07 14:13 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, like.xu.linux, Aaron Lewis

This series fixes an issue with the PMU event "Instructions Retired"
(0xc0), then tests the fix to verify it works.  Running the updated
tests without the fix will result in a test failure.

v2 -> v3:
 - s/pmc_is_allowed/event_is_allowed/ [Like]

v1 -> v2:
 - Add pmc_is_allowed() as common helper [Sean]
 - Split test into multiple commits [Sean]
 - Add macros for counting and not counting [Sean]
 - Removed un-needed pr_info [Sean]


Aaron Lewis (5):
  KVM: x86/pmu: Prevent the PMU from counting disallowed events
  KVM: selftests: Add a common helper to the guest
  KVM: selftests: Add helpers for PMC asserts
  KVM: selftests: Fixup test asserts
  KVM: selftests: Test the PMU event "Instructions retired"

 arch/x86/kvm/pmu.c                            |  13 +-
 .../kvm/x86_64/pmu_event_filter_test.c        | 146 ++++++++++++------
 2 files changed, 108 insertions(+), 51 deletions(-)

-- 
2.40.0.rc0.216.gc4246ad0f0-goog



* [PATCH v3 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events
  2023-03-07 14:13 [PATCH v3 0/5] Fix "Instructions Retired" from incorrectly counting Aaron Lewis
@ 2023-03-07 14:13 ` Aaron Lewis
  2023-03-07 15:19   ` Like Xu
  2023-03-07 14:13 ` [PATCH v3 2/5] KVM: selftests: Add a common helper to the guest Aaron Lewis
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Aaron Lewis @ 2023-03-07 14:13 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, like.xu.linux, Aaron Lewis

When counting "Instructions Retired" (0xc0) in a guest, KVM will
occasionally increment the PMU counter regardless of whether that event is
being filtered. This is because some PMU events are incremented via
kvm_pmu_trigger_event(), which doesn't know about the event filter. Add
the event filter to kvm_pmu_trigger_event(), so events that are
disallowed do not increment their counters.

Fixes: 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring instructions")
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 arch/x86/kvm/pmu.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 612e6c70ce2e..9914a9027c60 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -400,6 +400,12 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 	return is_fixed_event_allowed(filter, pmc->idx);
 }
 
+static bool event_is_allowed(struct kvm_pmc *pmc)
+{
+	return pmc_is_enabled(pmc) && pmc_speculative_in_use(pmc) &&
+	       check_pmu_event_filter(pmc);
+}
+
 static void reprogram_counter(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
@@ -409,10 +415,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
 
 	pmc_pause_counter(pmc);
 
-	if (!pmc_speculative_in_use(pmc) || !pmc_is_enabled(pmc))
-		goto reprogram_complete;
-
-	if (!check_pmu_event_filter(pmc))
+	if (!event_is_allowed(pmc))
 		goto reprogram_complete;
 
 	if (pmc->counter < pmc->prev_counter)
@@ -684,7 +687,7 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
 		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
 
-		if (!pmc || !pmc_is_enabled(pmc) || !pmc_speculative_in_use(pmc))
+		if (!pmc || !event_is_allowed(pmc))
 			continue;
 
 		/* Ignore checks for edge detect, pin control, invert and CMASK bits */
-- 
2.40.0.rc0.216.gc4246ad0f0-goog



* [PATCH v3 2/5] KVM: selftests: Add a common helper to the guest
  2023-03-07 14:13 [PATCH v3 0/5] Fix "Instructions Retired" from incorrectly counting Aaron Lewis
  2023-03-07 14:13 ` [PATCH v3 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events Aaron Lewis
@ 2023-03-07 14:13 ` Aaron Lewis
  2023-04-07 18:43   ` Sean Christopherson
  2023-03-07 14:13 ` [PATCH v3 3/5] KVM: selftests: Add helpers for PMC asserts Aaron Lewis
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Aaron Lewis @ 2023-03-07 14:13 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, like.xu.linux, Aaron Lewis

Split out the common parts of the Intel and AMD guest code into a
helper function.  This is in preparation for adding
additional counters to the test.

No functional changes intended.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 31 ++++++++++++-------
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index bad7ef8c5b92..f33079fc552b 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -100,6 +100,17 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
 		GUEST_SYNC(0);
 }
 
+static uint64_t test_guest(uint32_t msr_base)
+{
+	uint64_t br0, br1;
+
+	br0 = rdmsr(msr_base + 0);
+	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+	br1 = rdmsr(msr_base + 0);
+
+	return br1 - br0;
+}
+
 static void intel_guest_code(void)
 {
 	check_msr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
@@ -108,16 +119,15 @@ static void intel_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t br0, br1;
+		uint64_t count;
 
 		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 		wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
-		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
-		br0 = rdmsr(MSR_IA32_PMC0);
-		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
-		br1 = rdmsr(MSR_IA32_PMC0);
-		GUEST_SYNC(br1 - br0);
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x1);
+
+		count = test_guest(MSR_IA32_PMC0);
+		GUEST_SYNC(count);
 	}
 }
 
@@ -133,15 +143,14 @@ static void amd_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t br0, br1;
+		uint64_t count;
 
 		wrmsr(MSR_K7_EVNTSEL0, 0);
 		wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);
-		br0 = rdmsr(MSR_K7_PERFCTR0);
-		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
-		br1 = rdmsr(MSR_K7_PERFCTR0);
-		GUEST_SYNC(br1 - br0);
+
+		count = test_guest(MSR_K7_PERFCTR0);
+		GUEST_SYNC(count);
 	}
 }
 
-- 
2.40.0.rc0.216.gc4246ad0f0-goog



* [PATCH v3 3/5] KVM: selftests: Add helpers for PMC asserts
  2023-03-07 14:13 [PATCH v3 0/5] Fix "Instructions Retired" from incorrectly counting Aaron Lewis
  2023-03-07 14:13 ` [PATCH v3 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events Aaron Lewis
  2023-03-07 14:13 ` [PATCH v3 2/5] KVM: selftests: Add a common helper to the guest Aaron Lewis
@ 2023-03-07 14:13 ` Aaron Lewis
  2023-04-07 18:47   ` Sean Christopherson
  2023-03-07 14:13 ` [PATCH v3 4/5] KVM: selftests: Fixup test asserts Aaron Lewis
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Aaron Lewis @ 2023-03-07 14:13 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, like.xu.linux, Aaron Lewis

Add the helpers, ASSERT_PMC_COUNTING and ASSERT_PMC_NOT_COUNTING, to
split out the asserts into one place.  This will make it easier to add
additional asserts related to counting later on.

No functional changes intended.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 70 ++++++++++---------
 1 file changed, 36 insertions(+), 34 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index f33079fc552b..8277b8f49dca 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -250,14 +250,27 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 	return f;
 }
 
+#define ASSERT_PMC_COUNTING(count)							\
+do {											\
+	if (count != NUM_BRANCHES)							\
+		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",	\
+			__func__, count, NUM_BRANCHES);					\
+	TEST_ASSERT(count, "Allowed PMU event is not counting.");			\
+} while (0)
+
+#define ASSERT_PMC_NOT_COUNTING(count)							\
+do {											\
+	if (count)									\
+		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",		\
+			__func__, count);						\
+	TEST_ASSERT(!count, "Disallowed PMU Event is counting");			\
+} while (0)
+
 static void test_without_filter(struct kvm_vcpu *vcpu)
 {
-	uint64_t count = run_vcpu_to_sync(vcpu);
+	uint64_t c = run_vcpu_to_sync(vcpu);
 
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+	ASSERT_PMC_COUNTING(c);
 }
 
 static uint64_t test_with_filter(struct kvm_vcpu *vcpu,
@@ -271,70 +284,59 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu)
 {
 	uint64_t event = EVENT(0x1C2, 0);
 	struct kvm_pmu_event_filter *f;
-	uint64_t count;
+	uint64_t c;
 
 	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
-	count = test_with_filter(vcpu, f);
-
+	c = test_with_filter(vcpu, f);
 	free(f);
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+	ASSERT_PMC_COUNTING(c);
 }
 
 static void test_member_deny_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
-	uint64_t count = test_with_filter(vcpu, f);
+	uint64_t c = test_with_filter(vcpu, f);
 
 	free(f);
-	if (count)
-		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
-			__func__, count);
-	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+
+	ASSERT_PMC_NOT_COUNTING(c);
 }
 
 static void test_member_allow_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
-	uint64_t count = test_with_filter(vcpu, f);
+	uint64_t c = test_with_filter(vcpu, f);
 
 	free(f);
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+	ASSERT_PMC_COUNTING(c);
 }
 
 static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
-	uint64_t count;
+	uint64_t c;
 
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
-	count = test_with_filter(vcpu, f);
+	c = test_with_filter(vcpu, f);
 	free(f);
-	if (count != NUM_BRANCHES)
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
-			__func__, count, NUM_BRANCHES);
-	TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+	ASSERT_PMC_COUNTING(c);
 }
 
 static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
-	uint64_t count;
+	uint64_t c;
 
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
-	count = test_with_filter(vcpu, f);
+	c = test_with_filter(vcpu, f);
 	free(f);
-	if (count)
-		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
-			__func__, count);
-	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+
+	ASSERT_PMC_NOT_COUNTING(c);
 }
 
 /*
-- 
2.40.0.rc0.216.gc4246ad0f0-goog



* [PATCH v3 4/5] KVM: selftests: Fixup test asserts
  2023-03-07 14:13 [PATCH v3 0/5] Fix "Instructions Retired" from incorrectly counting Aaron Lewis
                   ` (2 preceding siblings ...)
  2023-03-07 14:13 ` [PATCH v3 3/5] KVM: selftests: Add helpers for PMC asserts Aaron Lewis
@ 2023-03-07 14:13 ` Aaron Lewis
  2023-04-07 18:53   ` Sean Christopherson
  2023-03-07 14:14 ` [PATCH v3 5/5] KVM: selftests: Test the PMU event "Instructions retired" Aaron Lewis
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Aaron Lewis @ 2023-03-07 14:13 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, like.xu.linux, Aaron Lewis

Fix up both ASSERT_PMC_COUNTING and ASSERT_PMC_NOT_COUNTING in the
pmu_event_filter_test by adding additional context in the assert
message.

With the added context the print in ASSERT_PMC_NOT_COUNTING is
redundant.  Remove it.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 .../selftests/kvm/x86_64/pmu_event_filter_test.c      | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 8277b8f49dca..78bb48fcd33e 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -252,18 +252,17 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 
 #define ASSERT_PMC_COUNTING(count)							\
 do {											\
-	if (count != NUM_BRANCHES)							\
+	if (count && count != NUM_BRANCHES)						\
 		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",	\
 			__func__, count, NUM_BRANCHES);					\
-	TEST_ASSERT(count, "Allowed PMU event is not counting.");			\
+	TEST_ASSERT(count, "%s: Branch instructions retired = %lu (expected > 0)",	\
+		    __func__, count);							\
 } while (0)
 
 #define ASSERT_PMC_NOT_COUNTING(count)							\
 do {											\
-	if (count)									\
-		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",		\
-			__func__, count);						\
-	TEST_ASSERT(!count, "Disallowed PMU Event is counting");			\
+	TEST_ASSERT(!count, "%s: Branch instructions retired = %lu (expected 0)",	\
+		    __func__, count);							\
 } while (0)
 
 static void test_without_filter(struct kvm_vcpu *vcpu)
-- 
2.40.0.rc0.216.gc4246ad0f0-goog



* [PATCH v3 5/5] KVM: selftests: Test the PMU event "Instructions retired"
  2023-03-07 14:13 [PATCH v3 0/5] Fix "Instructions Retired" from incorrectly counting Aaron Lewis
                   ` (3 preceding siblings ...)
  2023-03-07 14:13 ` [PATCH v3 4/5] KVM: selftests: Fixup test asserts Aaron Lewis
@ 2023-03-07 14:14 ` Aaron Lewis
  2023-04-07 20:17   ` Sean Christopherson
  2023-04-07  9:06 ` [PATCH v3 0/5] Fix "Instructions Retired" from incorrectly counting Like Xu
  2023-04-07 21:30 ` Sean Christopherson
  6 siblings, 1 reply; 17+ messages in thread
From: Aaron Lewis @ 2023-03-07 14:14 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, like.xu.linux, Aaron Lewis

Add testing for the event "Instructions retired" (0xc0) in the PMU
event filter on both Intel and AMD to ensure that the event doesn't
count when it is disallowed.  Unlike most of the other events, the
event "Instructions retired", will be incremented by KVM when an
instruction is emulated.  Test that this case is being properly handled
and that KVM doesn't increment the counter when that event is
disallowed.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 80 ++++++++++++++-----
 1 file changed, 62 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 78bb48fcd33e..9e932b99d4fa 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -54,6 +54,21 @@
 
 #define AMD_ZEN_BR_RETIRED EVENT(0xc2, 0)
 
+
+/*
+ * "Retired instructions", from Processor Programming Reference
+ * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors,
+ * Preliminary Processor Programming Reference (PPR) for AMD Family
+ * 17h Model 31h, Revision B0 Processors, and Preliminary Processor
+ * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision
+ * B1 Processors Volume 1 of 2.
+ *                      --- and ---
+ * "Instructions retired", from the Intel SDM, volume 3,
+ * "Pre-defined Architectural Performance Events."
+ */
+
+#define INST_RETIRED EVENT(0xc0, 0)
+
 /*
  * This event list comprises Intel's eight architectural events plus
  * AMD's "retired branch instructions" for Zen[123] (and possibly
@@ -61,7 +76,7 @@
  */
 static const uint64_t event_list[] = {
 	EVENT(0x3c, 0),
-	EVENT(0xc0, 0),
+	INST_RETIRED,
 	EVENT(0x3c, 1),
 	EVENT(0x2e, 0x4f),
 	EVENT(0x2e, 0x41),
@@ -71,6 +86,16 @@ static const uint64_t event_list[] = {
 	AMD_ZEN_BR_RETIRED,
 };
 
+struct perf_results {
+	union {
+		uint64_t raw;
+		struct {
+			uint64_t br_count:32;
+			uint64_t ir_count:32;
+		};
+	};
+};
+
 /*
  * If we encounter a #GP during the guest PMU sanity check, then the guest
  * PMU is not functional. Inform the hypervisor via GUEST_SYNC(0).
@@ -102,13 +127,20 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
 
 static uint64_t test_guest(uint32_t msr_base)
 {
+	struct perf_results r;
 	uint64_t br0, br1;
+	uint64_t ir0, ir1;
 
 	br0 = rdmsr(msr_base + 0);
+	ir0 = rdmsr(msr_base + 1);
 	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
 	br1 = rdmsr(msr_base + 0);
+	ir1 = rdmsr(msr_base + 1);
 
-	return br1 - br0;
+	r.br_count = br1 - br0;
+	r.ir_count = ir1 - ir0;
+
+	return r.raw;
 }
 
 static void intel_guest_code(void)
@@ -119,15 +151,17 @@ static void intel_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t count;
+		uint64_t counts;
 
 		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 		wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
-		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x1);
+		wrmsr(MSR_P6_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | INST_RETIRED);
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x3);
 
-		count = test_guest(MSR_IA32_PMC0);
-		GUEST_SYNC(count);
+		counts = test_guest(MSR_IA32_PMC0);
+		GUEST_SYNC(counts);
 	}
 }
 
@@ -143,14 +177,16 @@ static void amd_guest_code(void)
 	GUEST_SYNC(1);
 
 	for (;;) {
-		uint64_t count;
+		uint64_t counts;
 
 		wrmsr(MSR_K7_EVNTSEL0, 0);
 		wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
 		      ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);
+		wrmsr(MSR_K7_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | INST_RETIRED);
 
-		count = test_guest(MSR_K7_PERFCTR0);
-		GUEST_SYNC(count);
+		counts = test_guest(MSR_K7_PERFCTR0);
+		GUEST_SYNC(counts);
 	}
 }
 
@@ -250,19 +286,25 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
 	return f;
 }
 
-#define ASSERT_PMC_COUNTING(count)							\
+#define ASSERT_PMC_COUNTING(counts)							\
 do {											\
-	if (count && count != NUM_BRANCHES)						\
-		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",	\
-			__func__, count, NUM_BRANCHES);					\
-	TEST_ASSERT(count, "%s: Branch instructions retired = %lu (expected > 0)",	\
-		    __func__, count);							\
+	struct perf_results r = {.raw = counts};					\
+	if (r.br_count && r.br_count != NUM_BRANCHES)					\
+		pr_info("%s: Branch instructions retired = %u (expected %u)\n",		\
+			__func__, r.br_count, NUM_BRANCHES);				\
+	TEST_ASSERT(r.br_count,	"%s: Branch instructions retired = %u (expected > 0)",	\
+		    __func__, r.br_count);						\
+	TEST_ASSERT(r.ir_count,	"%s: Instructions retired = %u (expected > 0)",		\
+		    __func__, r.ir_count);						\
 } while (0)
 
-#define ASSERT_PMC_NOT_COUNTING(count)							\
+#define ASSERT_PMC_NOT_COUNTING(counts)							\
 do {											\
-	TEST_ASSERT(!count, "%s: Branch instructions retired = %lu (expected 0)",	\
-		    __func__, count);							\
+	struct perf_results r = {.raw = counts};					\
+	TEST_ASSERT(!r.br_count, "%s: Branch instructions retired = %u (expected 0)",	\
+		    __func__, r.br_count);						\
+	TEST_ASSERT(!r.ir_count, "%s: Instructions retired = %u (expected 0)",		\
+		    __func__, r.ir_count);						\
 } while (0)
 
 static void test_without_filter(struct kvm_vcpu *vcpu)
@@ -317,6 +359,7 @@ static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
 	uint64_t c;
 
+	remove_event(f, INST_RETIRED);
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
 	c = test_with_filter(vcpu, f);
@@ -330,6 +373,7 @@ static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
 	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
 	uint64_t c;
 
+	remove_event(f, INST_RETIRED);
 	remove_event(f, INTEL_BR_RETIRED);
 	remove_event(f, AMD_ZEN_BR_RETIRED);
 	c = test_with_filter(vcpu, f);
-- 
2.40.0.rc0.216.gc4246ad0f0-goog



* Re: [PATCH v3 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events
  2023-03-07 14:13 ` [PATCH v3 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events Aaron Lewis
@ 2023-03-07 15:19   ` Like Xu
  2023-03-07 15:52     ` Aaron Lewis
  0 siblings, 1 reply; 17+ messages in thread
From: Like Xu @ 2023-03-07 15:19 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: pbonzini, jmattson, seanjc, kvm

On 2023/3/7 22:13, Aaron Lewis wrote:

> When counting "Instructions Retired" (0xc0) in a guest, KVM will
> occasionally increment the PMU counter regardless of whether that event is
> being filtered. This is because some PMU events are incremented via
> kvm_pmu_trigger_event(), which doesn't know about the event filter. Add
> the event filter to kvm_pmu_trigger_event(), so events that are
> disallowed do not increment their counters.
It would be nice to have:

     Reported-by: Jinrong Liang <cloudliang@tencent.com>

, since he also found the same issue.

> Fixes: 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring instructions")
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>

Reviewed-by: Like Xu <likexu@tencent.com>

> ---
>   arch/x86/kvm/pmu.c | 13 ++++++++-----
>   1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index 612e6c70ce2e..9914a9027c60 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -400,6 +400,12 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
>   	return is_fixed_event_allowed(filter, pmc->idx);
>   }
>   
> +static bool event_is_allowed(struct kvm_pmc *pmc)

Nit, an inline event_is_allowed() here might be better.

> +{
> +	return pmc_is_enabled(pmc) && pmc_speculative_in_use(pmc) &&
> +	       check_pmu_event_filter(pmc);
> +}
> +
>   static void reprogram_counter(struct kvm_pmc *pmc)
>   {
>   	struct kvm_pmu *pmu = pmc_to_pmu(pmc);
> @@ -409,10 +415,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
>   
>   	pmc_pause_counter(pmc);
>   
> -	if (!pmc_speculative_in_use(pmc) || !pmc_is_enabled(pmc))
> -		goto reprogram_complete;
> -
> -	if (!check_pmu_event_filter(pmc))
> +	if (!event_is_allowed(pmc))
>   		goto reprogram_complete;
>   
>   	if (pmc->counter < pmc->prev_counter)
> @@ -684,7 +687,7 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
>   	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
>   		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
>   
> -		if (!pmc || !pmc_is_enabled(pmc) || !pmc_speculative_in_use(pmc))
> +		if (!pmc || !event_is_allowed(pmc))
>   			continue;
>   
>   		/* Ignore checks for edge detect, pin control, invert and CMASK bits */


* Re: [PATCH v3 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events
  2023-03-07 15:19   ` Like Xu
@ 2023-03-07 15:52     ` Aaron Lewis
  2023-03-07 16:01       ` Sean Christopherson
  0 siblings, 1 reply; 17+ messages in thread
From: Aaron Lewis @ 2023-03-07 15:52 UTC (permalink / raw)
  To: Like Xu; +Cc: pbonzini, jmattson, seanjc, kvm

On Tue, Mar 7, 2023 at 7:19 AM Like Xu <like.xu.linux@gmail.com> wrote:
>
> On 2023/3/7 22:13, Aaron Lewis wrote:
>
> > When counting "Instructions Retired" (0xc0) in a guest, KVM will
> > occasionally increment the PMU counter regardless of whether that event is
> > being filtered. This is because some PMU events are incremented via
> > kvm_pmu_trigger_event(), which doesn't know about the event filter. Add
> > the event filter to kvm_pmu_trigger_event(), so events that are
> > disallowed do not increment their counters.
> It would be nice to have:
>
>      Reported-by: Jinrong Liang <cloudliang@tencent.com>
>
> , since he also found the same issue.
>

Sure, I can add that.

> > Fixes: 9cd803d496e7 ("KVM: x86: Update vPMCs when retiring instructions")
> > Signed-off-by: Aaron Lewis <aaronlewis@google.com>
>
> Reviewed-by: Like Xu <likexu@tencent.com>
>
> > ---
> >   arch/x86/kvm/pmu.c | 13 ++++++++-----
> >   1 file changed, 8 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > index 612e6c70ce2e..9914a9027c60 100644
> > --- a/arch/x86/kvm/pmu.c
> > +++ b/arch/x86/kvm/pmu.c
> > @@ -400,6 +400,12 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
> >       return is_fixed_event_allowed(filter, pmc->idx);
> >   }
> >
> > +static bool event_is_allowed(struct kvm_pmc *pmc)
>
> Nit, an inline event_is_allowed() here might be better.

I purposely didn't inline this because Sean generally discourages its
use and has commented in several reviews to not use 'inline' and
instead leave it up to the compiler to decide, unless using
__always_inline.  I think the sentiment is either use the strong hint
or don't use it at all.  This seems like an example of where the
compiler can decide, and a strong hint isn't needed.

>
> > +{
> > +     return pmc_is_enabled(pmc) && pmc_speculative_in_use(pmc) &&
> > +            check_pmu_event_filter(pmc);
> > +}
> > +
> >   static void reprogram_counter(struct kvm_pmc *pmc)
> >   {
> >       struct kvm_pmu *pmu = pmc_to_pmu(pmc);
> > @@ -409,10 +415,7 @@ static void reprogram_counter(struct kvm_pmc *pmc)
> >
> >       pmc_pause_counter(pmc);
> >
> > -     if (!pmc_speculative_in_use(pmc) || !pmc_is_enabled(pmc))
> > -             goto reprogram_complete;
> > -
> > -     if (!check_pmu_event_filter(pmc))
> > +     if (!event_is_allowed(pmc))
> >               goto reprogram_complete;
> >
> >       if (pmc->counter < pmc->prev_counter)
> > @@ -684,7 +687,7 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
> >       for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
> >               pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
> >
> > -             if (!pmc || !pmc_is_enabled(pmc) || !pmc_speculative_in_use(pmc))
> > +             if (!pmc || !event_is_allowed(pmc))
> >                       continue;
> >
> >               /* Ignore checks for edge detect, pin control, invert and CMASK bits */


* Re: [PATCH v3 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events
  2023-03-07 15:52     ` Aaron Lewis
@ 2023-03-07 16:01       ` Sean Christopherson
  2023-03-08  2:45         ` Like Xu
  0 siblings, 1 reply; 17+ messages in thread
From: Sean Christopherson @ 2023-03-07 16:01 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: Like Xu, pbonzini, jmattson, kvm

On Tue, Mar 07, 2023, Aaron Lewis wrote:
> On Tue, Mar 7, 2023 at 7:19 AM Like Xu <like.xu.linux@gmail.com> wrote:
> > > ---
> > >   arch/x86/kvm/pmu.c | 13 ++++++++-----
> > >   1 file changed, 8 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > > index 612e6c70ce2e..9914a9027c60 100644
> > > --- a/arch/x86/kvm/pmu.c
> > > +++ b/arch/x86/kvm/pmu.c
> > > @@ -400,6 +400,12 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
> > >       return is_fixed_event_allowed(filter, pmc->idx);
> > >   }
> > >
> > > +static bool event_is_allowed(struct kvm_pmc *pmc)
> >
> > Nit, an inline event_is_allowed() here might be better.
> 
> I purposely didn't inline this because Sean generally discourages its
> use and has commented in several reviews to not use 'inline' and
> instead leave it up to the compiler to decide, unless using
> __always_inline.

Ya.

> I think the sentiment is either use the strong hint or don't use it at all.
> This seems like an example of where the compiler can decide, and a strong
> hint isn't needed.

Not quite.  __always_inline is not a hint, it's a command.  The kernel *requires*
functions tagged with __always_inline to be (surprise!) always inlined, even when
building with features that cause the compiler to generate non-inlined functions
for even the most trivial helpers, e.g. KASAN can cause a literal nop function to
be non-inlined.  __always_inline is used to ensure that things like
no-instrumentation regions and __init sections are preserved when invoking
common helpers.
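
For reference, a rough userspace sketch of the difference (attribute syntax
only; this is not the kernel's actual macro definitions or instrumentation
machinery):

#include <stdbool.h>

/* Same shape as the kernel's definition, shown here for illustration. */
#define __always_inline	inline __attribute__((__always_inline__))

/* A hint: the compiler is free to emit an out-of-line copy, e.g. at -O0. */
static inline bool hinted_helper(int x)
{
	return x > 0;
}

/* A command: the compiler must inline this at every call site. */
static __always_inline bool forced_helper(int x)
{
	return x > 0;
}

int main(void)
{
	return hinted_helper(1) && forced_helper(1) ? 0 : 1;
}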


* Re: [PATCH v3 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events
  2023-03-07 16:01       ` Sean Christopherson
@ 2023-03-08  2:45         ` Like Xu
  2023-03-08 19:46           ` Sean Christopherson
  0 siblings, 1 reply; 17+ messages in thread
From: Like Xu @ 2023-03-08  2:45 UTC (permalink / raw)
  To: Sean Christopherson, Aaron Lewis; +Cc: pbonzini, jmattson, kvm

On 8/3/2023 12:01 am, Sean Christopherson wrote:
> On Tue, Mar 07, 2023, Aaron Lewis wrote:
>> On Tue, Mar 7, 2023 at 7:19 AM Like Xu <like.xu.linux@gmail.com> wrote:
>>>> ---
>>>>    arch/x86/kvm/pmu.c | 13 ++++++++-----
>>>>    1 file changed, 8 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
>>>> index 612e6c70ce2e..9914a9027c60 100644
>>>> --- a/arch/x86/kvm/pmu.c
>>>> +++ b/arch/x86/kvm/pmu.c
>>>> @@ -400,6 +400,12 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
>>>>        return is_fixed_event_allowed(filter, pmc->idx);
>>>>    }
>>>>
>>>> +static bool event_is_allowed(struct kvm_pmc *pmc)
>>>
>>> Nit, an inline event_is_allowed() here might be better.
>>
>> I purposely didn't inline this because Sean generally discourages its
>> use and has commented in several reviews to not use 'inline' and
>> instead leave it up to the compiler to decide, unless using
>> __always_inline.
> 
> Ya.

I think we all respect the maintainer's personal preferences for sure. However,
I'm not sure how to define Sean's "generally discourage", nor does my
binary bi-directional verifier-bot (losing control of these details at the code
level can be frustrating, especially for people who care about performance
gains but can't use the fresh new tool chain for some supply chain policy
reasons), and we don't have someone like Sean or other kernel worlds to
eliminate all inline in the kernel world.

> 
>> I think the sentiment is either use the strong hint or don't use it at all.
>> This seems like an example of where the compiler can decide, and a strong
>> hint isn't needed.
> 
> Not quite.  __always_inline is not a hint, it's a command.  The kernel *requires*
> functions tagged with __always_inline to be (surprise!) always inlined, even when
> building with features that cause the compiler to generate non-inlined functions
> for even the most trivial helpers, e.g. KASAN can cause a literal nop function to
> be non-inlined.  __alway_inlined is used to ensure like no-instrumentation regions
> and __init sections are preserved when invoking common helpers.

So, do you think "__always_inline event_is_allowed()" in the highly recurring
path reprogram_counter() is a better move?  I'd say yes, and am not willing to
risk paying for a function call overhead since any advancement in this
direction is encouraged.


* Re: [PATCH v3 1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events
  2023-03-08  2:45         ` Like Xu
@ 2023-03-08 19:46           ` Sean Christopherson
  0 siblings, 0 replies; 17+ messages in thread
From: Sean Christopherson @ 2023-03-08 19:46 UTC (permalink / raw)
  To: Like Xu; +Cc: Aaron Lewis, pbonzini, jmattson, kvm

On Wed, Mar 08, 2023, Like Xu wrote:
> On 8/3/2023 12:01 am, Sean Christopherson wrote:
> > On Tue, Mar 07, 2023, Aaron Lewis wrote:
> > > On Tue, Mar 7, 2023 at 7:19 AM Like Xu <like.xu.linux@gmail.com> wrote:
> > > > > ---
> > > > >    arch/x86/kvm/pmu.c | 13 ++++++++-----
> > > > >    1 file changed, 8 insertions(+), 5 deletions(-)
> > > > > 
> > > > > diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> > > > > index 612e6c70ce2e..9914a9027c60 100644
> > > > > --- a/arch/x86/kvm/pmu.c
> > > > > +++ b/arch/x86/kvm/pmu.c
> > > > > @@ -400,6 +400,12 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
> > > > >        return is_fixed_event_allowed(filter, pmc->idx);
> > > > >    }
> > > > > 
> > > > > +static bool event_is_allowed(struct kvm_pmc *pmc)
> > > > 
> > > > Nit, an inline event_is_allowed() here might be better.
> > > 
> > > I purposely didn't inline this because Sean generally discourages its
> > > use and has commented in several reviews to not use 'inline' and
> > > instead leave it up to the compiler to decide, unless using
> > > __always_inline.
> > 
> > Ya.
> 
> I think we all respect the maintainer's personal preferences for sure. However,
> I'm not sure how to define Sean's "generally discourage", nor does my
> binary bi-directional verifier-bot (losing control of these details at the code
> level can be frustrating, especially for people who care about performance
> gains but can't use the fresh new tool chain for some supply chain policy
> reasons),

I'm not buying that argument.  Modern compilers are much smarter than humans when
it comes to performance optimizations and will do the right thing 99% of the time.
There are exceptions, e.g. coercing the compiler into generating arithmetic instead
of conditional branches, but those are few and far between.

If you care about performance to the point where a CALL+RET (which is not at all
expensive on modern CPUs) and _maybe_ a few arithmetic ops are concerning, _and_
your toolchain is so awful that I can't do a good job of optimizing straightforward
code like this, then you have much bigger problems.

If someone can provide data to show that forcing a particular function to be
inlined meaningfully changes runtime performance, then I'll happily take a patch.

> and we don't have someone like Sean or other kernel worlds to eliminate all
> inline in the kernel world.

Huh?  I'm not saying "inline is bad", I'm saying modern compilers are plenty smart
enough to inline (or not) when appropriate in the overwhelming majority of cases,
and that outside of select edge cases and truly performance critical paths, the
days when humans can handcode better code than the compiler are long gone.  For
functions that should result in literally one or two instructions, I'm fine with
tagging them inline even though I think it's unnecessary.  But this proposed
helper is not that case.

> > > I think the sentiment is either use the strong hint or don't use it at all.
> > > This seems like an example of where the compiler can decide, and a strong
> > > hint isn't needed.
> > 
> > Not quite.  __always_inline is not a hint, it's a command.  The kernel *requires*
> > functions tagged with __always_inline to be (surprise!) always inlined, even when
> > building with features that cause the compiler to generate non-inlined functions
> > for even the most trivial helpers, e.g. KASAN can cause a literal nop function to
> > be non-inlined.  __alway_inlined is used to ensure like no-instrumentation regions
> > and __init sections are preserved when invoking common helpers.
> 
> So, do you think "__always_inline event_is_allowed()" in the highly recurring
> path reprogram_counter() is a better move ? I'd say yes, and am not willing
> to risk paying for a function call overhead since any advancement in this
> direction is encouraged.

Absolutely not.  __always_inline is for situations where the code _must_ be inlined,
or as above, where someone can prove with data that (a) modern compilers aren't
smart enough to do the right thing and (b) that inlining provides meaningful
performance benefits.


* Re: [PATCH v3 0/5] Fix "Instructions Retired" from incorrectly counting
  2023-03-07 14:13 [PATCH v3 0/5] Fix "Instructions Retired" from incorrectly counting Aaron Lewis
                   ` (4 preceding siblings ...)
  2023-03-07 14:14 ` [PATCH v3 5/5] KVM: selftests: Test the PMU event "Instructions retired" Aaron Lewis
@ 2023-04-07  9:06 ` Like Xu
  2023-04-07 21:30 ` Sean Christopherson
  6 siblings, 0 replies; 17+ messages in thread
From: Like Xu @ 2023-04-07  9:06 UTC (permalink / raw)
  To: Sean Christopherson; +Cc: pbonzini, jmattson, Aaron Lewis, kvm list

Sean, would you pick this series up?

On 7/3/2023 10:13 pm, Aaron Lewis wrote:
> This series fixes an issue with the PMU event "Instructions Retired"
> (0xc0), then tests the fix to verify it works.  Running the updated
> tests without the fix will result in a test failure.
> 
> v2 -> v3:
>   - s/pmc_is_allowed/event_is_allowed/ [Like]
> 
> v1 -> v2:
>   - Add pmc_is_allowed() as common helper [Sean]
>   - Split test into multiple commits [Sean]
>   - Add macros for counting and not counting [Sean]
>   - Removed un-needed pr_info [Sean]
> 
> 
> Aaron Lewis (5):
>    KVM: x86/pmu: Prevent the PMU from counting disallowed events
>    KVM: selftests: Add a common helper to the guest
>    KVM: selftests: Add helpers for PMC asserts
>    KVM: selftests: Fixup test asserts
>    KVM: selftests: Test the PMU event "Instructions retired"
> 
>   arch/x86/kvm/pmu.c                            |  13 +-
>   .../kvm/x86_64/pmu_event_filter_test.c        | 146 ++++++++++++------
>   2 files changed, 108 insertions(+), 51 deletions(-)
> 


* Re: [PATCH v3 2/5] KVM: selftests: Add a common helper to the guest
  2023-03-07 14:13 ` [PATCH v3 2/5] KVM: selftests: Add a common helper to the guest Aaron Lewis
@ 2023-04-07 18:43   ` Sean Christopherson
  0 siblings, 0 replies; 17+ messages in thread
From: Sean Christopherson @ 2023-04-07 18:43 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, jmattson, like.xu.linux

LOL, the shortlog is hilariously generic.  I realize it's annoying to have to
clarify what test is affected, but it's not exactly hard, e.g.

  KVM: selftests: Add a common helper for the PMU event filter guest code

On Tue, Mar 07, 2023, Aaron Lewis wrote:
> Split out the common parts of the Intel and AMD guest code into a
> helper function.  This is in preparation for adding
> additional counters to the test.

Similar whining here

  Split out the common parts of the Intel and AMD guest code in the PMU
  event filter test into a helper function.  This is in preparation for
  adding additional counters to the test.

> 
> No functional changes intended.
> 
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> ---
>  .../kvm/x86_64/pmu_event_filter_test.c        | 31 ++++++++++++-------
>  1 file changed, 20 insertions(+), 11 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> index bad7ef8c5b92..f33079fc552b 100644
> --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> @@ -100,6 +100,17 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
>  		GUEST_SYNC(0);
>  }
>  
> +static uint64_t test_guest(uint32_t msr_base)

test_guest() is again too generic, and arguably inaccurate, e.g. it's not really
testing anything, just running code and capturing event counts.  How about
run_and_measure_loop()?
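
E.g., just renaming the helper from patch 2 (a sketch; the body is unchanged):

/* Read the PMC, execute NUM_BRANCHES loop iterations, and return the delta. */
static uint64_t run_and_measure_loop(uint32_t msr_base)
{
	uint64_t br0, br1;

	br0 = rdmsr(msr_base + 0);
	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
	br1 = rdmsr(msr_base + 0);

	return br1 - br0;
}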


* Re: [PATCH v3 3/5] KVM: selftests: Add helpers for PMC asserts
  2023-03-07 14:13 ` [PATCH v3 3/5] KVM: selftests: Add helpers for PMC asserts Aaron Lewis
@ 2023-04-07 18:47   ` Sean Christopherson
  0 siblings, 0 replies; 17+ messages in thread
From: Sean Christopherson @ 2023-04-07 18:47 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, jmattson, like.xu.linux

Same issue on this one, it's not obvious that the helper is being added to a single
test.

On Tue, Mar 07, 2023, Aaron Lewis wrote:
> Add the helpers, ASSERT_PMC_COUNTING and ASSERT_PMC_NOT_COUNTING, to
> split out the asserts into one place.  This will make it easier to add
> additional asserts related to counting later on.
> 
> No functional changes intended.
> 
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> ---
>  .../kvm/x86_64/pmu_event_filter_test.c        | 70 ++++++++++---------
>  1 file changed, 36 insertions(+), 34 deletions(-)
> 
> diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> index f33079fc552b..8277b8f49dca 100644
> --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> @@ -250,14 +250,27 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
>  	return f;
>  }
>  
> +#define ASSERT_PMC_COUNTING(count)							\

Looking at the future patches, I think it makes sense to append INSTRUCTIONS to
the name to clarify what it's counting.  That's arguably slightly misleading for
counting branch instructions, but they are still instructions...


* Re: [PATCH v3 4/5] KVM: selftests: Fixup test asserts
  2023-03-07 14:13 ` [PATCH v3 4/5] KVM: selftests: Fixup test asserts Aaron Lewis
@ 2023-04-07 18:53   ` Sean Christopherson
  0 siblings, 0 replies; 17+ messages in thread
From: Sean Christopherson @ 2023-04-07 18:53 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, jmattson, like.xu.linux

Same shortlog problem.

On Tue, Mar 07, 2023, Aaron Lewis wrote:
> Fix up both ASSERT_PMC_COUNTING and ASSERT_PMC_NOT_COUNTING in the
> pmu_event_filter_test by adding additional context in the assert
> message.

I 100% agree that the asserts are flawed, but in the context of changelogs, "fix"
almost always implies something is broken.  In this case, what is being asserted
is completely ok, it's only the messages that are bad.

Easiest thing is to just describe the change and explain why it's desirable, and
dodge the question of whether or not this should be considered a fix.

    Provide the actual vs. expected count in the PMU event filter test's
    asserts instead of relying on pr_info() to provide the context, e.g. so
    that all information needed to triage a failure is readily available even
    if the environment in which the test is run captures only the assert
    itself.


* Re: [PATCH v3 5/5] KVM: selftests: Test the PMU event "Instructions retired"
  2023-03-07 14:14 ` [PATCH v3 5/5] KVM: selftests: Test the PMU event "Instructions retired" Aaron Lewis
@ 2023-04-07 20:17   ` Sean Christopherson
  0 siblings, 0 replies; 17+ messages in thread
From: Sean Christopherson @ 2023-04-07 20:17 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, jmattson, like.xu.linux

On Tue, Mar 07, 2023, Aaron Lewis wrote:
> @@ -71,6 +86,16 @@ static const uint64_t event_list[] = {
>  	AMD_ZEN_BR_RETIRED,
>  };
>  
> +struct perf_results {
> +	union {
> +		uint64_t raw;
> +		struct {
> +			uint64_t br_count:32;
> +			uint64_t ir_count:32;
> +		};
> +	};
> +};
> +
>  /*
>   * If we encounter a #GP during the guest PMU sanity check, then the guest
>   * PMU is not functional. Inform the hypervisor via GUEST_SYNC(0).
> @@ -102,13 +127,20 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
>  
>  static uint64_t test_guest(uint32_t msr_base)
>  {
> +	struct perf_results r;
>  	uint64_t br0, br1;
> +	uint64_t ir0, ir1;
>  
>  	br0 = rdmsr(msr_base + 0);
> +	ir0 = rdmsr(msr_base + 1);
>  	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
>  	br1 = rdmsr(msr_base + 0);
> +	ir1 = rdmsr(msr_base + 1);
>  
> -	return br1 - br0;
> +	r.br_count = br1 - br0;
> +	r.ir_count = ir1 - ir0;

The struct+union is problematic on 2+ fronts.  I don't like that it truncates
a 64-bit MSR value into a 32-bit field.  And this test now ends up with two
structs (perf_results and perf_counter) that serve the same purpose, but count
different events, and without any indiciation in the name _what_ event(s) the
struct tracks.

The existing "struct perf_count" has the same truncation problem.  It _shouldn't_
cause problems, but unless I'm missing something, it means that, very theoretically,
the test could have false passes, e.g. if KVM manages to corrupt the upper 32 bits.

There's really no reason to limit ourselves to 64 bits of data, as the selftests
framework provides helpers to copy arbitrary structs to/from the guest.  If we
throw all of the counts into a single struct, then we solve the naming issue and
can provide a helper to do the copies to/from the guest.
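
Roughly along these lines (a sketch with hypothetical names, assuming the
sync_global_from_guest() helper from kvm_util.h; the actual rework may look
different):

/*
 * Hypothetical: one struct holding every count the guest reports, each a
 * full 64 bits.  The fields of the existing struct perf_counter would fold
 * in here as well.
 */
static struct pmc_counts {
	uint64_t branches_retired;
	uint64_t instructions_retired;
} pmc_counts;

/* Host side: pull the guest's copy of the struct after running the vCPU. */
static void host_read_pmc_counts(struct kvm_vm *vm, struct pmc_counts *out)
{
	sync_global_from_guest(vm, pmc_counts);
	*out = pmc_counts;
}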

I have all of this typed up and it appears to work.  I'll post a new version of
just the selftests patches.


* Re: [PATCH v3 0/5] Fix "Instructions Retired" from incorrectly counting
  2023-03-07 14:13 [PATCH v3 0/5] Fix "Instructions Retired" from incorrectly counting Aaron Lewis
                   ` (5 preceding siblings ...)
  2023-04-07  9:06 ` [PATCH v3 0/5] Fix "Instructions Retired" from incorrectly counting Like Xu
@ 2023-04-07 21:30 ` Sean Christopherson
  6 siblings, 0 replies; 17+ messages in thread
From: Sean Christopherson @ 2023-04-07 21:30 UTC (permalink / raw)
  To: Sean Christopherson, kvm, Aaron Lewis; +Cc: pbonzini, jmattson, like.xu.linux

On Tue, 07 Mar 2023 14:13:55 +0000, Aaron Lewis wrote:
> This series fixes an issue with the PMU event "Instructions Retired"
> (0xc0), then tests the fix to verify it works.  Running the updated
> tests without the fix will result in a test failure.
> 
> v2 -> v3:
>  - s/pmc_is_allowed/event_is_allowed/ [Like]
> 
> [...]

Applied patch 1 to kvm-x86 pmu.  I prepended "pmc" to the new function to give
it a bit more namespacing, i.e. combined your pmc_is_allowed() with Like's
event_is_allowed().  As mentioned in my response to patch 5, I'll post a
new version of the selftests changes.
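
I.e., taking patch 1's body with the combined name, the helper presumably
reads (sketch, not copied from the tree):

static bool pmc_event_is_allowed(struct kvm_pmc *pmc)
{
	return pmc_is_enabled(pmc) && pmc_speculative_in_use(pmc) &&
	       check_pmu_event_filter(pmc);
}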

Thanks!

[1/5] KVM: x86/pmu: Prevent the PMU from counting disallowed events
      https://github.com/kvm-x86/linux/commit/dfdeda67ea2d

--
https://github.com/kvm-x86/linux/tree/next
https://github.com/kvm-x86/linux/tree/fixes

