* [PATCH v2 0/6] KVM: x86/pmu: Use binary search to check filtered events
@ 2022-01-14  1:21 Jim Mattson
  2022-01-14  1:21 ` [PATCH v2 1/6] " Jim Mattson
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-14  1:21 UTC (permalink / raw)
  To: kvm, pbonzini, like.xu.linux, daviddunn, cloudliang; +Cc: Jim Mattson

This started out as a simple change to sort the (up to 300 element)
PMU filtered event list and to use binary search rather than linear
search to see if an event is in the list.

I thought it would be nice to add a directed test for the PMU event
filter, and that's when things got complicated. The Intel side was
fine, but the AMD side was a bit ugly, until I did a few
refactorings. Imagine my dismay when I discovered that the PMU event
filter works fine on the AMD side, but that fundamental PMU
virtualization is broken. And I don't just mean erratum 1292, though
that throws even more brokenness into the mix.

v1 -> v2
* Drop the check for "AMDisbetter!" in is_amd_cpu() [David Dunn]
* Drop the call to cpuid(0, 0) that fed the original check for
  CPU vendor string "AuthenticAMD" in vm_compute_max_gfn().
* Simplify the inline asm in the selftest by using the compound literal
  for both input & output.
  
Jim Mattson (6):
  KVM: x86/pmu: Use binary search to check filtered events
  selftests: kvm/x86: Parameterize the CPUID vendor string check
  selftests: kvm/x86: Introduce is_amd_cpu()
  selftests: kvm/x86: Export x86_family() for use outside of processor.c
  selftests: kvm/x86: Introduce x86_model()
  selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER

 arch/x86/kvm/pmu.c                            |  30 +-
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/x86_64/processor.h  |  18 ++
 .../selftests/kvm/lib/x86_64/processor.c      |  40 +--
 .../kvm/x86_64/pmu_event_filter_test.c        | 306 ++++++++++++++++++
 6 files changed, 361 insertions(+), 35 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c

-- 
2.34.1.703.g22d0c6ccf7-goog


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v2 1/6] KVM: x86/pmu: Use binary search to check filtered events
  2022-01-14  1:21 [PATCH v2 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
@ 2022-01-14  1:21 ` Jim Mattson
  2022-01-14  1:21 ` [PATCH v2 2/6] selftests: kvm/x86: Parameterize the CPUID vendor string check Jim Mattson
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-14  1:21 UTC (permalink / raw)
  To: kvm, pbonzini, like.xu.linux, daviddunn, cloudliang; +Cc: Jim Mattson

The PMU event filter may contain up to 300 events. Replace the linear
search in reprogram_gp_counter() with a binary search.

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/pmu.c | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index e632693a2266..2c98f3ee8df4 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -13,6 +13,8 @@
 #include <linux/types.h>
 #include <linux/kvm_host.h>
 #include <linux/perf_event.h>
+#include <linux/bsearch.h>
+#include <linux/sort.h>
 #include <asm/perf_event.h>
 #include "x86.h"
 #include "cpuid.h"
@@ -172,12 +174,16 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 	return true;
 }
 
+static int cmp_u64(const void *a, const void *b)
+{
+	return *(__u64 *)a - *(__u64 *)b;
+}
+
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 {
 	unsigned config, type = PERF_TYPE_RAW;
 	struct kvm *kvm = pmc->vcpu->kvm;
 	struct kvm_pmu_event_filter *filter;
-	int i;
 	bool allow_event = true;
 
 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
@@ -192,16 +198,13 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 
 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (filter) {
-		for (i = 0; i < filter->nevents; i++)
-			if (filter->events[i] ==
-			    (eventsel & AMD64_RAW_EVENT_MASK_NB))
-				break;
-		if (filter->action == KVM_PMU_EVENT_ALLOW &&
-		    i == filter->nevents)
-			allow_event = false;
-		if (filter->action == KVM_PMU_EVENT_DENY &&
-		    i < filter->nevents)
-			allow_event = false;
+		__u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
+
+		if (bsearch(&key, filter->events, filter->nevents,
+			    sizeof(__u64), cmp_u64))
+			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
+		else
+			allow_event = filter->action == KVM_PMU_EVENT_DENY;
 	}
 	if (!allow_event)
 		return;
@@ -576,6 +579,11 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	/* Ensure nevents can't be changed between the user copies. */
 	*filter = tmp;
 
+	/*
+	 * Sort the in-kernel list so that we can search it with bsearch.
+	 */
+	sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL);
+
 	mutex_lock(&kvm->lock);
 	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
 				     mutex_is_locked(&kvm->lock));
-- 
2.34.1.703.g22d0c6ccf7-goog



* [PATCH v2 2/6] selftests: kvm/x86: Parameterize the CPUID vendor string check
  2022-01-14  1:21 [PATCH v2 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
  2022-01-14  1:21 ` [PATCH v2 1/6] " Jim Mattson
@ 2022-01-14  1:21 ` Jim Mattson
  2022-01-14  1:21 ` [PATCH v2 3/6] selftests: kvm/x86: Introduce is_amd_cpu() Jim Mattson
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-14  1:21 UTC (permalink / raw)
  To: kvm, pbonzini, like.xu.linux, daviddunn, cloudliang; +Cc: Jim Mattson

Refactor is_intel_cpu() to make it easier to reuse the bulk of the
code for other vendors in the future.

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 tools/testing/selftests/kvm/lib/x86_64/processor.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index eef7b34756d5..355a3f6f1970 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1198,10 +1198,10 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *s
 	}
 }
 
-bool is_intel_cpu(void)
+static bool cpu_vendor_string_is(const char *vendor)
 {
+	const uint32_t *chunk = (const uint32_t *)vendor;
 	int eax, ebx, ecx, edx;
-	const uint32_t *chunk;
 	const int leaf = 0;
 
 	__asm__ __volatile__(
@@ -1210,10 +1210,14 @@ bool is_intel_cpu(void)
 		  "=c"(ecx), "=d"(edx)
 		: /* input */ "0"(leaf), "2"(0));
 
-	chunk = (const uint32_t *)("GenuineIntel");
 	return (ebx == chunk[0] && edx == chunk[1] && ecx == chunk[2]);
 }
 
+bool is_intel_cpu(void)
+{
+	return cpu_vendor_string_is("GenuineIntel");
+}
+
 uint32_t kvm_get_cpuid_max_basic(void)
 {
 	return kvm_get_supported_cpuid_entry(0)->eax;
-- 
2.34.1.703.g22d0c6ccf7-goog



* [PATCH v2 3/6] selftests: kvm/x86: Introduce is_amd_cpu()
  2022-01-14  1:21 [PATCH v2 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
  2022-01-14  1:21 ` [PATCH v2 1/6] " Jim Mattson
  2022-01-14  1:21 ` [PATCH v2 2/6] selftests: kvm/x86: Parameterize the CPUID vendor string check Jim Mattson
@ 2022-01-14  1:21 ` Jim Mattson
  2022-01-14  1:21 ` [PATCH v2 4/6] selftests: kvm/x86: Export x86_family() for use outside of processor.c Jim Mattson
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-14  1:21 UTC (permalink / raw)
  To: kvm, pbonzini, like.xu.linux, daviddunn, cloudliang; +Cc: Jim Mattson

Replace the one ad hoc "AuthenticAMD" CPUID vendor string comparison
with a new function, is_amd_cpu().

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 .../selftests/kvm/include/x86_64/processor.h   |  1 +
 .../selftests/kvm/lib/x86_64/processor.c       | 18 +++++++++---------
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 05e65ca1c30c..69eaf9a69bb7 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -347,6 +347,7 @@ static inline unsigned long get_xmm(int n)
 }
 
 bool is_intel_cpu(void);
+bool is_amd_cpu(void);
 
 struct kvm_x86_state;
 struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 355a3f6f1970..b969e38bc02e 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1218,6 +1218,14 @@ bool is_intel_cpu(void)
 	return cpu_vendor_string_is("GenuineIntel");
 }
 
+/*
+ * Exclude early K5 samples with a vendor string of "AMDisbetter!"
+ */
+bool is_amd_cpu(void)
+{
+	return cpu_vendor_string_is("AuthenticAMD");
+}
+
 uint32_t kvm_get_cpuid_max_basic(void)
 {
 	return kvm_get_supported_cpuid_entry(0)->eax;
@@ -1436,10 +1444,6 @@ struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vm *vm, uint32_t vcpui
 	return cpuid;
 }
 
-#define X86EMUL_CPUID_VENDOR_AuthenticAMD_ebx 0x68747541
-#define X86EMUL_CPUID_VENDOR_AuthenticAMD_ecx 0x444d4163
-#define X86EMUL_CPUID_VENDOR_AuthenticAMD_edx 0x69746e65
-
 static inline unsigned x86_family(unsigned int eax)
 {
         unsigned int x86;
@@ -1461,11 +1465,7 @@ unsigned long vm_compute_max_gfn(struct kvm_vm *vm)
 	max_gfn = (1ULL << (vm->pa_bits - vm->page_shift)) - 1;
 
 	/* Avoid reserved HyperTransport region on AMD processors.  */
-	eax = ecx = 0;
-	cpuid(&eax, &ebx, &ecx, &edx);
-	if (ebx != X86EMUL_CPUID_VENDOR_AuthenticAMD_ebx ||
-	    ecx != X86EMUL_CPUID_VENDOR_AuthenticAMD_ecx ||
-	    edx != X86EMUL_CPUID_VENDOR_AuthenticAMD_edx)
+	if (!is_amd_cpu())
 		return max_gfn;
 
 	/* On parts with <40 physical address bits, the area is fully hidden */
-- 
2.34.1.703.g22d0c6ccf7-goog



* [PATCH v2 4/6] selftests: kvm/x86: Export x86_family() for use outside of processor.c
  2022-01-14  1:21 [PATCH v2 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
                   ` (2 preceding siblings ...)
  2022-01-14  1:21 ` [PATCH v2 3/6] selftests: kvm/x86: Introduce is_amd_cpu() Jim Mattson
@ 2022-01-14  1:21 ` Jim Mattson
  2022-01-14  1:21 ` [PATCH v2 5/6] selftests: kvm/x86: Introduce x86_model() Jim Mattson
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-14  1:21 UTC (permalink / raw)
  To: kvm, pbonzini, like.xu.linux, daviddunn, cloudliang; +Cc: Jim Mattson

Move this static inline function to processor.h, so that it can be
used in individual tests, as needed.

Opportunistically replace the bare 'unsigned' with 'unsigned int.'

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 .../testing/selftests/kvm/include/x86_64/processor.h | 12 ++++++++++++
 tools/testing/selftests/kvm/lib/x86_64/processor.c   | 12 ------------
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 69eaf9a69bb7..c5306e29edd4 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -349,6 +349,18 @@ static inline unsigned long get_xmm(int n)
 bool is_intel_cpu(void);
 bool is_amd_cpu(void);
 
+static inline unsigned int x86_family(unsigned int eax)
+{
+        unsigned int x86;
+
+        x86 = (eax >> 8) & 0xf;
+
+        if (x86 == 0xf)
+                x86 += (eax >> 20) & 0xff;
+
+        return x86;
+}
+
 struct kvm_x86_state;
 struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid);
 void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid,
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index b969e38bc02e..286ae9605501 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1444,18 +1444,6 @@ struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vm *vm, uint32_t vcpui
 	return cpuid;
 }
 
-static inline unsigned x86_family(unsigned int eax)
-{
-        unsigned int x86;
-
-        x86 = (eax >> 8) & 0xf;
-
-        if (x86 == 0xf)
-                x86 += (eax >> 20) & 0xff;
-
-        return x86;
-}
-
 unsigned long vm_compute_max_gfn(struct kvm_vm *vm)
 {
 	const unsigned long num_ht_pages = 12 << (30 - vm->page_shift); /* 12 GiB */
-- 
2.34.1.703.g22d0c6ccf7-goog



* [PATCH v2 5/6] selftests: kvm/x86: Introduce x86_model()
  2022-01-14  1:21 [PATCH v2 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
                   ` (3 preceding siblings ...)
  2022-01-14  1:21 ` [PATCH v2 4/6] selftests: kvm/x86: Export x86_family() for use outside of processor.c Jim Mattson
@ 2022-01-14  1:21 ` Jim Mattson
  2022-01-14  1:21 ` [PATCH v2 6/6] selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER Jim Mattson
  2022-01-18  9:15 ` [PATCH v2 0/6] KVM: x86/pmu: Use binary search to check filtered events Paolo Bonzini
  6 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-14  1:21 UTC (permalink / raw)
  To: kvm, pbonzini, like.xu.linux, daviddunn, cloudliang; +Cc: Jim Mattson

Extract the x86 model number from CPUID.01H:EAX.

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 tools/testing/selftests/kvm/include/x86_64/processor.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index c5306e29edd4..b723163ca9ba 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -361,6 +361,11 @@ static inline unsigned int x86_family(unsigned int eax)
         return x86;
 }
 
+static inline unsigned int x86_model(unsigned int eax)
+{
+	return ((eax >> 12) & 0xf0) | ((eax >> 4) & 0x0f);
+}
+
 struct kvm_x86_state;
 struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid);
 void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid,
-- 
2.34.1.703.g22d0c6ccf7-goog



* [PATCH v2 6/6] selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER
  2022-01-14  1:21 [PATCH v2 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
                   ` (4 preceding siblings ...)
  2022-01-14  1:21 ` [PATCH v2 5/6] selftests: kvm/x86: Introduce x86_model() Jim Mattson
@ 2022-01-14  1:21 ` Jim Mattson
  2022-01-14 19:15   ` David Dunn
  2022-01-18  9:15 ` [PATCH v2 0/6] KVM: x86/pmu: Use binary search to check filtered events Paolo Bonzini
  6 siblings, 1 reply; 10+ messages in thread
From: Jim Mattson @ 2022-01-14  1:21 UTC (permalink / raw)
  To: kvm, pbonzini, like.xu.linux, daviddunn, cloudliang; +Cc: Jim Mattson

Verify that the PMU event filter works as expected.

Note that the virtual PMU doesn't work as expected on AMD Zen CPUs (an
intercepted rdmsr appears to be counted as a retired branch
instruction), but the PMU event filter does work.

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/x86_64/pmu_event_filter_test.c        | 306 ++++++++++++++++++
 3 files changed, 308 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 8c129961accf..7834e03ab159 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -22,6 +22,7 @@
 /x86_64/mmio_warning_test
 /x86_64/mmu_role_test
 /x86_64/platform_info_test
+/x86_64/pmu_event_filter_test
 /x86_64/set_boot_cpu_id
 /x86_64/set_sregs_test
 /x86_64/sev_migrate_tests
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c407ebbec2c1..899413a6588f 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -56,6 +56,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
 TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
 TEST_GEN_PROGS_x86_64 += x86_64/mmu_role_test
 TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
+TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
 TEST_GEN_PROGS_x86_64 += x86_64/smm_test
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
new file mode 100644
index 000000000000..8ac99d4cbc73
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -0,0 +1,306 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Test for x86 KVM_SET_PMU_EVENT_FILTER.
+ *
+ * Copyright (C) 2022, Google LLC.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ *
+ * Verifies the expected behavior of allow lists and deny lists for
+ * virtual PMU events.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+
+/*
+ * In lieu of copying perf_event.h into tools...
+ */
+#define ARCH_PERFMON_EVENTSEL_ENABLE	BIT(22)
+#define ARCH_PERFMON_EVENTSEL_OS	BIT(17)
+
+#define VCPU_ID 0
+#define NUM_BRANCHES 42
+
+/*
+ * This is how the event selector and unit mask are stored in an AMD
+ * core performance event-select register. Intel's format is similar,
+ * but the event selector is only 8 bits.
+ */
+#define EVENT(select, umask) ((select & 0xf00UL) << 24 | (select & 0xff) | \
+			      (umask & 0xff) << 8)
+
+/*
+ * "Branch instructions retired", from the Intel SDM, volume 3,
+ * "Pre-defined Architectural Performance Events."
+ */
+
+#define INTEL_BR_RETIRED EVENT(0xc4, 0)
+
+/*
+ * "Retired branch instructions", from Processor Programming Reference
+ * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors,
+ * Preliminary Processor Programming Reference (PPR) for AMD Family
+ * 17h Model 31h, Revision B0 Processors, and Preliminary Processor
+ * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision
+ * B1 Processors Volume 1 of 2
+ */
+
+#define AMD_ZEN_BR_RETIRED EVENT(0xc2, 0)
+
+/*
+ * This event list comprises Intel's eight architectural events plus
+ * AMD's "branch instructions retired" for Zen[123].
+ */
+static const uint64_t event_list[] = {
+	EVENT(0x3c, 0),
+	EVENT(0xc0, 0),
+	EVENT(0x3c, 1),
+	EVENT(0x2e, 0x4f),
+	EVENT(0x2e, 0x41),
+	EVENT(0xc4, 0),
+	EVENT(0xc5, 0),
+	EVENT(0xa4, 1),
+	AMD_ZEN_BR_RETIRED,
+};
+
+static void intel_guest_code(void)
+{
+	uint64_t br0, br1;
+
+	for (;;) {
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+		wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
+		br0 = rdmsr(MSR_IA32_PMC0);
+		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+		br1 = rdmsr(MSR_IA32_PMC0);
+		GUEST_SYNC(br1 - br0);
+	}
+}
+
+/*
+ * To avoid needing a check for CPUID.80000001:ECX.PerfCtrExtCore[bit
+ * 23], this code uses the always-available, legacy K7 PMU MSRs, which
+ * alias to the first four of the six extended core PMU MSRs.
+ */
+static void amd_guest_code(void)
+{
+	uint64_t br0, br1;
+
+	for (;;) {
+		wrmsr(MSR_K7_EVNTSEL0, 0);
+		wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);
+		br0 = rdmsr(MSR_K7_PERFCTR0);
+		__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+		br1 = rdmsr(MSR_K7_PERFCTR0);
+		GUEST_SYNC(br1 - br0);
+	}
+}
+
+static uint64_t test_branches_retired(struct kvm_vm *vm)
+{
+	struct kvm_run *run = vcpu_state(vm, VCPU_ID);
+	struct ucall uc;
+
+	vcpu_run(vm, VCPU_ID);
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+		    "Exit_reason other than KVM_EXIT_IO: %u (%s)\n",
+		    run->exit_reason,
+		    exit_reason_str(run->exit_reason));
+	get_ucall(vm, VCPU_ID, &uc);
+	TEST_ASSERT(uc.cmd == UCALL_SYNC,
+		    "Received ucall other than UCALL_SYNC: %lu", uc.cmd);
+	return uc.args[1];
+}
+
+static struct kvm_pmu_event_filter *make_pmu_event_filter(uint32_t nevents)
+{
+	struct kvm_pmu_event_filter *f;
+	int size = sizeof(*f) + nevents * sizeof(f->events[0]);
+
+	f = malloc(size);
+	TEST_ASSERT(f, "Out of memory");
+	memset(f, 0, size);
+	f->nevents = nevents;
+	return f;
+}
+
+static struct kvm_pmu_event_filter *event_filter(uint32_t action)
+{
+	struct kvm_pmu_event_filter *f;
+	int i;
+
+	f = make_pmu_event_filter(ARRAY_SIZE(event_list));
+	f->action = action;
+	for (i = 0; i < ARRAY_SIZE(event_list); i++)
+		f->events[i] = event_list[i];
+
+	return f;
+}
+
+static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
+						 uint64_t event)
+{
+	bool found = false;
+	int i;
+
+	for (i = 0; i < f->nevents; i++) {
+		if (found)
+			f->events[i - 1] = f->events[i];
+		else
+			found = f->events[i] == event;
+	}
+	if (found)
+		f->nevents--;
+	return f;
+}
+
+static void test_no_filter(struct kvm_vm *vm)
+{
+	uint64_t count = test_branches_retired(vm);
+
+	if (count != NUM_BRANCHES)
+		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
+			__func__, count, NUM_BRANCHES);
+	TEST_ASSERT(count, "Allowed PMU event is not counting");
+}
+
+static uint64_t test_with_filter(struct kvm_vm *vm,
+				 struct kvm_pmu_event_filter *f)
+{
+	vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);
+	return test_branches_retired(vm);
+}
+
+static void test_member_deny_list(struct kvm_vm *vm)
+{
+	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
+	uint64_t count = test_with_filter(vm, f);
+
+	free(f);
+	if (count)
+		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
+			__func__, count);
+	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+}
+
+static void test_member_allow_list(struct kvm_vm *vm)
+{
+	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
+	uint64_t count = test_with_filter(vm, f);
+
+	free(f);
+	if (count != NUM_BRANCHES)
+		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
+			__func__, count, NUM_BRANCHES);
+	TEST_ASSERT(count, "Allowed PMU event is not counting");
+}
+
+static void test_not_member_deny_list(struct kvm_vm *vm)
+{
+	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
+	uint64_t count;
+
+	remove_event(f, INTEL_BR_RETIRED);
+	remove_event(f, AMD_ZEN_BR_RETIRED);
+	count = test_with_filter(vm, f);
+	free(f);
+	if (count != NUM_BRANCHES)
+		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
+			__func__, count, NUM_BRANCHES);
+	TEST_ASSERT(count, "Allowed PMU event is not counting");
+}
+
+static void test_not_member_allow_list(struct kvm_vm *vm)
+{
+	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
+	uint64_t count;
+
+	remove_event(f, INTEL_BR_RETIRED);
+	remove_event(f, AMD_ZEN_BR_RETIRED);
+	count = test_with_filter(vm, f);
+	free(f);
+	if (count)
+		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
+			__func__, count);
+	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+}
+
+/*
+ * Note that CPUID leaf 0xa is Intel-specific. This leaf should be
+ * clear on AMD hardware.
+ */
+static bool vcpu_supports_intel_br_retired(void)
+{
+	struct kvm_cpuid_entry2 *entry;
+	struct kvm_cpuid2 *cpuid;
+
+	cpuid = kvm_get_supported_cpuid();
+	entry = kvm_get_supported_cpuid_index(0xa, 0);
+	return entry &&
+		(entry->eax & 0xff) &&
+		(entry->eax >> 24) > 5 &&
+		!(entry->ebx & BIT(5));
+}
+
+/*
+ * Determining AMD support for a PMU event requires consulting the AMD
+ * PPR for the CPU or reference material derived therefrom.
+ */
+static bool vcpu_supports_amd_zen_br_retired(void)
+{
+	struct kvm_cpuid_entry2 *entry;
+	struct kvm_cpuid2 *cpuid;
+
+	cpuid = kvm_get_supported_cpuid();
+	entry = kvm_get_supported_cpuid_index(1, 0);
+	return entry &&
+		((x86_family(entry->eax) == 0x17 &&
+		  (x86_model(entry->eax) == 1 ||
+		   x86_model(entry->eax) == 0x31)) ||
+		 (x86_family(entry->eax) == 0x19 &&
+		  x86_model(entry->eax) == 1));
+}
+
+int main(int argc, char *argv[])
+{
+	void (*guest_code)(void) = NULL;
+	struct kvm_vm *vm;
+	int r;
+
+	/* Tell stdout not to buffer its content */
+	setbuf(stdout, NULL);
+
+	r = kvm_check_cap(KVM_CAP_PMU_EVENT_FILTER);
+	if (!r) {
+		print_skip("KVM_CAP_PMU_EVENT_FILTER not supported");
+		exit(KSFT_SKIP);
+	}
+
+	if (vcpu_supports_intel_br_retired())
+		guest_code = intel_guest_code;
+	else if (vcpu_supports_amd_zen_br_retired())
+		guest_code = amd_guest_code;
+
+	if (!guest_code) {
+		print_skip("Branch instructions retired not supported");
+		exit(KSFT_SKIP);
+	}
+
+	vm = vm_create_default(VCPU_ID, 0, guest_code);
+
+	test_no_filter(vm);
+	test_member_deny_list(vm);
+	test_member_allow_list(vm);
+	test_not_member_deny_list(vm);
+	test_not_member_allow_list(vm);
+
+	kvm_vm_free(vm);
+
+	return 0;
+}
-- 
2.34.1.703.g22d0c6ccf7-goog



* Re: [PATCH v2 6/6] selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER
  2022-01-14  1:21 ` [PATCH v2 6/6] selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER Jim Mattson
@ 2022-01-14 19:15   ` David Dunn
  2022-01-15  5:01     ` Jim Mattson
  0 siblings, 1 reply; 10+ messages in thread
From: David Dunn @ 2022-01-14 19:15 UTC (permalink / raw)
  To: Jim Mattson; +Cc: kvm, pbonzini, like.xu.linux, cloudliang

Jim,

The patch set looks good to me.  A couple comments and questions
related just to this test are inline.

On Thu, Jan 13, 2022 at 5:21 PM Jim Mattson <jmattson@google.com> wrote:

> + * Determining AMD support for a PMU event requires consulting the AMD
> + * PPR for the CPU or reference material derived therefrom.
> + */
> +static bool vcpu_supports_amd_zen_br_retired(void)
> +{
> +       struct kvm_cpuid_entry2 *entry;
> +       struct kvm_cpuid2 *cpuid;
> +
> +       cpuid = kvm_get_supported_cpuid();
> +       entry = kvm_get_supported_cpuid_index(1, 0);
> +       return entry &&
> +               ((x86_family(entry->eax) == 0x17 &&
> +                 (x86_model(entry->eax) == 1 ||
> +                  x86_model(entry->eax) == 0x31)) ||
> +                (x86_family(entry->eax) == 0x19 &&
> +                 x86_model(entry->eax) == 1));
> +}

The above function does not verify that the AMD host you are running
on supports PMU.  In particular, you might be running the KVM test
suite within a guest.  Is there a way to do that check here without
direct access to MSRs?  If not, maybe we need a KVM capability to
query this information.

> +       r = kvm_check_cap(KVM_CAP_PMU_EVENT_FILTER);
> +       if (!r) {
> +               print_skip("KVM_CAP_PMU_EVENT_FILTER not supported");
> +               exit(KSFT_SKIP);
> +       }

This capability is still supported even when PMU has been disabled by
Like Xu's new module parameter.  Should all the PMU related
capabilities be gated behind that module parameter?

Dave Dunn


* Re: [PATCH v2 6/6] selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER
  2022-01-14 19:15   ` David Dunn
@ 2022-01-15  5:01     ` Jim Mattson
  0 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-15  5:01 UTC (permalink / raw)
  To: David Dunn; +Cc: kvm, pbonzini, like.xu.linux, cloudliang

On Fri, Jan 14, 2022 at 11:15 AM David Dunn <daviddunn@google.com> wrote:
>
> Jim,
>
> The patch set looks good to me.  A couple comments and questions
> related just to this test are inline.
>
> On Thu, Jan 13, 2022 at 5:21 PM Jim Mattson <jmattson@google.com> wrote:
>
> > + * Determining AMD support for a PMU event requires consulting the AMD
> > + * PPR for the CPU or reference material derived therefrom.
> > + */
> > +static bool vcpu_supports_amd_zen_br_retired(void)
> > +{
> > +       struct kvm_cpuid_entry2 *entry;
> > +       struct kvm_cpuid2 *cpuid;
> > +
> > +       cpuid = kvm_get_supported_cpuid();
> > +       entry = kvm_get_supported_cpuid_index(1, 0);
> > +       return entry &&
> > +               ((x86_family(entry->eax) == 0x17 &&
> > +                 (x86_model(entry->eax) == 1 ||
> > +                  x86_model(entry->eax) == 0x31)) ||
> > +                (x86_family(entry->eax) == 0x19 &&
> > +                 x86_model(entry->eax) == 1));
> > +}
>
> The above function does not verify that the AMD host you are running
> on supports PMU.  In particular, you might be running the KVM test
> suite within a guest.  Is there a way to do that check here without
> direct access to MSRs?  If not, maybe we need a KVM capability to
> query this information.

On Intel, if the parent hypervisor has constructed the CPUID:0AH leaf
correctly, the test will skip because it can't find what it wants in
CPUID:0AH, and the CPU family isn't what it is looking for on the AMD
side.

On AMD, the test assumes that the PMU is there (as the specification
requires). For the Zen line, a VM with vPMU enabled should report
CPUID:80000001H:ECX.PerfCtrExtCore[bit 23] set, and a VM with vPMU
disabled should report that bit clear.

In v3, I'm adding a PMU sanity check based on Linux's check_hw_exists().

> > +       r = kvm_check_cap(KVM_CAP_PMU_EVENT_FILTER);
> > +       if (!r) {
> > +               print_skip("KVM_CAP_PMU_EVENT_FILTER not supported");
> > +               exit(KSFT_SKIP);
> > +       }
>
> This capability is still supported even when PMU has been disabled by
> Like Xu's new module parameter.  Should all the PMU related
> capabilities be gated behind that module parameter?

Perhaps. That's something for Like Xu to consider with the current
rewrite of the module parameter.

> Dave Dunn


* Re: [PATCH v2 0/6] KVM: x86/pmu: Use binary search to check filtered events
  2022-01-14  1:21 [PATCH v2 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
                   ` (5 preceding siblings ...)
  2022-01-14  1:21 ` [PATCH v2 6/6] selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER Jim Mattson
@ 2022-01-18  9:15 ` Paolo Bonzini
  6 siblings, 0 replies; 10+ messages in thread
From: Paolo Bonzini @ 2022-01-18  9:15 UTC (permalink / raw)
  To: Jim Mattson, kvm, like.xu.linux, daviddunn, cloudliang

On 1/14/22 02:21, Jim Mattson wrote:
> This started out as a simple change to sort the (up to 300 element)
> PMU filtered event list and to use binary search rather than linear
> search to see if an event is in the list.
> 
> I thought it would be nice to add a directed test for the PMU event
> filter, and that's when things got complicated. The Intel side was
> fine, but the AMD side was a bit ugly, until I did a few
> refactorings. Imagine my dismay when I discovered that the PMU event
> filter works fine on the AMD side, but that fundamental PMU
> virtualization is broken. And I don't just mean erratum 1292, though
> that throws even more brokenness into the mix.
> 
> v1 -> v2
> * Drop the check for "AMDisbetter!" in is_amd_cpu() [David Dunn]
> * Drop the call to cpuid(0, 0) that fed the original check for
>    CPU vendor string "AuthenticAMD" in vm_compute_max_gfn().
> * Simplify the inline asm in the selftest by using the compound literal
>    for both input & output.

Queued, thanks.

Paolo

> Jim Mattson (6):
>    KVM: x86/pmu: Use binary search to check filtered events
>    selftests: kvm/x86: Parameterize the CPUID vendor string check
>    selftests: kvm/x86: Introduce is_amd_cpu()
>    selftests: kvm/x86: Export x86_family() for use outside of processor.c
>    selftests: kvm/x86: Introduce x86_model()
>    selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER
> 
>   arch/x86/kvm/pmu.c                            |  30 +-
>   tools/testing/selftests/kvm/.gitignore        |   1 +
>   tools/testing/selftests/kvm/Makefile          |   1 +
>   .../selftests/kvm/include/x86_64/processor.h  |  18 ++
>   .../selftests/kvm/lib/x86_64/processor.c      |  40 +--
>   .../kvm/x86_64/pmu_event_filter_test.c        | 306 ++++++++++++++++++
>   6 files changed, 361 insertions(+), 35 deletions(-)
>   create mode 100644 tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> 



