kvm.vger.kernel.org archive mirror
* [PATCH 0/6] KVM: x86/pmu: Use binary search to check filtered events
@ 2022-01-13  1:14 Jim Mattson
  2022-01-13  1:14 ` [PATCH 1/6] " Jim Mattson
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-13  1:14 UTC
  To: kvm, pbonzini, like.xu.linux; +Cc: Jim Mattson

This started out as a simple change to sort the (up to 300-entry)
PMU event filter list and to use binary search rather than linear
search to see whether an event is in the list.

I thought it would be nice to add a directed test for the PMU event
filter, and that's when things got complicated. The Intel side was
fine, but the AMD side was a bit ugly, until I did a few
refactorings. Imagine my dismay when I discovered that the PMU event
filter works fine on the AMD side, but that fundamental PMU
virtualization is broken.
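
For context, here is a rough userspace sketch of the sort-then-bsearch
approach (illustrative only; the function names are hypothetical, and
the real kernel change is in patch 1):

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

static int cmp_u64(const void *a, const void *b)
{
	uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;

	/* Compare explicitly; returning x - y would truncate to int. */
	return (x > y) - (x < y);
}

/* Sort once, when the filter is installed... */
static void install_filter(uint64_t *events, size_t nevents)
{
	qsort(events, nevents, sizeof(*events), cmp_u64);
}

/* ...then each lookup is O(log n) rather than O(n). */
static bool event_in_filter(uint64_t key, const uint64_t *events,
			    size_t nevents)
{
	return bsearch(&key, events, nevents, sizeof(*events),
		       cmp_u64) != NULL;
}

int main(void)
{
	uint64_t events[] = { 0xc2, 0x3c, 0x2e };

	install_filter(events, 3);
	return !event_in_filter(0x3c, events, 3);
}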

Jim Mattson (6):
  KVM: x86/pmu: Use binary search to check filtered events
  selftests: kvm/x86: Parameterize the CPUID vendor string check
  selftests: kvm/x86: Introduce is_amd_cpu()
  selftests: kvm/x86: Export x86_family() for use outside of processor.c
  selftests: kvm/x86: Introduce x86_model()
  selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER

 arch/x86/kvm/pmu.c                            |  30 +-
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/include/x86_64/processor.h  |  18 +
 .../selftests/kvm/lib/x86_64/processor.c      |  36 +-
 .../kvm/x86_64/pmu_event_filter_test.c        | 310 ++++++++++++++++++
 6 files changed, 363 insertions(+), 33 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c

-- 
2.34.1.575.g55b058a8bb-goog


* [PATCH 1/6] KVM: x86/pmu: Use binary search to check filtered events
  2022-01-13  1:14 [PATCH 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
@ 2022-01-13  1:14 ` Jim Mattson
  2022-01-13  1:14 ` [PATCH 2/6] selftests: kvm/x86: Parameterize the CPUID vendor string check Jim Mattson
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-13  1:14 UTC
  To: kvm, pbonzini, like.xu.linux; +Cc: Jim Mattson

The PMU event filter may contain up to 300 events. Replace the linear
search in reprogram_gp_counter() with a binary search.

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/pmu.c | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)
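
Aside: the comparator must compare explicitly rather than return the
raw u64 difference cast to int; the subtraction can truncate, making
distinct values compare as equal. A minimal standalone demonstration
(illustrative userspace code, not part of the kernel change):

#include <assert.h>
#include <stdint.h>

static int cmp_u64_bad(const void *a, const void *b)
{
	return *(const uint64_t *)a - *(const uint64_t *)b; /* truncates */
}

static int cmp_u64_safe(const void *a, const void *b)
{
	uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;

	return (x > y) - (x < y);
}

int main(void)
{
	uint64_t a = 1ULL << 32, b = 0;

	assert(cmp_u64_bad(&a, &b) == 0);  /* wrong: reports equality */
	assert(cmp_u64_safe(&a, &b) > 0);  /* correct: a sorts after b */
	return 0;
}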

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index e632693a2266..2c98f3ee8df4 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -13,6 +13,8 @@
 #include <linux/types.h>
 #include <linux/kvm_host.h>
 #include <linux/perf_event.h>
+#include <linux/bsearch.h>
+#include <linux/sort.h>
 #include <asm/perf_event.h>
 #include "x86.h"
 #include "cpuid.h"
@@ -172,12 +174,16 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 	return true;
 }
 
+static int cmp_u64(const void *a, const void *b)
+{
+	return (*(__u64 *)a > *(__u64 *)b) - (*(__u64 *)a < *(__u64 *)b);
+}
+
 void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 {
 	unsigned config, type = PERF_TYPE_RAW;
 	struct kvm *kvm = pmc->vcpu->kvm;
 	struct kvm_pmu_event_filter *filter;
-	int i;
 	bool allow_event = true;
 
 	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
@@ -192,16 +198,13 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 
 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (filter) {
-		for (i = 0; i < filter->nevents; i++)
-			if (filter->events[i] ==
-			    (eventsel & AMD64_RAW_EVENT_MASK_NB))
-				break;
-		if (filter->action == KVM_PMU_EVENT_ALLOW &&
-		    i == filter->nevents)
-			allow_event = false;
-		if (filter->action == KVM_PMU_EVENT_DENY &&
-		    i < filter->nevents)
-			allow_event = false;
+		__u64 key = eventsel & AMD64_RAW_EVENT_MASK_NB;
+
+		if (bsearch(&key, filter->events, filter->nevents,
+			    sizeof(__u64), cmp_u64))
+			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
+		else
+			allow_event = filter->action == KVM_PMU_EVENT_DENY;
 	}
 	if (!allow_event)
 		return;
@@ -576,6 +579,11 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	/* Ensure nevents can't be changed between the user copies. */
 	*filter = tmp;
 
+	/*
+	 * Sort the in-kernel list so that we can search it with bsearch.
+	 */
+	sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL);
+
 	mutex_lock(&kvm->lock);
 	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
 				     mutex_is_locked(&kvm->lock));
-- 
2.34.1.575.g55b058a8bb-goog


* [PATCH 2/6] selftests: kvm/x86: Parameterize the CPUID vendor string check
  2022-01-13  1:14 [PATCH 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
  2022-01-13  1:14 ` [PATCH 1/6] " Jim Mattson
@ 2022-01-13  1:14 ` Jim Mattson
  2022-01-13  1:14 ` [PATCH 3/6] selftests: kvm/x86: Introduce is_amd_cpu() Jim Mattson
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-13  1:14 UTC
  To: kvm, pbonzini, like.xu.linux; +Cc: Jim Mattson

Refactor is_intel_cpu() to make it easier to reuse the bulk of the
code for other vendors in the future.

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 tools/testing/selftests/kvm/lib/x86_64/processor.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)
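
Aside: CPUID leaf 0 returns the vendor string split across EBX, EDX,
and ECX, in that order, which is why the code below compares chunk[0],
chunk[1], and chunk[2] against those registers respectively. A tiny
standalone illustration (assumes a little-endian host, as x86 is):

#include <assert.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	const char *vendor = "GenuineIntel";
	uint32_t chunk[3];

	memcpy(chunk, vendor, 12);

	/* The well-known CPUID.0 register values for this vendor: */
	assert(chunk[0] == 0x756e6547);	/* EBX = "Genu" */
	assert(chunk[1] == 0x49656e69);	/* EDX = "ineI" */
	assert(chunk[2] == 0x6c65746e);	/* ECX = "ntel" */
	return 0;
}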

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index eef7b34756d5..355a3f6f1970 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1198,10 +1198,10 @@ void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_x86_state *s
 	}
 }
 
-bool is_intel_cpu(void)
+static bool cpu_vendor_string_is(const char *vendor)
 {
+	const uint32_t *chunk = (const uint32_t *)vendor;
 	int eax, ebx, ecx, edx;
-	const uint32_t *chunk;
 	const int leaf = 0;
 
 	__asm__ __volatile__(
@@ -1210,10 +1210,14 @@ bool is_intel_cpu(void)
 		  "=c"(ecx), "=d"(edx)
 		: /* input */ "0"(leaf), "2"(0));
 
-	chunk = (const uint32_t *)("GenuineIntel");
 	return (ebx == chunk[0] && edx == chunk[1] && ecx == chunk[2]);
 }
 
+bool is_intel_cpu(void)
+{
+	return cpu_vendor_string_is("GenuineIntel");
+}
+
 uint32_t kvm_get_cpuid_max_basic(void)
 {
 	return kvm_get_supported_cpuid_entry(0)->eax;
-- 
2.34.1.575.g55b058a8bb-goog


* [PATCH 3/6] selftests: kvm/x86: Introduce is_amd_cpu()
  2022-01-13  1:14 [PATCH 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
  2022-01-13  1:14 ` [PATCH 1/6] " Jim Mattson
  2022-01-13  1:14 ` [PATCH 2/6] selftests: kvm/x86: Parameterize the CPUID vendor string check Jim Mattson
@ 2022-01-13  1:14 ` Jim Mattson
  2022-01-13 16:50   ` David Dunn
  2022-01-13  1:14 ` [PATCH 4/6] selftests: kvm/x86: Export x86_family() for use outside of processor.c Jim Mattson
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Jim Mattson @ 2022-01-13  1:14 UTC
  To: kvm, pbonzini, like.xu.linux; +Cc: Jim Mattson

Replace the one ad hoc "AuthenticAMD" CPUID vendor string comparison
with a new function, is_amd_cpu().

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 .../selftests/kvm/include/x86_64/processor.h       |  1 +
 tools/testing/selftests/kvm/lib/x86_64/processor.c | 14 +++++++-------
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 05e65ca1c30c..69eaf9a69bb7 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -347,6 +347,7 @@ static inline unsigned long get_xmm(int n)
 }
 
 bool is_intel_cpu(void);
+bool is_amd_cpu(void);
 
 struct kvm_x86_state;
 struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 355a3f6f1970..fdd259c1ab49 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1218,6 +1218,12 @@ bool is_intel_cpu(void)
 	return cpu_vendor_string_is("GenuineIntel");
 }
 
+bool is_amd_cpu(void)
+{
+	return cpu_vendor_string_is("AuthenticAMD") ||
+		cpu_vendor_string_is("AMDisbetter!");
+}
+
 uint32_t kvm_get_cpuid_max_basic(void)
 {
 	return kvm_get_supported_cpuid_entry(0)->eax;
@@ -1436,10 +1442,6 @@ struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vm *vm, uint32_t vcpui
 	return cpuid;
 }
 
-#define X86EMUL_CPUID_VENDOR_AuthenticAMD_ebx 0x68747541
-#define X86EMUL_CPUID_VENDOR_AuthenticAMD_ecx 0x444d4163
-#define X86EMUL_CPUID_VENDOR_AuthenticAMD_edx 0x69746e65
-
 static inline unsigned x86_family(unsigned int eax)
 {
         unsigned int x86;
@@ -1463,9 +1465,7 @@ unsigned long vm_compute_max_gfn(struct kvm_vm *vm)
 	/* Avoid reserved HyperTransport region on AMD processors.  */
 	eax = ecx = 0;
 	cpuid(&eax, &ebx, &ecx, &edx);
-	if (ebx != X86EMUL_CPUID_VENDOR_AuthenticAMD_ebx ||
-	    ecx != X86EMUL_CPUID_VENDOR_AuthenticAMD_ecx ||
-	    edx != X86EMUL_CPUID_VENDOR_AuthenticAMD_edx)
+	if (!is_amd_cpu())
 		return max_gfn;
 
 	/* On parts with <40 physical address bits, the area is fully hidden */
-- 
2.34.1.575.g55b058a8bb-goog


* [PATCH 4/6] selftests: kvm/x86: Export x86_family() for use outside of processor.c
  2022-01-13  1:14 [PATCH 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
                   ` (2 preceding siblings ...)
  2022-01-13  1:14 ` [PATCH 3/6] selftests: kvm/x86: Introduce is_amd_cpu() Jim Mattson
@ 2022-01-13  1:14 ` Jim Mattson
  2022-01-13  1:14 ` [PATCH 5/6] selftests: kvm/x86: Introduce x86_model() Jim Mattson
  2022-01-13  1:14 ` [PATCH 6/6] selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER Jim Mattson
  5 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-13  1:14 UTC
  To: kvm, pbonzini, like.xu.linux; +Cc: Jim Mattson

Move this static inline function to processor.h, so that it can be
used in individual tests, as needed.

Opportunistically replace the bare 'unsigned' with 'unsigned int.'

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 .../testing/selftests/kvm/include/x86_64/processor.h | 12 ++++++++++++
 tools/testing/selftests/kvm/lib/x86_64/processor.c   | 12 ------------
 2 files changed, 12 insertions(+), 12 deletions(-)
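
Aside: the extended family field (EAX bits 27:20) is added only when
the base family field (bits 11:8) reads 0xf. A standalone worked
example (the raw EAX values are chosen for illustration):

#include <assert.h>

/* Copy of the helper being moved, so the demo is self-contained. */
static inline unsigned int x86_family(unsigned int eax)
{
	unsigned int x86 = (eax >> 8) & 0xf;

	if (x86 == 0xf)
		x86 += (eax >> 20) & 0xff;

	return x86;
}

int main(void)
{
	/* Base family 0xf plus extended family 0x8 gives 0x17 (AMD Zen). */
	assert(x86_family(0x00870f10) == 0x17);
	/* Base family 0x6: the extended field is not added (Intel). */
	assert(x86_family(0x000506e3) == 0x6);
	return 0;
}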

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 69eaf9a69bb7..c5306e29edd4 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -349,6 +349,18 @@ static inline unsigned long get_xmm(int n)
 bool is_intel_cpu(void);
 bool is_amd_cpu(void);
 
+static inline unsigned int x86_family(unsigned int eax)
+{
+        unsigned int x86;
+
+        x86 = (eax >> 8) & 0xf;
+
+        if (x86 == 0xf)
+                x86 += (eax >> 20) & 0xff;
+
+        return x86;
+}
+
 struct kvm_x86_state;
 struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid);
 void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid,
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index fdd259c1ab49..6776d692b238 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1442,18 +1442,6 @@ struct kvm_cpuid2 *vcpu_get_supported_hv_cpuid(struct kvm_vm *vm, uint32_t vcpui
 	return cpuid;
 }
 
-static inline unsigned x86_family(unsigned int eax)
-{
-        unsigned int x86;
-
-        x86 = (eax >> 8) & 0xf;
-
-        if (x86 == 0xf)
-                x86 += (eax >> 20) & 0xff;
-
-        return x86;
-}
-
 unsigned long vm_compute_max_gfn(struct kvm_vm *vm)
 {
 	const unsigned long num_ht_pages = 12 << (30 - vm->page_shift); /* 12 GiB */
-- 
2.34.1.575.g55b058a8bb-goog


* [PATCH 5/6] selftests: kvm/x86: Introduce x86_model()
  2022-01-13  1:14 [PATCH 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
                   ` (3 preceding siblings ...)
  2022-01-13  1:14 ` [PATCH 4/6] selftests: kvm/x86: Export x86_family() for use outside of processor.c Jim Mattson
@ 2022-01-13  1:14 ` Jim Mattson
  2022-01-13  1:14 ` [PATCH 6/6] selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER Jim Mattson
  5 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-13  1:14 UTC
  To: kvm, pbonzini, like.xu.linux; +Cc: Jim Mattson

Extract the x86 model number from CPUID.01H:EAX.

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 tools/testing/selftests/kvm/include/x86_64/processor.h | 5 +++++
 1 file changed, 5 insertions(+)
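
Aside: the model number combines the extended model field (EAX bits
19:16), shifted into the high nibble, with the base model field (bits
7:4). A standalone worked example (raw EAX value chosen for
illustration; both helpers are copied so the demo compiles on its own):

#include <assert.h>

static inline unsigned int x86_family(unsigned int eax)
{
	unsigned int x86 = (eax >> 8) & 0xf;

	if (x86 == 0xf)
		x86 += (eax >> 20) & 0xff;

	return x86;
}

static inline unsigned int x86_model(unsigned int eax)
{
	return ((eax >> 12) & 0xf0) | ((eax >> 4) & 0x0f);
}

int main(void)
{
	unsigned int eax = 0x00830f10;

	/* Decodes as Family 17h, Model 31h: one of the parts whose PPR
	 * is cited in patch 6. */
	assert(x86_family(eax) == 0x17);
	assert(x86_model(eax) == 0x31);
	return 0;
}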

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index c5306e29edd4..b723163ca9ba 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -361,6 +361,11 @@ static inline unsigned int x86_family(unsigned int eax)
         return x86;
 }
 
+static inline unsigned int x86_model(unsigned int eax)
+{
+	return ((eax >> 12) & 0xf0) | ((eax >> 4) & 0x0f);
+}
+
 struct kvm_x86_state;
 struct kvm_x86_state *vcpu_save_state(struct kvm_vm *vm, uint32_t vcpuid);
 void vcpu_load_state(struct kvm_vm *vm, uint32_t vcpuid,
-- 
2.34.1.575.g55b058a8bb-goog


* [PATCH 6/6] selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER
  2022-01-13  1:14 [PATCH 0/6] KVM: x86/pmu: Use binary search to check filtered events Jim Mattson
                   ` (4 preceding siblings ...)
  2022-01-13  1:14 ` [PATCH 5/6] selftests: kvm/x86: Introduce x86_model() Jim Mattson
@ 2022-01-13  1:14 ` Jim Mattson
  2022-01-13  4:06   ` Like Xu
  5 siblings, 1 reply; 10+ messages in thread
From: Jim Mattson @ 2022-01-13  1:14 UTC
  To: kvm, pbonzini, like.xu.linux; +Cc: Jim Mattson

Verify that the PMU event filter works as expected.

Note that the virtual PMU doesn't work as expected on AMD Zen CPUs (an
intercepted rdmsr is counted as a retired branch instruction), but the
PMU event filter does work.

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 tools/testing/selftests/kvm/.gitignore        |   1 +
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/x86_64/pmu_event_filter_test.c        | 310 ++++++++++++++++++
 3 files changed, 312 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
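
Aside: the EVENT() macro added below packs AMD's 12-bit event selector
into bits 35:32 and 7:0 of the event-select value, with the unit mask
in bits 15:8. A standalone illustration (the 12-bit selector and
unit-mask values are hypothetical; assumes a 64-bit build, as the
selftests do):

#include <assert.h>

#define EVENT(select, umask) ((select & 0xf00UL) << 24 | (select & 0xff) | \
			      (umask & 0xff) << 8)

int main(void)
{
	/* An 8-bit selector stays entirely in bits 7:0. */
	assert(EVENT(0xc2, 0) == 0xc2);
	/* Bits 11:8 of a 12-bit selector land in bits 35:32. */
	assert(EVENT(0x1c2, 0x10) == 0x1000010c2UL);
	return 0;
}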

diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 8c129961accf..7834e03ab159 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -22,6 +22,7 @@
 /x86_64/mmio_warning_test
 /x86_64/mmu_role_test
 /x86_64/platform_info_test
+/x86_64/pmu_event_filter_test
 /x86_64/set_boot_cpu_id
 /x86_64/set_sregs_test
 /x86_64/sev_migrate_tests
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c407ebbec2c1..899413a6588f 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -56,6 +56,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
 TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
 TEST_GEN_PROGS_x86_64 += x86_64/mmu_role_test
 TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
+TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
 TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
 TEST_GEN_PROGS_x86_64 += x86_64/set_sregs_test
 TEST_GEN_PROGS_x86_64 += x86_64/smm_test
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
new file mode 100644
index 000000000000..d879a4b92fae
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -0,0 +1,310 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Test for x86 KVM_SET_PMU_EVENT_FILTER.
+ *
+ * Copyright (C) 2022, Google LLC.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ *
+ * Verifies the expected behavior of allow lists and deny lists for
+ * virtual PMU events.
+ */
+
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+
+/*
+ * In lieu of copying perf_event.h into tools...
+ */
+#define ARCH_PERFMON_EVENTSEL_ENABLE	BIT(22)
+#define ARCH_PERFMON_EVENTSEL_OS	BIT(17)
+
+#define VCPU_ID 0
+#define NUM_BRANCHES 42
+
+/*
+ * This is how the event selector and unit mask are stored in an AMD
+ * core performance event-select register. Intel's format is similar,
+ * but the event selector is only 8 bits.
+ */
+#define EVENT(select, umask) ((select & 0xf00UL) << 24 | (select & 0xff) | \
+			      (umask & 0xff) << 8)
+
+/*
+ * "Branch instructions retired", from the Intel SDM, volume 3,
+ * "Pre-defined Architectural Performance Events."
+ */
+
+#define INTEL_BR_RETIRED EVENT(0xc4, 0)
+
+/*
+ * "Retired branch instructions", from Processor Programming Reference
+ * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors,
+ * Preliminary Processor Programming Reference (PPR) for AMD Family
+ * 17h Model 31h, Revision B0 Processors, and Preliminary Processor
+ * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision
+ * B1 Processors, Volume 1 of 2.
+ */
+
+#define AMD_ZEN_BR_RETIRED EVENT(0xc2, 0)
+
+/*
+ * This event list comprises Intel's eight architectural events plus
+ * AMD's "branch instructions retired" for Zen[123].
+ */
+static const uint64_t event_list[] = {
+	EVENT(0x3c, 0),
+	EVENT(0xc0, 0),
+	EVENT(0x3c, 1),
+	EVENT(0x2e, 0x4f),
+	EVENT(0x2e, 0x41),
+	EVENT(0xc4, 0),
+	EVENT(0xc5, 0),
+	EVENT(0xa4, 1),
+	AMD_ZEN_BR_RETIRED,
+};
+
+static void intel_guest_code(void)
+{
+	uint64_t br0, br1;
+
+	for (;;) {
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+		wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
+		br0 = rdmsr(MSR_IA32_PMC0);
+		__asm__ __volatile__("loop ."
+				     : "=c"((int){0})
+				     : "0"(NUM_BRANCHES));
+		br1 = rdmsr(MSR_IA32_PMC0);
+		GUEST_SYNC(br1 - br0);
+	}
+}
+
+/*
+ * To avoid needing a check for CPUID.80000001:ECX.PerfCtrExtCore[bit
+ * 23], this code uses the always-available, legacy K7 PMU MSRs, which
+ * alias to the first four of the six extended core PMU MSRs.
+ */
+static void amd_guest_code(void)
+{
+	uint64_t br0, br1;
+
+	for (;;) {
+		wrmsr(MSR_K7_EVNTSEL0, 0);
+		wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);
+		br0 = rdmsr(MSR_K7_PERFCTR0);
+		__asm__ __volatile__("loop ."
+				     : "=c"((int){0})
+				     : "0"(NUM_BRANCHES));
+		br1 = rdmsr(MSR_K7_PERFCTR0);
+		GUEST_SYNC(br1 - br0);
+	}
+}
+
+static uint64_t test_branches_retired(struct kvm_vm *vm)
+{
+	struct kvm_run *run = vcpu_state(vm, VCPU_ID);
+	struct ucall uc;
+
+	vcpu_run(vm, VCPU_ID);
+	TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
+		    "Exit_reason other than KVM_EXIT_IO: %u (%s)\n",
+		    run->exit_reason,
+		    exit_reason_str(run->exit_reason));
+	get_ucall(vm, VCPU_ID, &uc);
+	TEST_ASSERT(uc.cmd == UCALL_SYNC,
+		    "Received ucall other than UCALL_SYNC: %lu", uc.cmd);
+	return uc.args[1];
+}
+
+static struct kvm_pmu_event_filter *make_pmu_event_filter(uint32_t nevents)
+{
+	struct kvm_pmu_event_filter *f;
+	int size = sizeof(*f) + nevents * sizeof(f->events[0]);
+
+	f = malloc(size);
+	TEST_ASSERT(f, "Out of memory");
+	memset(f, 0, size);
+	f->nevents = nevents;
+	return f;
+}
+
+static struct kvm_pmu_event_filter *event_filter(uint32_t action)
+{
+	struct kvm_pmu_event_filter *f;
+	int i;
+
+	f = make_pmu_event_filter(ARRAY_SIZE(event_list));
+	f->action = action;
+	for (i = 0; i < ARRAY_SIZE(event_list); i++)
+		f->events[i] = event_list[i];
+
+	return f;
+}
+
+static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
+						 uint64_t event)
+{
+	bool found = false;
+	int i;
+
+	for (i = 0; i < f->nevents; i++) {
+		if (found)
+			f->events[i - 1] = f->events[i];
+		else
+			found = f->events[i] == event;
+	}
+	if (found)
+		f->nevents--;
+	return f;
+}
+
+static void test_no_filter(struct kvm_vm *vm)
+{
+	uint64_t count = test_branches_retired(vm);
+
+	if (count != NUM_BRANCHES)
+		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
+			__func__, count, NUM_BRANCHES);
+	TEST_ASSERT(count, "Allowed PMU event is not counting");
+}
+
+static uint64_t test_with_filter(struct kvm_vm *vm,
+				 struct kvm_pmu_event_filter *f)
+{
+	vm_ioctl(vm, KVM_SET_PMU_EVENT_FILTER, (void *)f);
+	return test_branches_retired(vm);
+}
+
+static void test_member_deny_list(struct kvm_vm *vm)
+{
+	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
+	uint64_t count = test_with_filter(vm, f);
+
+	free(f);
+	if (count)
+		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
+			__func__, count);
+	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+}
+
+static void test_member_allow_list(struct kvm_vm *vm)
+{
+	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
+	uint64_t count = test_with_filter(vm, f);
+
+	free(f);
+	if (count != NUM_BRANCHES)
+		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
+			__func__, count, NUM_BRANCHES);
+	TEST_ASSERT(count, "Allowed PMU event is not counting");
+}
+
+static void test_not_member_deny_list(struct kvm_vm *vm)
+{
+	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
+	uint64_t count;
+
+	remove_event(f, INTEL_BR_RETIRED);
+	remove_event(f, AMD_ZEN_BR_RETIRED);
+	count = test_with_filter(vm, f);
+	free(f);
+	if (count != NUM_BRANCHES)
+		pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
+			__func__, count, NUM_BRANCHES);
+	TEST_ASSERT(count, "Allowed PMU event is not counting");
+}
+
+static void test_not_member_allow_list(struct kvm_vm *vm)
+{
+	struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
+	uint64_t count;
+
+	remove_event(f, INTEL_BR_RETIRED);
+	remove_event(f, AMD_ZEN_BR_RETIRED);
+	count = test_with_filter(vm, f);
+	free(f);
+	if (count)
+		pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
+			__func__, count);
+	TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+}
+
+/*
+ * Note that CPUID leaf 0xa is Intel-specific. This leaf should be
+ * clear on AMD hardware.
+ */
+static bool vcpu_supports_intel_br_retired(void)
+{
+	struct kvm_cpuid_entry2 *entry;
+	struct kvm_cpuid2 *cpuid;
+
+	cpuid = kvm_get_supported_cpuid();
+	entry = kvm_get_supported_cpuid_index(0xa, 0);
+	return entry &&
+		(entry->eax & 0xff) &&
+		(entry->eax >> 24) > 5 &&
+		!(entry->ebx & BIT(5));
+}
+
+/*
+ * Determining AMD support for a PMU event requires consulting the AMD
+ * PPR for the CPU or reference material derived therefrom.
+ */
+static bool vcpu_supports_amd_zen_br_retired(void)
+{
+	struct kvm_cpuid_entry2 *entry;
+	struct kvm_cpuid2 *cpuid;
+
+	cpuid = kvm_get_supported_cpuid();
+	entry = kvm_get_supported_cpuid_index(1, 0);
+	return entry &&
+		((x86_family(entry->eax) == 0x17 &&
+		  (x86_model(entry->eax) == 1 ||
+		   x86_model(entry->eax) == 0x31)) ||
+		 (x86_family(entry->eax) == 0x19 &&
+		  x86_model(entry->eax) == 1));
+}
+
+int main(int argc, char *argv[])
+{
+	void (*guest_code)(void) = NULL;
+	struct kvm_vm *vm;
+	int r;
+
+	/* Tell stdout not to buffer its content */
+	setbuf(stdout, NULL);
+
+	r = kvm_check_cap(KVM_CAP_PMU_EVENT_FILTER);
+	if (!r) {
+		print_skip("KVM_CAP_PMU_EVENT_FILTER not supported");
+		exit(KSFT_SKIP);
+	}
+
+	if (vcpu_supports_intel_br_retired())
+		guest_code = intel_guest_code;
+	else if (vcpu_supports_amd_zen_br_retired())
+		guest_code = amd_guest_code;
+
+	if (!guest_code) {
+		print_skip("Branch instructions retired not supported");
+		exit(KSFT_SKIP);
+	}
+
+	vm = vm_create_default(VCPU_ID, 0, guest_code);
+
+	test_no_filter(vm);
+	test_member_deny_list(vm);
+	test_member_allow_list(vm);
+	test_not_member_deny_list(vm);
+	test_not_member_allow_list(vm);
+
+	kvm_vm_free(vm);
+
+	return 0;
+}
-- 
2.34.1.575.g55b058a8bb-goog


* Re: [PATCH 6/6] selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER
  2022-01-13  1:14 ` [PATCH 6/6] selftests: kvm/x86: Add test for KVM_SET_PMU_EVENT_FILTER Jim Mattson
@ 2022-01-13  4:06   ` Like Xu
  0 siblings, 0 replies; 10+ messages in thread
From: Like Xu @ 2022-01-13  4:06 UTC
  To: Jim Mattson, cloudliang(梁金荣)
  Cc: kvm list, Paolo Bonzini - Distinguished Engineer (kernel-recipes.org)

cc Liang,

On 13/1/2022 9:14 am, Jim Mattson wrote:
> Verify that the PMU event filter works as expected.
> 
> Note that the virtual PMU doesn't work as expected on AMD Zen CPUs (an
> intercepted rdmsr is counted as a retired branch instruction), but the
> PMU event filter does work.

Thanks for this patch. It saves us from upstreaming our equivalent patch.
We do have a plan to cover vPMU features (and more) in selftests/kvm/x86.

Liang will help me with these selftests patches, so please share any
more issues or insights you have found.

> [...]

* Re: [PATCH 3/6] selftests: kvm/x86: Introduce is_amd_cpu()
  2022-01-13  1:14 ` [PATCH 3/6] selftests: kvm/x86: Introduce is_amd_cpu() Jim Mattson
@ 2022-01-13 16:50   ` David Dunn
  2022-01-13 18:11     ` Jim Mattson
  0 siblings, 1 reply; 10+ messages in thread
From: David Dunn @ 2022-01-13 16:50 UTC
  To: Jim Mattson; +Cc: kvm, pbonzini, like.xu.linux

Jim,

On Wed, Jan 12, 2022 at 5:15 PM Jim Mattson <jmattson@google.com> wrote:

> +bool is_amd_cpu(void)
> +{
> +       return cpu_vendor_string_is("AuthenticAMD") ||
> +               cpu_vendor_string_is("AMDisbetter!");
> +}
> +

The original code only checked for AuthenticAMD.  I don't think it is
necessary or wise to add the check for early K5 samples.  Do we know
whether they even need this KVM logic?

Dave Dunn

* Re: [PATCH 3/6] selftests: kvm/x86: Introduce is_amd_cpu()
  2022-01-13 16:50   ` David Dunn
@ 2022-01-13 18:11     ` Jim Mattson
  0 siblings, 0 replies; 10+ messages in thread
From: Jim Mattson @ 2022-01-13 18:11 UTC
  To: David Dunn; +Cc: kvm, pbonzini, like.xu.linux

On Thu, Jan 13, 2022 at 8:51 AM David Dunn <daviddunn@google.com> wrote:
>
> Jim,
>
> On Wed, Jan 12, 2022 at 5:15 PM Jim Mattson <jmattson@google.com> wrote:
>
> > +bool is_amd_cpu(void)
> > +{
> > +       return cpu_vendor_string_is("AuthenticAMD") ||
> > +               cpu_vendor_string_is("AMDisbetter!");
> > +}
> > +
>
> The original code only checked for AuthenticAMD.  I don't think it is
> necessary or wise to add the check for early K5 samples.  Do we know
> whether they even need this KVM logic?

The original code should handle K5 CPUs in the next block:

	/* On parts with <40 physical address bits, the area is fully hidden */
	if (vm->pa_bits < 40)
		return max_gfn;

But, you're right. K5 predates AMD support for SVM, so we don't need
to test kvm functionality on early K5 samples.

I'll remove the second disjunct in v2.

Thanks!
