* [PATCH v4 0/7] Introduce and test masked events
@ 2022-08-31 16:21 Aaron Lewis
  2022-08-31 16:21 ` [PATCH v4 1/7] kvm: x86/pmu: Correct the mask used in a pmu event filter lookup Aaron Lewis
                   ` (6 more replies)
  0 siblings, 7 replies; 16+ messages in thread
From: Aaron Lewis @ 2022-08-31 16:21 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis

This series introduces the concept of masked events to the pmu event
filter. Masked events can help reduce the number of events needed in the
pmu event filter by allowing a more generalized matching method to be
used for the unit mask when filtering guest events in the pmu.  With
masked events, if an event select should be restricted from the guest,
instead of having to add an entry to the pmu event filter for each
event select + unit mask pair, a masked event can be added to generalize
the unit mask values.
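
As a rough sketch of the idea (using the KVM_PMU_EVENT_ENCODE_MASKED_ENTRY()
macro introduced in patch 4; the event select and unit masks are purely
illustrative), one masked event can stand in for every unit mask of an
event select:

  /* Raw events: one filter entry per event select + unit mask pair. */
  __u64 raw[] = { 0x81D0, 0x82D0, 0x83D0 };  /* event 0xD0, umasks 0x81-0x83 */

  /* Masked event: mask = 0x00, match = 0x00 matches any unit mask of 0xD0. */
  __u64 masked[] = {
          KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0xD0, 0x00, 0x00, false),
  };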

v3 -> v4
 - Patch #1, Fix the mask for the guest event select used in bsearch.
 - Patch #2, Remove invalid events from the pmu event filter.
 - Patch #3, Create an internal/common representation for filter events.
 - Patch #4,
     - Use a common filter event to simplify kernel code. [Jim]
     - s/invalid/exclude for masked events. More descriptive. [Sean]
     - Simplified masked event layout. There was no need to complicate it.
     - Add KVM_CAP_PMU_EVENT_MASKED_EVENTS.
 - Patch #7, Rewrote the masked events tests using MEM_INST_RETIRED (0xd0)
   on Intel and LS Dispatch (0x29) on AMD. They each have multiple unit
   masks, which were leveraged for improved masked events testing.

v2 -> v3
 - Reworked and documented the invert flag usage.  It was possible to
   get ambiguous results when using it.  That should not be possible
   now.
 - Added testing for inverted masked events to validate the updated
   implementation.
 - Removed testing for inverted masked events from the masked events test.
   They were meaningless with the updated implementation.  More meaningful
   tests were added separately.

v1 -> v2
 - Made has_invalid_event() static to fix warning.
 - Fixed checkpatch.pl errors and warnings.
 - Updated to account for KVM_X86_PMU_OP().

Aaron Lewis (7):
  kvm: x86/pmu: Correct the mask used in a pmu event filter lookup
  kvm: x86/pmu: Remove invalid raw events from the pmu event filter
  kvm: x86/pmu: prepare the pmu event filter for masked events
  kvm: x86/pmu: Introduce masked events to the pmu event filter
  selftests: kvm/x86: Add flags when creating a pmu event filter
  selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER
  selftests: kvm/x86: Test masked events

 Documentation/virt/kvm/api.rst                |  81 +++-
 arch/x86/include/asm/kvm-x86-pmu-ops.h        |   1 +
 arch/x86/include/uapi/asm/kvm.h               |  28 ++
 arch/x86/kvm/pmu.c                            | 262 ++++++++++--
 arch/x86/kvm/pmu.h                            |  39 ++
 arch/x86/kvm/svm/pmu.c                        |   6 +
 arch/x86/kvm/vmx/pmu_intel.c                  |   6 +
 arch/x86/kvm/x86.c                            |   1 +
 include/uapi/linux/kvm.h                      |   1 +
 .../kvm/x86_64/pmu_event_filter_test.c        | 374 +++++++++++++++++-
 10 files changed, 756 insertions(+), 43 deletions(-)

-- 
2.37.2.672.g94769d06f0-goog



* [PATCH v4 1/7] kvm: x86/pmu: Correct the mask used in a pmu event filter lookup
  2022-08-31 16:21 [PATCH v4 0/7] Introduce and test masked events Aaron Lewis
@ 2022-08-31 16:21 ` Aaron Lewis
  2022-09-14 18:15   ` Jim Mattson
  2022-08-31 16:21 ` [PATCH v4 2/7] kvm: x86/pmu: Remove invalid raw events from the pmu event filter Aaron Lewis
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Aaron Lewis @ 2022-08-31 16:21 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis

When checking if a pmu event the guest is attempting to program should
be filtered, only consider the event select + unit mask in that
decision. Use an architecture-specific mask to mask out all other bits,
including bits 35:32 on Intel.  Those bits are not part of the event
select and should not be considered in that decision.
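
A minimal sketch of the change to the lookup (helpers as added below; the
HSW_IN_TX bit is just an example of an Intel bit outside the event select):

  /* Before: an Intel eventsel with, e.g., bit 32 (HSW_IN_TX) set. */
  key = pmc->eventsel & AMD64_RAW_EVENT_MASK_NB;  /* bit 32 survives, bsearch misses */

  /* After: the per-vendor mask drops bits 35:32 on Intel. */
  key = get_raw_event(pmc->eventsel);             /* event select + unit mask only */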

Fixes: 66bb8a065f5a ("KVM: x86: PMU Event Filter")
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 arch/x86/include/asm/kvm-x86-pmu-ops.h |  1 +
 arch/x86/kvm/pmu.c                     | 15 ++++++++++++++-
 arch/x86/kvm/pmu.h                     |  1 +
 arch/x86/kvm/svm/pmu.c                 |  6 ++++++
 arch/x86/kvm/vmx/pmu_intel.c           |  6 ++++++
 5 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm-x86-pmu-ops.h b/arch/x86/include/asm/kvm-x86-pmu-ops.h
index c17e3e96fc1d..e0280cc3e6e4 100644
--- a/arch/x86/include/asm/kvm-x86-pmu-ops.h
+++ b/arch/x86/include/asm/kvm-x86-pmu-ops.h
@@ -24,6 +24,7 @@ KVM_X86_PMU_OP(set_msr)
 KVM_X86_PMU_OP(refresh)
 KVM_X86_PMU_OP(init)
 KVM_X86_PMU_OP(reset)
+KVM_X86_PMU_OP(get_eventsel_event_mask)
 KVM_X86_PMU_OP_OPTIONAL(deliver_pmi)
 KVM_X86_PMU_OP_OPTIONAL(cleanup)
 
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 02f9e4f245bd..98f383789579 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -247,6 +247,19 @@ static int cmp_u64(const void *pa, const void *pb)
 	return (a > b) - (a < b);
 }
 
+static inline u64 get_event_select(u64 eventsel)
+{
+	return eventsel & static_call(kvm_x86_pmu_get_eventsel_event_mask)();
+}
+
+static inline u64 get_raw_event(u64 eventsel)
+{
+	u64 event_select = get_event_select(eventsel);
+	u64 unit_mask = eventsel & ARCH_PERFMON_EVENTSEL_UMASK;
+
+	return event_select | unit_mask;
+}
+
 static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu_event_filter *filter;
@@ -263,7 +276,7 @@ static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 		goto out;
 
 	if (pmc_is_gp(pmc)) {
-		key = pmc->eventsel & AMD64_RAW_EVENT_MASK_NB;
+		key = get_raw_event(pmc->eventsel);
 		if (bsearch(&key, filter->events, filter->nevents,
 			    sizeof(__u64), cmp_u64))
 			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 5cc5721f260b..4e22f9f55400 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -40,6 +40,7 @@ struct kvm_pmu_ops {
 	void (*reset)(struct kvm_vcpu *vcpu);
 	void (*deliver_pmi)(struct kvm_vcpu *vcpu);
 	void (*cleanup)(struct kvm_vcpu *vcpu);
+	u64 (*get_eventsel_event_mask)(void);
 };
 
 void kvm_pmu_ops_update(const struct kvm_pmu_ops *pmu_ops);
diff --git a/arch/x86/kvm/svm/pmu.c b/arch/x86/kvm/svm/pmu.c
index f24613a108c5..0b35eb04aa60 100644
--- a/arch/x86/kvm/svm/pmu.c
+++ b/arch/x86/kvm/svm/pmu.c
@@ -294,6 +294,11 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 	}
 }
 
+static u64 amd_pmu_get_eventsel_event_mask(void)
+{
+	return AMD64_EVENTSEL_EVENT;
+}
+
 struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.hw_event_available = amd_hw_event_available,
 	.pmc_is_enabled = amd_pmc_is_enabled,
@@ -307,4 +312,5 @@ struct kvm_pmu_ops amd_pmu_ops __initdata = {
 	.refresh = amd_pmu_refresh,
 	.init = amd_pmu_init,
 	.reset = amd_pmu_reset,
+	.get_eventsel_event_mask = amd_pmu_get_eventsel_event_mask,
 };
diff --git a/arch/x86/kvm/vmx/pmu_intel.c b/arch/x86/kvm/vmx/pmu_intel.c
index c399637a3a79..0aec7576af0c 100644
--- a/arch/x86/kvm/vmx/pmu_intel.c
+++ b/arch/x86/kvm/vmx/pmu_intel.c
@@ -793,6 +793,11 @@ void intel_pmu_cross_mapped_check(struct kvm_pmu *pmu)
 	}
 }
 
+static u64 intel_pmu_get_eventsel_event_mask(void)
+{
+	return ARCH_PERFMON_EVENTSEL_EVENT;
+}
+
 struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.hw_event_available = intel_hw_event_available,
 	.pmc_is_enabled = intel_pmc_is_enabled,
@@ -808,4 +813,5 @@ struct kvm_pmu_ops intel_pmu_ops __initdata = {
 	.reset = intel_pmu_reset,
 	.deliver_pmi = intel_pmu_deliver_pmi,
 	.cleanup = intel_pmu_cleanup,
+	.get_eventsel_event_mask = intel_pmu_get_eventsel_event_mask,
 };
-- 
2.37.2.672.g94769d06f0-goog



* [PATCH v4 2/7] kvm: x86/pmu: Remove invalid raw events from the pmu event filter
  2022-08-31 16:21 [PATCH v4 0/7] Introduce and test masked events Aaron Lewis
  2022-08-31 16:21 ` [PATCH v4 1/7] kvm: x86/pmu: Correct the mask used in a pmu event filter lookup Aaron Lewis
@ 2022-08-31 16:21 ` Aaron Lewis
  2022-09-14 18:17   ` Jim Mattson
  2022-08-31 16:21 ` [PATCH v4 3/7] kvm: x86/pmu: prepare the pmu event filter for masked events Aaron Lewis
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Aaron Lewis @ 2022-08-31 16:21 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis

If a raw event is invalid, i.e. it has bits set outside the event select +
unit mask, the event will never match the search, so it's pointless
to have it in the list.  Opt for a shorter list by removing invalid
raw events.
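
For example, assuming the standard perf_event.h masks, an entry such as

  __u64 e = BIT_ULL(32) | 0xC0;  /* bit 32 is outside eventsel + umask on Intel */

can never match a guest event on Intel and is dropped, whereas the same
value is kept on AMD, where bits 35:32 are part of the event select.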

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 arch/x86/kvm/pmu.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 98f383789579..e7d94e6b7f28 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -577,6 +577,38 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 }
 EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);
 
+static inline u64 get_event_filter_mask(void)
+{
+	u64 event_select_mask =
+		static_call(kvm_x86_pmu_get_eventsel_event_mask)();
+
+	return event_select_mask | ARCH_PERFMON_EVENTSEL_UMASK;
+}
+
+static inline bool is_event_valid(u64 event, u64 mask)
+{
+	return !(event & ~mask);
+}
+
+static void remove_invalid_raw_events(struct kvm_pmu_event_filter *filter)
+{
+	u64 raw_mask;
+	int i, j;
+
+	if (filter->flags)
+		return;
+
+	raw_mask = get_event_filter_mask();
+	for (i = 0, j = 0; i < filter->nevents; i++) {
+		u64 raw_event = filter->events[i];
+
+		if (is_event_valid(raw_event, raw_mask))
+			filter->events[j++] = raw_event;
+	}
+
+	filter->nevents = j;
+}
+
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_pmu_event_filter tmp, *filter;
@@ -608,6 +640,8 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	/* Ensure nevents can't be changed between the user copies. */
 	*filter = tmp;
 
+	remove_invalid_raw_events(filter);
+
 	/*
 	 * Sort the in-kernel list so that we can search it with bsearch.
 	 */
-- 
2.37.2.672.g94769d06f0-goog



* [PATCH v4 3/7] kvm: x86/pmu: prepare the pmu event filter for masked events
  2022-08-31 16:21 [PATCH v4 0/7] Introduce and test masked events Aaron Lewis
  2022-08-31 16:21 ` [PATCH v4 1/7] kvm: x86/pmu: Correct the mask used in a pmu event filter lookup Aaron Lewis
  2022-08-31 16:21 ` [PATCH v4 2/7] kvm: x86/pmu: Remove invalid raw events from the pmu event filter Aaron Lewis
@ 2022-08-31 16:21 ` Aaron Lewis
  2022-09-14 18:25   ` Jim Mattson
  2022-08-31 16:21 ` [PATCH v4 4/7] kvm: x86/pmu: Introduce masked events to the pmu event filter Aaron Lewis
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Aaron Lewis @ 2022-08-31 16:21 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis

Create an internal representation for filter events to abstract the
events userspace uses from the events the kernel uses.  That will allow
the kernel to use a common event and a common code path between the
different types of filter events used in userspace once masked events
are introduced.
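
As a sketch of the internal format (the raw value is illustrative), an
Intel event with event select 0xD0 and unit mask 0x81 is re-encoded as:

  /* Userspace encoding: unit mask in bits 15:8, event select in bits 7:0. */
  u64 raw = 0x81D0;

  /* Internal encoding: event_select in bits 11:0, unit_mask in bits 19:12. */
  u64 entry = KVM_PMU_ENCODE_FILTER_ENTRY(0xD0, 0x81);  /* == 0x810D0 */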

No functional changes intended.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 arch/x86/kvm/pmu.c | 118 ++++++++++++++++++++++++++++++++-------------
 arch/x86/kvm/pmu.h |  16 ++++++
 2 files changed, 100 insertions(+), 34 deletions(-)

diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index e7d94e6b7f28..50a36cc5bfd0 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -239,6 +239,19 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
 	return true;
 }
 
+static inline u16 get_event_select(u64 eventsel)
+{
+	u64 e = eventsel &
+		static_call(kvm_x86_pmu_get_eventsel_event_mask)();
+
+	return (e & ARCH_PERFMON_EVENTSEL_EVENT) | ((e >> 24) & 0xF00ULL);
+}
+
+static inline u8 get_unit_mask(u64 eventsel)
+{
+	return (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
+}
+
 static int cmp_u64(const void *pa, const void *pb)
 {
 	u64 a = *(u64 *)pa;
@@ -247,53 +260,63 @@ static int cmp_u64(const void *pa, const void *pb)
 	return (a > b) - (a < b);
 }
 
-static inline u64 get_event_select(u64 eventsel)
+static u64 *find_filter_entry(struct kvm_pmu_event_filter *filter, u64 key)
+{
+	return bsearch(&key, filter->events, filter->nevents,
+			  sizeof(filter->events[0]), cmp_u64);
+}
+
+static bool filter_contains_match(struct kvm_pmu_event_filter *filter,
+				  u64 eventsel)
+{
+	u16 event_select = get_event_select(eventsel);
+	u8 unit_mask = get_unit_mask(eventsel);
+	u64 key;
+
+	key = KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask);
+	if (find_filter_entry(filter, key))
+		return true;
+	return false;
+}
+
+static bool is_gp_event_allowed(struct kvm_pmu_event_filter *filter, u64 eventsel)
 {
-	return eventsel & static_call(kvm_x86_pmu_get_eventsel_event_mask)();
+	if (filter_contains_match(filter, eventsel))
+		return filter->action == KVM_PMU_EVENT_ALLOW;
+
+	return filter->action == KVM_PMU_EVENT_DENY;
 }
 
-static inline u64 get_raw_event(u64 eventsel)
+static bool is_fixed_event_allowed(struct kvm_pmu_event_filter *filter, int idx)
 {
-	u64 event_select = get_event_select(eventsel);
-	u64 unit_mask = eventsel & ARCH_PERFMON_EVENTSEL_UMASK;
+	int fixed_idx = idx - INTEL_PMC_IDX_FIXED;
 
-	return event_select | unit_mask;
+	if (filter->action == KVM_PMU_EVENT_DENY &&
+	    test_bit(fixed_idx, (ulong *)&filter->fixed_counter_bitmap))
+		return false;
+	if (filter->action == KVM_PMU_EVENT_ALLOW &&
+	    !test_bit(fixed_idx, (ulong *)&filter->fixed_counter_bitmap))
+		return false;
+
+	return true;
 }
 
 static bool check_pmu_event_filter(struct kvm_pmc *pmc)
 {
 	struct kvm_pmu_event_filter *filter;
 	struct kvm *kvm = pmc->vcpu->kvm;
-	bool allow_event = true;
-	__u64 key;
-	int idx;
 
 	if (!static_call(kvm_x86_pmu_hw_event_available)(pmc))
 		return false;
 
 	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
 	if (!filter)
-		goto out;
+		return true;
 
-	if (pmc_is_gp(pmc)) {
-		key = get_raw_event(pmc->eventsel);
-		if (bsearch(&key, filter->events, filter->nevents,
-			    sizeof(__u64), cmp_u64))
-			allow_event = filter->action == KVM_PMU_EVENT_ALLOW;
-		else
-			allow_event = filter->action == KVM_PMU_EVENT_DENY;
-	} else {
-		idx = pmc->idx - INTEL_PMC_IDX_FIXED;
-		if (filter->action == KVM_PMU_EVENT_DENY &&
-		    test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
-			allow_event = false;
-		if (filter->action == KVM_PMU_EVENT_ALLOW &&
-		    !test_bit(idx, (ulong *)&filter->fixed_counter_bitmap))
-			allow_event = false;
-	}
+	if (pmc_is_gp(pmc))
+		return is_gp_event_allowed(filter, pmc->eventsel);
 
-out:
-	return allow_event;
+	return is_fixed_event_allowed(filter, pmc->idx);
 }
 
 void reprogram_counter(struct kvm_pmc *pmc)
@@ -609,6 +632,38 @@ static void remove_invalid_raw_events(struct kvm_pmu_event_filter *filter)
 	filter->nevents = j;
 }
 
+static inline u64 encode_filter_entry(u64 event)
+{
+	u16 event_select = get_event_select(event);
+	u8 unit_mask = get_unit_mask(event);
+
+	return KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask);
+}
+
+static void convert_to_filter_events(struct kvm_pmu_event_filter *filter)
+{
+	int i;
+
+	for (i = 0; i < filter->nevents; i++) {
+		u64 e = filter->events[i];
+
+		filter->events[i] = encode_filter_entry(e);
+	}
+}
+
+static void prepare_filter_events(struct kvm_pmu_event_filter *filter)
+{
+	remove_invalid_raw_events(filter);
+
+	convert_to_filter_events(filter);
+
+	/*
+	 * Sort the in-kernel list so that we can search it with bsearch.
+	 */
+	sort(&filter->events, filter->nevents, sizeof(filter->events[0]),
+	     cmp_u64, NULL);
+}
+
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 {
 	struct kvm_pmu_event_filter tmp, *filter;
@@ -640,12 +695,7 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	/* Ensure nevents can't be changed between the user copies. */
 	*filter = tmp;
 
-	remove_invalid_raw_events(filter);
-
-	/*
-	 * Sort the in-kernel list so that we can search it with bsearch.
-	 */
-	sort(&filter->events, filter->nevents, sizeof(__u64), cmp_u64, NULL);
+	prepare_filter_events(filter);
 
 	mutex_lock(&kvm->lock);
 	filter = rcu_replace_pointer(kvm->arch.pmu_event_filter, filter,
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index 4e22f9f55400..df4f81e5c685 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -205,4 +205,20 @@ bool is_vmware_backdoor_pmc(u32 pmc_idx);
 
 extern struct kvm_pmu_ops intel_pmu_ops;
 extern struct kvm_pmu_ops amd_pmu_ops;
+
+struct kvm_pmu_filter_entry {
+	union {
+		u64 raw;
+		struct {
+			u64 event_select:12;
+			u64 unit_mask:8;
+			u64 rsvd:44;
+		};
+	};
+};
+
+#define KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask) \
+	(((event_select) & 0xFFFULL) | \
+	(((unit_mask) & 0xFFULL) << 12))
+
 #endif /* __KVM_X86_PMU_H */
-- 
2.37.2.672.g94769d06f0-goog



* [PATCH v4 4/7] kvm: x86/pmu: Introduce masked events to the pmu event filter
  2022-08-31 16:21 [PATCH v4 0/7] Introduce and test masked events Aaron Lewis
                   ` (2 preceding siblings ...)
  2022-08-31 16:21 ` [PATCH v4 3/7] kvm: x86/pmu: prepare the pmu event filter for masked events Aaron Lewis
@ 2022-08-31 16:21 ` Aaron Lewis
  2022-09-14 18:45   ` Jim Mattson
  2022-08-31 16:21 ` [PATCH v4 5/7] selftests: kvm/x86: Add flags when creating a " Aaron Lewis
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 16+ messages in thread
From: Aaron Lewis @ 2022-08-31 16:21 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis

When building a list of filter events, it can sometimes be a challenge
to fit all the events needed to adequately restrict the guest into the
limited space available in the pmu event filter.  This stems from the
fact that the pmu event filter requires each raw event (i.e. event
select + unit mask) be listed, when the intention might be to restrict
the event select altogether, regardless of its unit mask.  Instead
of increasing the number of filter events in the pmu event filter, add
a new encoding that is able to do a more generalized match on the unit
mask.

Introduce masked events as a new encoding that the pmu event filter
understands in addition to raw events.  A masked event has the fields:
mask, match, and exclude.  When filtering based on these events, the
mask is applied to the guest's unit mask to see if it matches the match
value (i.e. unit_mask & mask == match).  The exclude bit can then be
used to exclude events from that match.  E.g. for a given event select,
if it's easier to say which unit mask values shouldn't be filtered, a
masked event can be set up to match all possible unit mask values, then
another masked event can be set up to match the unit mask values that
shouldn't be filtered.
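
As a sketch (the event select and unit masks are illustrative), an allow
list that permits every unit mask of event 0xD0 except 0x83 could use:

  __u64 events[] = {
          /* mask = 0x00, match = 0x00: include every unit mask of 0xD0... */
          KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0xD0, 0x00, 0x00, false),
          /* ...then exclude the single unit mask 0x83. */
          KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0xD0, 0xFF, 0x83, true),
  };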

Userspace can query to see if this feature exists by looking for the
capability, KVM_CAP_PMU_EVENT_MASKED_EVENTS.

This feature is enabled by setting the flags field in the pmu event
filter to KVM_PMU_EVENT_FLAG_MASKED_EVENTS.

Events can be encoded by using KVM_PMU_EVENT_ENCODE_MASKED_ENTRY().

It is an error to have a bit set outside the valid bits for a masked
event, and calls to KVM_SET_PMU_EVENT_FILTER will return -EINVAL in
such cases.  On Intel, this includes the high bits (11:8) of the event
select.
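
A minimal userspace flow might look like the following (error handling
omitted; 'vm_fd' and the filter allocation are assumed to already exist):

  if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PMU_EVENT_MASKED_EVENTS) > 0) {
          filter->action = KVM_PMU_EVENT_ALLOW;
          filter->flags = KVM_PMU_EVENT_FLAG_MASKED_EVENTS;
          /* filter->events[] holds KVM_PMU_EVENT_ENCODE_MASKED_ENTRY() values. */
          ioctl(vm_fd, KVM_SET_PMU_EVENT_FILTER, filter);
  }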

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 Documentation/virt/kvm/api.rst  |  81 ++++++++++++++++--
 arch/x86/include/uapi/asm/kvm.h |  28 ++++++
 arch/x86/kvm/pmu.c              | 145 +++++++++++++++++++++++++++-----
 arch/x86/kvm/pmu.h              |  32 +++++--
 arch/x86/kvm/x86.c              |   1 +
 include/uapi/linux/kvm.h        |   1 +
 6 files changed, 255 insertions(+), 33 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index abd7c32126ce..e7783e41c590 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5027,7 +5027,13 @@ using this ioctl.
 :Architectures: x86
 :Type: vm ioctl
 :Parameters: struct kvm_pmu_event_filter (in)
-:Returns: 0 on success, -1 on error
+:Returns: 0 on success,
+    -EFAULT args[0] cannot be accessed.
+    -EINVAL args[0] contains invalid data in the filter or filter events.
+                    Note: event validation is only done for modes where
+                    the flags field is non-zero.
+    -E2BIG nevents is too large.
+    -ENOMEM not enough memory to allocate the filter.
 
 ::
 
@@ -5040,14 +5046,73 @@ using this ioctl.
 	__u64 events[0];
   };
 
-This ioctl restricts the set of PMU events that the guest can program.
-The argument holds a list of events which will be allowed or denied.
-The eventsel+umask of each event the guest attempts to program is compared
-against the events field to determine whether the guest should have access.
-The events field only controls general purpose counters; fixed purpose
-counters are controlled by the fixed_counter_bitmap.
+This ioctl restricts the set of PMU events the guest can program by limiting
+which event select and unit mask combinations are permitted.
+
+The argument holds a list of filter events which will be allowed or denied.
+
+Filter events only control general purpose counters; fixed purpose counters
+are controlled by the fixed_counter_bitmap.
+
+Valid values for 'flags'::
+
+``0``
+
+To use this mode, clear the 'flags' field.
+
+In this mode each filter event will contain an event select + unit mask.
+
+When the guest attempts to program the PMU the event select + unit mask from
+the event select register being programmed is compared against the filter
+events to determine whether the guest should have access.
+
+``KVM_PMU_EVENT_FLAG_MASKED_EVENTS``
+:Capability: KVM_CAP_PMU_EVENT_MASKED_EVENTS
+
+In this mode each filter event will contain an event select, mask, match, and
+exclude value.
+
+When the guest attempts to program the PMU, these steps are followed in
+determining if the guest should have access:
+ 1. Match the event select from the guest against the filter events.
+ 2. If a match is found, match the guest's unit mask to the mask and match
+    values of the included filter events.
+    I.e. (unit mask & mask) == match && !exclude.
+ 3. If a match is found, match the guest's unit mask to the mask and match
+    values of the excluded filter events.
+    I.e. (unit mask & mask) == match && exclude.
+ 4.
+   a. If an included match is found and an excluded match is not found, filter
+      the event.
+   b. For everything else, do not filter the event.
+ 5.
+   a. If the event is filtered and it's an allow list, allow the guest to
+      program the event.
+   b. If the event is filtered and it's a deny list, do not allow the guest to
+      program the event.
+
+To encode a filter event use:
+  KVM_PMU_EVENT_ENCODE_MASKED_ENTRY().
+
+To access individual components in a masked entry use:
+::
+  struct kvm_pmu_event_masked_entry {
+	union {
+		__u64 raw;
+		struct {
+			/* event_select bits 11:8 are only valid on AMD. */
+			__u64 event_select:12;
+			__u64 mask:8;
+			__u64 match:8;
+			__u64 exclude:1;
+			__u64 rsvd:35;
+		};
+	};
+  };
 
-No flags are defined yet, the field must be zero.
+-EINVAL will be returned if the 'rsvd' field is not zero or if any of
+the high bits (11:8) in the 'event_select' field are set when called
+on Intel.
 
 Valid values for 'action'::
 
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 46de10a809ec..c82400a06c8b 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -528,6 +528,34 @@ struct kvm_pmu_event_filter {
 #define KVM_PMU_EVENT_ALLOW 0
 #define KVM_PMU_EVENT_DENY 1
 
+#define KVM_PMU_EVENT_FLAG_MASKED_EVENTS BIT(0)
+#define KVM_PMU_EVENT_FLAGS_VALID_MASK (KVM_PMU_EVENT_FLAG_MASKED_EVENTS)
+
+struct kvm_pmu_event_masked_entry {
+	union {
+		__u64 raw;
+		struct {
+			/* event_select bits 11:8 are only valid on AMD. */
+			__u64 event_select:12;
+			__u64 mask:8;
+			__u64 match:8;
+			__u64 exclude:1;
+			__u64 rsvd:35;
+		};
+	};
+};
+
+#define KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(event_select, mask, match, exclude) \
+	(((event_select) & 0xFFFULL) | \
+	(((mask) & 0xFFULL) << 12) | \
+	(((match) & 0xFFULL) << 20) | \
+	((__u64)(!!(exclude)) << 28))
+
+#define KVM_PMU_EVENT_MASKED_EVENTSEL_EVENT	  (GENMASK_ULL(11, 0))
+#define KVM_PMU_EVENT_MASKED_EVENTSEL_MASK	  (GENMASK_ULL(19, 12))
+#define KVM_PMU_EVENT_MASKED_EVENTSEL_MATCH	  (GENMASK_ULL(27, 20))
+#define KVM_PMU_EVENT_MASKED_EVENTSEL_EXCLUDE	  (BIT_ULL(28))
+
 /* for KVM_{GET,SET,HAS}_DEVICE_ATTR */
 #define KVM_VCPU_TSC_CTRL 0 /* control group for the timestamp counter (TSC) */
 #define   KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 50a36cc5bfd0..39e15d83a4ec 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -252,36 +252,61 @@ static inline u8 get_unit_mask(u64 eventsel)
 	return (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
 }
 
-static int cmp_u64(const void *pa, const void *pb)
+static int filter_event_cmp(const void *pa, const void *pb)
 {
-	u64 a = *(u64 *)pa;
-	u64 b = *(u64 *)pb;
+	u64 a = *(u64 *)pa & (KVM_PMU_FILTER_EVENTSEL_INDEX |
+			      KVM_PMU_FILTER_EVENTSEL_EVENT |
+			      KVM_PMU_FILTER_EVENTSEL_EXCLUDE);
+	u64 b = *(u64 *)pb & (KVM_PMU_FILTER_EVENTSEL_INDEX |
+			      KVM_PMU_FILTER_EVENTSEL_EVENT |
+			      KVM_PMU_FILTER_EVENTSEL_EXCLUDE);
 
 	return (a > b) - (a < b);
 }
 
-static u64 *find_filter_entry(struct kvm_pmu_event_filter *filter, u64 key)
+static int find_filter_index(struct kvm_pmu_event_filter *filter, u64 key)
 {
-	return bsearch(&key, filter->events, filter->nevents,
-			  sizeof(filter->events[0]), cmp_u64);
+	u64 *fe = bsearch(&key, filter->events, filter->nevents,
+			  sizeof(filter->events[0]), filter_event_cmp);
+
+	if (!fe)
+		return -1;
+
+	return fe - filter->events;
 }
 
 static bool filter_contains_match(struct kvm_pmu_event_filter *filter,
-				  u64 eventsel)
+				  u64 eventsel, bool exclude)
 {
 	u16 event_select = get_event_select(eventsel);
 	u8 unit_mask = get_unit_mask(eventsel);
+	struct kvm_pmu_filter_entry fe;
+	int i, index;
 	u64 key;
 
-	key = KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask);
-	if (find_filter_entry(filter, key))
-		return true;
+	key = KVM_PMU_ENCODE_FILTER_ENTRY(event_select, 0, 0, exclude);
+	index = find_filter_index(filter, key);
+	if (index < 0)
+		return false;
+
+	for (i = index; i < filter->nevents; i++) {
+		fe.raw = filter->events[i];
+
+		if (fe.event_select != event_select ||
+		    fe.exclude != exclude)
+			break;
+
+		if ((unit_mask & fe.mask) == fe.match)
+			return true;
+	}
+
 	return false;
 }
 
 static bool is_gp_event_allowed(struct kvm_pmu_event_filter *filter, u64 eventsel)
 {
-	if (filter_contains_match(filter, eventsel))
+	if (filter_contains_match(filter, eventsel, false) &&
+	    !filter_contains_match(filter, eventsel, true))
 		return filter->action == KVM_PMU_EVENT_ALLOW;
 
 	return filter->action == KVM_PMU_EVENT_DENY;
@@ -600,11 +625,20 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 }
 EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);
 
-static inline u64 get_event_filter_mask(void)
+static inline u64 get_event_filter_mask(u32 flags)
 {
 	u64 event_select_mask =
 		static_call(kvm_x86_pmu_get_eventsel_event_mask)();
 
+	if (flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS) {
+		u64 masked_eventsel_event = get_event_select(event_select_mask);
+
+		return masked_eventsel_event |
+		       KVM_PMU_EVENT_MASKED_EVENTSEL_MASK |
+		       KVM_PMU_EVENT_MASKED_EVENTSEL_MATCH |
+		       KVM_PMU_EVENT_MASKED_EVENTSEL_EXCLUDE;
+	}
+
 	return event_select_mask | ARCH_PERFMON_EVENTSEL_UMASK;
 }
 
@@ -613,6 +647,29 @@ static inline bool is_event_valid(u64 event, u64 mask)
 	return !(event & ~mask);
 }
 
+static bool is_filter_valid(struct kvm_pmu_event_filter *filter)
+{
+	u64 event_mask;
+	int i;
+
+	/* To maintain backwards compatibility don't validate raw events. */
+	if (!filter->flags)
+		return true;
+
+	event_mask = get_event_filter_mask(filter->flags);
+	for (i = 0; i < filter->nevents; i++) {
+		if (!is_event_valid(filter->events[i], event_mask))
+			return false;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(filter->pad); i++) {
+		if (filter->pad[i])
+			return false;
+	}
+
+	return true;
+}
+
 static void remove_invalid_raw_events(struct kvm_pmu_event_filter *filter)
 {
 	u64 raw_mask;
@@ -621,7 +678,7 @@ static void remove_invalid_raw_events(struct kvm_pmu_event_filter *filter)
 	if (filter->flags)
 		return;
 
-	raw_mask = get_event_filter_mask();
+	raw_mask = get_event_filter_mask(filter->flags);
 	for (i = 0, j = 0; i < filter->nevents; i++) {
 		u64 raw_event = filter->events[i];
 
@@ -632,12 +689,27 @@ static void remove_invalid_raw_events(struct kvm_pmu_event_filter *filter)
 	filter->nevents = j;
 }
 
-static inline u64 encode_filter_entry(u64 event)
+static inline u64 encode_filter_entry(u64 event, u32 flags)
 {
-	u16 event_select = get_event_select(event);
-	u8 unit_mask = get_unit_mask(event);
+	u16 event_select;
+	u8 mask, match;
+	bool exclude;
 
-	return KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask);
+	if (flags & KVM_PMU_EVENT_FLAG_MASKED_EVENTS) {
+		struct kvm_pmu_event_masked_entry me = { .raw = event };
+
+		mask = me.mask;
+		match = me.match;
+		event_select = me.event_select;
+		exclude = me.exclude;
+	} else {
+		mask = 0xFF;
+		match = get_unit_mask(event);
+		event_select = get_event_select(event);
+		exclude = false;
+	}
+
+	return KVM_PMU_ENCODE_FILTER_ENTRY(event_select, mask, match, exclude);
 }
 
 static void convert_to_filter_events(struct kvm_pmu_event_filter *filter)
@@ -647,7 +719,34 @@ static void convert_to_filter_events(struct kvm_pmu_event_filter *filter)
 	for (i = 0; i < filter->nevents; i++) {
 		u64 e = filter->events[i];
 
-		filter->events[i] = encode_filter_entry(e);
+		filter->events[i] = encode_filter_entry(e, filter->flags);
+	}
+}
+
+/*
+ * Sort will order the list by exclude, then event select.  This function will
+ * then index the sublists of event selects such that when a search is done on
+ * the list, the head of the event select sublist is returned.  This simplifies
+ * the logic in filter_contains_match() when walking the list.
+ */
+static void index_filter_events(struct kvm_pmu_event_filter *filter)
+{
+	struct kvm_pmu_filter_entry *prev, *curr;
+	int i, index = 0;
+
+	if (filter->nevents)
+		prev = (struct kvm_pmu_filter_entry *)(filter->events);
+
+	for (i = 0; i < filter->nevents; i++) {
+		curr = (struct kvm_pmu_filter_entry *)(&filter->events[i]);
+
+		if (curr->event_select != prev->event_select ||
+		    curr->exclude != prev->exclude) {
+			index = 0;
+			prev = curr;
+		}
+
+		curr->event_index = index++;
 	}
 }
 
@@ -661,7 +760,9 @@ static void prepare_filter_events(struct kvm_pmu_event_filter *filter)
 	 * Sort the in-kernel list so that we can search it with bsearch.
 	 */
 	sort(&filter->events, filter->nevents, sizeof(filter->events[0]),
-	     cmp_u64, NULL);
+	     filter_event_cmp, NULL);
+
+	index_filter_events(filter);
 }
 
 int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
@@ -677,7 +778,7 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	    tmp.action != KVM_PMU_EVENT_DENY)
 		return -EINVAL;
 
-	if (tmp.flags != 0)
+	if (tmp.flags & ~KVM_PMU_EVENT_FLAGS_VALID_MASK)
 		return -EINVAL;
 
 	if (tmp.nevents > KVM_PMU_EVENT_FILTER_MAX_EVENTS)
@@ -695,6 +796,10 @@ int kvm_vm_ioctl_set_pmu_event_filter(struct kvm *kvm, void __user *argp)
 	/* Ensure nevents can't be changed between the user copies. */
 	*filter = tmp;
 
+	r = -EINVAL;
+	if (!is_filter_valid(filter))
+		goto cleanup;
+
 	prepare_filter_events(filter);
 
 	mutex_lock(&kvm->lock);
diff --git a/arch/x86/kvm/pmu.h b/arch/x86/kvm/pmu.h
index df4f81e5c685..ffc07e4d8d71 100644
--- a/arch/x86/kvm/pmu.h
+++ b/arch/x86/kvm/pmu.h
@@ -206,19 +206,41 @@ bool is_vmware_backdoor_pmc(u32 pmc_idx);
 extern struct kvm_pmu_ops intel_pmu_ops;
 extern struct kvm_pmu_ops amd_pmu_ops;
 
+#define KVM_PMU_FILTER_EVENTSEL_INDEX		(GENMASK_ULL(27, 16))
+#define KVM_PMU_FILTER_EVENTSEL_EVENT		(GENMASK_ULL(39, 28))
+#define KVM_PMU_FILTER_EVENTSEL_EXCLUDE		(BIT_ULL(40))
+
+/*
+ * Internal representation by which all filter events converge (e.g. masked
+ * events, raw events).  That way there is only one way filter events
+ * behave once the filter is set.
+ *
+ * When filter events are converted into this format then sorted, the
+ * resulting list naturally ends up in two sublists.  One for the 'include
+ * list' and one for the 'exclude list'.  These sublists are further broken
+ * down into sublists ordered by their event select.  After that, the
+ * event select sublists are indexed such that a search for: exclude = n,
+ * event_select = n, and event_index = 0 will return the head of an event
+ * select sublist that can be walked to see if a match exists.
+ */
 struct kvm_pmu_filter_entry {
 	union {
 		u64 raw;
 		struct {
+			u64 mask:8;
+			u64 match:8;
+			u64 event_index:12;
 			u64 event_select:12;
-			u64 unit_mask:8;
-			u64 rsvd:44;
+			u64 exclude:1;
+			u64 rsvd:23;
 		};
 	};
 };
 
-#define KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask) \
-	(((event_select) & 0xFFFULL) | \
-	(((unit_mask) & 0xFFULL) << 12))
+#define KVM_PMU_ENCODE_FILTER_ENTRY(event_select, mask, match, exclude) \
+	(((mask) & 0xFFULL) | \
+	(((match) & 0xFFULL) << 8) | \
+	(((event_select) & 0xFFFULL) << 28) | \
+	((u64)(!!(exclude)) << 40))
 
 #endif /* __KVM_X86_PMU_H */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 205ebdc2b11b..3ab6a55f2a7a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4357,6 +4357,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_SPLIT_IRQCHIP:
 	case KVM_CAP_IMMEDIATE_EXIT:
 	case KVM_CAP_PMU_EVENT_FILTER:
+	case KVM_CAP_PMU_EVENT_MASKED_EVENTS:
 	case KVM_CAP_GET_MSR_FEATURES:
 	case KVM_CAP_MSR_PLATFORM_INFO:
 	case KVM_CAP_EXCEPTION_PAYLOAD:
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index eed0315a77a6..685034baea9d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1177,6 +1177,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_VM_DISABLE_NX_HUGE_PAGES 220
 #define KVM_CAP_S390_ZPCI_OP 221
 #define KVM_CAP_S390_CPU_TOPOLOGY 222
+#define KVM_CAP_PMU_EVENT_MASKED_EVENTS 223
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.37.2.672.g94769d06f0-goog



* [PATCH v4 5/7] selftests: kvm/x86: Add flags when creating a pmu event filter
  2022-08-31 16:21 [PATCH v4 0/7] Introduce and test masked events Aaron Lewis
                   ` (3 preceding siblings ...)
  2022-08-31 16:21 ` [PATCH v4 4/7] kvm: x86/pmu: Introduce masked events to the pmu event filter Aaron Lewis
@ 2022-08-31 16:21 ` Aaron Lewis
  2022-09-14 18:47   ` Jim Mattson
  2022-08-31 16:21 ` [PATCH v4 6/7] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER Aaron Lewis
  2022-08-31 16:21 ` [PATCH v4 7/7] selftests: kvm/x86: Test masked events Aaron Lewis
  6 siblings, 1 reply; 16+ messages in thread
From: Aaron Lewis @ 2022-08-31 16:21 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis

Now that the flags field can be non-zero, pass it in when creating a
pmu event filter.

This is needed in preparation for testing masked events.

No functional change intended.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 .../testing/selftests/kvm/x86_64/pmu_event_filter_test.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index ea4e259a1e2e..bd7054a53981 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -221,14 +221,15 @@ static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
 
 
 static struct kvm_pmu_event_filter *
-create_pmu_event_filter(const uint64_t event_list[],
-			int nevents, uint32_t action)
+create_pmu_event_filter(const uint64_t event_list[], int nevents,
+			uint32_t action, uint32_t flags)
 {
 	struct kvm_pmu_event_filter *f;
 	int i;
 
 	f = alloc_pmu_event_filter(nevents);
 	f->action = action;
+	f->flags = flags;
 	for (i = 0; i < nevents; i++)
 		f->events[i] = event_list[i];
 
@@ -239,7 +240,7 @@ static struct kvm_pmu_event_filter *event_filter(uint32_t action)
 {
 	return create_pmu_event_filter(event_list,
 				       ARRAY_SIZE(event_list),
-				       action);
+				       action, 0);
 }
 
 /*
@@ -286,7 +287,7 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu)
 	struct kvm_pmu_event_filter *f;
 	uint64_t count;
 
-	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY);
+	f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
 	count = test_with_filter(vcpu, f);
 
 	free(f);
-- 
2.37.2.672.g94769d06f0-goog



* [PATCH v4 6/7] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER
  2022-08-31 16:21 [PATCH v4 0/7] Introduce and test masked events Aaron Lewis
                   ` (4 preceding siblings ...)
  2022-08-31 16:21 ` [PATCH v4 5/7] selftests: kvm/x86: Add flags when creating a " Aaron Lewis
@ 2022-08-31 16:21 ` Aaron Lewis
  2022-09-14 19:00   ` Jim Mattson
  2022-08-31 16:21 ` [PATCH v4 7/7] selftests: kvm/x86: Test masked events Aaron Lewis
  6 siblings, 1 reply; 16+ messages in thread
From: Aaron Lewis @ 2022-08-31 16:21 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis

Test that masked events are not using invalid bits, and if they are,
ensure the pmu event filter is not accepted by KVM_SET_PMU_EVENT_FILTER.
The only valid bits that can be used for masked events are set when
using KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(), with one exception: if any
of the high bits (11:8) of the event select are set when using Intel,
the PMU event filter will fail.

Also, because validation was not being done prior to the introduction
of masked events, only expect validation to fail when masked events
are used.  E.g. in the first test a filter event with all its bits set
is accepted by KVM_SET_PMU_EVENT_FILTER when flags = 0.

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 35 +++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index bd7054a53981..73a81262ca72 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -442,6 +442,39 @@ static bool use_amd_pmu(void)
 		 is_zen3(entry->eax));
 }
 
+static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
+			   int nevents, uint32_t flags)
+{
+	struct kvm_pmu_event_filter *f;
+	int r;
+
+	f = create_pmu_event_filter(events, nevents, KVM_PMU_EVENT_ALLOW, flags);
+	r = __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
+	free(f);
+
+	return r;
+}
+
+static void test_filter_ioctl(struct kvm_vcpu *vcpu)
+{
+	uint64_t e = ~0ul;
+	int r;
+
+	/*
+	 * Unfortunately having invalid bits set in event data is expected to
+	 * pass when flags == 0 (bits other than eventsel+umask).
+	 */
+	r = run_filter_test(vcpu, &e, 1, 0);
+	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+
+	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	TEST_ASSERT(r != 0, "Invalid PMU Event Filter is expected to fail");
+
+	e = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(0xff, 0xff, 0xff, 0xf);
+	r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
+}
+
 int main(int argc, char *argv[])
 {
 	void (*guest_code)(void);
@@ -472,6 +505,8 @@ int main(int argc, char *argv[])
 	test_not_member_deny_list(vcpu);
 	test_not_member_allow_list(vcpu);
 
+	test_filter_ioctl(vcpu);
+
 	kvm_vm_free(vm);
 
 	test_pmu_config_disable(guest_code);
-- 
2.37.2.672.g94769d06f0-goog



* [PATCH v4 7/7] selftests: kvm/x86: Test masked events
  2022-08-31 16:21 [PATCH v4 0/7] Introduce and test masked events Aaron Lewis
                   ` (5 preceding siblings ...)
  2022-08-31 16:21 ` [PATCH v4 6/7] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER Aaron Lewis
@ 2022-08-31 16:21 ` Aaron Lewis
  2022-09-14 20:12   ` Jim Mattson
  6 siblings, 1 reply; 16+ messages in thread
From: Aaron Lewis @ 2022-08-31 16:21 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, jmattson, seanjc, Aaron Lewis

Add testing to show that a pmu event can be filtered with a generalized
match on its unit mask.

These tests set up test cases to demonstrate various ways of filtering
a pmu event that has multiple unit mask values.  It does this by
setting up the filter in KVM with the masked events provided, then
enabling three pmu counters in the guest.  The test then verifies that
the pmu counters agree with which counters should be counting and which
counters should be filtered for both a sparse filter list and a dense
filter list.
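
To make one case concrete: for the "Only allow loads and stores." test on
Intel, the filter holds an include entry with mask 0x7C / match 0x00 and
an exclude entry with mask 0xFF / match 0x83.  Unit masks 0x81 (loads)
and 0x82 (stores) satisfy (unit mask & 0x7C) == 0x00 and hit no exclude
entry, so the allow list permits them; 0x83 (loads + stores) also matches
the include entry but hits the exclude entry, so it is not permitted.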

Signed-off-by: Aaron Lewis <aaronlewis@google.com>
---
 .../kvm/x86_64/pmu_event_filter_test.c        | 330 +++++++++++++++++-
 1 file changed, 329 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 73a81262ca72..dd75c2fb3048 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -442,6 +442,326 @@ static bool use_amd_pmu(void)
 		 is_zen3(entry->eax));
 }
 
+/*
+ * "MEM_INST_RETIRED.ALL_LOADS", "MEM_INST_RETIRED.ALL_STORES", and
+ * "MEM_INST_RETIRED.ANY" from https://perfmon-events.intel.com/
+ * supported on Intel Xeon processors:
+ *  - Sapphire Rapids, Ice Lake, Cascade Lake, Skylake.
+ */
+#define MEM_INST_RETIRED		0xD0
+#define MEM_INST_RETIRED_LOAD		EVENT(MEM_INST_RETIRED, 0x81)
+#define MEM_INST_RETIRED_STORE		EVENT(MEM_INST_RETIRED, 0x82)
+#define MEM_INST_RETIRED_LOAD_STORE	EVENT(MEM_INST_RETIRED, 0x83)
+
+static bool supports_event_mem_inst_retired(void)
+{
+	uint32_t eax, ebx, ecx, edx;
+
+	cpuid(1, &eax, &ebx, &ecx, &edx);
+	assert(x86_family(eax) == 0x6);
+
+	switch (x86_model(eax)) {
+	/* Sapphire Rapids */
+	case 0x8F:
+	/* Ice Lake */
+	case 0x6A:
+	/* Skylake */
+	/* Cascade Lake */
+	case 0x55:
+		return true;
+	}
+
+	return false;
+}
+
+/*
+ * "LS Dispatch", from Processor Programming Reference
+ * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors,
+ * Preliminary Processor Programming Reference (PPR) for AMD Family
+ * 17h Model 31h, Revision B0 Processors, and Preliminary Processor
+ * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision
+ * B1 Processors Volume 1 of 2.
+ */
+#define LS_DISPATCH		0x29
+#define LS_DISPATCH_LOAD	EVENT(LS_DISPATCH, BIT(0))
+#define LS_DISPATCH_STORE	EVENT(LS_DISPATCH, BIT(1))
+#define LS_DISPATCH_LOAD_STORE	EVENT(LS_DISPATCH, BIT(2))
+
+#define INCLUDE_MASKED_ENTRY(event_select, mask, match) \
+	KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(event_select, mask, match, false)
+#define EXCLUDE_MASKED_ENTRY(event_select, mask, match) \
+	KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(event_select, mask, match, true)
+
+struct perf_counter {
+	union {
+		uint64_t raw;
+		struct {
+			uint64_t loads:22;
+			uint64_t stores:22;
+			uint64_t loads_stores:20;
+		};
+	};
+};
+
+static uint64_t masked_events_guest_test(uint32_t msr_base)
+{
+	uint64_t ld0, ld1, st0, st1, ls0, ls1;
+	struct perf_counter c;
+	int val;
+
+	ld0 = rdmsr(msr_base + 0);
+	st0 = rdmsr(msr_base + 1);
+	ls0 = rdmsr(msr_base + 2);
+
+	__asm__ __volatile__("movl $0, %[v];"
+			     "movl %[v], %%eax;"
+			     "incl %[v];"
+			     : [v]"+m"(val) :: "eax");
+
+	ld1 = rdmsr(msr_base + 0);
+	st1 = rdmsr(msr_base + 1);
+	ls1 = rdmsr(msr_base + 2);
+
+	c.loads = ld1 - ld0;
+	c.stores = st1 - st0;
+	c.loads_stores = ls1 - ls0;
+
+	return c.raw;
+}
+
+static void intel_masked_events_guest_code(void)
+{
+	uint64_t r;
+
+	for (;;) {
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+		wrmsr(MSR_P6_EVNTSEL0 + 0, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_LOAD);
+		wrmsr(MSR_P6_EVNTSEL0 + 1, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_STORE);
+		wrmsr(MSR_P6_EVNTSEL0 + 2, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_LOAD_STORE);
+
+		wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x7);
+
+		r = masked_events_guest_test(MSR_IA32_PMC0);
+
+		GUEST_SYNC(r);
+	}
+}
+
+static void amd_masked_events_guest_code(void)
+{
+	uint64_t r;
+
+	for (;;) {
+		wrmsr(MSR_K7_EVNTSEL0, 0);
+		wrmsr(MSR_K7_EVNTSEL1, 0);
+		wrmsr(MSR_K7_EVNTSEL2, 0);
+
+		wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD);
+		wrmsr(MSR_K7_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_STORE);
+		wrmsr(MSR_K7_EVNTSEL2, ARCH_PERFMON_EVENTSEL_ENABLE |
+		      ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD_STORE);
+
+		r = masked_events_guest_test(MSR_K7_PERFCTR0);
+
+		GUEST_SYNC(r);
+	}
+}
+
+static struct perf_counter run_masked_events_test(struct kvm_vcpu *vcpu,
+						 const uint64_t masked_events[],
+						 const int nmasked_events)
+{
+	struct kvm_pmu_event_filter *f;
+	struct perf_counter r;
+
+	f = create_pmu_event_filter(masked_events, nmasked_events,
+				    KVM_PMU_EVENT_ALLOW,
+				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
+	r.raw = test_with_filter(vcpu, f);
+	free(f);
+
+	return r;
+}
+
+/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */
+#define MAX_FILTER_EVENTS	300
+#define MAX_TEST_EVENTS		10
+
+#define ALLOW_LOADS		BIT(0)
+#define ALLOW_STORES		BIT(1)
+#define ALLOW_LOADS_STORES	BIT(2)
+
+struct masked_events_test {
+	uint64_t intel_events[MAX_TEST_EVENTS];
+	uint64_t intel_event_end;
+	uint64_t amd_events[MAX_TEST_EVENTS];
+	uint64_t amd_event_end;
+	const char *msg;
+	uint32_t flags;
+};
+
+/*
+ * These are the test cases for the masked events tests.
+ *
+ * For each test, the guest enables 3 PMU counters (loads, stores,
+ * loads + stores).  The filter is then set in KVM with the masked events
+ * provided.  The test then verifies that the counters agree with which
+ * ones should be counting and which ones should be filtered.
+ */
+const struct masked_events_test test_cases[] = {
+	{
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x81),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(0)),
+		},
+		.msg = "Only allow loads.",
+		.flags = ALLOW_LOADS,
+	}, {
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x82),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(1)),
+		},
+		.msg = "Only allow stores.",
+		.flags = ALLOW_STORES,
+	}, {
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x83),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(2)),
+		},
+		.msg = "Only allow loads + stores.",
+		.flags = ALLOW_LOADS_STORES,
+	}, {
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0),
+			EXCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x83),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, ~(BIT(0) | BIT(1)), 0),
+		},
+		.msg = "Only allow loads and stores.",
+		.flags = ALLOW_LOADS | ALLOW_STORES,
+	}, {
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0),
+			EXCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x82),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0),
+			EXCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(1)),
+		},
+		.msg = "Only allow loads and loads + stores.",
+		.flags = ALLOW_LOADS | ALLOW_LOADS_STORES
+	}, {
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFE, 0x82),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0),
+			EXCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(0)),
+		},
+		.msg = "Only allow stores and loads + stores.",
+		.flags = ALLOW_STORES | ALLOW_LOADS_STORES
+	}, {
+		.intel_events = {
+			INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0),
+		},
+		.amd_events = {
+			INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0),
+		},
+		.msg = "Only allow loads, stores, and loads + stores.",
+		.flags = ALLOW_LOADS | ALLOW_STORES | ALLOW_LOADS_STORES
+	},
+};
+
+static int append_test_events(const struct masked_events_test *test,
+			      uint64_t *events, int nevents)
+{
+	const uint64_t *evts;
+	int i;
+
+	evts = use_intel_pmu() ? test->intel_events : test->amd_events;
+	for (i = 0; i < MAX_TEST_EVENTS; i++) {
+		if (evts[i] == 0)
+			break;
+
+		events[nevents + i] = evts[i];
+	}
+
+	return nevents + i;
+}
+
+static bool bool_eq(bool a, bool b)
+{
+	return a == b;
+}
+
+static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events,
+				    int nevents)
+{
+	int ntests = ARRAY_SIZE(test_cases);
+	struct perf_counter c;
+	int i, n;
+
+	for (i = 0; i < ntests; i++) {
+		const struct masked_events_test *test = &test_cases[i];
+
+		/* Do any test case events overflow MAX_TEST_EVENTS? */
+		assert(test->intel_event_end == 0);
+		assert(test->amd_event_end == 0);
+
+		n = append_test_events(test, events, nevents);
+
+		c = run_masked_events_test(vcpu, events, n);
+		TEST_ASSERT(bool_eq(c.loads, test->flags & ALLOW_LOADS) &&
+			    bool_eq(c.stores, test->flags & ALLOW_STORES) &&
+			    bool_eq(c.loads_stores,
+				    test->flags & ALLOW_LOADS_STORES),
+			    "%s  loads: %u, stores: %u, loads + stores: %u",
+			    test->msg, c.loads, c.stores, c.loads_stores);
+	}
+}
+
+static void add_dummy_events(uint64_t *events, int nevents)
+{
+	int i;
+
+	for (i = 0; i < nevents; i++) {
+		int event_select = i % 0xFF;
+		bool exclude = ((i % 4) == 0);
+
+		if (event_select == MEM_INST_RETIRED ||
+		    event_select == LS_DISPATCH)
+			event_select++;
+
+		events[i] = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(event_select, 0,
+							      0, exclude);
+	}
+}
+
+static void test_masked_events(struct kvm_vcpu *vcpu)
+{
+	int nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS;
+	uint64_t events[MAX_FILTER_EVENTS];
+
+	/* Run the test cases against a sparse PMU event filter. */
+	run_masked_events_tests(vcpu, events, 0);
+
+	/* Run the test cases against a dense PMU event filter. */
+	add_dummy_events(events, MAX_FILTER_EVENTS);
+	run_masked_events_tests(vcpu, events, nevents);
+}
+
 static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
 			   int nevents, uint32_t flags)
 {
@@ -478,13 +798,14 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
 int main(int argc, char *argv[])
 {
 	void (*guest_code)(void);
-	struct kvm_vcpu *vcpu;
+	struct kvm_vcpu *vcpu, *vcpu2 = NULL;
 	struct kvm_vm *vm;
 
 	/* Tell stdout not to buffer its content */
 	setbuf(stdout, NULL);
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_FILTER));
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_MASKED_EVENTS));
 
 	TEST_REQUIRE(use_intel_pmu() || use_amd_pmu());
 	guest_code = use_intel_pmu() ? intel_guest_code : amd_guest_code;
@@ -505,6 +826,13 @@ int main(int argc, char *argv[])
 	test_not_member_deny_list(vcpu);
 	test_not_member_allow_list(vcpu);
 
+	if (use_intel_pmu() && supports_event_mem_inst_retired())
+		vcpu2 = vm_vcpu_add(vm, 2, intel_masked_events_guest_code);
+	else if (use_amd_pmu())
+		vcpu2 = vm_vcpu_add(vm, 2, amd_masked_events_guest_code);
+
+	if (vcpu2)
+		test_masked_events(vcpu2);
 	test_filter_ioctl(vcpu);
 
 	kvm_vm_free(vm);
-- 
2.37.2.672.g94769d06f0-goog



* Re: [PATCH v4 1/7] kvm: x86/pmu: Correct the mask used in a pmu event filter lookup
  2022-08-31 16:21 ` [PATCH v4 1/7] kvm: x86/pmu: Correct the mask used in a pmu event filter lookup Aaron Lewis
@ 2022-09-14 18:15   ` Jim Mattson
  0 siblings, 0 replies; 16+ messages in thread
From: Jim Mattson @ 2022-09-14 18:15 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, seanjc

On Wed, Aug 31, 2022 at 9:21 AM Aaron Lewis <aaronlewis@google.com> wrote:
>
> When checking if a pmu event the guest is attempting to program should
> be filtered, only consider the event select + unit mask in that
> decision. Use an architecture specific mask to mask out all other bits,
> including bits 35:32 on Intel.  Those bits are not part of the event
> select and should not be considered in that decision.
>
> Fixes: 66bb8a065f5a ("KVM: x86: PMU Event Filter")
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>


* Re: [PATCH v4 2/7] kvm: x86/pmu: Remove invalid raw events from the pmu event filter
  2022-08-31 16:21 ` [PATCH v4 2/7] kvm: x86/pmu: Remove invalid raw events from the pmu event filter Aaron Lewis
@ 2022-09-14 18:17   ` Jim Mattson
  0 siblings, 0 replies; 16+ messages in thread
From: Jim Mattson @ 2022-09-14 18:17 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, seanjc

On Wed, Aug 31, 2022 at 9:21 AM Aaron Lewis <aaronlewis@google.com> wrote:
>
> If a raw event is invalid, i.e. bits set outside the event select +
> unit mask, the event will never match the search, so it's pointless
> to have it in the list.  Opt for a shorter list by removing invalid
> raw events.
>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>


* Re: [PATCH v4 3/7] kvm: x86/pmu: prepare the pmu event filter for masked events
  2022-08-31 16:21 ` [PATCH v4 3/7] kvm: x86/pmu: prepare the pmu event filter for masked events Aaron Lewis
@ 2022-09-14 18:25   ` Jim Mattson
  0 siblings, 0 replies; 16+ messages in thread
From: Jim Mattson @ 2022-09-14 18:25 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, seanjc

On Wed, Aug 31, 2022 at 9:21 AM Aaron Lewis <aaronlewis@google.com> wrote:
>
> Create an internal representation for filter events to abstract the
> events userspace uses from the events the kernel uses.  That will allow
> the kernel to use a common event and a common code path between the
> different types of filter events used in userspace once masked events
> are introduced.
>
> No functional changes intended
>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> ---
>  arch/x86/kvm/pmu.c | 118 ++++++++++++++++++++++++++++++++-------------
>  arch/x86/kvm/pmu.h |  16 ++++++
>  2 files changed, 100 insertions(+), 34 deletions(-)
>
> diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
> index e7d94e6b7f28..50a36cc5bfd0 100644
> --- a/arch/x86/kvm/pmu.c
> +++ b/arch/x86/kvm/pmu.c
> @@ -239,6 +239,19 @@ static bool pmc_resume_counter(struct kvm_pmc *pmc)
>         return true;
>  }
>
> +static inline u16 get_event_select(u64 eventsel)
> +{
> +       u64 e = eventsel &
> +               static_call(kvm_x86_pmu_get_eventsel_event_mask)();
> +
> +       return (e & ARCH_PERFMON_EVENTSEL_EVENT) | ((e >> 24) & 0xF00ULL);
> +}
> +
> +static inline u8 get_unit_mask(u64 eventsel)
> +{
> +       return (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
> +}
> +
>  static int cmp_u64(const void *pa, const void *pb)
>  {
>         u64 a = *(u64 *)pa;
> @@ -247,53 +260,63 @@ static int cmp_u64(const void *pa, const void *pb)
>         return (a > b) - (a < b);
>  }
>
> -static inline u64 get_event_select(u64 eventsel)
> +static u64 *find_filter_entry(struct kvm_pmu_event_filter *filter, u64 key)
> +{
> +       return bsearch(&key, filter->events, filter->nevents,
> +                         sizeof(filter->events[0]), cmp_u64);
> +}
> +
> +static bool filter_contains_match(struct kvm_pmu_event_filter *filter,
> +                                 u64 eventsel)
> +{
> +       u16 event_select = get_event_select(eventsel);
> +       u8 unit_mask = get_unit_mask(eventsel);
> +       u64 key;
> +
> +       key = KVM_PMU_ENCODE_FILTER_ENTRY(event_select, unit_mask);
> +       if (find_filter_entry(filter, key))
> +               return true;
> +       return false;

Perhaps just:
        return find_filter_entry(filter, key);

Reviewed-by: Jim Mattson <jmattson@google.com>
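
As a self-contained illustration of the decomposition the two helpers above
perform, here is a userspace sketch; the masks are written out locally and
assumed to match the architectural layout (event select in bits 7:0 plus
AMD's extension in bits 35:32, unit mask in bits 15:8):

	#include <stdint.h>
	#include <stdio.h>

	/* Assumed field layout of a raw eventsel value. */
	#define EVENTSEL_EVENT_LO	0x00000000000000ffULL	/* bits 7:0 */
	#define EVENTSEL_EVENT_HI	0x0000000f00000000ULL	/* bits 35:32 (AMD) */
	#define EVENTSEL_UMASK		0x000000000000ff00ULL	/* bits 15:8 */

	static uint16_t get_event_select(uint64_t eventsel)
	{
		uint64_t e = eventsel & (EVENTSEL_EVENT_LO | EVENTSEL_EVENT_HI);

		/* Pack bits 35:32 down into bits 11:8 of the 16-bit result. */
		return (e & EVENTSEL_EVENT_LO) | ((e >> 24) & 0xf00);
	}

	static uint8_t get_unit_mask(uint64_t eventsel)
	{
		return (eventsel & EVENTSEL_UMASK) >> 8;
	}

	int main(void)
	{
		uint64_t eventsel = 0x81d0;	/* event 0xd0, unit mask 0x81 */

		printf("event select: 0x%x, unit mask: 0x%x\n",
		       get_event_select(eventsel), get_unit_mask(eventsel));
		return 0;
	}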

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 4/7] kvm: x86/pmu: Introduce masked events to the pmu event filter
  2022-08-31 16:21 ` [PATCH v4 4/7] kvm: x86/pmu: Introduce masked events to the pmu event filter Aaron Lewis
@ 2022-09-14 18:45   ` Jim Mattson
  0 siblings, 0 replies; 16+ messages in thread
From: Jim Mattson @ 2022-09-14 18:45 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, seanjc

On Wed, Aug 31, 2022 at 9:21 AM Aaron Lewis <aaronlewis@google.com> wrote:
>
> When building a list of filter events, it can sometimes be a challenge
> to fit all the events needed to adequately restrict the guest into the
> limited space available in the pmu event filter.  This stems from the
> fact that the pmu event filter requires each raw event (i.e. event
> select + unit mask) be listed, when the intention might be to restrict
> the event select altogether, regardless of its unit mask.  Instead
> of increasing the number of filter events in the pmu event filter, add
> a new encoding that is able to do a more generalized match on the unit
> mask.
>
> Introduce masked events as a new encoding that the pmu event filter
> understands in addition to raw events.  A masked event has the fields:
> mask, match, and exclude.  When filtering based on these events, the
> mask is applied to the guest's unit mask to see if it matches the match
> value (i.e. unit_mask & mask == match).  The exclude bit can then be
> used to exclude events from that match.  E.g. for a given event select,
> if it's easier to say which unit mask values shouldn't be filtered, a
> masked event can be set up to match all possible unit mask values, then
> another masked event can be set up to match the unit mask values that
> shouldn't be filtered.
>
> Userspace can query to see if this feature exists by looking for the
> capability, KVM_CAP_PMU_EVENT_MASKED_EVENTS.
>
> This feature is enabled by setting the flags field in the pmu event
> filter to KVM_PMU_EVENT_FLAG_MASKED_EVENTS.
>
> Events can be encoded by using KVM_PMU_EVENT_ENCODE_MASKED_ENTRY().
>
> It is an error to have a bit set outside the valid bits for a masked
> event; on Intel this includes the high bits (11:8) of the event select.
> Calls to KVM_SET_PMU_EVENT_FILTER will return -EINVAL in such cases.
>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> ---
>  Documentation/virt/kvm/api.rst  |  81 ++++++++++++++++--
>  arch/x86/include/uapi/asm/kvm.h |  28 ++++++
>  arch/x86/kvm/pmu.c              | 145 +++++++++++++++++++++++++++-----
>  arch/x86/kvm/pmu.h              |  32 +++++--
>  arch/x86/kvm/x86.c              |   1 +
>  include/uapi/linux/kvm.h        |   1 +
>  6 files changed, 255 insertions(+), 33 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index abd7c32126ce..e7783e41c590 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -5027,7 +5027,13 @@ using this ioctl.
>  :Architectures: x86
>  :Type: vm ioctl
>  :Parameters: struct kvm_pmu_event_filter (in)
> -:Returns: 0 on success, -1 on error
> +:Returns: 0 on success,
> +    -EFAULT args[0] cannot be accessed.
> +    -EINVAL args[0] contains invalid data in the filter or filter events.
> +                    Note: event validation is only done for modes where
> +                    the flags field is non-zero.
> +    -E2BIG nevents is too large.
> +    -ENOMEM not enough memory to allocate the filter.

I assume that the ioctl returns -1 on error, but that errno is set to
one of the errors listed above (in its positive form).

So...

:Returns: 0 on success, -1 on error

Errors:
====== ============================================
EFAULT args[0] cannot be accessed
EINVAL args[0] contains invalid data in the filter or filter events
E2BIG nevents is too large
ENOMEM not enough memory to allocate the filter
====== ============================================
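
For completeness, a hedged sketch of how a userspace caller sees those
errors through the usual -1/errno convention (vm_fd and the filter are
assumed to have been set up elsewhere):

	#include <errno.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Returns 0 on success, -errno (e.g. -EINVAL, -E2BIG) on failure. */
	static int set_pmu_event_filter(int vm_fd,
					struct kvm_pmu_event_filter *filter)
	{
		if (ioctl(vm_fd, KVM_SET_PMU_EVENT_FILTER, filter) < 0) {
			fprintf(stderr, "KVM_SET_PMU_EVENT_FILTER: %s\n",
				strerror(errno));
			return -errno;
		}
		return 0;
	}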

> @@ -647,7 +719,34 @@ static void convert_to_filter_events(struct kvm_pmu_event_filter *filter)
>         for (i = 0; i < filter->nevents; i++) {
>                 u64 e = filter->events[i];
>
> -               filter->events[i] = encode_filter_entry(e);
> +               filter->events[i] = encode_filter_entry(e, filter->flags);
> +       }
> +}
> +
> +/*
> + * Sort will order the list by exclude, then event select.  This function will
> + * then index the sublists of event selects such that when a search is done on
> + * the list, the head of the event select sublist is returned.  This simpilfies

Nit: simplifies

With the documentation changes,

Reviewed-by: Jim Mattson <jmattson@google.com>
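
To make the mask/match/exclude rule concrete, a standalone sketch of the
matching semantics as described in the commit message; a plain struct is
used here instead of the uapi encoding, since the exact bit layout behind
KVM_PMU_EVENT_ENCODE_MASKED_ENTRY() is not quoted above:

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Illustrative only: not the uapi bit layout. */
	struct masked_event {
		uint8_t mask;
		uint8_t match;
		bool exclude;
	};

	static bool unit_mask_matches(const struct masked_event *e, uint8_t um)
	{
		return (um & e->mask) == e->match;
	}

	/* Allowed if some include entry matches and no exclude entry does. */
	static bool unit_mask_allowed(const struct masked_event *ev, int n,
				      uint8_t um)
	{
		bool included = false;
		int i;

		for (i = 0; i < n; i++) {
			if (!unit_mask_matches(&ev[i], um))
				continue;
			if (ev[i].exclude)
				return false;
			included = true;
		}

		return included;
	}

	int main(void)
	{
		/* Allow every unit mask of a given event select except 0x81. */
		struct masked_event ev[] = {
			{ .mask = 0x00, .match = 0x00, .exclude = false },
			{ .mask = 0xff, .match = 0x81, .exclude = true },
		};

		printf("0x81 allowed: %d\n", unit_mask_allowed(ev, 2, 0x81));
		printf("0x82 allowed: %d\n", unit_mask_allowed(ev, 2, 0x82));
		return 0;
	}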

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 5/7] selftests: kvm/x86: Add flags when creating a pmu event filter
  2022-08-31 16:21 ` [PATCH v4 5/7] selftests: kvm/x86: Add flags when creating a " Aaron Lewis
@ 2022-09-14 18:47   ` Jim Mattson
  0 siblings, 0 replies; 16+ messages in thread
From: Jim Mattson @ 2022-09-14 18:47 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, seanjc

On Wed, Aug 31, 2022 at 9:21 AM Aaron Lewis <aaronlewis@google.com> wrote:
>
> Now that the flags field can be non-zero, pass it in when creating a
> pmu event filter.
>
> This is needed in preparation for testing masked events.
>
> No functional change intended.
>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>

I'm not sure this warrants a separate commit, but okay.

Reviewed-by: Jim Mattson <jmattson@google.com>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 6/7] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER
  2022-08-31 16:21 ` [PATCH v4 6/7] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER Aaron Lewis
@ 2022-09-14 19:00   ` Jim Mattson
  0 siblings, 0 replies; 16+ messages in thread
From: Jim Mattson @ 2022-09-14 19:00 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, seanjc

On Wed, Aug 31, 2022 at 9:21 AM Aaron Lewis <aaronlewis@google.com> wrote:
>
> Test that masked events are not using invalid bits, and if they are,
> ensure the pmu event filter is not accepted by KVM_SET_PMU_EVENT_FILTER.
> The only valid bits that can be used for masked events are set when
> using KVM_PMU_EVENT_ENCODE_MASKED_EVENT() with one exception: If any
> of the high bits (11:8) of the event select are set when using Intel,
> the PMU event filter will fail.
>
> Also, because validation was not being done prior to the introduction
> of masked events, only expect validation to fail when masked events
> are used.  E.g. in the first test a filter event with all it's bits set

Nit: its

> is accepted by KVM_SET_PMU_EVENT_FILTER when flags = 0.
>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> ---
>  .../kvm/x86_64/pmu_event_filter_test.c        | 35 +++++++++++++++++++
>  1 file changed, 35 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> index bd7054a53981..73a81262ca72 100644
> --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> @@ -442,6 +442,39 @@ static bool use_amd_pmu(void)
>                  is_zen3(entry->eax));
>  }
>
> +static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
> +                          int nevents, uint32_t flags)
> +{
> +       struct kvm_pmu_event_filter *f;
> +       int r;
> +
> +       f = create_pmu_event_filter(events, nevents, KVM_PMU_EVENT_ALLOW, flags);
> +       r = __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
> +       free(f);
> +
> +       return r;
> +}
> +
> +static void test_filter_ioctl(struct kvm_vcpu *vcpu)
> +{
> +       uint64_t e = ~0ul;
> +       int r;
> +
> +       /*
> +        * Unfortunately having invalid bits set in event data is expected to
> +        * pass when flags == 0 (bits other than eventsel+umask).
> +        */
> +       r = run_filter_test(vcpu, &e, 1, 0);
> +       TEST_ASSERT(r == 0, "Valid PMU Event Filter is failing");
> +
> +       r = run_filter_test(vcpu, &e, 1, KVM_PMU_EVENT_FLAG_MASKED_EVENTS);

Before using KVM_PMU_EVENT_FLAG_MASKED_EVENTS, we need to test for
KVM_CAP_PMU_EVENT_MASKED_EVENTS.
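
Something along these lines (a sketch; exact placement and assert text are
up to the author), reusing the helpers from this patch:

	/* Only exercise masked events if KVM advertises the capability. */
	if (kvm_has_cap(KVM_CAP_PMU_EVENT_MASKED_EVENTS)) {
		r = run_filter_test(vcpu, &e, 1,
				    KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
		TEST_ASSERT(r != 0,
			    "Invalid PMU Event Filter should be rejected");
	}

Or, if the whole test is meant to depend on it, a single
TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_MASKED_EVENTS)) at the top of
main() would also do.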

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 7/7] selftests: kvm/x86: Test masked events
  2022-08-31 16:21 ` [PATCH v4 7/7] selftests: kvm/x86: Test masked events Aaron Lewis
@ 2022-09-14 20:12   ` Jim Mattson
  2022-09-20 15:24     ` Aaron Lewis
  0 siblings, 1 reply; 16+ messages in thread
From: Jim Mattson @ 2022-09-14 20:12 UTC (permalink / raw)
  To: Aaron Lewis; +Cc: kvm, pbonzini, seanjc

On Wed, Aug 31, 2022 at 9:21 AM Aaron Lewis <aaronlewis@google.com> wrote:
>
> Add testing to show that a pmu event can be filtered with a generalized
> match on its unit mask.
>
> These tests set up test cases to demonstrate various ways of filtering
> a pmu event that has multiple unit mask values.  It does this by
> setting up the filter in KVM with the masked events provided, then
> enabling three pmu counters in the guest.  The test then verifies that
> the pmu counters agree with which counters should be counting and which
> counters should be filtered for both a sparse filter list and a dense
> filter list.
>
> Signed-off-by: Aaron Lewis <aaronlewis@google.com>
> ---
>  .../kvm/x86_64/pmu_event_filter_test.c        | 330 +++++++++++++++++-
>  1 file changed, 329 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> index 73a81262ca72..dd75c2fb3048 100644
> --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> @@ -442,6 +442,326 @@ static bool use_amd_pmu(void)
>                  is_zen3(entry->eax));
>  }
>
> +/*
> + * "MEM_INST_RETIRED.ALL_LOADS", "MEM_INST_RETIRED.ALL_STORES", and
> + * "MEM_INST_RETIRED.ANY" from https://perfmon-events.intel.com/
> + * supported on Intel Xeon processors:
> + *  - Sapphire Rapids, Ice Lake, Cascade Lake, Skylake.
> + */
> +#define MEM_INST_RETIRED               0xD0
> +#define MEM_INST_RETIRED_LOAD          EVENT(MEM_INST_RETIRED, 0x81)
> +#define MEM_INST_RETIRED_STORE         EVENT(MEM_INST_RETIRED, 0x82)
> +#define MEM_INST_RETIRED_LOAD_STORE    EVENT(MEM_INST_RETIRED, 0x83)
> +
> +static bool supports_event_mem_inst_retired(void)
> +{
> +       uint32_t eax, ebx, ecx, edx;
> +
> +       cpuid(1, &eax, &ebx, &ecx, &edx);
> +       assert(x86_family(eax) == 0x6);

This seems aggressive. Even if Linux no longer runs on Pentium 4, what
about Xeon Phi?

> +
> +       switch (x86_model(eax)) {
> +       /* Sapphire Rapids */
> +       case 0x8F:
> +       /* Ice Lake */
> +       case 0x6A:
> +       /* Skylake */
> +       /* Cascade Lake */
> +       case 0x55:
> +               return true;
> +       }
> +
> +       return false;
> +}
> +
> +/*
> + * "LS Dispatch", from Processor Programming Reference
> + * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors,
> + * Preliminary Processor Programming Reference (PPR) for AMD Family
> + * 17h Model 31h, Revision B0 Processors, and Preliminary Processor
> + * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision
> + * B1 Processors Volume 1 of 2.
> + */
> +#define LS_DISPATCH            0x29
> +#define LS_DISPATCH_LOAD       EVENT(LS_DISPATCH, BIT(0))
> +#define LS_DISPATCH_STORE      EVENT(LS_DISPATCH, BIT(1))
> +#define LS_DISPATCH_LOAD_STORE EVENT(LS_DISPATCH, BIT(2))
> +
> +#define INCLUDE_MASKED_ENTRY(event_select, mask, match) \
> +       KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(event_select, mask, match, false)
> +#define EXCLUDE_MASKED_ENTRY(event_select, mask, match) \
> +       KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(event_select, mask, match, true)
> +
> +struct perf_counter {
> +       union {
> +               uint64_t raw;
> +               struct {
> +                       uint64_t loads:22;
> +                       uint64_t stores:22;
> +                       uint64_t loads_stores:20;
> +               };
> +       };
> +};
> +
> +static uint64_t masked_events_guest_test(uint32_t msr_base)
> +{
> +       uint64_t ld0, ld1, st0, st1, ls0, ls1;
> +       struct perf_counter c;
> +       int val;
> +
> +       ld0 = rdmsr(msr_base + 0);
> +       st0 = rdmsr(msr_base + 1);
> +       ls0 = rdmsr(msr_base + 2);
> +
> +       __asm__ __volatile__("movl $0, %[v];"
> +                            "movl %[v], %%eax;"
> +                            "incl %[v];"
> +                            : [v]"+m"(val) :: "eax");
> +
> +       ld1 = rdmsr(msr_base + 0);
> +       st1 = rdmsr(msr_base + 1);
> +       ls1 = rdmsr(msr_base + 2);
> +
> +       c.loads = ld1 - ld0;
> +       c.stores = st1 - st0;
> +       c.loads_stores = ls1 - ls0;
> +
> +       return c.raw;
> +}
> +
> +static void intel_masked_events_guest_code(void)
> +{
> +       uint64_t r;
> +
> +       for (;;) {
> +               wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
> +
> +               wrmsr(MSR_P6_EVNTSEL0 + 0, ARCH_PERFMON_EVENTSEL_ENABLE |
> +                     ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_LOAD);
> +               wrmsr(MSR_P6_EVNTSEL0 + 1, ARCH_PERFMON_EVENTSEL_ENABLE |
> +                     ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_STORE);
> +               wrmsr(MSR_P6_EVNTSEL0 + 2, ARCH_PERFMON_EVENTSEL_ENABLE |
> +                     ARCH_PERFMON_EVENTSEL_OS | MEM_INST_RETIRED_LOAD_STORE);
> +
> +               wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x7);
> +
> +               r = masked_events_guest_test(MSR_IA32_PMC0);
> +
> +               GUEST_SYNC(r);
> +       }
> +}
> +
> +static void amd_masked_events_guest_code(void)
> +{
> +       uint64_t r;
> +
> +       for (;;) {
> +               wrmsr(MSR_K7_EVNTSEL0, 0);
> +               wrmsr(MSR_K7_EVNTSEL1, 0);
> +               wrmsr(MSR_K7_EVNTSEL2, 0);
> +
> +               wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
> +                     ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD);
> +               wrmsr(MSR_K7_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
> +                     ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_STORE);
> +               wrmsr(MSR_K7_EVNTSEL2, ARCH_PERFMON_EVENTSEL_ENABLE |
> +                     ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD_STORE);
> +
> +               r = masked_events_guest_test(MSR_K7_PERFCTR0);
> +
> +               GUEST_SYNC(r);
> +       }
> +}
> +
> +static struct perf_counter run_masked_events_test(struct kvm_vcpu *vcpu,
> +                                                const uint64_t masked_events[],
> +                                                const int nmasked_events)
> +{
> +       struct kvm_pmu_event_filter *f;
> +       struct perf_counter r;
> +
> +       f = create_pmu_event_filter(masked_events, nmasked_events,
> +                                   KVM_PMU_EVENT_ALLOW,
> +                                   KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
> +       r.raw = test_with_filter(vcpu, f);
> +       free(f);
> +
> +       return r;
> +}
> +
> +/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */
> +#define MAX_FILTER_EVENTS      300
> +#define MAX_TEST_EVENTS                10
> +
> +#define ALLOW_LOADS            BIT(0)
> +#define ALLOW_STORES           BIT(1)
> +#define ALLOW_LOADS_STORES     BIT(2)
> +
> +struct masked_events_test {
> +       uint64_t intel_events[MAX_TEST_EVENTS];
> +       uint64_t intel_event_end;
> +       uint64_t amd_events[MAX_TEST_EVENTS];
> +       uint64_t amd_event_end;
> +       const char *msg;
> +       uint32_t flags;
> +};
> +
> +/*
> + * These are the test cases for the masked events tests.
> + *
> + * For each test, the guest enables 3 PMU counters (loads, stores,
> + * loads + stores).  The filter is then set in KVM with the masked events
> + * provided.  The test then verifies that the counters agree with which
> + * ones should be counting and which ones should be filtered.
> + */
> +const struct masked_events_test test_cases[] = {
> +       {
> +               .intel_events = {
> +                       INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x81),
> +               },
> +               .amd_events = {
> +                       INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(0)),
> +               },
> +               .msg = "Only allow loads.",
> +               .flags = ALLOW_LOADS,
> +       }, {
> +               .intel_events = {
> +                       INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x82),
> +               },
> +               .amd_events = {
> +                       INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(1)),
> +               },
> +               .msg = "Only allow stores.",
> +               .flags = ALLOW_STORES,
> +       }, {
> +               .intel_events = {
> +                       INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x83),
> +               },
> +               .amd_events = {
> +                       INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(2)),
> +               },
> +               .msg = "Only allow loads + stores.",
> +               .flags = ALLOW_LOADS_STORES,
> +       }, {
> +               .intel_events = {
> +                       INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0),
> +                       EXCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x83),
> +               },
> +               .amd_events = {
> +                       INCLUDE_MASKED_ENTRY(LS_DISPATCH, ~(BIT(0) | BIT(1)), 0),
> +               },
> +               .msg = "Only allow loads and stores.",
> +               .flags = ALLOW_LOADS | ALLOW_STORES,
> +       }, {
> +               .intel_events = {
> +                       INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0),
> +                       EXCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFF, 0x82),
> +               },
> +               .amd_events = {
> +                       INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0),
> +                       EXCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(1)),
> +               },
> +               .msg = "Only allow loads and loads + stores.",
> +               .flags = ALLOW_LOADS | ALLOW_LOADS_STORES
> +       }, {
> +               .intel_events = {
> +                       INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0xFE, 0x82),
> +               },
> +               .amd_events = {
> +                       INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0),
> +                       EXCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xFF, BIT(0)),
> +               },
> +               .msg = "Only allow stores and loads + stores.",
> +               .flags = ALLOW_STORES | ALLOW_LOADS_STORES
> +       }, {
> +               .intel_events = {
> +                       INCLUDE_MASKED_ENTRY(MEM_INST_RETIRED, 0x7C, 0),
> +               },
> +               .amd_events = {
> +                       INCLUDE_MASKED_ENTRY(LS_DISPATCH, 0xF8, 0),
> +               },
> +               .msg = "Only allow loads, stores, and loads + stores.",
> +               .flags = ALLOW_LOADS | ALLOW_STORES | ALLOW_LOADS_STORES
> +       },
> +};
> +
> +static int append_test_events(const struct masked_events_test *test,
> +                             uint64_t *events, int nevents)
> +{
> +       const uint64_t *evts;
> +       int i;
> +
> +       evts = use_intel_pmu() ? test->intel_events : test->amd_events;
> +       for (i = 0; i < MAX_TEST_EVENTS; i++) {
> +               if (evts[i] == 0)
> +                       break;
> +
> +               events[nevents + i] = evts[i];
> +       }
> +
> +       return nevents + i;
> +}
> +
> +static bool bool_eq(bool a, bool b)
> +{
> +       return a == b;
> +}
> +
> +static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events,
> +                                   int nevents)
> +{
> +       int ntests = ARRAY_SIZE(test_cases);
> +       struct perf_counter c;
> +       int i, n;
> +
> +       for (i = 0; i < ntests; i++) {
> +               const struct masked_events_test *test = &test_cases[i];
> +
> +               /* Do any test case events overflow MAX_TEST_EVENTS? */
> +               assert(test->intel_event_end == 0);
> +               assert(test->amd_event_end == 0);
> +
> +               n = append_test_events(test, events, nevents);
> +
> +               c = run_masked_events_test(vcpu, events, n);
> +               TEST_ASSERT(bool_eq(c.loads, test->flags & ALLOW_LOADS) &&
> +                           bool_eq(c.stores, test->flags & ALLOW_STORES) &&
> +                           bool_eq(c.loads_stores,
> +                                   test->flags & ALLOW_LOADS_STORES),
> +                           "%s  loads: %u, stores: %u, loads + stores: %u",
> +                           test->msg, c.loads, c.stores, c.loads_stores);
> +       }
> +}
> +
> +static void add_dummy_events(uint64_t *events, int nevents)
> +{
> +       int i;
> +
> +       for (i = 0; i < nevents; i++) {
> +               int event_select = i % 0xFF;
> +               bool exclude = ((i % 4) == 0);
> +
> +               if (event_select == MEM_INST_RETIRED ||
> +                   event_select == LS_DISPATCH)
> +                       event_select++;
> +
> +               events[i] = KVM_PMU_EVENT_ENCODE_MASKED_ENTRY(event_select, 0,
> +                                                             0, exclude);
> +       }
> +}
> +
> +static void test_masked_events(struct kvm_vcpu *vcpu)
> +{
> +       int nevents = MAX_FILTER_EVENTS - MAX_TEST_EVENTS;
> +       uint64_t events[MAX_FILTER_EVENTS];
> +
> +       /* Run the test cases against a sparse PMU event filter. */
> +       run_masked_events_tests(vcpu, events, 0);
> +
> +       /* Run the test cases against a dense PMU event filter. */
> +       add_dummy_events(events, MAX_FILTER_EVENTS);
> +       run_masked_events_tests(vcpu, events, nevents);
> +}
> +
>  static int run_filter_test(struct kvm_vcpu *vcpu, const uint64_t *events,
>                            int nevents, uint32_t flags)
>  {
> @@ -478,13 +798,14 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
>  int main(int argc, char *argv[])
>  {
>         void (*guest_code)(void);
> -       struct kvm_vcpu *vcpu;
> +       struct kvm_vcpu *vcpu, *vcpu2 = NULL;
>         struct kvm_vm *vm;
>
>         /* Tell stdout not to buffer its content */
>         setbuf(stdout, NULL);
>
>         TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_FILTER));
> +       TEST_REQUIRE(kvm_has_cap(KVM_CAP_PMU_EVENT_MASKED_EVENTS));
>
>         TEST_REQUIRE(use_intel_pmu() || use_amd_pmu());
>         guest_code = use_intel_pmu() ? intel_guest_code : amd_guest_code;
> @@ -505,6 +826,13 @@ int main(int argc, char *argv[])
>         test_not_member_deny_list(vcpu);
>         test_not_member_allow_list(vcpu);
>
> +       if (use_intel_pmu() && supports_event_mem_inst_retired())

Do we need to verify that 3 or more general purpose counters are available?

> +               vcpu2 = vm_vcpu_add(vm, 2, intel_masked_events_guest_code);
> +       else if (use_amd_pmu())

Do we need to verify that the CPU is Zen[123]?

> +               vcpu2 = vm_vcpu_add(vm, 2, amd_masked_events_guest_code);
> +
> +       if (vcpu2)
> +               test_masked_events(vcpu2);
>         test_filter_ioctl(vcpu);
>
>         kvm_vm_free(vm);
> --
> 2.37.2.672.g94769d06f0-goog
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 7/7] selftests: kvm/x86: Test masked events
  2022-09-14 20:12   ` Jim Mattson
@ 2022-09-20 15:24     ` Aaron Lewis
  0 siblings, 0 replies; 16+ messages in thread
From: Aaron Lewis @ 2022-09-20 15:24 UTC (permalink / raw)
  To: Jim Mattson; +Cc: kvm, pbonzini, seanjc

>
>
> > +static bool supports_event_mem_inst_retired(void)
> > +{
> > +       uint32_t eax, ebx, ecx, edx;
> > +
> > +       cpuid(1, &eax, &ebx, &ecx, &edx);
> > +       assert(x86_family(eax) == 0x6);
>
> This seems aggressive. Even if Linux no longer runs on Pentium 4, what
> about Xeon Phi?
>

I'll make it a branch.
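
(E.g., a sketch of that change:)

	static bool supports_event_mem_inst_retired(void)
	{
		uint32_t eax, ebx, ecx, edx;

		cpuid(1, &eax, &ebx, &ecx, &edx);

		/* MEM_INST_RETIRED is only claimed for these family 6 parts. */
		if (x86_family(eax) != 0x6)
			return false;

		switch (x86_model(eax)) {
		/* Sapphire Rapids */
		case 0x8F:
		/* Ice Lake */
		case 0x6A:
		/* Skylake */
		/* Cascade Lake */
		case 0x55:
			return true;
		}

		return false;
	}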

> > @@ -505,6 +826,13 @@ int main(int argc, char *argv[])
> >         test_not_member_deny_list(vcpu);
> >         test_not_member_allow_list(vcpu);
> >
> > +       if (use_intel_pmu() && supports_event_mem_inst_retired())
>
> Do we need to verify that 3 or more general purpose counters are available?

I'll add a check for that.
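
One possible shape for that check (sketch only; the helper name is made up
here): Intel's CPUID.0AH:EAX[15:8] reports the number of general-purpose
counters, so the Intel branch in main() could additionally require
num_gp_counters() >= 3 before adding vcpu2.

	static uint32_t num_gp_counters(void)
	{
		uint32_t eax, ebx, ecx, edx;

		cpuid(0xa, &eax, &ebx, &ecx, &edx);

		/* CPUID.0AH:EAX[15:8] = number of general-purpose counters. */
		return (eax >> 8) & 0xff;
	}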

>
> > +               vcpu2 = vm_vcpu_add(vm, 2, intel_masked_events_guest_code);
> > +       else if (use_amd_pmu())
>
> Do we need to verify that the CPU is Zen[123]?

use_amd_pmu() does that already.

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2022-09-20 15:24 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-31 16:21 [PATCH v4 0/7] Introduce and test masked events Aaron Lewis
2022-08-31 16:21 ` [PATCH v4 1/7] kvm: x86/pmu: Correct the mask used in a pmu event filter lookup Aaron Lewis
2022-09-14 18:15   ` Jim Mattson
2022-08-31 16:21 ` [PATCH v4 2/7] kvm: x86/pmu: Remove invalid raw events from the pmu event filter Aaron Lewis
2022-09-14 18:17   ` Jim Mattson
2022-08-31 16:21 ` [PATCH v4 3/7] kvm: x86/pmu: prepare the pmu event filter for masked events Aaron Lewis
2022-09-14 18:25   ` Jim Mattson
2022-08-31 16:21 ` [PATCH v4 4/7] kvm: x86/pmu: Introduce masked events to the pmu event filter Aaron Lewis
2022-09-14 18:45   ` Jim Mattson
2022-08-31 16:21 ` [PATCH v4 5/7] selftests: kvm/x86: Add flags when creating a " Aaron Lewis
2022-09-14 18:47   ` Jim Mattson
2022-08-31 16:21 ` [PATCH v4 6/7] selftests: kvm/x86: Add testing for KVM_SET_PMU_EVENT_FILTER Aaron Lewis
2022-09-14 19:00   ` Jim Mattson
2022-08-31 16:21 ` [PATCH v4 7/7] selftests: kvm/x86: Test masked events Aaron Lewis
2022-09-14 20:12   ` Jim Mattson
2022-09-20 15:24     ` Aaron Lewis

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).