* [PATCH v2 0/5] KVM: arm/arm64: add support for chained counters
From: Andrew Murray @ 2019-02-04 16:53 UTC (permalink / raw)
  To: Christoffer Dall, Marc Zyngier
  Cc: Julien Thierry, kvmarm, linux-arm-kernel, suzuki.poulose

ARMv8 provides support for chained PMU counters: when the event type
0x001E is set for an odd-numbered counter, that counter increments by
one for each overflow of the preceding even-numbered counter. Let's
emulate this in KVM by creating a 64-bit perf counter when a user
chains two emulated counters together.

Testing has been performed by hard-coding hwc->sample_period in
__hw_perf_event_init (arm_pmu.c) to a small value; this results in
regular overflows (for non-sampling events). The following command
was then used to measure chained and non-chained instruction counts:

perf stat -e armv8_pmuv3/long=1,inst_retired/u \
          -e armv8_pmuv3/long=0,inst_retired/u dd if=/dev/zero bs=1M \
          count=10 | gzip > /dev/null

The reported values were identical (and the non-chained value was in
the same ballpark as on a kernel without this patchset). Debug output
was added to verify that the guest received overflow interrupts for
the chained counter.
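
For reference, the sample_period hack mentioned above amounts to a
local change along these lines in __hw_perf_event_init() (illustrative
only, never intended for merging; 0x100 is just an example value):

  if (!is_sampling_event(event)) {
	/* testing hack: force frequent overflows */
	hwc->sample_period = 0x100;
	hwc->last_period   = hwc->sample_period;
	local64_set(&hwc->period_left, hwc->sample_period);
  }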

Changes since v1:

 - Rename kvm_pmu_{enable,disable}_counter to reflect that they can
   operate on multiple counters at once and use these functions where
   possible

 - Fix bugs with overflow handling: kvm_pmu_get_counter_value did not
   take into consideration the perf counter value overflowing the low
   counter

 - Ensure PMCCFILTR_EL0 is used when operating on the cycle counter

 - Rename kvm_pmu_reenable_enabled_{pair, single} and similar

 - Always create perf event disabled to simplify logic elsewhere

 - Move PMCNTENSET_EL0 test to kvm_pmu_enable_counter_mask

Andrew Murray (5):
  KVM: arm/arm64: rename kvm_pmu_{enable/disable}_counter functions
  KVM: arm/arm64: extract duplicated code to own function
  KVM: arm/arm64: re-create event when setting counter value
  KVM: arm/arm64: lazily create perf events on enable
  KVM: arm/arm64: support chained PMU counters

 arch/arm64/kvm/sys_regs.c |   4 +-
 include/kvm/arm_pmu.h     |   9 +-
 virt/kvm/arm/pmu.c        | 409 ++++++++++++++++++++++++++++++++++++++--------
 3 files changed, 352 insertions(+), 70 deletions(-)

-- 
2.7.4



* [PATCH v2 1/5] KVM: arm/arm64: rename kvm_pmu_{enable/disable}_counter functions
From: Andrew Murray @ 2019-02-04 16:53 UTC (permalink / raw)
  To: Christoffer Dall, Marc Zyngier
  Cc: Julien Thierry, kvmarm, linux-arm-kernel, suzuki.poulose

The kvm_pmu_{enable/disable}_counter functions can enable/disable
multiple counters at once as they operate on a bitmask. Let's make
this clearer by renaming the functions.
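
For example, with the new names a caller can visibly act on several
counters in one call (sketch):

  /* enable counters 0 and 2; the mask mirrors PMCNTENSET_EL0 */
  kvm_pmu_enable_counter_mask(vcpu, BIT(0) | BIT(2));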

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 arch/arm64/kvm/sys_regs.c |  4 ++--
 include/kvm/arm_pmu.h     |  8 ++++----
 virt/kvm/arm/pmu.c        | 12 ++++++------
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e3e3722..4a6fbac 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -857,11 +857,11 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		if (r->Op2 & 0x1) {
 			/* accessing PMCNTENSET_EL0 */
 			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
-			kvm_pmu_enable_counter(vcpu, val);
+			kvm_pmu_enable_counter_mask(vcpu, val);
 		} else {
 			/* accessing PMCNTENCLR_EL0 */
 			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
-			kvm_pmu_disable_counter(vcpu, val);
+			kvm_pmu_disable_counter_mask(vcpu, val);
 		}
 	} else {
 		p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index f87fe20..b73f31b 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -46,8 +46,8 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
-void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
-void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu);
@@ -83,8 +83,8 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 }
 static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {}
-static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
-static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
 static inline bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 1c5b76c..c5a722a 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -135,13 +135,13 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 }
 
 /**
- * kvm_pmu_enable_counter - enable selected PMU counter
+ * kvm_pmu_enable_counter_mask - enable selected PMU counters
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENSET register
  *
  * Call perf_event_enable to start counting the perf event
  */
-void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
+void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
@@ -164,13 +164,13 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
 }
 
 /**
- * kvm_pmu_disable_counter - disable selected PMU counter
+ * kvm_pmu_disable_counter_mask - disable selected PMU counters
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENCLR register
  *
  * Call perf_event_disable to stop counting the perf event
  */
-void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
+void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
@@ -347,10 +347,10 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 
 	mask = kvm_pmu_valid_counter_mask(vcpu);
 	if (val & ARMV8_PMU_PMCR_E) {
-		kvm_pmu_enable_counter(vcpu,
+		kvm_pmu_enable_counter_mask(vcpu,
 		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
 	} else {
-		kvm_pmu_disable_counter(vcpu, mask);
+		kvm_pmu_disable_counter_mask(vcpu, mask);
 	}
 
 	if (val & ARMV8_PMU_PMCR_C)
-- 
2.7.4



* [PATCH v2 2/5] KVM: arm/arm64: extract duplicated code to own function
From: Andrew Murray @ 2019-02-04 16:53 UTC (permalink / raw)
  To: Christoffer Dall, Marc Zyngier
  Cc: Julien Thierry, kvmarm, linux-arm-kernel, suzuki.poulose

Let's reduce code duplication by extracting common code to its own
function.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 virt/kvm/arm/pmu.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index c5a722a..6e7c179 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -65,6 +65,19 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 }
 
 /**
+ * kvm_pmu_release_perf_event - remove the perf event
+ * @pmc: The PMU counter pointer
+ */
+static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
+{
+	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
+		perf_event_release_kernel(pmc->perf_event);
+		pmc->perf_event = NULL;
+	}
+}
+
+/**
  * kvm_pmu_stop_counter - stop PMU counter
  * @pmc: The PMU counter pointer
  *
@@ -79,9 +92,7 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 		       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
 		__vcpu_sys_reg(vcpu, reg) = counter;
-		perf_event_disable(pmc->perf_event);
-		perf_event_release_kernel(pmc->perf_event);
-		pmc->perf_event = NULL;
+		kvm_pmu_release_perf_event(pmc);
 	}
 }
 
@@ -112,15 +123,8 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 	int i;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 
-	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-		struct kvm_pmc *pmc = &pmu->pmc[i];
-
-		if (pmc->perf_event) {
-			perf_event_disable(pmc->perf_event);
-			perf_event_release_kernel(pmc->perf_event);
-			pmc->perf_event = NULL;
-		}
-	}
+	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
+		kvm_pmu_release_perf_event(&pmu->pmc[i]);
 }
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
-- 
2.7.4



* [PATCH v2 3/5] KVM: arm/arm64: re-create event when setting counter value
From: Andrew Murray @ 2019-02-04 16:53 UTC (permalink / raw)
  To: Christoffer Dall, Marc Zyngier
  Cc: Julien Thierry, kvmarm, linux-arm-kernel, suzuki.poulose

The perf event sample_period is currently set based upon the current
counter value when PMXEVTYPER is written to and the perf event is
created. However the user may choose to write the type before the
counter value, in which case sample_period will be set incorrectly.
Let's instead decouple event creation from PMXEVTYPER and (re)create
the event in either situation.
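
For context, the relevant lines in the create path (existing code in
this file) derive the sample period from the guest-visible counter
value, which is why the event must be recreated whenever that value
changes:

  counter = kvm_pmu_get_counter_value(vcpu, select_idx);
  /* The initial sample period (overflow count) of an event. */
  attr.sample_period = (-counter) & pmc->bitmask;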

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 virt/kvm/arm/pmu.c | 40 +++++++++++++++++++++++++++++++---------
 1 file changed, 31 insertions(+), 9 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 6e7c179..95d74ec 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,6 +24,7 @@
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
+static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -62,6 +63,9 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
 	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
+
+	/* Recreate the perf event to reflect the updated sample_period */
+	kvm_pmu_create_perf_event(vcpu, select_idx);
 }
 
 /**
@@ -378,23 +382,22 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 }
 
 /**
- * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * kvm_pmu_create_perf_event - create a perf event for a counter
  * @vcpu: The vcpu pointer
- * @data: The data guest writes to PMXEVTYPER_EL0
+ * @data: Type of event as per PMXEVTYPER_EL0 format
  * @select_idx: The number of selected counter
- *
- * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
- * event with given hardware event number. Here we call perf_event API to
- * emulate this action and create a kernel perf event for it.
  */
-void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
-				    u64 select_idx)
+static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 {
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 	struct perf_event *event;
 	struct perf_event_attr attr;
-	u64 eventsel, counter;
+	u64 eventsel, counter, reg, data;
+
+	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
+	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
+	data = __vcpu_sys_reg(vcpu, reg);
 
 	kvm_pmu_stop_counter(vcpu, pmc);
 	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
@@ -431,6 +434,25 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	pmc->perf_event = event;
 }
 
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data guest writes to PMXEVTYPER_EL0
+ * @select_idx: The number of selected counter
+ *
+ * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
+ * event with given hardware event number. Here we call perf_event API to
+ * emulate this action and create a kernel perf event for it.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
+				    u64 select_idx)
+{
+	u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
+
+	__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
+	kvm_pmu_create_perf_event(vcpu, select_idx);
+}
+
 bool kvm_arm_support_pmu_v3(void)
 {
 	/*
-- 
2.7.4



* [PATCH v2 4/5] KVM: arm/arm64: lazily create perf events on enable
From: Andrew Murray @ 2019-02-04 16:53 UTC (permalink / raw)
  To: Christoffer Dall, Marc Zyngier
  Cc: Julien Thierry, kvmarm, linux-arm-kernel, suzuki.poulose

To prevent re-creating perf events every time the counter registers
are changed, let's instead lazily create the event when the event is
first enabled, and destroy it when it changes.
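
With this patch the enable path boils down to:

  if (!pmc->perf_event)
	kvm_pmu_create_perf_event(vcpu, select_idx);

  perf_event_enable(pmc->perf_event);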

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 virt/kvm/arm/pmu.c | 96 ++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 68 insertions(+), 28 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 95d74ec..a64aeb2 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,7 +24,10 @@
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
+static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu, u64 select_idx);
 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
+
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -59,13 +62,15 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 {
 	u64 reg;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
 	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
 
-	/* Recreate the perf event to reflect the updated sample_period */
-	kvm_pmu_create_perf_event(vcpu, select_idx);
+	kvm_pmu_stop_counter(vcpu, pmc);
+	kvm_pmu_sync_counter_enable(vcpu, select_idx);
 }
 
 /**
@@ -83,6 +88,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
 
 /**
  * kvm_pmu_stop_counter - stop PMU counter
+ * @vcpu: The vcpu pointer
  * @pmc: The PMU counter pointer
  *
  * If this counter has been configured to monitor some event, release it here.
@@ -143,6 +149,24 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 }
 
 /**
+ * kvm_pmu_enable_counter - create/enable a counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (!pmc->perf_event)
+		kvm_pmu_create_perf_event(vcpu, select_idx);
+
+	perf_event_enable(pmc->perf_event);
+	if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
+		kvm_debug("failed to enable perf event\n");
+}
+
+/**
  * kvm_pmu_enable_counter_mask - enable selected PMU counters
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENSET register
@@ -152,8 +176,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc;
 
 	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
 		return;
@@ -162,16 +184,39 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 		if (!(val & BIT(i)))
 			continue;
 
-		pmc = &pmu->pmc[i];
-		if (pmc->perf_event) {
-			perf_event_enable(pmc->perf_event);
-			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
-				kvm_debug("fail to enable perf event\n");
-		}
+		kvm_pmu_enable_counter(vcpu, i);
 	}
 }
 
 /**
+ * kvm_pmu_sync_counter_enable - reenable a counter if it should be enabled
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu,
+					    u64 select_idx)
+{
+	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+
+	if (set & BIT(select_idx))
+		kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
+}
+
+/**
+ * kvm_pmu_disable_counter - disable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @pmc: The counter to disable
+ */
+static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (pmc->perf_event)
+		perf_event_disable(pmc->perf_event);
+}
+
+/**
  * kvm_pmu_disable_counter_mask - disable selected PMU counters
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENCLR register
@@ -181,8 +226,6 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc;
 
 	if (!val)
 		return;
@@ -191,9 +234,7 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 		if (!(val & BIT(i)))
 			continue;
 
-		pmc = &pmu->pmc[i];
-		if (pmc->perf_event)
-			perf_event_disable(pmc->perf_event);
+		kvm_pmu_disable_counter(vcpu, i);
 	}
 }
 
@@ -375,16 +416,9 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 	}
 }
 
-static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
-{
-	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
-	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
-}
-
 /**
- * kvm_pmu_create_perf_event - create a perf event for a counter
+ * kvm_pmu_counter_create_enabled_perf_event - create a perf event for a counter
  * @vcpu: The vcpu pointer
- * @data: Type of event as per PMXEVTYPER_EL0 format
  * @select_idx: The number of selected counter
  */
 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
@@ -399,7 +433,6 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
 	data = __vcpu_sys_reg(vcpu, reg);
 
-	kvm_pmu_stop_counter(vcpu, pmc);
 	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
 
 	/* Software increment event does't need to be backed by a perf event */
@@ -411,7 +444,7 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	attr.type = PERF_TYPE_RAW;
 	attr.size = sizeof(attr);
 	attr.pinned = 1;
-	attr.disabled = !kvm_pmu_counter_is_enabled(vcpu, select_idx);
+	attr.disabled = 1;
 	attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 1 : 0;
 	attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0;
 	attr.exclude_hv = 1; /* Don't count EL2 events */
@@ -447,10 +480,17 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx)
 {
-	u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	u64 reg, event_type = data & ARMV8_PMU_EVTYPE_MASK;
+
+	kvm_pmu_stop_counter(vcpu, pmc);
+
+	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
+	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
 
-	__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
-	kvm_pmu_create_perf_event(vcpu, select_idx);
+	__vcpu_sys_reg(vcpu, reg) = event_type;
+	kvm_pmu_sync_counter_enable(vcpu, select_idx);
 }
 
 bool kvm_arm_support_pmu_v3(void)
-- 
2.7.4



* [PATCH v2 5/5] KVM: arm/arm64: support chained PMU counters
From: Andrew Murray @ 2019-02-04 16:53 UTC (permalink / raw)
  To: Christoffer Dall, Marc Zyngier
  Cc: Julien Thierry, kvmarm, linux-arm-kernel, suzuki.poulose

Emulate chained PMU counters by creating a single 64 bit event counter
for a pair of chained KVM counters.
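
The guest still sees two 32-bit counters; on a read of the high
counter, the combined value is split back up roughly as below
(condensed from the kvm_pmu_get_counter_value() changes in this
patch; counter_low/counter_high/perf_delta are placeholder names):

  counter = lower_32_bits(counter_low) | (counter_high << 32);
  return upper_32_bits(counter + perf_delta) & pmc->bitmask;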

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 include/kvm/arm_pmu.h |   1 +
 virt/kvm/arm/pmu.c    | 321 +++++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 269 insertions(+), 53 deletions(-)

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index b73f31b..8e691ee 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -29,6 +29,7 @@ struct kvm_pmc {
 	u8 idx;	/* index into the pmu->pmc array */
 	struct perf_event *perf_event;
 	u64 bitmask;
+	u64 overflow_count;
 };
 
 struct kvm_pmu {
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index a64aeb2..9318130 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,9 +24,25 @@
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
+#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
+static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
+					    u64 pair_low);
+static void kvm_pmu_stop_release_perf_event(struct kvm_vcpu *vcpu,
+					      u64 select_idx);
+static void kvm_pmu_sync_counter_enable_pair(struct kvm_vcpu *vcpu, u64 pair_low);
 static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu, u64 select_idx);
 static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
-static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
+
+/**
+ * kvm_pmu_counter_is_high_word - is select_idx high counter of 64bit event
+ * @pmc: The PMU counter pointer
+ * @select_idx: The counter index
+ */
+static inline bool kvm_pmu_counter_is_high_word(struct kvm_pmc *pmc)
+{
+	return ((pmc->perf_event->attr.config1 & 0x1)
+		&& (pmc->idx % 2));
+}
 
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
@@ -35,22 +51,70 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
  */
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-	u64 counter, reg, enabled, running;
+	u64 counter_idx, reg, enabled, running, incr;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
-	counter = __vcpu_sys_reg(vcpu, reg);
+	counter_idx = __vcpu_sys_reg(vcpu, reg);
 
 	/* The real counter value is equal to the value of counter register plus
 	 * the value perf event counts.
 	 */
-	if (pmc->perf_event)
-		counter += perf_event_read_value(pmc->perf_event, &enabled,
+	if (pmc->perf_event) {
+		incr = perf_event_read_value(pmc->perf_event, &enabled,
 						 &running);
 
-	return counter & pmc->bitmask;
+		if (kvm_pmu_counter_is_high_word(pmc)) {
+			u64 counter_low, counter;
+
+			reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
+			      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx - 1;
+			counter_low = __vcpu_sys_reg(vcpu, reg);
+			counter = lower_32_bits(counter_low) | (counter_idx << 32);
+			counter_idx = upper_32_bits(counter + incr);
+		} else {
+			counter_idx += incr;
+		}
+	}
+
+	return counter_idx & pmc->bitmask;
+}
+
+/**
+ * kvm_pmu_counter_is_enabled - is a counter active
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+
+	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask & BIT(select_idx));
+}
+
+/**
+ * kvm_pmu_event_is_chained - is a pair of counters chained and enabled
+ * @vcpu: The vcpu pointer
+ * @select_idx: The low counter index
+ */
+static bool kvm_pmu_event_is_chained(struct kvm_vcpu *vcpu, u64 pair_low)
+{
+	u64 eventsel, reg;
+
+	reg = (pair_low + 1 == ARMV8_PMU_CYCLE_IDX)
+	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + pair_low + 1;
+	eventsel = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_EVENT;
+	if (eventsel != ARMV8_PMUV3_PERFCTR_CHAIN)
+		return false;
+
+	if (kvm_pmu_counter_is_enabled(vcpu, pair_low) !=
+	    kvm_pmu_counter_is_enabled(vcpu, pair_low + 1))
+		return false;
+
+	return true;
 }
 
 /**
@@ -61,29 +125,45 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
  */
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 {
-	u64 reg;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	u64 reg, pair_low;
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
 	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
 
-	kvm_pmu_stop_counter(vcpu, pmc);
-	kvm_pmu_sync_counter_enable(vcpu, select_idx);
+	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
+
+	/* Recreate the perf event to reflect the updated sample_period */
+	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
+		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
+		kvm_pmu_sync_counter_enable_pair(vcpu, pair_low);
+	} else {
+		kvm_pmu_stop_release_perf_event(vcpu, select_idx);
+		kvm_pmu_sync_counter_enable(vcpu, select_idx);
+	}
 }
 
 /**
  * kvm_pmu_release_perf_event - remove the perf event
  * @pmc: The PMU counter pointer
  */
-static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
+static void kvm_pmu_release_perf_event(struct kvm_vcpu *vcpu,
+				       struct kvm_pmc *pmc)
 {
-	if (pmc->perf_event) {
-		perf_event_disable(pmc->perf_event);
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc_alt;
+	u64 pair_alt;
+
+	pair_alt = (pmc->idx % 2) ? pmc->idx - 1 : pmc->idx + 1;
+	pmc_alt = &pmu->pmc[pair_alt];
+
+	if (pmc->perf_event)
 		perf_event_release_kernel(pmc->perf_event);
-		pmc->perf_event = NULL;
-	}
+
+	if (pmc->perf_event == pmc_alt->perf_event)
+		pmc_alt->perf_event = NULL;
+
+	pmc->perf_event = NULL;
 }
 
 /**
@@ -91,22 +171,65 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
  * @vcpu: The vcpu pointer
  * @pmc: The PMU counter pointer
  *
- * If this counter has been configured to monitor some event, release it here.
+ * If this counter has been configured to monitor some event, stop it here.
  */
 static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 {
 	u64 counter, reg;
 
 	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
 		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
 		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 		       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
 		__vcpu_sys_reg(vcpu, reg) = counter;
-		kvm_pmu_release_perf_event(pmc);
 	}
 }
 
 /**
+ * kvm_pmu_stop_release_perf_event_pair - stop and release a pair of counters
+ * @vcpu: The vcpu pointer
+ * @pmc_low: The PMU counter pointer for lower word
+ * @pmc_high: The PMU counter pointer for higher word
+ *
+ * As chained counters share the underlying perf event, we stop them
+ * both first before discarding the underlying perf event
+ */
+static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
+					    u64 idx_low)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc_low = &pmu->pmc[idx_low];
+	struct kvm_pmc *pmc_high = &pmu->pmc[idx_low + 1];
+
+	/* Stopping a counter involves adding the perf event value to the
+	 * vcpu sys register value prior to releasing the perf event. As
+	 * kvm_pmu_get_counter_value may depend on the low counter value we
+	 * must always stop the high counter first
+	 */
+	kvm_pmu_stop_counter(vcpu, pmc_high);
+	kvm_pmu_stop_counter(vcpu, pmc_low);
+
+	kvm_pmu_release_perf_event(vcpu, pmc_high);
+	kvm_pmu_release_perf_event(vcpu, pmc_low);
+}
+
+/**
+ * kvm_pmu_stop_release_perf_event - stop and release a counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_stop_release_perf_event(struct kvm_vcpu *vcpu,
+					      u64 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	kvm_pmu_stop_counter(vcpu, pmc);
+	kvm_pmu_release_perf_event(vcpu, pmc);
+}
+
+/**
  * kvm_pmu_vcpu_reset - reset pmu state for cpu
  * @vcpu: The vcpu pointer
  *
@@ -117,7 +240,7 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
+		kvm_pmu_stop_release_perf_event(vcpu, i);
 		pmu->pmc[i].idx = i;
 		pmu->pmc[i].bitmask = 0xffffffffUL;
 	}
@@ -134,7 +257,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
-		kvm_pmu_release_perf_event(&pmu->pmc[i]);
+		kvm_pmu_release_perf_event(vcpu, &pmu->pmc[i]);
 }
 
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
@@ -167,53 +290,115 @@ static void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
 }
 
 /**
+ * kvm_pmu_sync_counter_enable - reenable a counter if it should be enabled
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu,
+					    u64 select_idx)
+{
+	kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
+}
+
+/**
+ * kvm_pmu_sync_counter_enable_pair - reenable a pair if they should be enabled
+ * @vcpu: The vcpu pointer
+ * @pair_low: The low counter index
+ */
+static void kvm_pmu_sync_counter_enable_pair(struct kvm_vcpu *vcpu, u64 pair_low)
+{
+	kvm_pmu_enable_counter_mask(vcpu, BIT(pair_low) | BIT(pair_low + 1));
+}
+
+/**
+ * kvm_pmu_enable_counter_pair - enable counters pair at a time
+ * @vcpu: The vcpu pointer
+ * @val: counters to enable
+ * @pair_low: The low counter index
+ */
+static void kvm_pmu_enable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
+					u64 pair_low)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
+	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
+
+	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
+		if (pmc_low->perf_event != pmc_high->perf_event)
+			kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
+	}
+
+	if (val & BIT(pair_low))
+		kvm_pmu_enable_counter(vcpu, pair_low);
+
+	if (val & BIT(pair_low+1))
+		kvm_pmu_enable_counter(vcpu, pair_low + 1);
+}
+
+/**
  * kvm_pmu_enable_counter_mask - enable selected PMU counters
  * @vcpu: The vcpu pointer
- * @val: the value guest writes to PMCNTENSET register
+ * @val: the value guest writes to PMCNTENSET register or a subset
  *
  * Call perf_event_enable to start counting the perf event
  */
 void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
+	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
 
 	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
 		return;
 
-	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-		if (!(val & BIT(i)))
-			continue;
-
-		kvm_pmu_enable_counter(vcpu, i);
-	}
+	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
+		kvm_pmu_enable_counter_pair(vcpu, val & set, i);
 }
 
 /**
- * kvm_pmu_sync_counter_enable - reenable a counter if it should be enabled
+ * kvm_pmu_disable_counter - disable selected PMU counter
  * @vcpu: The vcpu pointer
  * @select_idx: The counter index
  */
-static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu,
-					    u64 select_idx)
+static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
-	if (set & BIT(select_idx))
-		kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
+	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
+		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
+			kvm_debug("fail to disable perf event\n");
+	}
 }
 
 /**
- * kvm_pmu_disable_counter - disable selected PMU counter
- * @vcpu: The vcpu pointer
- * @pmc: The counter to disable
+ * kvm_pmu_disable_counter_pair - disable counters pair at a time
+ * @val: counters to disable
+ * @pair_low: The low counter index
  */
-static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
+static void kvm_pmu_disable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
+					 u64 pair_low)
 {
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
+	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
+
+	if (!kvm_pmu_event_is_chained(vcpu, pair_low)) {
+		if (pmc_low->perf_event == pmc_high->perf_event) {
+			if (pmc_low->perf_event) {
+				kvm_pmu_stop_release_perf_event_pair(vcpu,
+								pair_low);
+				kvm_pmu_sync_counter_enable_pair(vcpu, pair_low);
+			}
+		}
+	}
 
-	if (pmc->perf_event)
-		perf_event_disable(pmc->perf_event);
+	if (val & BIT(pair_low))
+		kvm_pmu_disable_counter(vcpu, pair_low);
+
+	if (val & BIT(pair_low + 1))
+		kvm_pmu_disable_counter(vcpu, pair_low + 1);
 }
 
 /**
@@ -230,12 +415,8 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
 	if (!val)
 		return;
 
-	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-		if (!(val & BIT(i)))
-			continue;
-
-		kvm_pmu_disable_counter(vcpu, i);
-	}
+	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
+		kvm_pmu_disable_counter_pair(vcpu, val, i);
 }
 
 static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
@@ -346,6 +527,17 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 
 	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
 
+	if (kvm_pmu_event_is_chained(vcpu, idx)) {
+		struct kvm_pmu *pmu = &vcpu->arch.pmu;
+		struct kvm_pmc *pmc_high = &pmu->pmc[idx + 1];
+
+		if (!(--pmc_high->overflow_count)) {
+			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx + 1);
+			pmc_high->overflow_count = U32_MAX + 1UL;
+		}
+
+	}
+
 	if (kvm_pmu_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
 		kvm_vcpu_kick(vcpu);
@@ -440,6 +632,10 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	    select_idx != ARMV8_PMU_CYCLE_IDX)
 		return;
 
+	/* Handled by even event */
+	if (eventsel == ARMV8_PMUV3_PERFCTR_CHAIN)
+		return;
+
 	memset(&attr, 0, sizeof(struct perf_event_attr));
 	attr.type = PERF_TYPE_RAW;
 	attr.size = sizeof(attr);
@@ -452,6 +648,9 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 	attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
 		ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
 
+	if (kvm_pmu_event_is_chained(vcpu, select_idx))
+		attr.config1 |= 0x1;
+
 	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
 	/* The initial sample period (overflow count) of an event. */
 	attr.sample_period = (-counter) & pmc->bitmask;
@@ -464,6 +663,14 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 		return;
 	}
 
+	if (kvm_pmu_event_is_chained(vcpu, select_idx)) {
+		struct kvm_pmc *pmc_next = &pmu->pmc[select_idx + 1];
+
+		pmc_next->perf_event = event;
+		counter = kvm_pmu_get_counter_value(vcpu, select_idx + 1);
+		pmc_next->overflow_count = (-counter) & pmc->bitmask;
+	}
+
 	pmc->perf_event = event;
 }
 
@@ -480,17 +687,25 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx)
 {
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
-	u64 reg, event_type = data & ARMV8_PMU_EVTYPE_MASK;
-
-	kvm_pmu_stop_counter(vcpu, pmc);
+	u64 eventsel, event_type, pair_low, reg;
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
 
-	__vcpu_sys_reg(vcpu, reg) = event_type;
-	kvm_pmu_sync_counter_enable(vcpu, select_idx);
+	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
+	event_type = data & ARMV8_PMU_EVTYPE_MASK;
+	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
+
+	if (kvm_pmu_event_is_chained(vcpu, pair_low) ||
+	    eventsel == ARMV8_PMUV3_PERFCTR_CHAIN) {
+		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
+		__vcpu_sys_reg(vcpu, reg) = event_type;
+		kvm_pmu_sync_counter_enable_pair(vcpu, pair_low);
+	} else {
+		kvm_pmu_stop_release_perf_event(vcpu, pair_low);
+		__vcpu_sys_reg(vcpu, reg) = event_type;
+		kvm_pmu_sync_counter_enable(vcpu, pair_low);
+	}
 }
 
 bool kvm_arm_support_pmu_v3(void)
-- 
2.7.4



* Re: [PATCH v2 1/5] KVM: arm/arm64: rename kvm_pmu_{enable/disable}_counter functions
From: Julien Thierry @ 2019-02-05 12:21 UTC (permalink / raw)
  To: Andrew Murray, Christoffer Dall, Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, suzuki.poulose



On 04/02/2019 16:53, Andrew Murray wrote:
> The kvm_pmu_{enable/disable}_counter functions can enable/disable
> multiple counters at once as they operate on a bitmask. Let's
> make this clearer by renaming the functions.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>


Reviewed-by: Julien Thierry <julien.thierry@arm.com>

> ---
>  arch/arm64/kvm/sys_regs.c |  4 ++--
>  include/kvm/arm_pmu.h     |  8 ++++----
>  virt/kvm/arm/pmu.c        | 12 ++++++------
>  3 files changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index e3e3722..4a6fbac 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -857,11 +857,11 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  		if (r->Op2 & 0x1) {
>  			/* accessing PMCNTENSET_EL0 */
>  			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
> -			kvm_pmu_enable_counter(vcpu, val);
> +			kvm_pmu_enable_counter_mask(vcpu, val);
>  		} else {
>  			/* accessing PMCNTENCLR_EL0 */
>  			__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
> -			kvm_pmu_disable_counter(vcpu, val);
> +			kvm_pmu_disable_counter_mask(vcpu, val);
>  		}
>  	} else {
>  		p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index f87fe20..b73f31b 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -46,8 +46,8 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val);
>  u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
>  void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
>  void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
> -void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
> -void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
> +void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val);
> +void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val);
>  void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
>  void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
>  bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu);
> @@ -83,8 +83,8 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>  }
>  static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {}
> -static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
> -static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
> +static inline void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val) {}
> +static inline void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val) {}
>  static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
>  static inline bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 1c5b76c..c5a722a 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -135,13 +135,13 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>  }
>  
>  /**
> - * kvm_pmu_enable_counter - enable selected PMU counter
> + * kvm_pmu_enable_counter_mask - enable selected PMU counters
>   * @vcpu: The vcpu pointer
>   * @val: the value guest writes to PMCNTENSET register
>   *
>   * Call perf_event_enable to start counting the perf event
>   */
> -void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> +void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>  {
>  	int i;
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> @@ -164,13 +164,13 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
>  }
>  
>  /**
> - * kvm_pmu_disable_counter - disable selected PMU counter
> + * kvm_pmu_disable_counter_mask - disable selected PMU counters
>   * @vcpu: The vcpu pointer
>   * @val: the value guest writes to PMCNTENCLR register
>   *
>   * Call perf_event_disable to stop counting the perf event
>   */
> -void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
> +void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>  {
>  	int i;
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> @@ -347,10 +347,10 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
>  
>  	mask = kvm_pmu_valid_counter_mask(vcpu);
>  	if (val & ARMV8_PMU_PMCR_E) {
> -		kvm_pmu_enable_counter(vcpu,
> +		kvm_pmu_enable_counter_mask(vcpu,
>  		       __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
>  	} else {
> -		kvm_pmu_disable_counter(vcpu, mask);
> +		kvm_pmu_disable_counter_mask(vcpu, mask);
>  	}
>  
>  	if (val & ARMV8_PMU_PMCR_C)
> 

-- 
Julien Thierry


* Re: [PATCH v2 3/5] KVM: arm/arm64: re-create event when setting counter value
From: Julien Thierry @ 2019-02-05 12:21 UTC (permalink / raw)
  To: Andrew Murray, Christoffer Dall, Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, suzuki.poulose

Hi Andrew,

On 04/02/2019 16:53, Andrew Murray wrote:
> The perf event sample_period is currently set based upon the current
> counter value, when PMXEVTYPER is written to and the perf event is created.
> However the user may choose to write the type before the counter value in
> which case sample_period will be set incorrectly. Let's instead decouple
> event creation from PMXEVTYPER and (re)create the event in either
> situation.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>  virt/kvm/arm/pmu.c | 40 +++++++++++++++++++++++++++++++---------
>  1 file changed, 31 insertions(+), 9 deletions(-)
> 
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 6e7c179..95d74ec 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -24,6 +24,7 @@
>  #include <kvm/arm_pmu.h>
>  #include <kvm/arm_vgic.h>
>  
> +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
>  /**
>   * kvm_pmu_get_counter_value - get PMU counter value
>   * @vcpu: The vcpu pointer
> @@ -62,6 +63,9 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> +
> +	/* Recreate the perf event to reflect the updated sample_period */
> +	kvm_pmu_create_perf_event(vcpu, select_idx);
>  }
>  
>  /**
> @@ -378,23 +382,22 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
>  }
>  
>  /**
> - * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> + * kvm_pmu_create_perf_event - create a perf event for a counter
>   * @vcpu: The vcpu pointer
> - * @data: The data guest writes to PMXEVTYPER_EL0
> + * @data: Type of event as per PMXEVTYPER_EL0 format
>   * @select_idx: The number of selected counter
> - *
> - * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> - * event with given hardware event number. Here we call perf_event API to
> - * emulate this action and create a kernel perf event for it.
>   */
> -void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> -				    u64 select_idx)
> +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
>  {
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>  	struct perf_event *event;
>  	struct perf_event_attr attr;
> -	u64 eventsel, counter;
> +	u64 eventsel, counter, reg, data;
> +
> +	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> +	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> +	data = __vcpu_sys_reg(vcpu, reg);
>  
>  	kvm_pmu_stop_counter(vcpu, pmc);
>  	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
> @@ -431,6 +434,25 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
>  	pmc->perf_event = event;
>  }
>  
> +/**
> + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> + * @vcpu: The vcpu pointer
> + * @data: The data guest writes to PMXEVTYPER_EL0
> + * @select_idx: The number of selected counter
> + *
> + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> + * event with given hardware event number. Here we call perf_event API to
> + * emulate this action and create a kernel perf event for it.
> + */
> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> +				    u64 select_idx)
> +{
> +	u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
> +
> +	__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;

I think there has been a wrong split. In the next patch you add the
check for the cycle counter but I believe it should be already present here.
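
For reference, the check in question is the PMCCFILTR_EL0 selection
that currently only lands in patch 4, i.e. something along the lines
of:

  reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
	? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
  __vcpu_sys_reg(vcpu, reg) = event_type;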

With that fixed:

Reviewed-by: Julien Thierry <julien.thierry@arm.com>

> +	kvm_pmu_create_perf_event(vcpu, select_idx);
> +}
> +
>  bool kvm_arm_support_pmu_v3(void)
>  {
>  	/*
> 

-- 
Julien Thierry


* Re: [PATCH v2 3/5] KVM: arm/arm64: re-create event when setting counter value
From: Andrew Murray @ 2019-02-05 12:27 UTC (permalink / raw)
  To: Julien Thierry
  Cc: Marc Zyngier, suzuki.poulose, Christoffer Dall, linux-arm-kernel, kvmarm

On Tue, Feb 05, 2019 at 12:21:25PM +0000, Julien Thierry wrote:
> Hi Andrew,
> 
> On 04/02/2019 16:53, Andrew Murray wrote:
> > The perf event sample_period is currently set based upon the current
> > counter value, when PMXEVTYPER is written to and the perf event is created.
> > However the user may choose to write the type before the counter value in
> > which case sample_period will be set incorrectly. Let's instead decouple
> > event creation from PMXEVTYPER and (re)create the event in either
> > situation.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > ---
> >  virt/kvm/arm/pmu.c | 40 +++++++++++++++++++++++++++++++---------
> >  1 file changed, 31 insertions(+), 9 deletions(-)
> > 
> > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > index 6e7c179..95d74ec 100644
> > --- a/virt/kvm/arm/pmu.c
> > +++ b/virt/kvm/arm/pmu.c
> > @@ -24,6 +24,7 @@
> >  #include <kvm/arm_pmu.h>
> >  #include <kvm/arm_vgic.h>
> >  
> > +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
> >  /**
> >   * kvm_pmu_get_counter_value - get PMU counter value
> >   * @vcpu: The vcpu pointer
> > @@ -62,6 +63,9 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> >  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> >  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> >  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> > +
> > +	/* Recreate the perf event to reflect the updated sample_period */
> > +	kvm_pmu_create_perf_event(vcpu, select_idx);
> >  }
> >  
> >  /**
> > @@ -378,23 +382,22 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> >  }
> >  
> >  /**
> > - * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> > + * kvm_pmu_create_perf_event - create a perf event for a counter
> >   * @vcpu: The vcpu pointer
> > - * @data: The data guest writes to PMXEVTYPER_EL0
> > + * @data: Type of event as per PMXEVTYPER_EL0 format
> >   * @select_idx: The number of selected counter
> > - *
> > - * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> > - * event with given hardware event number. Here we call perf_event API to
> > - * emulate this action and create a kernel perf event for it.
> >   */
> > -void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> > -				    u64 select_idx)
> > +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
> >  {
> >  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> >  	struct perf_event *event;
> >  	struct perf_event_attr attr;
> > -	u64 eventsel, counter;
> > +	u64 eventsel, counter, reg, data;
> > +
> > +	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> > +	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> > +	data = __vcpu_sys_reg(vcpu, reg);
> >  
> >  	kvm_pmu_stop_counter(vcpu, pmc);
> >  	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
> > @@ -431,6 +434,25 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> >  	pmc->perf_event = event;
> >  }
> >  
> > +/**
> > + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> > + * @vcpu: The vcpu pointer
> > + * @data: The data guest writes to PMXEVTYPER_EL0
> > + * @select_idx: The number of selected counter
> > + *
> > + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> > + * event with given hardware event number. Here we call perf_event API to
> > + * emulate this action and create a kernel perf event for it.
> > + */
> > +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> > +				    u64 select_idx)
> > +{
> > +	u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
> > +
> > +	__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
> 
> I think there has been a wrong split. In the next patch you add the
> check for the cycle counter but I believe it should be already present here.
> 
> With that fixed:
> 
> Reviewed-by: Julien Thierry <julien.thierry@arm.com>

Thanks for spotting this and for the review.

Thanks,

Andrew Murray

> 
> > +	kvm_pmu_create_perf_event(vcpu, select_idx);
> > +}
> > +
> >  bool kvm_arm_support_pmu_v3(void)
> >  {
> >  	/*
> > 
> 
> -- 
> Julien Thierry


* Re: [PATCH v2 5/5] KVM: arm/arm64: support chained PMU counters
From: Julien Thierry @ 2019-02-05 14:33 UTC (permalink / raw)
  To: Andrew Murray, Christoffer Dall, Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, suzuki.poulose

Hi Andrew,

On 04/02/2019 16:53, Andrew Murray wrote:
> Emulate chained PMU counters by creating a single 64 bit event counter
> for a pair of chained KVM counters.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>  include/kvm/arm_pmu.h |   1 +
>  virt/kvm/arm/pmu.c    | 321 +++++++++++++++++++++++++++++++++++++++++---------
>  2 files changed, 269 insertions(+), 53 deletions(-)
> 
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index b73f31b..8e691ee 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -29,6 +29,7 @@ struct kvm_pmc {
>  	u8 idx;	/* index into the pmu->pmc array */
>  	struct perf_event *perf_event;
>  	u64 bitmask;
> +	u64 overflow_count;
>  };
>  
>  struct kvm_pmu {
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index a64aeb2..9318130 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -24,9 +24,25 @@
>  #include <kvm/arm_pmu.h>
>  #include <kvm/arm_vgic.h>
>  
> +#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E

I find it a bit awkward to have this redefined here.

Maybe we could define a helper in kvm_host.h:
bool kvm_pmu_typer_is_chain(u64 typer);

That would always return false for arm32?
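
A minimal sketch of what that could look like (the helper name and placement
are only a suggestion, not code from this series; the CHAIN event number
would need to move along with it):

/* arm64 */
static inline bool kvm_pmu_typer_is_chain(u64 typer)
{
	return (typer & ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMUV3_PERFCTR_CHAIN;
}

/* arm32 stub, so the common code can call it unconditionally */
static inline bool kvm_pmu_typer_is_chain(u64 typer)
{
	return false;
}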

> +static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
> +					    u64 pair_low);
> +static void kvm_pmu_stop_release_perf_event(struct kvm_vcpu *vcpu,
> +					      u64 select_idx);
> +static void kvm_pmu_sync_counter_enable_pair(struct kvm_vcpu *vcpu, u64 pair_low);
>  static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu, u64 select_idx);
>  static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
> -static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> +
> +/**
> + * kvm_pmu_counter_is_high_word - is select_idx high counter of 64bit event
> + * @pmc: The PMU counter pointer
> + * @select_idx: The counter index
> + */
> +static inline bool kvm_pmu_counter_is_high_word(struct kvm_pmc *pmc)
> +{
> +	return ((pmc->perf_event->attr.config1 & 0x1)
> +		&& (pmc->idx % 2));
> +}
>  
>  /**
>   * kvm_pmu_get_counter_value - get PMU counter value
> @@ -35,22 +51,70 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
>   */
>  u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>  {
> -	u64 counter, reg, enabled, running;
> +	u64 counter_idx, reg, enabled, running, incr;
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>  
>  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> -	counter = __vcpu_sys_reg(vcpu, reg);
> +	counter_idx = __vcpu_sys_reg(vcpu, reg);

I'm not sure I understand the "_idx" suffix for this variable. This
holds a counter value, not an index. Right?


>  
>  	/* The real counter value is equal to the value of counter register plus
>  	 * the value perf event counts.
>  	 */
> -	if (pmc->perf_event)
> -		counter += perf_event_read_value(pmc->perf_event, &enabled,
> +	if (pmc->perf_event) {
> +		incr = perf_event_read_value(pmc->perf_event, &enabled,
>  						 &running);
>  
> -	return counter & pmc->bitmask;
> +		if (kvm_pmu_counter_is_high_word(pmc)) {
> +			u64 counter_low, counter;
> +
> +			reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> +			      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx - 1;
> +			counter_low = __vcpu_sys_reg(vcpu, reg);
> +			counter = lower_32_bits(counter_low) | (counter_idx << 32);
> +			counter_idx = upper_32_bits(counter + incr);
> +		} else {
> +			counter_idx += incr;
> +		}
> +	}
> +
> +	return counter_idx & pmc->bitmask;
> +}
> +
> +/**
> + * kvm_pmu_counter_is_enabled - is a counter active
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> +{
> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> +
> +	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
> +	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask & BIT(select_idx));
> +}
> +
> +/**
> > + * kvm_pmu_event_is_chained - is a pair of counters chained and enabled
> + * @vcpu: The vcpu pointer
> + * @select_idx: The low counter index
> + */
> +static bool kvm_pmu_event_is_chained(struct kvm_vcpu *vcpu, u64 pair_low)
> +{
> +	u64 eventsel, reg;
> +
> +	reg = (pair_low + 1 == ARMV8_PMU_CYCLE_IDX)
> +	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + pair_low + 1;
> +	eventsel = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_EVENT;
> +	if (eventsel != ARMV8_PMUV3_PERFCTR_CHAIN)
> +		return false;
> +
> +	if (kvm_pmu_counter_is_enabled(vcpu, pair_low) !=
> +	    kvm_pmu_counter_is_enabled(vcpu, pair_low + 1))
> +		return false;
> +
> +	return true;
>  }
>  
>  /**
> @@ -61,29 +125,45 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>   */
>  void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>  {
> -	u64 reg;
> -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> -	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +	u64 reg, pair_low;
>  
>  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
>  
> -	kvm_pmu_stop_counter(vcpu, pmc);
> -	kvm_pmu_sync_counter_enable(vcpu, select_idx);
> +	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
> +
> +	/* Recreate the perf event to reflect the updated sample_period */
> +	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
> +		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
> +		kvm_pmu_sync_counter_enable_pair(vcpu, pair_low);
> +	} else {
> +		kvm_pmu_stop_release_perf_event(vcpu, select_idx);
> +		kvm_pmu_sync_counter_enable(vcpu, select_idx);
> +	}
>  }
>  
>  /**
>   * kvm_pmu_release_perf_event - remove the perf event
>   * @pmc: The PMU counter pointer
>   */
> -static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
> +static void kvm_pmu_release_perf_event(struct kvm_vcpu *vcpu,
> +				       struct kvm_pmc *pmc)
>  {
> -	if (pmc->perf_event) {
> -		perf_event_disable(pmc->perf_event);
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc_alt;
> +	u64 pair_alt;
> +
> +	pair_alt = (pmc->idx % 2) ? pmc->idx - 1 : pmc->idx + 1;
> +	pmc_alt = &pmu->pmc[pair_alt];
> +
> +	if (pmc->perf_event)
>  		perf_event_release_kernel(pmc->perf_event);
> -		pmc->perf_event = NULL;
> -	}
> +
> +	if (pmc->perf_event == pmc_alt->perf_event)
> +		pmc_alt->perf_event = NULL;
> +
> +	pmc->perf_event = NULL;
>  }
>  
>  /**
> @@ -91,22 +171,65 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
>   * @vcpu: The vcpu pointer
>   * @pmc: The PMU counter pointer
>   *
> - * If this counter has been configured to monitor some event, release it here.
> + * If this counter has been configured to monitor some event, stop it here.
>   */
>  static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
>  {
>  	u64 counter, reg;
>  
>  	if (pmc->perf_event) {
> +		perf_event_disable(pmc->perf_event);
>  		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
>  		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
>  		       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
>  		__vcpu_sys_reg(vcpu, reg) = counter;
> -		kvm_pmu_release_perf_event(pmc);
>  	}
>  }
>  
>  /**
> + * kvm_pmu_stop_release_perf_event_pair - stop and release a pair of counters
> + * @vcpu: The vcpu pointer
> + * @pmc_low: The PMU counter pointer for lower word
> + * @pmc_high: The PMU counter pointer for higher word
> + *
> + * As chained counters share the underlying perf event, we stop them
> + * both first before discarding the underlying perf event
> + */
> +static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
> +					    u64 idx_low)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc_low = &pmu->pmc[idx_low];
> +	struct kvm_pmc *pmc_high = &pmu->pmc[idx_low + 1];
> +
> +	/* Stopping a counter involves adding the perf event value to the
> +	 * vcpu sys register value prior to releasing the perf event. As
> +	 * kvm_pmu_get_counter_value may depend on the low counter value we
> +	 * must always stop the high counter first
> +	 */
> +	kvm_pmu_stop_counter(vcpu, pmc_high);
> +	kvm_pmu_stop_counter(vcpu, pmc_low);
> +
> +	kvm_pmu_release_perf_event(vcpu, pmc_high);
> +	kvm_pmu_release_perf_event(vcpu, pmc_low);
> +}
> +
> +/**
> + * kvm_pmu_stop_release_perf_event - stop and release a counter
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static void kvm_pmu_stop_release_perf_event(struct kvm_vcpu *vcpu,
> +					      u64 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	kvm_pmu_stop_counter(vcpu, pmc);
> +	kvm_pmu_release_perf_event(vcpu, pmc);
> +}
> +
> +/**
>   * kvm_pmu_vcpu_reset - reset pmu state for cpu
>   * @vcpu: The vcpu pointer
>   *
> @@ -117,7 +240,7 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>  
>  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> -		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
> +		kvm_pmu_stop_release_perf_event(vcpu, i);
>  		pmu->pmc[i].idx = i;
>  		pmu->pmc[i].bitmask = 0xffffffffUL;
>  	}
> @@ -134,7 +257,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>  
>  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
> -		kvm_pmu_release_perf_event(&pmu->pmc[i]);
> +		kvm_pmu_release_perf_event(vcpu, &pmu->pmc[i]);
>  }
>  
>  u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> @@ -167,53 +290,115 @@ static void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
>  }
>  
>  /**
> + * kvm_pmu_sync_counter_enable - reenable a counter if it should be enabled
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu,
> +					    u64 select_idx)
> +{
> +	kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
> +}
> +
> +/**
> + * kvm_pmu_sync_counter_enable_pair - reenable a pair if they should be enabled
> + * @vcpu: The vcpu pointer
> + * @pair_low: The low counter index
> + */
> +static void kvm_pmu_sync_counter_enable_pair(struct kvm_vcpu *vcpu, u64 pair_low)
> +{
> +	kvm_pmu_enable_counter_mask(vcpu, BIT(pair_low) | BIT(pair_low + 1));
> +}
> +
> +/**
> + * kvm_pmu_enable_counter_pair - enable counters pair at a time
> + * @vcpu: The vcpu pointer
> + * @val: counters to enable
> + * @pair_low: The low counter index
> + */
> +static void kvm_pmu_enable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
> +					u64 pair_low)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
> +	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
> +
> +	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
> +		if (pmc_low->perf_event != pmc_high->perf_event)
> +			kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
> +	}
> +
> +	if (val & BIT(pair_low))
> +		kvm_pmu_enable_counter(vcpu, pair_low);
> +
> +	if (val & BIT(pair_low+1))

Style nit: I think there should be spaces around the '+'; it might be worth
running checkpatch to check for other style issues.

> +		kvm_pmu_enable_counter(vcpu, pair_low + 1);
> +}
> +
> +/**
>   * kvm_pmu_enable_counter_mask - enable selected PMU counters
>   * @vcpu: The vcpu pointer
> - * @val: the value guest writes to PMCNTENSET register
> + * @val: the value guest writes to PMCNTENSET register or a subset
>   *
>   * Call perf_event_enable to start counting the perf event
>   */
>  void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>  {
>  	int i;
> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
>  
>  	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
>  		return;
>  
> -	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> -		if (!(val & BIT(i)))
> -			continue;
> -
> -		kvm_pmu_enable_counter(vcpu, i);
> -	}
> +	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
> +		kvm_pmu_enable_counter_pair(vcpu, val & set, i);
>  }
>  
>  /**
> - * kvm_pmu_sync_counter_enable - reenable a counter if it should be enabled
> + * kvm_pmu_disable_counter - disable selected PMU counter
>   * @vcpu: The vcpu pointer
>   * @select_idx: The counter index
>   */
> -static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu,
> -					    u64 select_idx)
> +static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
>  {
> -	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>  
> -	if (set & BIT(select_idx))
> -		kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
> +	if (pmc->perf_event) {
> +		perf_event_disable(pmc->perf_event);
> +		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> +			kvm_debug("fail to enable perf event\n");
> +	}
>  }
>  
>  /**
> - * kvm_pmu_disable_counter - disable selected PMU counter
> - * @vcpu: The vcpu pointer
> - * @pmc: The counter to disable
> + * kvm_pmu_disable_counter_pair - disable counters pair at a time
> + * @val: counters to disable
> + * @pair_low: The low counter index
>   */
> -static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
> +static void kvm_pmu_disable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
> +					 u64 pair_low)
>  {
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> -	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
> +	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
> +
> +	if (!kvm_pmu_event_is_chained(vcpu, pair_low)) {
> +		if (pmc_low->perf_event == pmc_high->perf_event) {
> +			if (pmc_low->perf_event) {
> +				kvm_pmu_stop_release_perf_event_pair(vcpu,
> +								pair_low);
> +				kvm_pmu_sync_counter_enable_pair(vcpu, pair_low);
> +			}
> +		}
> +	}
>  
> -	if (pmc->perf_event)
> -		perf_event_disable(pmc->perf_event);
> +	if (val & BIT(pair_low))
> +		kvm_pmu_disable_counter(vcpu, pair_low);
> +
> +	if (val & BIT(pair_low + 1))
> +		kvm_pmu_disable_counter(vcpu, pair_low + 1);
>  }
>  
>  /**
> @@ -230,12 +415,8 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>  	if (!val)
>  		return;
>  
> -	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> -		if (!(val & BIT(i)))
> -			continue;
> -
> -		kvm_pmu_disable_counter(vcpu, i);
> -	}
> +	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
> +		kvm_pmu_disable_counter_pair(vcpu, val, i);
>  }
>  
>  static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
> @@ -346,6 +527,17 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
>  
>  	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
>  
> +	if (kvm_pmu_event_is_chained(vcpu, idx)) {
> +		struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +		struct kvm_pmc *pmc_high = &pmu->pmc[idx + 1];
> +
> +		if (!(--pmc_high->overflow_count)) {
> +			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx + 1);
> +			pmc_high->overflow_count = U32_MAX + 1UL;
> +		}
> +
> +	}
> +
>  	if (kvm_pmu_overflow_status(vcpu)) {
>  		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
>  		kvm_vcpu_kick(vcpu);
> @@ -440,6 +632,10 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
>  	    select_idx != ARMV8_PMU_CYCLE_IDX)
>  		return;
>  
> +	/* Handled by even event */
> +	if (eventsel == ARMV8_PMUV3_PERFCTR_CHAIN)
> +		return;
> +
>  	memset(&attr, 0, sizeof(struct perf_event_attr));
>  	attr.type = PERF_TYPE_RAW;
>  	attr.size = sizeof(attr);
> @@ -452,6 +648,9 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
>  	attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
>  		ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
>  
> +	if (kvm_pmu_event_is_chained(vcpu, select_idx))
> +		attr.config1 |= 0x1;

I'm not very familiar with the usage of perf attributes configs, but is
there any chance we could name this flag? Even if only for the local
file? Something like PERF_ATTR_CFG1_KVM_PMU_CHAINED (unless there is an
existing naming convention for event attributes).

Thanks,

-- 
Julien Thierry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/5] KVM: arm/arm64: rename kvm_pmu_{enable/disable}_counter functions
  2019-02-04 16:53 ` [PATCH v2 1/5] KVM: arm/arm64: rename kvm_pmu_{enable/disable}_counter functions Andrew Murray
  2019-02-05 12:21   ` Julien Thierry
@ 2019-02-11 17:23   ` Suzuki K Poulose
  1 sibling, 0 replies; 21+ messages in thread
From: Suzuki K Poulose @ 2019-02-11 17:23 UTC (permalink / raw)
  To: andrew.murray, christoffer.dall, marc.zyngier
  Cc: kvmarm, linux-arm-kernel, julien.thierry



On 04/02/2019 16:53, Andrew Murray wrote:
> The kvm_pmu_{enable/disable}_counter functions can enable/disable
> multiple counters at once as they operate on a bitmask. Let's
> make this clearer by renaming the function.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>

Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/5] KVM: arm/arm64: re-create event when setting counter value
  2019-02-04 16:53 ` [PATCH v2 3/5] KVM: arm/arm64: re-create event when setting counter value Andrew Murray
  2019-02-05 12:21   ` Julien Thierry
@ 2019-02-13 14:28   ` Suzuki K Poulose
  2019-02-18  9:39     ` Andrew Murray
  1 sibling, 1 reply; 21+ messages in thread
From: Suzuki K Poulose @ 2019-02-13 14:28 UTC (permalink / raw)
  To: andrew.murray, christoffer.dall, marc.zyngier
  Cc: kvmarm, linux-arm-kernel, julien.thierry

Hi Andrew,

On 04/02/2019 16:53, Andrew Murray wrote:
> The perf event sample_period is currently set based upon the current
> counter value, when PMXEVTYPER is written to and the perf event is created.
> However the user may choose to write the type before the counter value in
> which case sample_period will be set incorrectly. Let's instead decouple
> event creation from PMXEVTYPER and (re)create the event in either
> situation.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>   virt/kvm/arm/pmu.c | 40 +++++++++++++++++++++++++++++++---------
>   1 file changed, 31 insertions(+), 9 deletions(-)
> 
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 6e7c179..95d74ec 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -24,6 +24,7 @@
>   #include <kvm/arm_pmu.h>
>   #include <kvm/arm_vgic.h>
>   
> +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
>   /**
>    * kvm_pmu_get_counter_value - get PMU counter value
>    * @vcpu: The vcpu pointer
> @@ -62,6 +63,9 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>   	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>   	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>   	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> +
> +	/* Recreate the perf event to reflect the updated sample_period */
> +	kvm_pmu_create_perf_event(vcpu, select_idx);
>   }
>   
>   /**
> @@ -378,23 +382,22 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
>   }
>   
>   /**
> - * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> + * kvm_pmu_create_perf_event - create a perf event for a counter
>    * @vcpu: The vcpu pointer
> - * @data: The data guest writes to PMXEVTYPER_EL0
> + * @data: Type of event as per PMXEVTYPER_EL0 format
>    * @select_idx: The number of selected counter

nit: data no longer exists.


> - *
> - * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> - * event with given hardware event number. Here we call perf_event API to
> - * emulate this action and create a kernel perf event for it.
>    */
> -void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> -				    u64 select_idx)
> +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
>   {

With the comment from Julien addressed,

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 4/5] KVM: arm/arm64: lazily create perf events on enable
  2019-02-04 16:53 ` [PATCH v2 4/5] KVM: arm/arm64: lazily create perf events on enable Andrew Murray
@ 2019-02-14 11:36   ` Suzuki K Poulose
  2019-02-18  9:57     ` Andrew Murray
  0 siblings, 1 reply; 21+ messages in thread
From: Suzuki K Poulose @ 2019-02-14 11:36 UTC (permalink / raw)
  To: andrew.murray, christoffer.dall, marc.zyngier
  Cc: kvmarm, linux-arm-kernel, julien.thierry

Hi Andrew,

On 04/02/2019 16:53, Andrew Murray wrote:
> To prevent re-creating perf events everytime the counter registers
> are changed, let's instead lazily create the event when the event
> is first enabled and destroy it when it changes.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>   virt/kvm/arm/pmu.c | 96 ++++++++++++++++++++++++++++++++++++++----------------
>   1 file changed, 68 insertions(+), 28 deletions(-)
> 
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 95d74ec..a64aeb2 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -24,7 +24,10 @@
>   #include <kvm/arm_pmu.h>
>   #include <kvm/arm_vgic.h>
>   
> +static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu, u64 select_idx);
>   static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
> +static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> +
>   /**
>    * kvm_pmu_get_counter_value - get PMU counter value
>    * @vcpu: The vcpu pointer
> @@ -59,13 +62,15 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>   void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>   {
>   	u64 reg;
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>   
>   	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>   	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>   	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
>   
> -	/* Recreate the perf event to reflect the updated sample_period */
> -	kvm_pmu_create_perf_event(vcpu, select_idx);
> +	kvm_pmu_stop_counter(vcpu, pmc);
> +	kvm_pmu_sync_counter_enable(vcpu, select_idx);
>   }
>   
>   /**
> @@ -83,6 +88,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
>   
>   /**
>    * kvm_pmu_stop_counter - stop PMU counter
> + * @vcpu: The vcpu pointer
>    * @pmc: The PMU counter pointer
>    *
>    * If this counter has been configured to monitor some event, release it here.
> @@ -143,6 +149,24 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>   }
>   
>   /**
> + * kvm_pmu_enable_counter - create/enable a counter
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	if (!pmc->perf_event)
> +		kvm_pmu_create_perf_event(vcpu, select_idx);
> +
> +	perf_event_enable(pmc->perf_event);
> +	if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> +		kvm_debug("failed to enable perf event\n");
> +}
> +
> +/**
>    * kvm_pmu_enable_counter_mask - enable selected PMU counters
>    * @vcpu: The vcpu pointer
>    * @val: the value guest writes to PMCNTENSET register
> @@ -152,8 +176,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>   void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>   {
>   	int i;
> -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> -	struct kvm_pmc *pmc;
>   
>   	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
>   		return;
> @@ -162,16 +184,39 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>   		if (!(val & BIT(i)))
>   			continue;
>   
> -		pmc = &pmu->pmc[i];
> -		if (pmc->perf_event) {
> -			perf_event_enable(pmc->perf_event);
> -			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> -				kvm_debug("fail to enable perf event\n");
> -		}
> +		kvm_pmu_enable_counter(vcpu, i);
>   	}
>   }
>   
>   /**
> + * kvm_pmu_sync_counter_enable - reenable a counter if it should be enabled
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu,
> +					    u64 select_idx)
> +{
> +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
> +
> +	if (set & BIT(select_idx))
> +		kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));

I think there is a problem here. We could be creating an event for a
counter beyond what is supported by the CPU. i.e, we don't
seem to validate that the mask we are creating is within the
kvm_pmu_valid_counter_mask(). The other callers seem to verify this.
I guess it may be better to add a check for this in the
kvm_pmu_enable_counter_mask().

minor nit: Feel free to ignore. If we move the check for PMCNTENSET_EL0 to
pmu_enable_counter_mask() we could get rid of the above function. Anyways,
we should only be enabling the counters set in PMCNTENSET_EL0.
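
Taken together, a rough sketch of what that could look like (illustrative
only, not code from the posted series):

void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
{
	int i;
	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);

	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
		return;

	/* Only act on counters that exist and are set in PMCNTENSET_EL0 */
	val &= set & mask;

	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
		if (val & BIT(i))
			kvm_pmu_enable_counter(vcpu, i);
	}
}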


> +}
> +
> +/**
> + * kvm_pmu_disable_counter - disable selected PMU counter
> + * @vcpu: The vcpu pointer
> + * @pmc: The counter to disable
> + */
> +static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	if (pmc->perf_event)
> +		perf_event_disable(pmc->perf_event);
> +}
> +
> +/**
>    * kvm_pmu_disable_counter_mask - disable selected PMU counters
>    * @vcpu: The vcpu pointer
>    * @val: the value guest writes to PMCNTENCLR register
> @@ -181,8 +226,6 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>   void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>   {
>   	int i;
> -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> -	struct kvm_pmc *pmc;
>   
>   	if (!val)
>   		return;
> @@ -191,9 +234,7 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
>   		if (!(val & BIT(i)))
>   			continue;
>   
> -		pmc = &pmu->pmc[i];
> -		if (pmc->perf_event)
> -			perf_event_disable(pmc->perf_event);
> +		kvm_pmu_disable_counter(vcpu, i);
>   	}
>   }
>   
> @@ -375,16 +416,9 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
>   	}
>   }
>   
> -static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> -{
> -	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
> -	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
> -}
> -
>   /**
> - * kvm_pmu_create_perf_event - create a perf event for a counter
> + * kvm_pmu_counter_create_enabled_perf_event - create a perf event for a counter

nit: Name hasn't changed. Also, could you please update the comment to note that
the events are always created disabled and are only enabled lazily.

Rest looks fine.

Suzuki

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 5/5] KVM: arm/arm64: support chained PMU counters
  2019-02-05 14:33   ` Julien Thierry
@ 2019-02-14 11:42     ` Suzuki K Poulose
  2019-02-18 12:03       ` Andrew Murray
  2019-02-18 10:11     ` Andrew Murray
  1 sibling, 1 reply; 21+ messages in thread
From: Suzuki K Poulose @ 2019-02-14 11:42 UTC (permalink / raw)
  To: julien.thierry, andrew.murray, christoffer.dall, marc.zyngier
  Cc: kvmarm, linux-arm-kernel



On 05/02/2019 14:33, Julien Thierry wrote:
> Hi Andrew,
> 
> On 04/02/2019 16:53, Andrew Murray wrote:
>> Emulate chained PMU counters by creating a single 64 bit event counter
>> for a pair of chained KVM counters.
>>
>> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
>> ---
>>   include/kvm/arm_pmu.h |   1 +
>>   virt/kvm/arm/pmu.c    | 321 +++++++++++++++++++++++++++++++++++++++++---------
>>   2 files changed, 269 insertions(+), 53 deletions(-)
>>
>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>> index b73f31b..8e691ee 100644
>> --- a/include/kvm/arm_pmu.h
>> +++ b/include/kvm/arm_pmu.h
>> @@ -29,6 +29,7 @@ struct kvm_pmc {
>>   	u8 idx;	/* index into the pmu->pmc array */
>>   	struct perf_event *perf_event;
>>   	u64 bitmask;
>> +	u64 overflow_count;
>>   };
>>   
>>   struct kvm_pmu {
>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>> index a64aeb2..9318130 100644
>> --- a/virt/kvm/arm/pmu.c
>> +++ b/virt/kvm/arm/pmu.c
>> @@ -24,9 +24,25 @@
>>   #include <kvm/arm_pmu.h>
>>   #include <kvm/arm_vgic.h>
>>   
>> +#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
> 
> I find it a bit awkward to have this redefined here.
> 
> Maybe we could define a helper in kvm_host.h:
> bool kvm_pmu_typer_is_chain(u64 typer);
> 
> That would always return false for arm32?

We don't support ARMv7 hosts, so that doesn't matter. But
it is a good idea to wrap it in a function here.

>>   		kvm_vcpu_kick(vcpu);
>> @@ -440,6 +632,10 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
>>   	    select_idx != ARMV8_PMU_CYCLE_IDX)
>>   		return;
>>   
>> +	/* Handled by even event */
>> +	if (eventsel == ARMV8_PMUV3_PERFCTR_CHAIN)
>> +		return;
>> +
>>   	memset(&attr, 0, sizeof(struct perf_event_attr));
>>   	attr.type = PERF_TYPE_RAW;
>>   	attr.size = sizeof(attr);
>> @@ -452,6 +648,9 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
>>   	attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
>>   		ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
>>   
>> +	if (kvm_pmu_event_is_chained(vcpu, select_idx))
>> +		attr.config1 |= 0x1;
> 
> I'm not very familiar with the usage of perf attributes configs, but is
> there any chance we could name this flag? Even if only for the local
> file? Something like PERF_ATTR_CFG1_KVM_PMU_CHAINED (unless there is an
> existing naming convention for event attributes).

There is ARMPMU_EVT_64BIT in "linux/perf/arm_pmu.h". But in general we don't
specify which config field the bit goes in (i.e. config1 etc.).
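
If a local name is wanted anyway, it could be as simple as the following in
virt/kvm/arm/pmu.c (the name is Julien's suggestion, not something the posted
patch defines):

#define PERF_ATTR_CFG1_KVM_PMU_CHAINED	0x1

	if (kvm_pmu_event_is_chained(vcpu, select_idx))
		attr.config1 |= PERF_ATTR_CFG1_KVM_PMU_CHAINED;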

Cheers
Suzuki

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 0/5] KVM: arm/arm64: add support for chained counters
  2019-02-04 16:53 [PATCH v2 0/5] KVM: arm/arm64: add support for chained counters Andrew Murray
                   ` (4 preceding siblings ...)
  2019-02-04 16:53 ` [PATCH v2 5/5] KVM: arm/arm64: support chained PMU counters Andrew Murray
@ 2019-02-18  8:52 ` Marc Zyngier
  2019-02-18 12:10   ` Andrew Murray
  5 siblings, 1 reply; 21+ messages in thread
From: Marc Zyngier @ 2019-02-18  8:52 UTC (permalink / raw)
  To: Andrew Murray
  Cc: Julien Thierry, suzuki.poulose, Christoffer Dall,
	linux-arm-kernel, kvmarm

On Mon, 4 Feb 2019 16:53:33 +0000
Andrew Murray <andrew.murray@arm.com> wrote:

> ARMv8 provides support for chained PMU counters, where an event type
> of 0x001E is set for odd-numbered counters, the event counter will
> increment by one for each overflow of the preceding even-numbered
> counter. Let's emulate this in KVM by creating a 64 bit perf counter
> when a user chains two emulated counters together.
> 
> Testing has been performed by hard-coding hwc->sample_period in
> __hw_perf_event_init (arm_pmu.c) to a small value, this results in
> regular overflows (for non sampling events). The following command
> was then used to measure chained and non-chained instruction cycles:
> 
> perf stat -e armv8_pmuv3/long=1,inst_retired/u \
>           -e armv8_pmuv3/long=0,inst_retired/u dd if=/dev/zero bs=1M \
> 	  count=10 | gzip > /dev/null
> 
> The reported values were identical (and for non-chained was in the
> same ballpark when running on a kernel without this patchset). Debug
> was added to verify that the guest received overflow interrupts for
> the chain counter.

Hi Andrew,

We're getting quite close to the merge window, and I need to wrap up the
pull request pretty soon. If you want this to make it into 5.1, you'll
have to respin it pretty quickly (right now, basically...), addressing
the comments Suzuki and Julien raised.

Thanks,

	M.
-- 
Without deviation from the norm, progress is not possible.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/5] KVM: arm/arm64: re-create event when setting counter value
  2019-02-13 14:28   ` Suzuki K Poulose
@ 2019-02-18  9:39     ` Andrew Murray
  0 siblings, 0 replies; 21+ messages in thread
From: Andrew Murray @ 2019-02-18  9:39 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: marc.zyngier, julien.thierry, christoffer.dall, linux-arm-kernel, kvmarm

On Wed, Feb 13, 2019 at 02:28:51PM +0000, Suzuki K Poulose wrote:
> Hi Andrew,
> 
> On 04/02/2019 16:53, Andrew Murray wrote:
> > The perf event sample_period is currently set based upon the current
> > counter value, when PMXEVTYPER is written to and the perf event is created.
> > However the user may choose to write the type before the counter value in
> > which case sample_period will be set incorrectly. Let's instead decouple
> > event creation from PMXEVTYPER and (re)create the event in either
> > situation.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > ---
> >   virt/kvm/arm/pmu.c | 40 +++++++++++++++++++++++++++++++---------
> >   1 file changed, 31 insertions(+), 9 deletions(-)
> > 
> > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > index 6e7c179..95d74ec 100644
> > --- a/virt/kvm/arm/pmu.c
> > +++ b/virt/kvm/arm/pmu.c
> > @@ -24,6 +24,7 @@
> >   #include <kvm/arm_pmu.h>
> >   #include <kvm/arm_vgic.h>
> > +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
> >   /**
> >    * kvm_pmu_get_counter_value - get PMU counter value
> >    * @vcpu: The vcpu pointer
> > @@ -62,6 +63,9 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> >   	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> >   	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> >   	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> > +
> > +	/* Recreate the perf event to reflect the updated sample_period */
> > +	kvm_pmu_create_perf_event(vcpu, select_idx);
> >   }
> >   /**
> > @@ -378,23 +382,22 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> >   }
> >   /**
> > - * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> > + * kvm_pmu_create_perf_event - create a perf event for a counter
> >    * @vcpu: The vcpu pointer
> > - * @data: The data guest writes to PMXEVTYPER_EL0
> > + * @data: Type of event as per PMXEVTYPER_EL0 format
> >    * @select_idx: The number of selected counter
> 
> nit: data no longer exists.
> 

Thanks for the review,

Andrew Murray

> 
> > - *
> > - * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> > - * event with given hardware event number. Here we call perf_event API to
> > - * emulate this action and create a kernel perf event for it.
> >    */
> > -void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> > -				    u64 select_idx)
> > +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
> >   {
> 
> With the comment from Julien addressed,
> 
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 4/5] KVM: arm/arm64: lazily create perf events on enable
  2019-02-14 11:36   ` Suzuki K Poulose
@ 2019-02-18  9:57     ` Andrew Murray
  0 siblings, 0 replies; 21+ messages in thread
From: Andrew Murray @ 2019-02-18  9:57 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: marc.zyngier, julien.thierry, christoffer.dall, linux-arm-kernel, kvmarm

On Thu, Feb 14, 2019 at 11:36:02AM +0000, Suzuki K Poulose wrote:
> Hi Andrew,
> 
> On 04/02/2019 16:53, Andrew Murray wrote:
> > To prevent re-creating perf events everytime the counter registers
> > are changed, let's instead lazily create the event when the event
> > is first enabled and destroy it when it changes.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > ---
> >   virt/kvm/arm/pmu.c | 96 ++++++++++++++++++++++++++++++++++++++----------------
> >   1 file changed, 68 insertions(+), 28 deletions(-)
> > 
> > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > index 95d74ec..a64aeb2 100644
> > --- a/virt/kvm/arm/pmu.c
> > +++ b/virt/kvm/arm/pmu.c
> > @@ -24,7 +24,10 @@
> >   #include <kvm/arm_pmu.h>
> >   #include <kvm/arm_vgic.h>
> > +static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu, u64 select_idx);
> >   static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
> > +static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> > +
> >   /**
> >    * kvm_pmu_get_counter_value - get PMU counter value
> >    * @vcpu: The vcpu pointer
> > @@ -59,13 +62,15 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >   void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> >   {
> >   	u64 reg;
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> >   	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> >   	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> >   	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> > -	/* Recreate the perf event to reflect the updated sample_period */
> > -	kvm_pmu_create_perf_event(vcpu, select_idx);
> > +	kvm_pmu_stop_counter(vcpu, pmc);
> > +	kvm_pmu_sync_counter_enable(vcpu, select_idx);
> >   }
> >   /**
> > @@ -83,6 +88,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
> >   /**
> >    * kvm_pmu_stop_counter - stop PMU counter
> > + * @vcpu: The vcpu pointer
> >    * @pmc: The PMU counter pointer
> >    *
> >    * If this counter has been configured to monitor some event, release it here.
> > @@ -143,6 +149,24 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> >   }
> >   /**
> > + * kvm_pmu_enable_counter - create/enable a counter
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +
> > +	if (!pmc->perf_event)
> > +		kvm_pmu_create_perf_event(vcpu, select_idx);
> > +
> > +	perf_event_enable(pmc->perf_event);
> > +	if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> > +		kvm_debug("failed to enable perf event\n");
> > +}
> > +
> > +/**
> >    * kvm_pmu_enable_counter_mask - enable selected PMU counters
> >    * @vcpu: The vcpu pointer
> >    * @val: the value guest writes to PMCNTENSET register
> > @@ -152,8 +176,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> >   void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
> >   {
> >   	int i;
> > -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > -	struct kvm_pmc *pmc;
> >   	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
> >   		return;
> > @@ -162,16 +184,39 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
> >   		if (!(val & BIT(i)))
> >   			continue;
> > -		pmc = &pmu->pmc[i];
> > -		if (pmc->perf_event) {
> > -			perf_event_enable(pmc->perf_event);
> > -			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> > -				kvm_debug("fail to enable perf event\n");
> > -		}
> > +		kvm_pmu_enable_counter(vcpu, i);
> >   	}
> >   }
> >   /**
> > + * kvm_pmu_sync_counter_enable - reenable a counter if it should be enabled
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu,
> > +					    u64 select_idx)
> > +{
> > +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
> > +
> > +	if (set & BIT(select_idx))
> > +		kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
> 
> I think there is a problem here. We could be creating an event for a
> counter beyond what is supported by the CPU. i.e, we don't
> seem to validate that the mask we are creating is within the
> kvm_pmu_valid_counter_mask(). The other callers seem to verify this.
> I guess it may be better to add a check for this in the
> kvm_pmu_enable_counter_mask().

Actually I think this is OK. This function is only called by
kvm_pmu_set_counter_event_type and kvm_pmu_set_counter_value - both of these
are called from arch/arm64/kvm/sys_regs.c, which ensures that
pmu_counter_idx_valid is true (which checks the same thing as kvm_pmu_valid_counter_mask).
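
(For context, the sys_regs.c check being relied upon looks roughly like the
following; paraphrased here, so details may differ from the exact source:)

static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
{
	u64 pmcr, val;

	pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0);
	val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
	if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) {
		kvm_inject_undefined(vcpu);
		return false;
	}

	return true;
}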

This does become a problem in the next patch when we add chained counters;
however, we then add the check against kvm_pmu_valid_counter_mask (see
kvm_pmu_enable_counter_mask).

> 
> minor nit: Feel free to ignore. If we move the check for PMCNTENSET_EL0 to
> pmu_enable_counter_mask() we could get rid of the above function. Anyways,
> we should only be enabling the counters set in PMCNTENSET_EL0.
> 

This does seem acceptable; however, the next patch adds a
kvm_pmu_sync_counter_enable_pair function, which does add some additional value.
So whilst kvm_pmu_sync_counter_enable may seem redundant here, it is built upon
later.

> 
> > +}
> > +
> > +/**
> > + * kvm_pmu_disable_counter - disable selected PMU counter
> > + * @vcpu: The vcpu pointer
> > + * @pmc: The counter to disable
> > + */
> > +static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +
> > +	if (pmc->perf_event)
> > +		perf_event_disable(pmc->perf_event);
> > +}
> > +
> > +/**
> >    * kvm_pmu_disable_counter_mask - disable selected PMU counters
> >    * @vcpu: The vcpu pointer
> >    * @val: the value guest writes to PMCNTENCLR register
> > @@ -181,8 +226,6 @@ void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
> >   void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
> >   {
> >   	int i;
> > -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > -	struct kvm_pmc *pmc;
> >   	if (!val)
> >   		return;
> > @@ -191,9 +234,7 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
> >   		if (!(val & BIT(i)))
> >   			continue;
> > -		pmc = &pmu->pmc[i];
> > -		if (pmc->perf_event)
> > -			perf_event_disable(pmc->perf_event);
> > +		kvm_pmu_disable_counter(vcpu, i);
> >   	}
> >   }
> > @@ -375,16 +416,9 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
> >   	}
> >   }
> > -static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> > -{
> > -	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
> > -	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
> > -}
> > -
> >   /**
> > - * kvm_pmu_create_perf_event - create a perf event for a counter
> > + * kvm_pmu_counter_create_enabled_perf_event - create a perf event for a counter
> 
> nit: Name hasn't changed. Also, could you please update the comment to note that
> the events are always created disabled and are only enabled lazily.

Thanks,

Andrew Murray

> 
> Rest looks fine.
> 
> Suzuki

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 5/5] KVM: arm/arm64: support chained PMU counters
  2019-02-05 14:33   ` Julien Thierry
  2019-02-14 11:42     ` Suzuki K Poulose
@ 2019-02-18 10:11     ` Andrew Murray
  1 sibling, 0 replies; 21+ messages in thread
From: Andrew Murray @ 2019-02-18 10:11 UTC (permalink / raw)
  To: Julien Thierry
  Cc: Marc Zyngier, suzuki.poulose, Christoffer Dall, linux-arm-kernel, kvmarm

On Tue, Feb 05, 2019 at 02:33:03PM +0000, Julien Thierry wrote:
> Hi Andrew,
> 
> On 04/02/2019 16:53, Andrew Murray wrote:
> > Emulate chained PMU counters by creating a single 64 bit event counter
> > for a pair of chained KVM counters.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > ---
> >  include/kvm/arm_pmu.h |   1 +
> >  virt/kvm/arm/pmu.c    | 321 +++++++++++++++++++++++++++++++++++++++++---------
> >  2 files changed, 269 insertions(+), 53 deletions(-)
> > 
> > diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> > index b73f31b..8e691ee 100644
> > --- a/include/kvm/arm_pmu.h
> > +++ b/include/kvm/arm_pmu.h
> > @@ -29,6 +29,7 @@ struct kvm_pmc {
> >  	u8 idx;	/* index into the pmu->pmc array */
> >  	struct perf_event *perf_event;
> >  	u64 bitmask;
> > +	u64 overflow_count;
> >  };
> >  
> >  struct kvm_pmu {
> > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > index a64aeb2..9318130 100644
> > --- a/virt/kvm/arm/pmu.c
> > +++ b/virt/kvm/arm/pmu.c
> > @@ -24,9 +24,25 @@
> >  #include <kvm/arm_pmu.h>
> >  #include <kvm/arm_vgic.h>
> >  
> > +#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
> 
> I find it a bit awkward to have this redefined here.
> 
> Maybe we could define a helper in kvm_host.h:
> bool kvm_pmu_typer_is_chain(u64 typer);
> 
> That would always return false for arm32?
> 
> > +static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
> > +					    u64 pair_low);
> > +static void kvm_pmu_stop_release_perf_event(struct kvm_vcpu *vcpu,
> > +					      u64 select_idx);
> > +static void kvm_pmu_sync_counter_enable_pair(struct kvm_vcpu *vcpu, u64 pair_low);
> >  static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu, u64 select_idx);
> >  static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
> > -static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> > +
> > +/**
> > + * kvm_pmu_counter_is_high_word - is select_idx high counter of 64bit event
> > + * @pmc: The PMU counter pointer
> > + * @select_idx: The counter index
> > + */
> > +static inline bool kvm_pmu_counter_is_high_word(struct kvm_pmc *pmc)
> > +{
> > +	return ((pmc->perf_event->attr.config1 & 0x1)
> > +		&& (pmc->idx % 2));
> > +}
> >  
> >  /**
> >   * kvm_pmu_get_counter_value - get PMU counter value
> > @@ -35,22 +51,70 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> >   */
> >  u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >  {
> > -	u64 counter, reg, enabled, running;
> > +	u64 counter_idx, reg, enabled, running, incr;
> >  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> >  
> >  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> >  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> > -	counter = __vcpu_sys_reg(vcpu, reg);
> > +	counter_idx = __vcpu_sys_reg(vcpu, reg);
> 
> I'm not sure I understand the "_idx" suffix for this variable. This
> holds a counter value, not an index. Right?

Yes it holds a counter value. The reason for the '_idx' suffix was to indicate
that this holds the counter value as selected by 'select_idx'. Also in the
case of a chained counter, this value is only the high or low part and so I
reserve 'counter' for representing the entire chained counter value.

I'll change this such that 'counter_idx' becomes 'counter' again. And 'counter'
(introduced by this patch) becomes 'counter_full' - does that make it clearer?
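
With that rename, the hunk above would read roughly as follows (a sketch of
the proposed naming only, not a posted revision):

		if (kvm_pmu_counter_is_high_word(pmc)) {
			u64 counter_low, counter_full;

			reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
			      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx - 1;
			counter_low = __vcpu_sys_reg(vcpu, reg);
			counter_full = lower_32_bits(counter_low) | (counter << 32);
			counter = upper_32_bits(counter_full + incr);
		} else {
			counter += incr;
		}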

> 
> 
> >  
> >  	/* The real counter value is equal to the value of counter register plus
> >  	 * the value perf event counts.
> >  	 */
> > -	if (pmc->perf_event)
> > -		counter += perf_event_read_value(pmc->perf_event, &enabled,
> > +	if (pmc->perf_event) {
> > +		incr = perf_event_read_value(pmc->perf_event, &enabled,
> >  						 &running);
> >  
> > -	return counter & pmc->bitmask;
> > +		if (kvm_pmu_counter_is_high_word(pmc)) {
> > +			u64 counter_low, counter;
> > +
> > +			reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> > +			      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx - 1;
> > +			counter_low = __vcpu_sys_reg(vcpu, reg);
> > +			counter = lower_32_bits(counter_low) | (counter_idx << 32);
> > +			counter_idx = upper_32_bits(counter + incr);
> > +		} else {
> > +			counter_idx += incr;
> > +		}
> > +	}
> > +
> > +	return counter_idx & pmc->bitmask;
> > +}
> > +
> > +/**
> > + * kvm_pmu_counter_is_enabled - is a counter active
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> > +{
> > +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> > +
> > +	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
> > +	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask & BIT(select_idx));
> > +}
> > +
> > +/**
> > + * kvm_pmu_event_is_chained - is a pair of counters chained and enabled
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The low counter index
> > + */
> > +static bool kvm_pmu_event_is_chained(struct kvm_vcpu *vcpu, u64 pair_low)
> > +{
> > +	u64 eventsel, reg;
> > +
> > +	reg = (pair_low + 1 == ARMV8_PMU_CYCLE_IDX)
> > +	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + pair_low + 1;
> > +	eventsel = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_EVENT;
> > +	if (eventsel != ARMV8_PMUV3_PERFCTR_CHAIN)
> > +		return false;
> > +
> > +	if (kvm_pmu_counter_is_enabled(vcpu, pair_low) !=
> > +	    kvm_pmu_counter_is_enabled(vcpu, pair_low + 1))
> > +		return false;
> > +
> > +	return true;
> >  }
> >  
> >  /**
> > @@ -61,29 +125,45 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >   */
> >  void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> >  {
> > -	u64 reg;
> > -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > -	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +	u64 reg, pair_low;
> >  
> >  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> >  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> >  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> >  
> > -	kvm_pmu_stop_counter(vcpu, pmc);
> > -	kvm_pmu_sync_counter_enable(vcpu, select_idx);
> > +	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
> > +
> > +	/* Recreate the perf event to reflect the updated sample_period */
> > +	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
> > +		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
> > +		kvm_pmu_sync_counter_enable_pair(vcpu, pair_low);
> > +	} else {
> > +		kvm_pmu_stop_release_perf_event(vcpu, select_idx);
> > +		kvm_pmu_sync_counter_enable(vcpu, select_idx);
> > +	}
> >  }
> >  
> >  /**
> >   * kvm_pmu_release_perf_event - remove the perf event
> >   * @pmc: The PMU counter pointer
> >   */
> > -static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
> > +static void kvm_pmu_release_perf_event(struct kvm_vcpu *vcpu,
> > +				       struct kvm_pmc *pmc)
> >  {
> > -	if (pmc->perf_event) {
> > -		perf_event_disable(pmc->perf_event);
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc_alt;
> > +	u64 pair_alt;
> > +
> > +	pair_alt = (pmc->idx % 2) ? pmc->idx - 1 : pmc->idx + 1;
> > +	pmc_alt = &pmu->pmc[pair_alt];
> > +
> > +	if (pmc->perf_event)
> >  		perf_event_release_kernel(pmc->perf_event);
> > -		pmc->perf_event = NULL;
> > -	}
> > +
> > +	if (pmc->perf_event == pmc_alt->perf_event)
> > +		pmc_alt->perf_event = NULL;
> > +
> > +	pmc->perf_event = NULL;
> >  }
> >  
> >  /**
> > @@ -91,22 +171,65 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
> >   * @vcpu: The vcpu pointer
> >   * @pmc: The PMU counter pointer
> >   *
> > - * If this counter has been configured to monitor some event, release it here.
> > + * If this counter has been configured to monitor some event, stop it here.
> >   */
> >  static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
> >  {
> >  	u64 counter, reg;
> >  
> >  	if (pmc->perf_event) {
> > +		perf_event_disable(pmc->perf_event);
> >  		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
> >  		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
> >  		       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
> >  		__vcpu_sys_reg(vcpu, reg) = counter;
> > -		kvm_pmu_release_perf_event(pmc);
> >  	}
> >  }
> >  
> >  /**
> > + * kvm_pmu_stop_release_perf_event_pair - stop and release a pair of counters
> > + * @vcpu: The vcpu pointer
> > + * @pmc_low: The PMU counter pointer for lower word
> > + * @pmc_high: The PMU counter pointer for higher word
> > + *
> > + * As chained counters share the underlying perf event, we stop them
> > + * both first before discarding the underlying perf event
> > + */
> > +static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
> > +					    u64 idx_low)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc_low = &pmu->pmc[idx_low];
> > +	struct kvm_pmc *pmc_high = &pmu->pmc[idx_low + 1];
> > +
> > +	/* Stopping a counter involves adding the perf event value to the
> > +	 * vcpu sys register value prior to releasing the perf event. As
> > +	 * kvm_pmu_get_counter_value may depend on the low counter value we
> > +	 * must always stop the high counter first
> > +	 */
> > +	kvm_pmu_stop_counter(vcpu, pmc_high);
> > +	kvm_pmu_stop_counter(vcpu, pmc_low);
> > +
> > +	kvm_pmu_release_perf_event(vcpu, pmc_high);
> > +	kvm_pmu_release_perf_event(vcpu, pmc_low);
> > +}
> > +
> > +/**
> > + * kvm_pmu_stop_release_perf_event - stop and release a counter
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static void kvm_pmu_stop_release_perf_event(struct kvm_vcpu *vcpu,
> > +					      u64 select_idx)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +
> > +	kvm_pmu_stop_counter(vcpu, pmc);
> > +	kvm_pmu_release_perf_event(vcpu, pmc);
> > +}
> > +
> > +/**
> >   * kvm_pmu_vcpu_reset - reset pmu state for cpu
> >   * @vcpu: The vcpu pointer
> >   *
> > @@ -117,7 +240,7 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
> >  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >  
> >  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> > -		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
> > +		kvm_pmu_stop_release_perf_event(vcpu, i);
> >  		pmu->pmc[i].idx = i;
> >  		pmu->pmc[i].bitmask = 0xffffffffUL;
> >  	}
> > @@ -134,7 +257,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
> >  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >  
> >  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++)
> > -		kvm_pmu_release_perf_event(&pmu->pmc[i]);
> > +		kvm_pmu_release_perf_event(vcpu, &pmu->pmc[i]);
> >  }
> >  
> >  u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> > @@ -167,53 +290,115 @@ static void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
> >  }
> >  
> >  /**
> > + * kvm_pmu_sync_counter_enable - reenable a counter if it should be enabled
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu,
> > +					    u64 select_idx)
> > +{
> > +	kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
> > +}
> > +
> > +/**
> > + * kvm_pmu_sync_counter_enable_pair - reenable a pair if they should be enabled
> > + * @vcpu: The vcpu pointer
> > + * @pair_low: The low counter index
> > + */
> > +static void kvm_pmu_sync_counter_enable_pair(struct kvm_vcpu *vcpu, u64 pair_low)
> > +{
> > +	kvm_pmu_enable_counter_mask(vcpu, BIT(pair_low) | BIT(pair_low + 1));
> > +}
> > +
> > +/**
> > + * kvm_pmu_enable_counter_pair - enable counters pair at a time
> > + * @vcpu: The vcpu pointer
> > + * @val: counters to enable
> > + * @pair_low: The low counter index
> > + */
> > +static void kvm_pmu_enable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
> > +					u64 pair_low)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
> > +	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
> > +
> > +	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
> > +		if (pmc_low->perf_event != pmc_high->perf_event)
> > +			kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
> > +	}
> > +
> > +	if (val & BIT(pair_low))
> > +		kvm_pmu_enable_counter(vcpu, pair_low);
> > +
> > +	if (val & BIT(pair_low+1))
> 
> Style nit: I think there should be spaces around the '+'; it might be worth
> running checkpatch to check for other style issues.

Thanks - for some reason checkpatch doesn't pick this one up.

Andrew Murray

> 
> > +		kvm_pmu_enable_counter(vcpu, pair_low + 1);
> > +}
> > +
> > +/**
> >   * kvm_pmu_enable_counter_mask - enable selected PMU counters
> >   * @vcpu: The vcpu pointer
> > - * @val: the value guest writes to PMCNTENSET register
> > + * @val: the value guest writes to PMCNTENSET register or a subset
> >   *
> >   * Call perf_event_enable to start counting the perf event
> >   */
> >  void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
> >  {
> >  	int i;
> > +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> > +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> >  
> >  	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
> >  		return;
> >  
> > -	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> > -		if (!(val & BIT(i)))
> > -			continue;
> > -
> > -		kvm_pmu_enable_counter(vcpu, i);
> > -	}
> > +	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
> > +		kvm_pmu_enable_counter_pair(vcpu, val & set, i);
> >  }
> >  
> >  /**
> > - * kvm_pmu_sync_counter_enable - reenable a counter if it should be enabled
> > + * kvm_pmu_disable_counter - disable selected PMU counter
> >   * @vcpu: The vcpu pointer
> >   * @select_idx: The counter index
> >   */
> > -static void kvm_pmu_sync_counter_enable(struct kvm_vcpu *vcpu,
> > -					    u64 select_idx)
> > +static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
> >  {
> > -	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> >  
> > -	if (set & BIT(select_idx))
> > -		kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
> > +	if (pmc->perf_event) {
> > +		perf_event_disable(pmc->perf_event);
> > +		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> > +			kvm_debug("fail to enable perf event\n");
> > +	}
> >  }
> >  
> >  /**
> > - * kvm_pmu_disable_counter - disable selected PMU counter
> > - * @vcpu: The vcpu pointer
> > - * @pmc: The counter to disable
> > + * kvm_pmu_disable_counter_pair - disable counters pair at a time
> > + * @val: counters to disable
> > + * @pair_low: The low counter index
> >   */
> > -static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx)
> > +static void kvm_pmu_disable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
> > +					 u64 pair_low)
> >  {
> >  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > -	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
> > +	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
> > +
> > +	if (!kvm_pmu_event_is_chained(vcpu, pair_low)) {
> > +		if (pmc_low->perf_event == pmc_high->perf_event) {
> > +			if (pmc_low->perf_event) {
> > +				kvm_pmu_stop_release_perf_event_pair(vcpu,
> > +								pair_low);
> > +				kvm_pmu_sync_counter_enable_pair(vcpu, pair_low);
> > +			}
> > +		}
> > +	}
> >  
> > -	if (pmc->perf_event)
> > -		perf_event_disable(pmc->perf_event);
> > +	if (val & BIT(pair_low))
> > +		kvm_pmu_disable_counter(vcpu, pair_low);
> > +
> > +	if (val & BIT(pair_low + 1))
> > +		kvm_pmu_disable_counter(vcpu, pair_low + 1);
> >  }
> >  
> >  /**
> > @@ -230,12 +415,8 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
> >  	if (!val)
> >  		return;
> >  
> > -	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> > -		if (!(val & BIT(i)))
> > -			continue;
> > -
> > -		kvm_pmu_disable_counter(vcpu, i);
> > -	}
> > +	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
> > +		kvm_pmu_disable_counter_pair(vcpu, val, i);
> >  }
> >  
> >  static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
> > @@ -346,6 +527,17 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
> >  
> >  	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
> >  
> > +	if (kvm_pmu_event_is_chained(vcpu, idx)) {
> > +		struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +		struct kvm_pmc *pmc_high = &pmu->pmc[idx + 1];
> > +
> > +		if (!(--pmc_high->overflow_count)) {
> > +			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx + 1);
> > +			pmc_high->overflow_count = U32_MAX + 1UL;
> > +		}
> > +
> > +	}
> > +
> >  	if (kvm_pmu_overflow_status(vcpu)) {
> >  		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
> >  		kvm_vcpu_kick(vcpu);
> > @@ -440,6 +632,10 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
> >  	    select_idx != ARMV8_PMU_CYCLE_IDX)
> >  		return;
> >  
> > +	/* Handled by even event */
> > +	if (eventsel == ARMV8_PMUV3_PERFCTR_CHAIN)
> > +		return;
> > +
> >  	memset(&attr, 0, sizeof(struct perf_event_attr));
> >  	attr.type = PERF_TYPE_RAW;
> >  	attr.size = sizeof(attr);
> > @@ -452,6 +648,9 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
> >  	attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
> >  		ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
> >  
> > +	if (kvm_pmu_event_is_chained(vcpu, select_idx))
> > +		attr.config1 |= 0x1;
> 
> I'm not very familiar with the usage of perf attributes configs, but is
> there any chance we could name this flag? Even if only for the local
> file? Something like PERF_ATTR_CFG1_KVM_PMU_CHAINED (unless there is an
> existing naming convention for event attributes).
> 
> Thanks,
> 
> -- 
> Julien Thierry


* Re: [PATCH v2 5/5] KVM: arm/arm64: support chained PMU counters
  2019-02-14 11:42     ` Suzuki K Poulose
@ 2019-02-18 12:03       ` Andrew Murray
  2019-02-18 14:02         ` Marc Zyngier
  0 siblings, 1 reply; 21+ messages in thread
From: Andrew Murray @ 2019-02-18 12:03 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: marc.zyngier, kvmarm, christoffer.dall, linux-arm-kernel, julien.thierry

On Thu, Feb 14, 2019 at 11:42:15AM +0000, Suzuki K Poulose wrote:
> 
> 
> On 05/02/2019 14:33, Julien Thierry wrote:
> > Hi Andrew,
> > 
> > On 04/02/2019 16:53, Andrew Murray wrote:
> > > Emulate chained PMU counters by creating a single 64 bit event counter
> > > for a pair of chained KVM counters.
> > > 
> > > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > > ---
> > >   include/kvm/arm_pmu.h |   1 +
> > >   virt/kvm/arm/pmu.c    | 321 +++++++++++++++++++++++++++++++++++++++++---------
> > >   2 files changed, 269 insertions(+), 53 deletions(-)
> > > 
> > > diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> > > index b73f31b..8e691ee 100644
> > > --- a/include/kvm/arm_pmu.h
> > > +++ b/include/kvm/arm_pmu.h
> > > @@ -29,6 +29,7 @@ struct kvm_pmc {
> > >   	u8 idx;	/* index into the pmu->pmc array */
> > >   	struct perf_event *perf_event;
> > >   	u64 bitmask;
> > > +	u64 overflow_count;
> > >   };
> > >   struct kvm_pmu {
> > > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > > index a64aeb2..9318130 100644
> > > --- a/virt/kvm/arm/pmu.c
> > > +++ b/virt/kvm/arm/pmu.c
> > > @@ -24,9 +24,25 @@
> > >   #include <kvm/arm_pmu.h>
> > >   #include <kvm/arm_vgic.h>
> > > +#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
> > 
> > I find it a bit awkward to have this redefined here.
> > 
> > Maybe we could define a helper in kvm_host.h:
> > bool kvm_pmu_typer_is_chain(u64 typer);
> > 
> > That would always return false for arm32?
> 
> We don't support ARMv7 host, so that doesn't matter. But
> it is a good idea to wrap it in a function here.

I'm not sure kvm_host.h is the right place for this, as it really relates
to the ARM PMU drivers. Seeing that armv8pmu_filter_match in
arch/arm64/kernel/perf_event.c would also benefit from this new function,
I'll add it to arch/arm64/include/asm/perf_event.h and stub it out in
arm_pmu.c for the ARM32 case.
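
Roughly along these lines, as a sketch only (the helper name and exact
header placement here are illustrative, not the final form):

/* arch/arm64/include/asm/perf_event.h - sketch, name not final */
#define ARMV8_PMUV3_PERFCTR_CHAIN	0x1E

static inline bool armv8pmu_evtype_is_chain(u64 evtype)
{
	return (evtype == ARMV8_PMUV3_PERFCTR_CHAIN);
}

with a trivial stub returning false on the ARM32 side, so that common code
can call it unconditionally.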

> 
> > >   		kvm_vcpu_kick(vcpu);
> > > @@ -440,6 +632,10 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
> > >   	    select_idx != ARMV8_PMU_CYCLE_IDX)
> > >   		return;
> > > +	/* Handled by even event */
> > > +	if (eventsel == ARMV8_PMUV3_PERFCTR_CHAIN)
> > > +		return;
> > > +
> > >   	memset(&attr, 0, sizeof(struct perf_event_attr));
> > >   	attr.type = PERF_TYPE_RAW;
> > >   	attr.size = sizeof(attr);
> > > @@ -452,6 +648,9 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
> > >   	attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
> > >   		ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
> > > +	if (kvm_pmu_event_is_chained(vcpu, select_idx))
> > > +		attr.config1 |= 0x1;
> > 
> > I'm not very familiar with the usage of perf attributes configs, but is
> > there any chance we could name this flag? Even if only for the local
> > file? Something like PERF_ATTR_CFG1_KVM_PMU_CHAINED (unless there is an
> > existing naming convention for event attributes).
> 
> There is ARMPMU_EVT_64BIT in "linux/perf/arm_pmu.h", but in general we don't
> specify which config field the bit goes in (e.g. CFG1).

I'll add PERF_ATTR_CFG1_KVM_PMU_CHAINED to this file as you suggest.
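
For example, something like this in virt/kvm/arm/pmu.c (a sketch only; the
flag value is the same bit the patch already sets, just given a name):

#define PERF_ATTR_CFG1_KVM_PMU_CHAINED	0x1

	/* in kvm_pmu_create_perf_event(), replacing the bare 0x1 */
	if (kvm_pmu_event_is_chained(vcpu, select_idx))
		attr.config1 |= PERF_ATTR_CFG1_KVM_PMU_CHAINED;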

Thanks,

Andrew Murray

> 
> Cheers
> Suzuki


* Re: [PATCH v2 0/5] KVM: arm/arm64: add support for chained counters
  2019-02-18  8:52 ` [PATCH v2 0/5] KVM: arm/arm64: add support for chained counters Marc Zyngier
@ 2019-02-18 12:10   ` Andrew Murray
  0 siblings, 0 replies; 21+ messages in thread
From: Andrew Murray @ 2019-02-18 12:10 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Julien Thierry, suzuki.poulose, Christoffer Dall,
	linux-arm-kernel, kvmarm

On Mon, Feb 18, 2019 at 08:52:17AM +0000, Marc Zyngier wrote:
> On Mon, 4 Feb 2019 16:53:33 +0000
> Andrew Murray <andrew.murray@arm.com> wrote:
> 
> > ARMv8 provides support for chained PMU counters, where an event type
> > of 0x001E is set for odd-numbered counters, the event counter will
> > increment by one for each overflow of the preceding even-numbered
> > counter. Let's emulate this in KVM by creating a 64 bit perf counter
> > when a user chains two emulated counters together.
> > 
> > Testing has been performed by hard-coding hwc->sample_period in
> > __hw_perf_event_init (arm_pmu.c) to a small value, this results in
> > regular overflows (for non sampling events). The following command
> > was then used to measure chained and non-chained instruction cycles:
> > 
> > perf stat -e armv8_pmuv3/long=1,inst_retired/u \
> >           -e armv8_pmuv3/long=0,inst_retired/u dd if=/dev/zero bs=1M \
> > 	  count=10 | gzip > /dev/null
> > 
> > The reported values were identical (and for non-chained was in the
> > same ballpark when running on a kernel without this patchset). Debug
> > was added to verify that the guest received overflow interrupts for
> > the chain counter.
> 
> Hi Andrew,
> 
> We're getting quite close to the merge window, and I need to wrap up the
> pull request pretty soon. If you want this to make it into 5.1, you'll
> have to respin it pretty quickly (right now, basically...), addressing
> the comments Suzuki and Julien raised.
> 

Thanks for the prompt; I'll send this out very shortly.

Thanks,

Andrew Murray

> Thanks,
> 
> 	M.
> -- 
> Without deviation from the norm, progress is not possible.


* Re: [PATCH v2 5/5] KVM: arm/arm64: support chained PMU counters
  2019-02-18 12:03       ` Andrew Murray
@ 2019-02-18 14:02         ` Marc Zyngier
  0 siblings, 0 replies; 21+ messages in thread
From: Marc Zyngier @ 2019-02-18 14:02 UTC (permalink / raw)
  To: Andrew Murray
  Cc: julien.thierry, kvmarm, christoffer.dall, linux-arm-kernel,
	Suzuki K Poulose

On Mon, 18 Feb 2019 12:03:05 +0000
Andrew Murray <andrew.murray@arm.com> wrote:

> On Thu, Feb 14, 2019 at 11:42:15AM +0000, Suzuki K Poulose wrote:
> > 
> > 
> > On 05/02/2019 14:33, Julien Thierry wrote:  
> > > Hi Andrew,
> > > 
> > > On 04/02/2019 16:53, Andrew Murray wrote:  
> > > > Emulate chained PMU counters by creating a single 64 bit event counter
> > > > for a pair of chained KVM counters.
> > > > 
> > > > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > > > ---
> > > >   include/kvm/arm_pmu.h |   1 +
> > > >   virt/kvm/arm/pmu.c    | 321 +++++++++++++++++++++++++++++++++++++++++---------
> > > >   2 files changed, 269 insertions(+), 53 deletions(-)
> > > > 
> > > > diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> > > > index b73f31b..8e691ee 100644
> > > > --- a/include/kvm/arm_pmu.h
> > > > +++ b/include/kvm/arm_pmu.h
> > > > @@ -29,6 +29,7 @@ struct kvm_pmc {
> > > >   	u8 idx;	/* index into the pmu->pmc array */
> > > >   	struct perf_event *perf_event;
> > > >   	u64 bitmask;
> > > > +	u64 overflow_count;
> > > >   };
> > > >   struct kvm_pmu {
> > > > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > > > index a64aeb2..9318130 100644
> > > > --- a/virt/kvm/arm/pmu.c
> > > > +++ b/virt/kvm/arm/pmu.c
> > > > @@ -24,9 +24,25 @@
> > > >   #include <kvm/arm_pmu.h>
> > > >   #include <kvm/arm_vgic.h>
> > > > +#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E  
> > > 
> > > I find it a bit awkward to have this redefined here.
> > > 
> > > Maybe we could define a helper in kvm_host.h:
> > > bool kvm_pmu_typer_is_chain(u64 typer);
> > > 
> > > That would always return false for arm32?  
> > 
> > We don't support ARMv7 host, so that doesn't matter. But
> > it is a good idea to wrap it in a function here.  
> 
> I'm not sure kvm_host.h is the right place for this as this really relates
> to the ARM PMU drivers. Seeing that armv8pmu_filter_match in
> arch/arm64/kernel/perf_event.c would also benefit from this new function I'll
> add it to arch/arm64/include/asm/perf_event.h and stub it out in arm_pmu.c
> for the ARM32 case.

Doing that is going to create a dependency on the perf subsystem,
and will need an Ack from the maintainer (Will).

Thanks,

	M.
-- 
Without deviation from the norm, progress is not possible.


Thread overview: 21+ messages
2019-02-04 16:53 [PATCH v2 0/5] KVM: arm/arm64: add support for chained counters Andrew Murray
2019-02-04 16:53 ` [PATCH v2 1/5] KVM: arm/arm64: rename kvm_pmu_{enable/disable}_counter functions Andrew Murray
2019-02-05 12:21   ` Julien Thierry
2019-02-11 17:23   ` Suzuki K Poulose
2019-02-04 16:53 ` [PATCH v2 2/5] KVM: arm/arm64: extract duplicated code to own function Andrew Murray
2019-02-04 16:53 ` [PATCH v2 3/5] KVM: arm/arm64: re-create event when setting counter value Andrew Murray
2019-02-05 12:21   ` Julien Thierry
2019-02-05 12:27     ` Andrew Murray
2019-02-13 14:28   ` Suzuki K Poulose
2019-02-18  9:39     ` Andrew Murray
2019-02-04 16:53 ` [PATCH v2 4/5] KVM: arm/arm64: lazily create perf events on enable Andrew Murray
2019-02-14 11:36   ` Suzuki K Poulose
2019-02-18  9:57     ` Andrew Murray
2019-02-04 16:53 ` [PATCH v2 5/5] KVM: arm/arm64: support chained PMU counters Andrew Murray
2019-02-05 14:33   ` Julien Thierry
2019-02-14 11:42     ` Suzuki K Poulose
2019-02-18 12:03       ` Andrew Murray
2019-02-18 14:02         ` Marc Zyngier
2019-02-18 10:11     ` Andrew Murray
2019-02-18  8:52 ` [PATCH v2 0/5] KVM: arm/arm64: add support for chained counters Marc Zyngier
2019-02-18 12:10   ` Andrew Murray
