All of lore.kernel.org
* [PATCH 0/4] KVM: arm/arm64: add support for chained counters
From: Andrew Murray @ 2019-01-22 10:49 UTC (permalink / raw)
  To: Christoffer Dall, Marc Zyngier; +Cc: kvmarm, linux-arm-kernel

ARMv8 provides support for chained PMU counters: when an event type of
0x001E (CHAIN) is set for an odd-numbered counter, that counter will
increment by one for each overflow of the preceding even-numbered
counter. Let's emulate this in KVM by creating a single 64 bit perf event
when a user chains two emulated counters together.
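
For illustration, a guest would typically set up a chained pair with a
sequence roughly like the one below. This is only a sketch: the low
counter event (0x08, INST_RETIRED) is an arbitrary example, and the
accessors/macros are the usual arm64 kernel ones:

    write_sysreg(0x08, pmevtyper0_el0);            /* low half: count instructions */
    write_sysreg(0x1E, pmevtyper1_el0);            /* high half: CHAIN event */
    write_sysreg(BIT(0) | BIT(1), pmcntenset_el0); /* enable the pair */
    write_sysreg(read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E, pmcr_el0);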

Andrew Murray (4):
  KVM: arm/arm64: extract duplicated code to own function
  KVM: arm/arm64: re-create event when setting counter value
  KVM: arm/arm64: lazily create perf events on enable
  KVM: arm/arm64: support chained PMU counters

 include/kvm/arm_pmu.h |   2 +
 virt/kvm/arm/pmu.c    | 377 +++++++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 327 insertions(+), 52 deletions(-)

-- 
2.7.4

* [PATCH 1/4] KVM: arm/arm64: extract duplicated code to own function
From: Andrew Murray @ 2019-01-22 10:49 UTC (permalink / raw)
  To: Christoffer Dall, Marc Zyngier; +Cc: kvmarm, linux-arm-kernel

Let's reduce code duplication by extracting common code to its own
function.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 virt/kvm/arm/pmu.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 1c5b76c..531d27f 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -65,6 +65,19 @@ void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 }
 
 /**
+ * kvm_pmu_release_perf_event - remove the perf event
+ * @pmc: The PMU counter pointer
+ */
+static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
+{
+	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
+		perf_event_release_kernel(pmc->perf_event);
+		pmc->perf_event = NULL;
+	}
+}
+
+/**
  * kvm_pmu_stop_counter - stop PMU counter
  * @pmc: The PMU counter pointer
  *
@@ -79,9 +92,7 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 		       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
 		__vcpu_sys_reg(vcpu, reg) = counter;
-		perf_event_disable(pmc->perf_event);
-		perf_event_release_kernel(pmc->perf_event);
-		pmc->perf_event = NULL;
+		kvm_pmu_release_perf_event(pmc);
 	}
 }
 
@@ -114,12 +125,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
 		struct kvm_pmc *pmc = &pmu->pmc[i];
-
-		if (pmc->perf_event) {
-			perf_event_disable(pmc->perf_event);
-			perf_event_release_kernel(pmc->perf_event);
-			pmc->perf_event = NULL;
-		}
+		kvm_pmu_release_perf_event(pmc);
 	}
 }
 
-- 
2.7.4

* [PATCH 2/4] KVM: arm/arm64: re-create event when setting counter value
From: Andrew Murray @ 2019-01-22 10:49 UTC (permalink / raw)
  To: Christoffer Dall, Marc Zyngier; +Cc: kvmarm, linux-arm-kernel

The perf event sample_period is currently set based upon the current
counter value at the time PMXEVTYPER is written and the perf event is
created. However the user may choose to write the event type before the
counter value, in which case sample_period will be set incorrectly. Let's
instead decouple event creation from the PMXEVTYPER write and (re)create
the event in either situation.
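
For reference, sample_period is derived from the emulated counter value at
event creation time (unchanged by this patch), so it has to be recomputed
whenever the counter value is written:

    counter = kvm_pmu_get_counter_value(vcpu, select_idx);
    /* The initial sample period (overflow count) of an event. */
    attr.sample_period = (-counter) & pmc->bitmask;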

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 virt/kvm/arm/pmu.c | 39 ++++++++++++++++++++++++++++++---------
 1 file changed, 30 insertions(+), 9 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 531d27f..4464899 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,6 +24,8 @@
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
+static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
+				      u64 select_idx);
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -57,11 +59,18 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
  */
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 {
-	u64 reg;
+	u64 reg, data;
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
 	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
+
+	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
+	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
+	data = __vcpu_sys_reg(vcpu, reg);
+
+	/* Recreate the perf event to reflect the updated sample_period */
+	kvm_pmu_create_perf_event(vcpu, data, select_idx);
 }
 
 /**
@@ -380,17 +389,13 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 }
 
 /**
- * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * kvm_pmu_create_perf_event - create a perf event for a counter
  * @vcpu: The vcpu pointer
- * @data: The data guest writes to PMXEVTYPER_EL0
+ * @data: Type of event as per PMXEVTYPER_EL0 format
  * @select_idx: The number of selected counter
- *
- * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
- * event with given hardware event number. Here we call perf_event API to
- * emulate this action and create a kernel perf event for it.
  */
-void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
-				    u64 select_idx)
+static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
+				      u64 select_idx)
 {
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
@@ -433,6 +438,22 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	pmc->perf_event = event;
 }
 
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data guest writes to PMXEVTYPER_EL0
+ * @select_idx: The number of selected counter
+ *
+ * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
+ * event with given hardware event number. Here we call perf_event API to
+ * emulate this action and create a kernel perf event for it.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
+				    u64 select_idx)
+{
+	kvm_pmu_create_perf_event(vcpu, data, select_idx);
+}
+
 bool kvm_arm_support_pmu_v3(void)
 {
 	/*
-- 
2.7.4

* [PATCH 3/4] KVM: arm/arm64: lazily create perf events on enable
From: Andrew Murray @ 2019-01-22 10:49 UTC (permalink / raw)
  To: Christoffer Dall, Marc Zyngier; +Cc: kvmarm, linux-arm-kernel

To prevent re-creating perf events every time the counter registers are
changed, let's instead lazily create the perf event when the counter is
first enabled, and destroy it when the counter configuration changes.
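
The resulting lifecycle is roughly:

    PMEVTYPERn/PMEVCNTRn write -> stop the counter, save its value into the
                                  shadow register, release the perf event,
                                  then re-create it only if the counter is
                                  currently enabled (PMCR.E and PMCNTENSET)
    PMCNTENSET write           -> create the perf event if none exists,
                                  otherwise perf_event_enable()
    PMCNTENCLR write           -> perf_event_disable()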

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 virt/kvm/arm/pmu.c | 114 ++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 78 insertions(+), 36 deletions(-)

diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 4464899..1921ca9 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,8 +24,11 @@
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
-static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
-				      u64 select_idx);
+static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
+static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
+						      u64 select_idx);
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
+
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -59,18 +62,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
  */
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 {
-	u64 reg, data;
+	u64 reg;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
 	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
 
-	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
-	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
-	data = __vcpu_sys_reg(vcpu, reg);
-
-	/* Recreate the perf event to reflect the updated sample_period */
-	kvm_pmu_create_perf_event(vcpu, data, select_idx);
+	kvm_pmu_stop_counter(vcpu, pmc);
+	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
 }
 
 /**
@@ -88,6 +89,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
 
 /**
  * kvm_pmu_stop_counter - stop PMU counter
+ * @vcpu: The vcpu pointer
  * @pmc: The PMU counter pointer
  *
  * If this counter has been configured to monitor some event, release it here.
@@ -150,6 +152,25 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 }
 
 /**
+ * kvm_pmu_enable_counter_single - create/enable an unpaired counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (!pmc->perf_event) {
+		kvm_pmu_counter_create_enabled_perf_event(vcpu, select_idx);
+	} else if (pmc->perf_event) {
+		perf_event_enable(pmc->perf_event);
+		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
+			kvm_debug("fail to enable perf event\n");
+	}
+}
+
+/**
  * kvm_pmu_enable_counter - enable selected PMU counter
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENSET register
@@ -159,8 +180,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc;
 
 	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
 		return;
@@ -169,16 +188,44 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
 		if (!(val & BIT(i)))
 			continue;
 
-		pmc = &pmu->pmc[i];
-		if (pmc->perf_event) {
-			perf_event_enable(pmc->perf_event);
-			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
-				kvm_debug("fail to enable perf event\n");
-		}
+		kvm_pmu_enable_counter_single(vcpu, i);
 	}
 }
 
 /**
+ * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
+					    u64 select_idx)
+{
+	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
+
+	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+		return;
+
+	if (set & BIT(select_idx))
+		kvm_pmu_enable_counter_single(vcpu, select_idx);
+}
+
+/**
+ * kvm_pmu_disable_counter - disable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @pmc: The counter to disable
+ */
+static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
+					   u64 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (pmc->perf_event)
+		perf_event_disable(pmc->perf_event);
+}
+
+/**
  * kvm_pmu_disable_counter - disable selected PMU counter
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENCLR register
@@ -188,8 +235,6 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
 {
 	int i;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc;
 
 	if (!val)
 		return;
@@ -198,9 +243,7 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
 		if (!(val & BIT(i)))
 			continue;
 
-		pmc = &pmu->pmc[i];
-		if (pmc->perf_event)
-			perf_event_disable(pmc->perf_event);
+		kvm_pmu_disable_counter_single(vcpu, i);
 	}
 }
 
@@ -382,28 +425,22 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
 	}
 }
 
-static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
-{
-	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
-	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
-}
-
 /**
- * kvm_pmu_create_perf_event - create a perf event for a counter
+ * kvm_pmu_counter_create_enabled_perf_event - create a perf event for a counter
  * @vcpu: The vcpu pointer
- * @data: Type of event as per PMXEVTYPER_EL0 format
  * @select_idx: The number of selected counter
  */
-static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
-				      u64 select_idx)
+static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
+						u64 select_idx)
 {
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 	struct perf_event *event;
 	struct perf_event_attr attr;
-	u64 eventsel, counter;
+	u64 eventsel, counter, data;
+
+	data = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx);
 
-	kvm_pmu_stop_counter(vcpu, pmc);
 	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
 
 	/* Software increment event does't need to be backed by a perf event */
@@ -415,7 +452,6 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
 	attr.type = PERF_TYPE_RAW;
 	attr.size = sizeof(attr);
 	attr.pinned = 1;
-	attr.disabled = !kvm_pmu_counter_is_enabled(vcpu, select_idx);
 	attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 1 : 0;
 	attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0;
 	attr.exclude_hv = 1; /* Don't count EL2 events */
@@ -451,7 +487,13 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx)
 {
-	kvm_pmu_create_perf_event(vcpu, data, select_idx);
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
+
+	kvm_pmu_stop_counter(vcpu, pmc);
+	__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
+	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
 }
 
 bool kvm_arm_support_pmu_v3(void)
-- 
2.7.4

* [PATCH 4/4] KVM: arm/arm64: support chained PMU counters
From: Andrew Murray @ 2019-01-22 10:49 UTC (permalink / raw)
  To: Christoffer Dall, Marc Zyngier; +Cc: kvmarm, linux-arm-kernel

Emulate chained PMU counters by creating a single 64 bit event counter
for a pair of chained KVM counters.
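
Both halves of a chained pair are backed by a single shared 64 bit perf
event; the odd (high) counter simply reports the upper half of the
accumulated count, along the lines of the updated
kvm_pmu_get_counter_value() below:

    incr = perf_event_read_value(pmc->perf_event, &enabled, &running);
    if (kvm_pmu_counter_is_high_word(pmc))
        incr = upper_32_bits(incr);
    counter = (__vcpu_sys_reg(vcpu, reg) + incr) & pmc->bitmask;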

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 include/kvm/arm_pmu.h |   2 +
 virt/kvm/arm/pmu.c    | 308 +++++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 258 insertions(+), 52 deletions(-)

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index f87fe20..d4f3b28 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -29,6 +29,8 @@ struct kvm_pmc {
 	u8 idx;	/* index into the pmu->pmc array */
 	struct perf_event *perf_event;
 	u64 bitmask;
+	u64 sample_period;
+	u64 left;
 };
 
 struct kvm_pmu {
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 1921ca9..d111d5b 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -24,10 +24,26 @@
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
+#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
+static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
+					    u64 pair_low);
+static void kvm_pmu_stop_release_perf_event_single(struct kvm_vcpu *vcpu,
+					      u64 select_idx);
+static void kvm_pmu_reenable_enabled_pair(struct kvm_vcpu *vcpu, u64 pair_low);
 static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
 static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
 						      u64 select_idx);
-static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
+
+/**
+ * kvm_pmu_counter_is_high_word - is select_idx high counter of 64bit event
+ * @pmc: The PMU counter pointer
+ * @select_idx: The counter index
+ */
+static inline bool kvm_pmu_counter_is_high_word(struct kvm_pmc *pmc)
+{
+	return ((pmc->perf_event->attr.config1 & 0x1)
+		&& (pmc->idx % 2));
+}
 
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
@@ -36,7 +52,7 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
  */
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 {
-	u64 counter, reg, enabled, running;
+	u64 counter, reg, enabled, running, incr;
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
@@ -47,14 +63,53 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
 	/* The real counter value is equal to the value of counter register plus
 	 * the value perf event counts.
 	 */
-	if (pmc->perf_event)
-		counter += perf_event_read_value(pmc->perf_event, &enabled,
+	if (pmc->perf_event) {
+		incr = perf_event_read_value(pmc->perf_event, &enabled,
 						 &running);
 
+		if (kvm_pmu_counter_is_high_word(pmc))
+			incr = upper_32_bits(incr);
+		counter += incr;
+	}
+
 	return counter & pmc->bitmask;
 }
 
 /**
+ * kvm_pmu_counter_is_enabled - is a counter active
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+
+	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
+	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask & BIT(select_idx));
+}
+
+/**
+ * kvm_pmu_event_is_chained - is a pair of counters chained and enabled
+ * @vcpu: The vcpu pointer
+ * @pair_low: The low counter index
+ */
+static bool kvm_pmu_event_is_chained(struct kvm_vcpu *vcpu, u64 pair_low)
+{
+	u64 eventsel;
+
+	eventsel = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + pair_low + 1) &
+			ARMV8_PMU_EVTYPE_EVENT;
+	if (eventsel != ARMV8_PMUV3_PERFCTR_CHAIN)
+		return false;
+
+	if (kvm_pmu_counter_is_enabled(vcpu, pair_low) !=
+	    kvm_pmu_counter_is_enabled(vcpu, pair_low + 1))
+		return false;
+
+	return true;
+}
+
+/**
  * kvm_pmu_set_counter_value - set PMU counter value
  * @vcpu: The vcpu pointer
  * @select_idx: The counter index
@@ -62,29 +117,45 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
  */
 void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
 {
-	u64 reg;
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	u64 reg, pair_low;
 
 	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
 	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
 	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
 
-	kvm_pmu_stop_counter(vcpu, pmc);
-	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
+	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
+
+	/* Recreate the perf event to reflect the updated sample_period */
+	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
+		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
+		kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
+	} else {
+		kvm_pmu_stop_release_perf_event_single(vcpu, select_idx);
+		kvm_pmu_reenable_enabled_single(vcpu, select_idx);
+	}
 }
 
 /**
  * kvm_pmu_release_perf_event - remove the perf event
  * @pmc: The PMU counter pointer
  */
-static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
+static void kvm_pmu_release_perf_event(struct kvm_vcpu *vcpu,
+				       struct kvm_pmc *pmc)
 {
-	if (pmc->perf_event) {
-		perf_event_disable(pmc->perf_event);
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc_alt;
+	u64 pair_alt;
+
+	pair_alt = (pmc->idx % 2) ? pmc->idx - 1 : pmc->idx + 1;
+	pmc_alt = &pmu->pmc[pair_alt];
+
+	if (pmc->perf_event)
 		perf_event_release_kernel(pmc->perf_event);
-		pmc->perf_event = NULL;
-	}
+
+	if (pmc->perf_event == pmc_alt->perf_event)
+		pmc_alt->perf_event = NULL;
+
+	pmc->perf_event = NULL;
 }
 
 /**
@@ -92,22 +163,60 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
  * @vcpu: The vcpu pointer
  * @pmc: The PMU counter pointer
  *
- * If this counter has been configured to monitor some event, release it here.
+ * If this counter has been configured to monitor some event, stop it here.
  */
 static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 {
 	u64 counter, reg;
 
 	if (pmc->perf_event) {
+		perf_event_disable(pmc->perf_event);
 		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
 		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
 		       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
 		__vcpu_sys_reg(vcpu, reg) = counter;
-		kvm_pmu_release_perf_event(pmc);
 	}
 }
 
 /**
+ * kvm_pmu_stop_release_perf_event_pair - stop and release a pair of counters
+ * @vcpu: The vcpu pointer
+ * @pmc_low: The PMU counter pointer for lower word
+ * @pmc_high: The PMU counter pointer for higher word
+ *
+ * As chained counters share the underlying perf event, we stop them
+ * both first before discarding the underlying perf event
+ */
+static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
+					    u64 idx_low)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc_low = &pmu->pmc[idx_low];
+	struct kvm_pmc *pmc_high = &pmu->pmc[idx_low + 1];
+
+	kvm_pmu_stop_counter(vcpu, pmc_low);
+	kvm_pmu_stop_counter(vcpu, pmc_high);
+
+	kvm_pmu_release_perf_event(vcpu, pmc_low);
+	kvm_pmu_release_perf_event(vcpu, pmc_high);
+}
+
+/**
+ * kvm_pmu_stop_release_perf_event_single - stop and release a counter
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+static void kvm_pmu_stop_release_perf_event_single(struct kvm_vcpu *vcpu,
+					      u64 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	kvm_pmu_stop_counter(vcpu, pmc);
+	kvm_pmu_release_perf_event(vcpu, pmc);
+}
+
+/**
  * kvm_pmu_vcpu_reset - reset pmu state for cpu
  * @vcpu: The vcpu pointer
  *
@@ -118,7 +227,7 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
+		kvm_pmu_stop_release_perf_event_single(vcpu, i);
 		pmu->pmc[i].idx = i;
 		pmu->pmc[i].bitmask = 0xffffffffUL;
 	}
@@ -136,7 +245,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
 
 	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
 		struct kvm_pmc *pmc = &pmu->pmc[i];
-		kvm_pmu_release_perf_event(pmc);
+		kvm_pmu_release_perf_event(vcpu, pmc);
 	}
 }
 
@@ -171,49 +280,81 @@ static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
 }
 
 /**
- * kvm_pmu_enable_counter - enable selected PMU counter
+ * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
  * @vcpu: The vcpu pointer
- * @val: the value guest writes to PMCNTENSET register
- *
- * Call perf_event_enable to start counting the perf event
+ * @select_idx: The counter index
  */
-void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
+static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
+					    u64 select_idx)
 {
-	int i;
+	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
+	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
 		return;
 
-	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-		if (!(val & BIT(i)))
-			continue;
+	if (set & BIT(select_idx))
+		kvm_pmu_enable_counter_single(vcpu, select_idx);
+}
 
-		kvm_pmu_enable_counter_single(vcpu, i);
+/**
+ * kvm_pmu_reenable_enabled_pair - reenable a pair if they should be enabled
+ * @vcpu: The vcpu pointer
+ * @pair_low: The low counter index
+ */
+static void kvm_pmu_reenable_enabled_pair(struct kvm_vcpu *vcpu, u64 pair_low)
+{
+	kvm_pmu_reenable_enabled_single(vcpu, pair_low);
+	kvm_pmu_reenable_enabled_single(vcpu, pair_low+1);
+}
+
+/**
+ * kvm_pmu_enable_counter_pair - enable counters pair at a time
+ * @vcpu: The vcpu pointer
+ * @val: counters to enable
+ * @pair_low: The low counter index
+ */
+static void kvm_pmu_enable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
+					u64 pair_low)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
+	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
+
+	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
+		if (pmc_low->perf_event != pmc_high->perf_event)
+			kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
 	}
+
+	if (val & BIT(pair_low))
+		kvm_pmu_enable_counter_single(vcpu, pair_low);
+
+	if (val & BIT(pair_low+1))
+		kvm_pmu_enable_counter_single(vcpu, pair_low + 1);
 }
 
 /**
- * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
+ * kvm_pmu_enable_counter - enable selected PMU counter
  * @vcpu: The vcpu pointer
- * @select_idx: The counter index
+ * @val: the value guest writes to PMCNTENSET register
+ *
+ * Call perf_event_enable to start counting the perf event
  */
-static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
-					    u64 select_idx)
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
 {
-	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
-	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
+	int i;
 
-	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
+	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
 		return;
 
-	if (set & BIT(select_idx))
-		kvm_pmu_enable_counter_single(vcpu, select_idx);
+	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
+		kvm_pmu_enable_counter_pair(vcpu, val, i);
 }
 
 /**
  * kvm_pmu_disable_counter - disable selected PMU counter
  * @vcpu: The vcpu pointer
- * @pmc: The counter to disable
+ * @select_idx: The counter index
  */
 static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
 					   u64 select_idx)
@@ -221,8 +362,40 @@ static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
 	struct kvm_pmu *pmu = &vcpu->arch.pmu;
 	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
 
-	if (pmc->perf_event)
+	if (pmc->perf_event) {
 		perf_event_disable(pmc->perf_event);
+		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
+			kvm_debug("fail to enable perf event\n");
+	}
+}
+
+/**
+ * kvm_pmu_disable_counter_pair - disable counters pair at a time
+ * @val: counters to disable
+ * @pair_low: The low counter index
+ */
+static void kvm_pmu_disable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
+					 u64 pair_low)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
+	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
+
+	if (!kvm_pmu_event_is_chained(vcpu, pair_low)) {
+		if (pmc_low->perf_event == pmc_high->perf_event) {
+			if (pmc_low->perf_event) {
+				kvm_pmu_stop_release_perf_event_pair(vcpu,
+								pair_low);
+				kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
+			}
+		}
+	}
+
+	if (val & BIT(pair_low))
+		kvm_pmu_disable_counter_single(vcpu, pair_low);
+
+	if (val & BIT(pair_low + 1))
+		kvm_pmu_disable_counter_single(vcpu, pair_low + 1);
 }
 
 /**
@@ -239,12 +412,8 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
 	if (!val)
 		return;
 
-	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
-		if (!(val & BIT(i)))
-			continue;
-
-		kvm_pmu_disable_counter_single(vcpu, i);
-	}
+	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
+		kvm_pmu_disable_counter_pair(vcpu, val, i);
 }
 
 static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
@@ -355,6 +524,17 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
 
 	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
 
+	if (kvm_pmu_event_is_chained(vcpu, idx + 1)) {
+		struct kvm_pmu *pmu = &vcpu->arch.pmu;
+		struct kvm_pmc *pmc_high = &pmu->pmc[idx + 1];
+
+		if (!(--pmc_high->left)) {
+			__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx + 1);
+			pmc_high->left = pmc_high->sample_period;
+		}
+
+	}
+
 	if (kvm_pmu_overflow_status(vcpu)) {
 		kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
 		kvm_vcpu_kick(vcpu);
@@ -448,6 +628,10 @@ static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
 	    select_idx != ARMV8_PMU_CYCLE_IDX)
 		return;
 
+	/* Handled by even event */
+	if (eventsel == ARMV8_PMUV3_PERFCTR_CHAIN)
+		return;
+
 	memset(&attr, 0, sizeof(struct perf_event_attr));
 	attr.type = PERF_TYPE_RAW;
 	attr.size = sizeof(attr);
@@ -459,6 +643,9 @@ static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
 	attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
 		ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
 
+	if (kvm_pmu_event_is_chained(vcpu, select_idx))
+		attr.config1 |= 0x1;
+
 	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
 	/* The initial sample period (overflow count) of an event. */
 	attr.sample_period = (-counter) & pmc->bitmask;
@@ -471,6 +658,14 @@ static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
 		return;
 	}
 
+	if (kvm_pmu_event_is_chained(vcpu, select_idx)) {
+		struct kvm_pmc *pmc_next = &pmu->pmc[select_idx + 1];
+
+		pmc_next->perf_event = event;
+		counter = kvm_pmu_get_counter_value(vcpu, select_idx + 1);
+		pmc_next->left = (-counter) & pmc->bitmask;
+	}
+
 	pmc->perf_event = event;
 }
 
@@ -487,13 +682,22 @@ static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx)
 {
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
-	u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
+	u64 eventsel, event_type, pair_low;
 
-	kvm_pmu_stop_counter(vcpu, pmc);
-	__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
-	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
+	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
+	event_type = data & ARMV8_PMU_EVTYPE_MASK;
+	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
+
+	if (kvm_pmu_event_is_chained(vcpu, pair_low) ||
+	    eventsel == ARMV8_PMUV3_PERFCTR_CHAIN) {
+		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
+		__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
+		kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
+	} else {
+		kvm_pmu_stop_release_perf_event_single(vcpu, pair_low);
+		__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
+		kvm_pmu_reenable_enabled_single(vcpu, pair_low);
+	}
 }
 
 bool kvm_arm_support_pmu_v3(void)
-- 
2.7.4

 	attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
 		ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
 
+	if (kvm_pmu_event_is_chained(vcpu, select_idx))
+		attr.config1 |= 0x1;
+
 	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
 	/* The initial sample period (overflow count) of an event. */
 	attr.sample_period = (-counter) & pmc->bitmask;
@@ -471,6 +658,14 @@ static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
 		return;
 	}
 
+	if (kvm_pmu_event_is_chained(vcpu, select_idx)) {
+		struct kvm_pmc *pmc_next = &pmu->pmc[select_idx + 1];
+
+		pmc_next->perf_event = event;
+		counter = kvm_pmu_get_counter_value(vcpu, select_idx + 1);
+		pmc_next->left = (-counter) & pmc->bitmask;
+	}
+
 	pmc->perf_event = event;
 }
 
@@ -487,13 +682,22 @@ static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx)
 {
-	struct kvm_pmu *pmu = &vcpu->arch.pmu;
-	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
-	u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
+	u64 eventsel, event_type, pair_low;
 
-	kvm_pmu_stop_counter(vcpu, pmc);
-	__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
-	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
+	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
+	event_type = data & ARMV8_PMU_EVTYPE_MASK;
+	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
+
+	if (kvm_pmu_event_is_chained(vcpu, pair_low) ||
+	    eventsel == ARMV8_PMUV3_PERFCTR_CHAIN) {
+		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
+		__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
+		kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
+	} else {
+		kvm_pmu_stop_release_perf_event_single(vcpu, pair_low);
+		__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
+		kvm_pmu_reenable_enabled_single(vcpu, pair_low);
+	}
 }
 
 bool kvm_arm_support_pmu_v3(void)
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [PATCH 2/4] KVM: arm/arm64: re-create event when setting counter value
  2019-01-22 10:49   ` Andrew Murray
@ 2019-01-22 12:12     ` Julien Thierry
  -1 siblings, 0 replies; 38+ messages in thread
From: Julien Thierry @ 2019-01-22 12:12 UTC (permalink / raw)
  To: Andrew Murray, Christoffer Dall, Marc Zyngier; +Cc: kvmarm, linux-arm-kernel

Hi Andrew,

On 22/01/2019 10:49, Andrew Murray wrote:
> The perf event sample_period is currently set based upon the current
> counter value, when PMXEVTYPER is written to and the perf event is created.
> However the user may choose to write the type before the counter value in
> which case sample_period will be set incorrectly. Let's instead decouple
> event creation from PMXEVTYPER and (re)create the event in either
> situation.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>  virt/kvm/arm/pmu.c | 39 ++++++++++++++++++++++++++++++---------
>  1 file changed, 30 insertions(+), 9 deletions(-)
> 
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 531d27f..4464899 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -24,6 +24,8 @@
>  #include <kvm/arm_pmu.h>
>  #include <kvm/arm_vgic.h>
>  
> +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> +				      u64 select_idx);
>  /**
>   * kvm_pmu_get_counter_value - get PMU counter value
>   * @vcpu: The vcpu pointer
> @@ -57,11 +59,18 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>   */
>  void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>  {
> -	u64 reg;
> +	u64 reg, data;
>  
>  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> +
> +	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> +	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> +	data = __vcpu_sys_reg(vcpu, reg + select_idx);

I think this should be just "reg" instead of "reg + select_idx".
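
For illustration, the read would then look something like this (a sketch,
not taken from the patch; reg already encodes the counter index for the
event type registers):

	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
	data = __vcpu_sys_reg(vcpu, reg);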

Cheers,

-- 
Julien Thierry

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 2/4] KVM: arm/arm64: re-create event when setting counter value
  2019-01-22 12:12     ` Julien Thierry
@ 2019-01-22 12:42       ` Andrew Murray
  -1 siblings, 0 replies; 38+ messages in thread
From: Andrew Murray @ 2019-01-22 12:42 UTC (permalink / raw)
  To: Julien Thierry; +Cc: Marc Zyngier, linux-arm-kernel, kvmarm

On Tue, Jan 22, 2019 at 12:12:51PM +0000, Julien Thierry wrote:
> Hi Andrew,
> 
> On 22/01/2019 10:49, Andrew Murray wrote:
> > The perf event sample_period is currently set based upon the current
> > counter value, when PMXEVTYPER is written to and the perf event is created.
> > However the user may choose to write the type before the counter value in
> > which case sample_period will be set incorrectly. Let's instead decouple
> > event creation from PMXEVTYPER and (re)create the event in either
> > situation.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > ---
> >  virt/kvm/arm/pmu.c | 39 ++++++++++++++++++++++++++++++---------
> >  1 file changed, 30 insertions(+), 9 deletions(-)
> > 
> > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > index 531d27f..4464899 100644
> > --- a/virt/kvm/arm/pmu.c
> > +++ b/virt/kvm/arm/pmu.c
> > @@ -24,6 +24,8 @@
> >  #include <kvm/arm_pmu.h>
> >  #include <kvm/arm_vgic.h>
> >  
> > +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> > +				      u64 select_idx);
> >  /**
> >   * kvm_pmu_get_counter_value - get PMU counter value
> >   * @vcpu: The vcpu pointer
> > @@ -57,11 +59,18 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >   */
> >  void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> >  {
> > -	u64 reg;
> > +	u64 reg, data;
> >  
> >  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> >  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> >  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> > +
> > +	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> > +	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> > +	data = __vcpu_sys_reg(vcpu, reg + select_idx);
> 
> I think this should be just "reg" instead of "reg + select_idx".

Yes, good catch.

Thanks,

Andrew Murray

> 
> Cheers,
> 
> -- 
> Julien Thierry

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 3/4] KVM: arm/arm64: lazily create perf events on enable
  2019-01-22 10:49   ` Andrew Murray
@ 2019-01-22 13:41     ` Julien Thierry
  -1 siblings, 0 replies; 38+ messages in thread
From: Julien Thierry @ 2019-01-22 13:41 UTC (permalink / raw)
  To: Andrew Murray, Christoffer Dall, Marc Zyngier; +Cc: kvmarm, linux-arm-kernel

Hi Andrew,

On 22/01/2019 10:49, Andrew Murray wrote:
> To prevent re-creating perf events every time the counter registers
> are changed, let's instead lazily create the event when the event
> is first enabled and destroy it when it changes.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>  virt/kvm/arm/pmu.c | 114 ++++++++++++++++++++++++++++++++++++-----------------
>  1 file changed, 78 insertions(+), 36 deletions(-)
> 
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 4464899..1921ca9 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -24,8 +24,11 @@
>  #include <kvm/arm_pmu.h>
>  #include <kvm/arm_vgic.h>
>  
> -static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> -				      u64 select_idx);
> +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
> +static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
> +						      u64 select_idx);
> +static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> +
>  /**
>   * kvm_pmu_get_counter_value - get PMU counter value
>   * @vcpu: The vcpu pointer
> @@ -59,18 +62,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>   */
>  void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>  {
> -	u64 reg, data;
> +	u64 reg;
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>  
>  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
>  
> -	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> -	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> -	data = __vcpu_sys_reg(vcpu, reg + select_idx);
> -
> -	/* Recreate the perf event to reflect the updated sample_period */
> -	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> +	kvm_pmu_stop_counter(vcpu, pmc);

Shouldn't this be before we do the write to __vcpu_sys_reg()?
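
For illustration, the ordering being suggested would look roughly like
this (sketch only, using the variables already in scope here):

	/* fold the old perf count back into the register first */
	kvm_pmu_stop_counter(vcpu, pmc);

	__vcpu_sys_reg(vcpu, reg) += (s64)val -
		kvm_pmu_get_counter_value(vcpu, select_idx);

	kvm_pmu_reenable_enabled_single(vcpu, select_idx);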

> +	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
>  }
>  
>  /**
> @@ -88,6 +89,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
>  
>  /**
>   * kvm_pmu_stop_counter - stop PMU counter
> + * @vcpu: The vcpu pointer
>   * @pmc: The PMU counter pointer
>   *
>   * If this counter has been configured to monitor some event, release it here.
> @@ -150,6 +152,25 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>  }
>  
>  /**
> + * kvm_pmu_enable_counter_single - create/enable a unpaired counter
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	if (!pmc->perf_event) {
> +		kvm_pmu_counter_create_enabled_perf_event(vcpu, select_idx);
> +	} else if (pmc->perf_event) {

"else" is enough here, no need for "else if" :) .


Actually, after we call kvm_pmu_counter_create_enabled_perf_event() we
know that pmc->perf_event != NULL.

Shouldn't we execute the code below unconditionally?
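
A sketch of one way to flatten it, assuming the create path can still
leave perf_event NULL (e.g. for the software increment event):

	if (!pmc->perf_event)
		kvm_pmu_counter_create_enabled_perf_event(vcpu, select_idx);

	if (pmc->perf_event) {
		perf_event_enable(pmc->perf_event);
		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
			kvm_debug("fail to enable perf event\n");
	}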

> +		perf_event_enable(pmc->perf_event);
> +		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> +			kvm_debug("fail to enable perf event\n");
> +	}
> +}
> +
> +/**
>   * kvm_pmu_enable_counter - enable selected PMU counter
>   * @vcpu: The vcpu pointer
>   * @val: the value guest writes to PMCNTENSET register
> @@ -159,8 +180,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>  void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
>  {
>  	int i;
> -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> -	struct kvm_pmc *pmc;
>  
>  	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
>  		return;
> @@ -169,16 +188,44 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
>  		if (!(val & BIT(i)))
>  			continue;
>  
> -		pmc = &pmu->pmc[i];
> -		if (pmc->perf_event) {
> -			perf_event_enable(pmc->perf_event);
> -			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> -				kvm_debug("fail to enable perf event\n");
> -		}
> +		kvm_pmu_enable_counter_single(vcpu, i);
>  	}
>  }
>  
>  /**
> + * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
> +					    u64 select_idx)

Not completely convinced by the name. kvm_pmu_sync_counter_status() ?

Or maybe have the callers check whether they actually need to
disable/enable and not have this function.

> +{
> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> +
> +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> +		return;
> +
> +	if (set & BIT(select_idx))
> +		kvm_pmu_enable_counter_single(vcpu, select_idx);
> +}
> +
> +/**
> + * kvm_pmu_disable_counter - disable selected PMU counter
> + * @vcpu: The vcpu pointer
> + * @pmc: The counter to dissable
> + */
> +static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
> +					   u64 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	if (pmc->perf_event)
> +		perf_event_disable(pmc->perf_event);
> +}
> +
> +/**
>   * kvm_pmu_disable_counter - disable selected PMU counter
>   * @vcpu: The vcpu pointer
>   * @val: the value guest writes to PMCNTENCLR register
> @@ -188,8 +235,6 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
>  void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
>  {
>  	int i;
> -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> -	struct kvm_pmc *pmc;
>  
>  	if (!val)
>  		return;
> @@ -198,9 +243,7 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
>  		if (!(val & BIT(i)))
>  			continue;
>  
> -		pmc = &pmu->pmc[i];
> -		if (pmc->perf_event)
> -			perf_event_disable(pmc->perf_event);
> +		kvm_pmu_disable_counter_single(vcpu, i);
>  	}
>  }
>  
> @@ -382,28 +425,22 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
>  	}
>  }
>  
> -static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> -{
> -	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
> -	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
> -}
> -
>  /**
> - * kvm_pmu_create_perf_event - create a perf event for a counter
> + * kvm_pmu_counter_create_enabled_perf_event - create a perf event for a counter
>   * @vcpu: The vcpu pointer
> - * @data: Type of event as per PMXEVTYPER_EL0 format
>   * @select_idx: The number of selected counter
>   */
> -static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> -				      u64 select_idx)
> +static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
> +						u64 select_idx)
>  {
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>  	struct perf_event *event;
>  	struct perf_event_attr attr;
> -	u64 eventsel, counter;
> +	u64 eventsel, counter, data;
> +
> +	data = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx);

Should we worry about the case select_idx == ARMV8_PMU_CYCLE_IDX?

>  
> -	kvm_pmu_stop_counter(vcpu, pmc);
>  	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
>  
>  	/* Software increment event does't need to be backed by a perf event */
> @@ -415,7 +452,6 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
>  	attr.type = PERF_TYPE_RAW;
>  	attr.size = sizeof(attr);
>  	attr.pinned = 1;
> -	attr.disabled = !kvm_pmu_counter_is_enabled(vcpu, select_idx);
>  	attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 1 : 0;
>  	attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0;
>  	attr.exclude_hv = 1; /* Don't count EL2 events */
> @@ -451,7 +487,13 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
>  void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
>  				    u64 select_idx)
>  {
> -	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +	u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
> +
> +	kvm_pmu_stop_counter(vcpu, pmc);
> +	__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;

Why don't we take the select_idx == ARMV8_PMU_CYCLE_IDX case into
account anymore?
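
For illustration, keeping that case would presumably follow the existing
pattern (sketch only):

	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;

	kvm_pmu_stop_counter(vcpu, pmc);
	__vcpu_sys_reg(vcpu, reg) = event_type;
	kvm_pmu_reenable_enabled_single(vcpu, select_idx);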

> +	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
>  }
>  
>  bool kvm_arm_support_pmu_v3(void)
> 

Cheers,

-- 
Julien Thierry

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 2/4] KVM: arm/arm64: re-create event when setting counter value
  2019-01-22 10:49   ` Andrew Murray
@ 2019-01-22 14:18     ` Suzuki K Poulose
  -1 siblings, 0 replies; 38+ messages in thread
From: Suzuki K Poulose @ 2019-01-22 14:18 UTC (permalink / raw)
  To: andrew.murray, christoffer.dall, marc.zyngier; +Cc: kvmarm, linux-arm-kernel

Hi Andrew

On 01/22/2019 10:49 AM, Andrew Murray wrote:
> The perf event sample_period is currently set based upon the current
> counter value, when PMXEVTYPER is written to and the perf event is created.
> However the user may choose to write the type before the counter value in
> which case sample_period will be set incorrectly. Let's instead decouple
> event creation from PMXEVTYPER and (re)create the event in either
> situation.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>

The approach looks fine to me. However, this patch seems to introduce a
memory leak (see below), which you may be addressing in a later patch in
the series. But that would still affect bisection.

> ---
>   virt/kvm/arm/pmu.c | 39 ++++++++++++++++++++++++++++++---------
>   1 file changed, 30 insertions(+), 9 deletions(-)
> 
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 531d27f..4464899 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -24,6 +24,8 @@
>   #include <kvm/arm_pmu.h>
>   #include <kvm/arm_vgic.h>
>   
> +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> +				      u64 select_idx);

Could we just pass the counter index (i.e., select_idx) after updating
the event_type/counter value in the respective functions?

nit: If we decide not to do that, please rename "data" to something more
obvious, e.g. event_type.
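
Something along these lines, i.e. the helper reads the type register
itself (a sketch only; names are illustrative):

static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx)
{
	u64 reg, event_type;

	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
	event_type = __vcpu_sys_reg(vcpu, reg);

	/* ... build the perf_event_attr from event_type as before ... */
}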

>   /**
>    * kvm_pmu_get_counter_value - get PMU counter value
>    * @vcpu: The vcpu pointer
> @@ -57,11 +59,18 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>    */
>   void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>   {
> -	u64 reg;
> +	u64 reg, data;

nit: Same here, data is too generic.

>   
>   	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>   	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>   	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> +
> +	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> +	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> +	data = __vcpu_sys_reg(vcpu, reg + select_idx);
> +
> +	/* Recreate the perf event to reflect the updated sample_period */
> +	kvm_pmu_create_perf_event(vcpu, data, select_idx);
>   }
>   
>   /**
> @@ -380,17 +389,13 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
>   }
>   
>   /**
> - * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> + * kvm_pmu_create_perf_event - create a perf event for a counter
>    * @vcpu: The vcpu pointer
> - * @data: The data guest writes to PMXEVTYPER_EL0
> + * @data: Type of event as per PMXEVTYPER_EL0 format
>    * @select_idx: The number of selected counter
> - *
> - * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> - * event with given hardware event number. Here we call perf_event API to
> - * emulate this action and create a kernel perf event for it.
>    */
> -void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> -				    u64 select_idx)
> +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> +				      u64 select_idx)
>   {
>   	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>   	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> @@ -433,6 +438,22 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
>   	pmc->perf_event = event;

We should release the existing perf_event to prevent a memory leak, and
also to avoid corrupting the data via the overflow handler of the
existing event. Am I missing something here?
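
A sketch of what that could look like, assuming the existing
kvm_pmu_release_perf_event() helper is suitable here:

	/* drop any previously created event before installing the new one */
	kvm_pmu_release_perf_event(pmc);

	pmc->perf_event = event;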

>   }
>   
> +/**
> + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> + * @vcpu: The vcpu pointer
> + * @data: The data guest writes to PMXEVTYPER_EL0
> + * @select_idx: The number of selected counter
> + *
> + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> + * event with given hardware event number. Here we call perf_event API to
> + * emulate this action and create a kernel perf event for it.
> + */
> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> +				    u64 select_idx)
> +{
> +	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> +}
> +
>   bool kvm_arm_support_pmu_v3(void)
>   {
>   	/*
> 


Cheers
Suzuki

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 1/4] KVM: arm/arm64: extract duplicated code to own function
  2019-01-22 10:49   ` Andrew Murray
@ 2019-01-22 14:20     ` Suzuki K Poulose
  -1 siblings, 0 replies; 38+ messages in thread
From: Suzuki K Poulose @ 2019-01-22 14:20 UTC (permalink / raw)
  To: andrew.murray, christoffer.dall, marc.zyngier; +Cc: kvmarm, linux-arm-kernel

On 01/22/2019 10:49 AM, Andrew Murray wrote:
> Let's reduce code duplication by extracting common code to its own
> function.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 4/4] KVM: arm/arm64: support chained PMU counters
  2019-01-22 10:49   ` Andrew Murray
@ 2019-01-22 14:59     ` Julien Thierry
  -1 siblings, 0 replies; 38+ messages in thread
From: Julien Thierry @ 2019-01-22 14:59 UTC (permalink / raw)
  To: Andrew Murray, Christoffer Dall, Marc Zyngier; +Cc: kvmarm, linux-arm-kernel

Hi Andrew

On 22/01/2019 10:49, Andrew Murray wrote:
> Emulate chained PMU counters by creating a single 64 bit event counter
> for a pair of chained KVM counters.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>  include/kvm/arm_pmu.h |   2 +
>  virt/kvm/arm/pmu.c    | 308 +++++++++++++++++++++++++++++++++++++++++---------
>  2 files changed, 258 insertions(+), 52 deletions(-)
> 
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index f87fe20..d4f3b28 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -29,6 +29,8 @@ struct kvm_pmc {
>  	u8 idx;	/* index into the pmu->pmc array */
>  	struct perf_event *perf_event;
>  	u64 bitmask;
> +	u64 sample_period;
> +	u64 left;
>  };
>  
>  struct kvm_pmu {
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 1921ca9..d111d5b 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -24,10 +24,26 @@
>  #include <kvm/arm_pmu.h>
>  #include <kvm/arm_vgic.h>
>  
> +#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
> +static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
> +					    u64 pair_low);
> +static void kvm_pmu_stop_release_perf_event_single(struct kvm_vcpu *vcpu,
> +					      u64 select_idx);
> +static void kvm_pmu_reenable_enabled_pair(struct kvm_vcpu *vcpu, u64 pair_low);
>  static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
>  static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
>  						      u64 select_idx);
> -static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> +
> +/**
> + * kvm_pmu_counter_is_high_word - is select_idx high counter of 64bit event
> + * @pmc: The PMU counter pointer
> + * @select_idx: The counter index
> + */
> +static inline bool kvm_pmu_counter_is_high_word(struct kvm_pmc *pmc)
> +{
> +	return ((pmc->perf_event->attr.config1 & 0x1)
> +		&& (pmc->idx % 2));
> +}
>  
>  /**
>   * kvm_pmu_get_counter_value - get PMU counter value
> @@ -36,7 +52,7 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
>   */
>  u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>  {
> -	u64 counter, reg, enabled, running;
> +	u64 counter, reg, enabled, running, incr;
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>  
> @@ -47,14 +63,53 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>  	/* The real counter value is equal to the value of counter register plus
>  	 * the value perf event counts.
>  	 */
> -	if (pmc->perf_event)
> -		counter += perf_event_read_value(pmc->perf_event, &enabled,
> +	if (pmc->perf_event) {
> +		incr = perf_event_read_value(pmc->perf_event, &enabled,
>  						 &running);
>  
> +		if (kvm_pmu_counter_is_high_word(pmc))
> +			incr = upper_32_bits(incr);
> +		counter += incr;
> +	}
> +
>  	return counter & pmc->bitmask;
>  }
>  
>  /**
> + * kvm_pmu_counter_is_enabled - is a counter active
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> +{
> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> +
> +	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
> +	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask & BIT(select_idx));
> +}
> +
> +/**
> + * kvnm_pmu_event_is_chained - is a pair of counters chained and enabled
> + * @vcpu: The vcpu pointer
> + * @select_idx: The low counter index
> + */
> +static bool kvm_pmu_event_is_chained(struct kvm_vcpu *vcpu, u64 pair_low)
> +{
> +	u64 eventsel;
> +
> +	eventsel = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + pair_low + 1) &
> +			ARMV8_PMU_EVTYPE_EVENT;
> +	if (eventsel != ARMV8_PMUV3_PERFCTR_CHAIN)
> +		return false;
> +
> +	if (kvm_pmu_counter_is_enabled(vcpu, pair_low) !=
> +	    kvm_pmu_counter_is_enabled(vcpu, pair_low + 1))
> +		return false;
> +
> +	return true;
> +}
> +
> +/**
>   * kvm_pmu_set_counter_value - set PMU counter value
>   * @vcpu: The vcpu pointer
>   * @select_idx: The counter index
> @@ -62,29 +117,45 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>   */
>  void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>  {
> -	u64 reg;
> -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> -	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +	u64 reg, pair_low;
>  
>  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
>  
> -	kvm_pmu_stop_counter(vcpu, pmc);
> -	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
> +	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;

Don't really know if it's better but you can write it as:

	pair_low = select_idx & ~(1ULL);

But the compiler might already optimize it.
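
Both forms just clear bit 0 of the index, e.g.:

	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
	pair_low = select_idx & ~1ULL;	/* equivalent */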

> +
> +	/* Recreate the perf event to reflect the updated sample_period */
> +	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
> +		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
> +		kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
> +	} else {
> +		kvm_pmu_stop_release_perf_event_single(vcpu, select_idx);
> +		kvm_pmu_reenable_enabled_single(vcpu, select_idx);
> +	}
>  }
>  
>  /**
>   * kvm_pmu_release_perf_event - remove the perf event
>   * @pmc: The PMU counter pointer
>   */
> -static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
> +static void kvm_pmu_release_perf_event(struct kvm_vcpu *vcpu,
> +				       struct kvm_pmc *pmc)
>  {
> -	if (pmc->perf_event) {
> -		perf_event_disable(pmc->perf_event);
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc_alt;
> +	u64 pair_alt;
> +
> +	pair_alt = (pmc->idx % 2) ? pmc->idx - 1 : pmc->idx + 1;
> +	pmc_alt = &pmu->pmc[pair_alt];
> +
> +	if (pmc->perf_event)
>  		perf_event_release_kernel(pmc->perf_event);
> -		pmc->perf_event = NULL;
> -	}
> +
> +	if (pmc->perf_event == pmc_alt->perf_event)
> +		pmc_alt->perf_event = NULL;

Shouldn't we release pmc_alt->perf_event before setting it to NULL?

> +
> +	pmc->perf_event = NULL;
>  }
>  
>  /**
> @@ -92,22 +163,60 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
>   * @vcpu: The vcpu pointer
>   * @pmc: The PMU counter pointer
>   *
> - * If this counter has been configured to monitor some event, release it here.
> + * If this counter has been configured to monitor some event, stop it here.
>   */
>  static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
>  {
>  	u64 counter, reg;
>  
>  	if (pmc->perf_event) {
> +		perf_event_disable(pmc->perf_event);
>  		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
>  		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
>  		       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
>  		__vcpu_sys_reg(vcpu, reg) = counter;
> -		kvm_pmu_release_perf_event(pmc);
>  	}
>  }
>  
>  /**
> + * kvm_pmu_stop_release_perf_event_pair - stop and release a pair of counters
> + * @vcpu: The vcpu pointer
> + * @pmc_low: The PMU counter pointer for lower word
> + * @pmc_high: The PMU counter pointer for higher word
> + *
> + * As chained counters share the underlying perf event, we stop them
> + * both first before discarding the underlying perf event
> + */
> +static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
> +					    u64 idx_low)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc_low = &pmu->pmc[idx_low];
> +	struct kvm_pmc *pmc_high = &pmu->pmc[idx_low + 1];
> +
> +	kvm_pmu_stop_counter(vcpu, pmc_low);
> +	kvm_pmu_stop_counter(vcpu, pmc_high);
> +
> +	kvm_pmu_release_perf_event(vcpu, pmc_low);
> +	kvm_pmu_release_perf_event(vcpu, pmc_high);

Hmmm, I think there is some confusion between what this function and
kvm_pmu_release_perf_event() should do: at this point
pmc_high->perf_event == NULL, so we can't release it.
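
One possible shape that stops both counters and then releases the shared
event exactly once (a sketch, not a finished suggestion):

	kvm_pmu_stop_counter(vcpu, pmc_low);
	kvm_pmu_stop_counter(vcpu, pmc_high);

	if (pmc_high->perf_event && pmc_high->perf_event != pmc_low->perf_event)
		perf_event_release_kernel(pmc_high->perf_event);
	if (pmc_low->perf_event)
		perf_event_release_kernel(pmc_low->perf_event);

	pmc_low->perf_event = NULL;
	pmc_high->perf_event = NULL;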

> +}
> +
> +/**
> + * kvm_pmu_stop_release_perf_event_single - stop and release a counter
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static void kvm_pmu_stop_release_perf_event_single(struct kvm_vcpu *vcpu,
> +					      u64 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	kvm_pmu_stop_counter(vcpu, pmc);
> +	kvm_pmu_release_perf_event(vcpu, pmc);
> +}
> +
> +/**
>   * kvm_pmu_vcpu_reset - reset pmu state for cpu
>   * @vcpu: The vcpu pointer
>   *
> @@ -118,7 +227,7 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>  
>  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> -		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
> +		kvm_pmu_stop_release_perf_event_single(vcpu, i);
>  		pmu->pmc[i].idx = i;
>  		pmu->pmc[i].bitmask = 0xffffffffUL;
>  	}
> @@ -136,7 +245,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
>  
>  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
>  		struct kvm_pmc *pmc = &pmu->pmc[i];
> -		kvm_pmu_release_perf_event(pmc);
> +		kvm_pmu_release_perf_event(vcpu, pmc);
>  	}
>  }
>  
> @@ -171,49 +280,81 @@ static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
>  }
>  
>  /**
> - * kvm_pmu_enable_counter - enable selected PMU counter
> + * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
>   * @vcpu: The vcpu pointer
> - * @val: the value guest writes to PMCNTENSET register
> - *
> - * Call perf_event_enable to start counting the perf event
> + * @select_idx: The counter index
>   */
> -void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
> +					    u64 select_idx)
>  {
> -	int i;
> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
>  
> -	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
> +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
>  		return;
>  
> -	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> -		if (!(val & BIT(i)))
> -			continue;
> +	if (set & BIT(select_idx))
> +		kvm_pmu_enable_counter_single(vcpu, select_idx);
> +}
>  
> -		kvm_pmu_enable_counter_single(vcpu, i);
> +/**
> + * kvm_pmu_reenable_enabled_pair - reenable a pair if they should be enabled
> + * @vcpu: The vcpu pointer
> + * @pair_low: The low counter index
> + */
> +static void kvm_pmu_reenable_enabled_pair(struct kvm_vcpu *vcpu, u64 pair_low)
> +{
> +	kvm_pmu_reenable_enabled_single(vcpu, pair_low);
> +	kvm_pmu_reenable_enabled_single(vcpu, pair_low+1);
> +}
> +
> +/**
> + * kvm_pmu_enable_counter_pair - enable counters pair at a time
> + * @vcpu: The vcpu pointer
> + * @val: counters to enable
> + * @pair_low: The low counter index
> + */
> +static void kvm_pmu_enable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
> +					u64 pair_low)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
> +	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
> +
> +	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
> +		if (pmc_low->perf_event != pmc_high->perf_event)
> +			kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
>  	}
> +
> +	if (val & BIT(pair_low))
> +		kvm_pmu_enable_counter_single(vcpu, pair_low);
> +
> +	if (val & BIT(pair_low+1))
> +		kvm_pmu_enable_counter_single(vcpu, pair_low + 1);
>  }
>  
>  /**
> - * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
> + * kvm_pmu_enable_counter - enable selected PMU counter
>   * @vcpu: The vcpu pointer
> - * @select_idx: The counter index
> + * @val: the value guest writes to PMCNTENSET register
> + *
> + * Call perf_event_enable to start counting the perf event
>   */
> -static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
> -					    u64 select_idx)
> +void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
>  {
> -	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> -	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> +	int i;
>  
> -	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
>  		return;
>  
> -	if (set & BIT(select_idx))
> -		kvm_pmu_enable_counter_single(vcpu, select_idx);
> +	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
> +		kvm_pmu_enable_counter_pair(vcpu, val, i);
>  }
>  
>  /**
>   * kvm_pmu_disable_counter - disable selected PMU counter
>   * @vcpu: The vcpu pointer
> - * @pmc: The counter to dissable
> + * @select_idx: The counter index
>   */
>  static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
>  					   u64 select_idx)
> @@ -221,8 +362,40 @@ static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>  
> -	if (pmc->perf_event)
> +	if (pmc->perf_event) {
>  		perf_event_disable(pmc->perf_event);
> +		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> +			kvm_debug("fail to enable perf event\n");
> +	}
> +}
> +
> +/**
> + * kvm_pmu_disable_counter_pair - disable counters pair at a time
> + * @val: counters to disable
> + * @pair_low: The low counter index
> + */
> +static void kvm_pmu_disable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
> +					 u64 pair_low)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
> +	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
> +
> +	if (!kvm_pmu_event_is_chained(vcpu, pair_low)) {
> +		if (pmc_low->perf_event == pmc_high->perf_event) {
> +			if (pmc_low->perf_event) {
> +				kvm_pmu_stop_release_perf_event_pair(vcpu,
> +								pair_low);
> +				kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
> +			}
> +		}
> +	}
> +
> +	if (val & BIT(pair_low))
> +		kvm_pmu_disable_counter_single(vcpu, pair_low);
> +
> +	if (val & BIT(pair_low + 1))
> +		kvm_pmu_disable_counter_single(vcpu, pair_low + 1);
>  }
>  
>  /**
> @@ -239,12 +412,8 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
>  	if (!val)
>  		return;
>  
> -	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> -		if (!(val & BIT(i)))
> -			continue;
> -
> -		kvm_pmu_disable_counter_single(vcpu, i);
> -	}
> +	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
> +		kvm_pmu_disable_counter_pair(vcpu, val, i);
>  }
>  
>  static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
> @@ -355,6 +524,17 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
>  
>  	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
>  
> +	if (kvm_pmu_event_is_chained(vcpu, idx + 1)) {

Doesn't kvm_pmu_event_is_chained() expect the low part of the counter pair?
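
I.e., if idx can be the high counter of the pair here, it may need to
be normalised to the low index first, something like (untested, just
to illustrate the idea):

        if (kvm_pmu_event_is_chained(vcpu, idx & ~1ULL)) {
                /* ... chained handling as above ... */
        }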

Cheers,

-- 
Julien Thierry

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 4/4] KVM: arm/arm64: support chained PMU counters
@ 2019-01-22 14:59     ` Julien Thierry
  0 siblings, 0 replies; 38+ messages in thread
From: Julien Thierry @ 2019-01-22 14:59 UTC (permalink / raw)
  To: Andrew Murray, Christoffer Dall, Marc Zyngier; +Cc: kvmarm, linux-arm-kernel

Hi Andrew

On 22/01/2019 10:49, Andrew Murray wrote:
> Emulate chained PMU counters by creating a single 64 bit event counter
> for a pair of chained KVM counters.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>  include/kvm/arm_pmu.h |   2 +
>  virt/kvm/arm/pmu.c    | 308 +++++++++++++++++++++++++++++++++++++++++---------
>  2 files changed, 258 insertions(+), 52 deletions(-)
> 
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index f87fe20..d4f3b28 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -29,6 +29,8 @@ struct kvm_pmc {
>  	u8 idx;	/* index into the pmu->pmc array */
>  	struct perf_event *perf_event;
>  	u64 bitmask;
> +	u64 sample_period;
> +	u64 left;
>  };
>  
>  struct kvm_pmu {
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 1921ca9..d111d5b 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -24,10 +24,26 @@
>  #include <kvm/arm_pmu.h>
>  #include <kvm/arm_vgic.h>
>  
> +#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
> +static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
> +					    u64 pair_low);
> +static void kvm_pmu_stop_release_perf_event_single(struct kvm_vcpu *vcpu,
> +					      u64 select_idx);
> +static void kvm_pmu_reenable_enabled_pair(struct kvm_vcpu *vcpu, u64 pair_low);
>  static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
>  static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
>  						      u64 select_idx);
> -static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> +
> +/**
> + * kvm_pmu_counter_is_high_word - is select_idx high counter of 64bit event
> + * @pmc: The PMU counter pointer
> + * @select_idx: The counter index
> + */
> +static inline bool kvm_pmu_counter_is_high_word(struct kvm_pmc *pmc)
> +{
> +	return ((pmc->perf_event->attr.config1 & 0x1)
> +		&& (pmc->idx % 2));
> +}
>  
>  /**
>   * kvm_pmu_get_counter_value - get PMU counter value
> @@ -36,7 +52,7 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
>   */
>  u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>  {
> -	u64 counter, reg, enabled, running;
> +	u64 counter, reg, enabled, running, incr;
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>  
> @@ -47,14 +63,53 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>  	/* The real counter value is equal to the value of counter register plus
>  	 * the value perf event counts.
>  	 */
> -	if (pmc->perf_event)
> -		counter += perf_event_read_value(pmc->perf_event, &enabled,
> +	if (pmc->perf_event) {
> +		incr = perf_event_read_value(pmc->perf_event, &enabled,
>  						 &running);
>  
> +		if (kvm_pmu_counter_is_high_word(pmc))
> +			incr = upper_32_bits(incr);
> +		counter += incr;
> +	}
> +
>  	return counter & pmc->bitmask;
>  }
>  
>  /**
> + * kvm_pmu_counter_is_enabled - is a counter active
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> +{
> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> +
> +	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
> +	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask & BIT(select_idx));
> +}
> +
> +/**
> + * kvnm_pmu_event_is_chained - is a pair of counters chained and enabled
> + * @vcpu: The vcpu pointer
> + * @select_idx: The low counter index
> + */
> +static bool kvm_pmu_event_is_chained(struct kvm_vcpu *vcpu, u64 pair_low)
> +{
> +	u64 eventsel;
> +
> +	eventsel = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + pair_low + 1) &
> +			ARMV8_PMU_EVTYPE_EVENT;
> +	if (eventsel != ARMV8_PMUV3_PERFCTR_CHAIN)
> +		return false;
> +
> +	if (kvm_pmu_counter_is_enabled(vcpu, pair_low) !=
> +	    kvm_pmu_counter_is_enabled(vcpu, pair_low + 1))
> +		return false;
> +
> +	return true;
> +}
> +
> +/**
>   * kvm_pmu_set_counter_value - set PMU counter value
>   * @vcpu: The vcpu pointer
>   * @select_idx: The counter index
> @@ -62,29 +117,45 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>   */
>  void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>  {
> -	u64 reg;
> -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> -	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +	u64 reg, pair_low;
>  
>  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
>  
> -	kvm_pmu_stop_counter(vcpu, pmc);
> -	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
> +	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;

I don't really know if it's better, but you could write it as:

	pair_low = select_idx & ~(1ULL);

But the compiler might already optimize it.

> +
> +	/* Recreate the perf event to reflect the updated sample_period */
> +	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
> +		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
> +		kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
> +	} else {
> +		kvm_pmu_stop_release_perf_event_single(vcpu, select_idx);
> +		kvm_pmu_reenable_enabled_single(vcpu, select_idx);
> +	}
>  }
>  
>  /**
>   * kvm_pmu_release_perf_event - remove the perf event
>   * @pmc: The PMU counter pointer
>   */
> -static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
> +static void kvm_pmu_release_perf_event(struct kvm_vcpu *vcpu,
> +				       struct kvm_pmc *pmc)
>  {
> -	if (pmc->perf_event) {
> -		perf_event_disable(pmc->perf_event);
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc_alt;
> +	u64 pair_alt;
> +
> +	pair_alt = (pmc->idx % 2) ? pmc->idx - 1 : pmc->idx + 1;
> +	pmc_alt = &pmu->pmc[pair_alt];
> +
> +	if (pmc->perf_event)
>  		perf_event_release_kernel(pmc->perf_event);
> -		pmc->perf_event = NULL;
> -	}
> +
> +	if (pmc->perf_event == pmc_alt->perf_event)
> +		pmc_alt->perf_event = NULL;

Shouldn't we release pmc_alt->perf_event before setting it to NULL?

> +
> +	pmc->perf_event = NULL;
>  }
>  
>  /**
> @@ -92,22 +163,60 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
>   * @vcpu: The vcpu pointer
>   * @pmc: The PMU counter pointer
>   *
> - * If this counter has been configured to monitor some event, release it here.
> + * If this counter has been configured to monitor some event, stop it here.
>   */
>  static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
>  {
>  	u64 counter, reg;
>  
>  	if (pmc->perf_event) {
> +		perf_event_disable(pmc->perf_event);
>  		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
>  		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
>  		       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
>  		__vcpu_sys_reg(vcpu, reg) = counter;
> -		kvm_pmu_release_perf_event(pmc);
>  	}
>  }
>  
>  /**
> + * kvm_pmu_stop_release_perf_event_pair - stop and release a pair of counters
> + * @vcpu: The vcpu pointer
> + * @pmc_low: The PMU counter pointer for lower word
> + * @pmc_high: The PMU counter pointer for higher word
> + *
> + * As chained counters share the underlying perf event, we stop them
> + * both first before discarding the underlying perf event
> + */
> +static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
> +					    u64 idx_low)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc_low = &pmu->pmc[idx_low];
> +	struct kvm_pmc *pmc_high = &pmu->pmc[idx_low + 1];
> +
> +	kvm_pmu_stop_counter(vcpu, pmc_low);
> +	kvm_pmu_stop_counter(vcpu, pmc_high);
> +
> +	kvm_pmu_release_perf_event(vcpu, pmc_low);
> +	kvm_pmu_release_perf_event(vcpu, pmc_high);

Hmmm, I think there is some confusion between what this function and
kvm_pmu_release_perf_event() should do: at this point
pmc_high->perf_event == NULL, so we can't release it.

> +}
> +
> +/**
> + * kvm_pmu_stop_release_perf_event_single - stop and release a counter
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static void kvm_pmu_stop_release_perf_event_single(struct kvm_vcpu *vcpu,
> +					      u64 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	kvm_pmu_stop_counter(vcpu, pmc);
> +	kvm_pmu_release_perf_event(vcpu, pmc);
> +}
> +
> +/**
>   * kvm_pmu_vcpu_reset - reset pmu state for cpu
>   * @vcpu: The vcpu pointer
>   *
> @@ -118,7 +227,7 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>  
>  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> -		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
> +		kvm_pmu_stop_release_perf_event_single(vcpu, i);
>  		pmu->pmc[i].idx = i;
>  		pmu->pmc[i].bitmask = 0xffffffffUL;
>  	}
> @@ -136,7 +245,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
>  
>  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
>  		struct kvm_pmc *pmc = &pmu->pmc[i];
> -		kvm_pmu_release_perf_event(pmc);
> +		kvm_pmu_release_perf_event(vcpu, pmc);
>  	}
>  }
>  
> @@ -171,49 +280,81 @@ static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
>  }
>  
>  /**
> - * kvm_pmu_enable_counter - enable selected PMU counter
> + * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
>   * @vcpu: The vcpu pointer
> - * @val: the value guest writes to PMCNTENSET register
> - *
> - * Call perf_event_enable to start counting the perf event
> + * @select_idx: The counter index
>   */
> -void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
> +					    u64 select_idx)
>  {
> -	int i;
> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
>  
> -	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
> +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
>  		return;
>  
> -	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> -		if (!(val & BIT(i)))
> -			continue;
> +	if (set & BIT(select_idx))
> +		kvm_pmu_enable_counter_single(vcpu, select_idx);
> +}
>  
> -		kvm_pmu_enable_counter_single(vcpu, i);
> +/**
> + * kvm_pmu_reenable_enabled_pair - reenable a pair if they should be enabled
> + * @vcpu: The vcpu pointer
> + * @pair_low: The low counter index
> + */
> +static void kvm_pmu_reenable_enabled_pair(struct kvm_vcpu *vcpu, u64 pair_low)
> +{
> +	kvm_pmu_reenable_enabled_single(vcpu, pair_low);
> +	kvm_pmu_reenable_enabled_single(vcpu, pair_low+1);
> +}
> +
> +/**
> + * kvm_pmu_enable_counter_pair - enable counters pair at a time
> + * @vcpu: The vcpu pointer
> + * @val: counters to enable
> + * @pair_low: The low counter index
> + */
> +static void kvm_pmu_enable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
> +					u64 pair_low)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
> +	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
> +
> +	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
> +		if (pmc_low->perf_event != pmc_high->perf_event)
> +			kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
>  	}
> +
> +	if (val & BIT(pair_low))
> +		kvm_pmu_enable_counter_single(vcpu, pair_low);
> +
> +	if (val & BIT(pair_low+1))
> +		kvm_pmu_enable_counter_single(vcpu, pair_low + 1);
>  }
>  
>  /**
> - * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
> + * kvm_pmu_enable_counter - enable selected PMU counter
>   * @vcpu: The vcpu pointer
> - * @select_idx: The counter index
> + * @val: the value guest writes to PMCNTENSET register
> + *
> + * Call perf_event_enable to start counting the perf event
>   */
> -static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
> -					    u64 select_idx)
> +void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
>  {
> -	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> -	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> +	int i;
>  
> -	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
>  		return;
>  
> -	if (set & BIT(select_idx))
> -		kvm_pmu_enable_counter_single(vcpu, select_idx);
> +	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
> +		kvm_pmu_enable_counter_pair(vcpu, val, i);
>  }
>  
>  /**
>   * kvm_pmu_disable_counter - disable selected PMU counter
>   * @vcpu: The vcpu pointer
> - * @pmc: The counter to dissable
> + * @select_idx: The counter index
>   */
>  static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
>  					   u64 select_idx)
> @@ -221,8 +362,40 @@ static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>  
> -	if (pmc->perf_event)
> +	if (pmc->perf_event) {
>  		perf_event_disable(pmc->perf_event);
> +		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> +			kvm_debug("fail to enable perf event\n");
> +	}
> +}
> +
> +/**
> + * kvm_pmu_disable_counter_pair - disable counters pair at a time
> + * @val: counters to disable
> + * @pair_low: The low counter index
> + */
> +static void kvm_pmu_disable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
> +					 u64 pair_low)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
> +	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
> +
> +	if (!kvm_pmu_event_is_chained(vcpu, pair_low)) {
> +		if (pmc_low->perf_event == pmc_high->perf_event) {
> +			if (pmc_low->perf_event) {
> +				kvm_pmu_stop_release_perf_event_pair(vcpu,
> +								pair_low);
> +				kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
> +			}
> +		}
> +	}
> +
> +	if (val & BIT(pair_low))
> +		kvm_pmu_disable_counter_single(vcpu, pair_low);
> +
> +	if (val & BIT(pair_low + 1))
> +		kvm_pmu_disable_counter_single(vcpu, pair_low + 1);
>  }
>  
>  /**
> @@ -239,12 +412,8 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
>  	if (!val)
>  		return;
>  
> -	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> -		if (!(val & BIT(i)))
> -			continue;
> -
> -		kvm_pmu_disable_counter_single(vcpu, i);
> -	}
> +	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
> +		kvm_pmu_disable_counter_pair(vcpu, val, i);
>  }
>  
>  static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
> @@ -355,6 +524,17 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
>  
>  	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
>  
> +	if (kvm_pmu_event_is_chained(vcpu, idx + 1)) {

Doesn't kvm_pmu_event_is_chained() expect the low part of the counter pair?
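
I.e., if idx can be the high counter of the pair here, it may need to
be normalised to the low index first, something like (untested, just
to illustrate the idea):

        if (kvm_pmu_event_is_chained(vcpu, idx & ~1ULL)) {
                /* ... chained handling as above ... */
        }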

Cheers,

-- 
Julien Thierry

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 3/4] KVM: arm/arm64: lazily create perf events on enable
  2019-01-22 10:49   ` Andrew Murray
@ 2019-01-22 22:12     ` Suzuki K Poulose
  -1 siblings, 0 replies; 38+ messages in thread
From: Suzuki K Poulose @ 2019-01-22 22:12 UTC (permalink / raw)
  To: andrew.murray, christoffer.dall, marc.zyngier; +Cc: kvmarm, linux-arm-kernel

Hi Andrew,

On 01/22/2019 10:49 AM, Andrew Murray wrote:
> To prevent re-creating perf events everytime the counter registers
> are changed, let's instead lazily create the event when the event
> is first enabled and destroy it when it changes.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>


> ---
>   virt/kvm/arm/pmu.c | 114 ++++++++++++++++++++++++++++++++++++-----------------
>   1 file changed, 78 insertions(+), 36 deletions(-)
> 
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 4464899..1921ca9 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -24,8 +24,11 @@
>   #include <kvm/arm_pmu.h>
>   #include <kvm/arm_vgic.h>
>   
> -static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> -				      u64 select_idx);
> +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);

I find the approach good. However, the function names are a bit odd
and they make the code a bit difficult to read.

I think we could :

1) Rename the existing
  kvm_pmu_{enable/disable}_counter => kvm_pmu_{enable/disable}_[mask or 
counters ]
as they operate on a set of counters (as a mask) instead of a single
counter.
And then you may be able to drop "_single" from
kvm_pmu_{enable/disable}_counter_single() functions below, which makes
better sense for what they do.
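
E.g. something along these lines (only a sketch, the names are just a
suggestion):

        /* operate on a mask of counters, as written by the guest */
        void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val);
        void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val);

        /* operate on a single counter index */
        static void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 select_idx);
        static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx);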

> +static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
> +						      u64 select_idx);

Could we simply keep kvm_pmu_counter_create_event() and add a comment
above the function explaining that the events are enabled as they are
created lazily?
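
E.g. something like (illustration only):

        /*
         * kvm_pmu_create_perf_event - create (and enable) a perf event
         *
         * Perf events are created lazily when the counter is first
         * enabled, rather than every time a PMU register is written.
         */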

> +static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> +
>   /**
>    * kvm_pmu_get_counter_value - get PMU counter value
>    * @vcpu: The vcpu pointer
> @@ -59,18 +62,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>    */
>   void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>   {
> -	u64 reg, data;
> +	u64 reg;
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>   
>   	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>   	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>   	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
>   
> -	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> -	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> -	data = __vcpu_sys_reg(vcpu, reg + select_idx);
> -
> -	/* Recreate the perf event to reflect the updated sample_period */
> -	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> +	kvm_pmu_stop_counter(vcpu, pmc);
> +	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
>   }
>   
>   /**
> @@ -88,6 +89,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
>   
>   /**
>    * kvm_pmu_stop_counter - stop PMU counter
> + * @vcpu: The vcpu pointer
>    * @pmc: The PMU counter pointer
>    *
>    * If this counter has been configured to monitor some event, release it here.
> @@ -150,6 +152,25 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>   }
>   
>   /**
> + * kvm_pmu_enable_counter_single - create/enable a unpaired counter
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	if (!pmc->perf_event) {
> +		kvm_pmu_counter_create_enabled_perf_event(vcpu, select_idx);
> +	} else if (pmc->perf_event) {
> +		perf_event_enable(pmc->perf_event);
> +		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> +			kvm_debug("fail to enable perf event\n");

nit: failed

> +	}
> +}
> +
> +/**
>    * kvm_pmu_enable_counter - enable selected PMU counter

nit: This is a bit misleading. We could be enabling a set of counters.
Please could we update the comment.

>    * @vcpu: The vcpu pointer
>    * @val: the value guest writes to PMCNTENSET register
> @@ -159,8 +180,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>   void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
>   {
>   	int i;
> -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> -	struct kvm_pmc *pmc;
>   
>   	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
>   		return;
> @@ -169,16 +188,44 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
>   		if (!(val & BIT(i)))
>   			continue;
>   
> -		pmc = &pmu->pmc[i];
> -		if (pmc->perf_event) {
> -			perf_event_enable(pmc->perf_event);
> -			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> -				kvm_debug("fail to enable perf event\n");
> -		}
> +		kvm_pmu_enable_counter_single(vcpu, i);
>   	}
>   }
>   
>   /**
> + * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
> +					    u64 select_idx)
> +{
> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> +
> +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> +		return;
> +
> +	if (set & BIT(select_idx))
> +		kvm_pmu_enable_counter_single(vcpu, select_idx);

Could we not reuse kvm_pmu_enable_counter() here :
	i.e,
static inline void kvm_pmu_reenable_counter(struct kvm_vcpu *vcpu, u64
						select_idx)
{
	kvm_pmu_enable_counter(vcpu, BIT(select_idx));
}

> +}
> +
> +/**
> + * kvm_pmu_disable_counter - disable selected PMU counter

Stale comment

> + * @vcpu: The vcpu pointer
> + * @pmc: The counter to dissable

nit: s/dissable/disable/

> + */
> +static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
> +					   u64 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	if (pmc->perf_event)
> +		perf_event_disable(pmc->perf_event);
> +}
> +
> +/**
>    * kvm_pmu_disable_counter - disable selected PMU counter

While you are at this, please could you make the comment a bit
clearer, i.e., we disable a set of PMU counters, not a single one.

Suzuki

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 3/4] KVM: arm/arm64: lazily create perf events on enable
@ 2019-01-22 22:12     ` Suzuki K Poulose
  0 siblings, 0 replies; 38+ messages in thread
From: Suzuki K Poulose @ 2019-01-22 22:12 UTC (permalink / raw)
  To: andrew.murray, christoffer.dall, marc.zyngier; +Cc: kvmarm, linux-arm-kernel

Hi Andrew,

On 01/22/2019 10:49 AM, Andrew Murray wrote:
> To prevent re-creating perf events everytime the counter registers
> are changed, let's instead lazily create the event when the event
> is first enabled and destroy it when it changes.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>


> ---
>   virt/kvm/arm/pmu.c | 114 ++++++++++++++++++++++++++++++++++++-----------------
>   1 file changed, 78 insertions(+), 36 deletions(-)
> 
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 4464899..1921ca9 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -24,8 +24,11 @@
>   #include <kvm/arm_pmu.h>
>   #include <kvm/arm_vgic.h>
>   
> -static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> -				      u64 select_idx);
> +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);

I find the approach good. However, the function names are a bit odd
and they make the code a bit difficult to read.

I think we could :

1) Rename the existing
  kvm_pmu_{enable/disable}_counter => kvm_pmu_{enable/disable}_[mask or 
counters ]
as they operate on a set of counters (as a mask) instead of a single
counter.
And then you may be able to drop "_single" from
kvm_pmu_{enable/disable}_counter_single() functions below, which makes
better sense for what they do.
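
E.g. something along these lines (only a sketch, the names are just a
suggestion):

        /* operate on a mask of counters, as written by the guest */
        void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val);
        void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val);

        /* operate on a single counter index */
        static void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 select_idx);
        static void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 select_idx);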

> +static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
> +						      u64 select_idx);

Could we simply keep kvm_pmu_counter_create_event() and add a comment
above the function explaining that the events are enabled as they are
created lazily?
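
E.g. something like (illustration only):

        /*
         * kvm_pmu_create_perf_event - create (and enable) a perf event
         *
         * Perf events are created lazily when the counter is first
         * enabled, rather than every time a PMU register is written.
         */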

> +static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> +
>   /**
>    * kvm_pmu_get_counter_value - get PMU counter value
>    * @vcpu: The vcpu pointer
> @@ -59,18 +62,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>    */
>   void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>   {
> -	u64 reg, data;
> +	u64 reg;
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>   
>   	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>   	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>   	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
>   
> -	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> -	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> -	data = __vcpu_sys_reg(vcpu, reg + select_idx);
> -
> -	/* Recreate the perf event to reflect the updated sample_period */
> -	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> +	kvm_pmu_stop_counter(vcpu, pmc);
> +	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
>   }
>   
>   /**
> @@ -88,6 +89,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
>   
>   /**
>    * kvm_pmu_stop_counter - stop PMU counter
> + * @vcpu: The vcpu pointer
>    * @pmc: The PMU counter pointer
>    *
>    * If this counter has been configured to monitor some event, release it here.
> @@ -150,6 +152,25 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>   }
>   
>   /**
> + * kvm_pmu_enable_counter_single - create/enable a unpaired counter
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	if (!pmc->perf_event) {
> +		kvm_pmu_counter_create_enabled_perf_event(vcpu, select_idx);
> +	} else if (pmc->perf_event) {
> +		perf_event_enable(pmc->perf_event);
> +		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> +			kvm_debug("fail to enable perf event\n");

nit: failed

> +	}
> +}
> +
> +/**
>    * kvm_pmu_enable_counter - enable selected PMU counter

nit: This is a bit misleading. We could be enabling a set of counters.
Please could we update the comment.

>    * @vcpu: The vcpu pointer
>    * @val: the value guest writes to PMCNTENSET register
> @@ -159,8 +180,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>   void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
>   {
>   	int i;
> -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> -	struct kvm_pmc *pmc;
>   
>   	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
>   		return;
> @@ -169,16 +188,44 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
>   		if (!(val & BIT(i)))
>   			continue;
>   
> -		pmc = &pmu->pmc[i];
> -		if (pmc->perf_event) {
> -			perf_event_enable(pmc->perf_event);
> -			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> -				kvm_debug("fail to enable perf event\n");
> -		}
> +		kvm_pmu_enable_counter_single(vcpu, i);
>   	}
>   }
>   
>   /**
> + * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
> +					    u64 select_idx)
> +{
> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> +
> +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> +		return;
> +
> +	if (set & BIT(select_idx))
> +		kvm_pmu_enable_counter_single(vcpu, select_idx);

Could we not reuse kvm_pmu_enable_counter() here :
	i.e,
static inline void kvm_pmu_reenable_counter(struct kvm_vcpu *vcpu, u64
						select_idx)
{
	kvm_pmu_enable_counter(vcpu, BIT(select_idx));
}

> +}
> +
> +/**
> + * kvm_pmu_disable_counter - disable selected PMU counter

Stale comment

> + * @vcpu: The vcpu pointer
> + * @pmc: The counter to dissable

nit: s/dissable/disable/

> + */
> +static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
> +					   u64 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	if (pmc->perf_event)
> +		perf_event_disable(pmc->perf_event);
> +}
> +
> +/**
>    * kvm_pmu_disable_counter - disable selected PMU counter

While you are at this, please could you make the comment a bit
clearer, i.e., we disable a set of PMU counters, not a single one.

Suzuki

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 2/4] KVM: arm/arm64: re-create event when setting counter value
  2019-01-22 14:18     ` Suzuki K Poulose
@ 2019-01-28 11:47       ` Andrew Murray
  -1 siblings, 0 replies; 38+ messages in thread
From: Andrew Murray @ 2019-01-28 11:47 UTC (permalink / raw)
  To: Suzuki K Poulose; +Cc: marc.zyngier, linux-arm-kernel, kvmarm

On Tue, Jan 22, 2019 at 02:18:17PM +0000, Suzuki K Poulose wrote:
> Hi Andrew
> 
> On 01/22/2019 10:49 AM, Andrew Murray wrote:
> > The perf event sample_period is currently set based upon the current
> > counter value, when PMXEVTYPER is written to and the perf event is created.
> > However the user may choose to write the type before the counter value in
> > which case sample_period will be set incorrectly. Let's instead decouple
> > event creation from PMXEVTYPER and (re)create the event in either
> > suitation.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> 
> The approach looks fine to me. However this patch seems to introduce a
> memory leak, see below, which you may be addressing in a later patch in the
> series. But this will affect bisecting issues.

See below, I don't think this is true.

> 
> > ---
> >   virt/kvm/arm/pmu.c | 39 ++++++++++++++++++++++++++++++---------
> >   1 file changed, 30 insertions(+), 9 deletions(-)
> > 
> > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > index 531d27f..4464899 100644
> > --- a/virt/kvm/arm/pmu.c
> > +++ b/virt/kvm/arm/pmu.c
> > @@ -24,6 +24,8 @@
> >   #include <kvm/arm_pmu.h>
> >   #include <kvm/arm_vgic.h>
> > +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> > +				      u64 select_idx);
> 
> Could we just pass the counter index (i.e, select_idx) after updating
> the event_type/counter value in the respective functions.

Unless I misunderstand, we need the value of 'data' as it is used to
populate the function-local perf_event_attr structure.

However, it is possible to read 'data' from __vcpu_sys_reg inside
kvm_pmu_create_perf_event instead of at the call site. In that case
kvm_pmu_set_counter_event_type would have to set the value of
__vcpu_sys_reg from its data argument (as __vcpu_sys_reg normally gets
set after kvm_pmu_set_counter_event_type returns). This is OK as we
do this in the next patch in this series anyway - so perhaps I can
bring that forward to this patch?
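
For illustration, that would look roughly like this (untested, using
the register names from the existing code):

        static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu,
                                              u64 select_idx)
        {
                u64 reg, data;

                reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
                      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
                data = __vcpu_sys_reg(vcpu, reg);

                /* ... build the perf_event_attr from data as before ... */
        }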

> 
> nit: If we decide not to do that, please rename "data" to something more
> obvious, event_type.
> 
> >   /**
> >    * kvm_pmu_get_counter_value - get PMU counter value
> >    * @vcpu: The vcpu pointer
> > @@ -57,11 +59,18 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >    */
> >   void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> >   {
> > -	u64 reg;
> > +	u64 reg, data;
> 
> nit: Same here, data is too generic.
> 
> >   	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> >   	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> >   	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> > +
> > +	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> > +	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> > +	data = __vcpu_sys_reg(vcpu, reg + select_idx);
> > +
> > +	/* Recreate the perf event to reflect the updated sample_period */
> > +	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> >   }
> >   /**
> > @@ -380,17 +389,13 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> >   }
> >   /**
> > - * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> > + * kvm_pmu_create_perf_event - create a perf event for a counter
> >    * @vcpu: The vcpu pointer
> > - * @data: The data guest writes to PMXEVTYPER_EL0
> > + * @data: Type of event as per PMXEVTYPER_EL0 format
> >    * @select_idx: The number of selected counter
> > - *
> > - * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> > - * event with given hardware event number. Here we call perf_event API to
> > - * emulate this action and create a kernel perf event for it.
> >    */
> > -void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> > -				    u64 select_idx)
> > +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> > +				      u64 select_idx)
> >   {
> >   	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >   	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > @@ -433,6 +438,22 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> >   	pmc->perf_event = event;
> 
> We should release the existing perf_event to prevent a memory leak and
> also a corruption in the data via the overflow handler for the existing
> event. Am I missing something here ?

In kvm_pmu_create_perf_event (formerly kvm_pmu_set_counter_event_type) we call
kvm_pmu_stop_counter - this releases the event.

So there is no memory leak here.
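
I.e. the call path at this point in the series is roughly:

        kvm_pmu_create_perf_event()
          kvm_pmu_stop_counter()
            kvm_pmu_release_perf_event()  /* disables and frees the old event */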

Thanks,

Andrew Murray

> 
> >   }
> > +/**
> > + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> > + * @vcpu: The vcpu pointer
> > + * @data: The data guest writes to PMXEVTYPER_EL0
> > + * @select_idx: The number of selected counter
> > + *
> > + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> > + * event with given hardware event number. Here we call perf_event API to
> > + * emulate this action and create a kernel perf event for it.
> > + */
> > +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> > +				    u64 select_idx)
> > +{
> > +	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> > +}
> > +
> >   bool kvm_arm_support_pmu_v3(void)
> >   {
> >   	/*
> > 
> 
> 
> Cheers
> Suzuki

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 2/4] KVM: arm/arm64: re-create event when setting counter value
@ 2019-01-28 11:47       ` Andrew Murray
  0 siblings, 0 replies; 38+ messages in thread
From: Andrew Murray @ 2019-01-28 11:47 UTC (permalink / raw)
  To: Suzuki K Poulose; +Cc: marc.zyngier, christoffer.dall, linux-arm-kernel, kvmarm

On Tue, Jan 22, 2019 at 02:18:17PM +0000, Suzuki K Poulose wrote:
> Hi Andrew
> 
> On 01/22/2019 10:49 AM, Andrew Murray wrote:
> > The perf event sample_period is currently set based upon the current
> > counter value, when PMXEVTYPER is written to and the perf event is created.
> > However the user may choose to write the type before the counter value in
> > which case sample_period will be set incorrectly. Let's instead decouple
> > event creation from PMXEVTYPER and (re)create the event in either
> > suitation.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> 
> The approach looks fine to me. However this patch seems to introduce a
> memory leak, see below, which you may be addressing in a later patch in the
> series. But this will affect bisecting issues.

See below, I don't think this is true.

> 
> > ---
> >   virt/kvm/arm/pmu.c | 39 ++++++++++++++++++++++++++++++---------
> >   1 file changed, 30 insertions(+), 9 deletions(-)
> > 
> > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > index 531d27f..4464899 100644
> > --- a/virt/kvm/arm/pmu.c
> > +++ b/virt/kvm/arm/pmu.c
> > @@ -24,6 +24,8 @@
> >   #include <kvm/arm_pmu.h>
> >   #include <kvm/arm_vgic.h>
> > +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> > +				      u64 select_idx);
> 
> Could we just pass the counter index (i.e, select_idx) after updating
> the event_type/counter value in the respective functions.

Unless I misunderstand, we need the value of 'data' as it is used to
populate the function-local perf_event_attr structure.

However, it is possible to read 'data' from __vcpu_sys_reg inside
kvm_pmu_create_perf_event instead of at the call site. In that case
kvm_pmu_set_counter_event_type would have to set the value of
__vcpu_sys_reg from its data argument (as __vcpu_sys_reg normally gets
set after kvm_pmu_set_counter_event_type returns). This is OK as we
do this in the next patch in this series anyway - so perhaps I can
bring that forward to this patch?
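
For illustration, that would look roughly like this (untested, using
the register names from the existing code):

        static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu,
                                              u64 select_idx)
        {
                u64 reg, data;

                reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
                      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
                data = __vcpu_sys_reg(vcpu, reg);

                /* ... build the perf_event_attr from data as before ... */
        }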

> 
> nit: If we decide not to do that, please rename "data" to something more
> obvious, event_type.
> 
> >   /**
> >    * kvm_pmu_get_counter_value - get PMU counter value
> >    * @vcpu: The vcpu pointer
> > @@ -57,11 +59,18 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >    */
> >   void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> >   {
> > -	u64 reg;
> > +	u64 reg, data;
> 
> nit: Same here, data is too generic.
> 
> >   	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> >   	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> >   	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> > +
> > +	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> > +	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> > +	data = __vcpu_sys_reg(vcpu, reg + select_idx);
> > +
> > +	/* Recreate the perf event to reflect the updated sample_period */
> > +	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> >   }
> >   /**
> > @@ -380,17 +389,13 @@ static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> >   }
> >   /**
> > - * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> > + * kvm_pmu_create_perf_event - create a perf event for a counter
> >    * @vcpu: The vcpu pointer
> > - * @data: The data guest writes to PMXEVTYPER_EL0
> > + * @data: Type of event as per PMXEVTYPER_EL0 format
> >    * @select_idx: The number of selected counter
> > - *
> > - * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> > - * event with given hardware event number. Here we call perf_event API to
> > - * emulate this action and create a kernel perf event for it.
> >    */
> > -void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> > -				    u64 select_idx)
> > +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> > +				      u64 select_idx)
> >   {
> >   	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >   	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > @@ -433,6 +438,22 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> >   	pmc->perf_event = event;
> 
> We should release the existing perf_event to prevent a memory leak and
> also a corruption in the data via the overflow handler for the existing
> event. Am I missing something here ?

In kvm_pmu_create_perf_event (formerly kvm_pmu_set_counter_event_type) we call
kvm_pmu_stop_counter - this releases the event.

So there is no memory leak here.
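
I.e. the call path at this point in the series is roughly:

        kvm_pmu_create_perf_event()
          kvm_pmu_stop_counter()
            kvm_pmu_release_perf_event()  /* disables and frees the old event */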

Thanks,

Andrew Murray

> 
> >   }
> > +/**
> > + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> > + * @vcpu: The vcpu pointer
> > + * @data: The data guest writes to PMXEVTYPER_EL0
> > + * @select_idx: The number of selected counter
> > + *
> > + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> > + * event with given hardware event number. Here we call perf_event API to
> > + * emulate this action and create a kernel perf event for it.
> > + */
> > +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> > +				    u64 select_idx)
> > +{
> > +	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> > +}
> > +
> >   bool kvm_arm_support_pmu_v3(void)
> >   {
> >   	/*
> > 
> 
> 
> Cheers
> Suzuki

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 3/4] KVM: arm/arm64: lazily create perf events on enable
  2019-01-22 22:12     ` Suzuki K Poulose
@ 2019-01-28 14:28       ` Andrew Murray
  -1 siblings, 0 replies; 38+ messages in thread
From: Andrew Murray @ 2019-01-28 14:28 UTC (permalink / raw)
  To: Suzuki K Poulose; +Cc: marc.zyngier, linux-arm-kernel, kvmarm

On Tue, Jan 22, 2019 at 10:12:22PM +0000, Suzuki K Poulose wrote:
> Hi Andrew,
> 
> On 01/22/2019 10:49 AM, Andrew Murray wrote:
> > To prevent re-creating perf events everytime the counter registers
> > are changed, let's instead lazily create the event when the event
> > is first enabled and destroy it when it changes.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> 
> 
> > ---
> >   virt/kvm/arm/pmu.c | 114 ++++++++++++++++++++++++++++++++++++-----------------
> >   1 file changed, 78 insertions(+), 36 deletions(-)
> > 
> > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > index 4464899..1921ca9 100644
> > --- a/virt/kvm/arm/pmu.c
> > +++ b/virt/kvm/arm/pmu.c
> > @@ -24,8 +24,11 @@
> >   #include <kvm/arm_pmu.h>
> >   #include <kvm/arm_vgic.h>
> > -static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> > -				      u64 select_idx);
> > +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
> 
> I find the approach good. However the function names are a bit odd and
> it makes the code read a bit difficult.

Thanks - the odd naming probably came about as I started with a patch that
added chained PMU support and worked backward to split it into smaller patches
that each made individual sense. The _single suffix was the counterpart of
_pair.

> 
> I think we could :
> 
> 1) Rename the existing
>  kvm_pmu_{enable/disable}_counter => kvm_pmu_{enable/disable}_[mask or
> counters ]
> as they operate on a set of counters (as a mask) instead of a single
> counter.
> And then you may be able to drop "_single" from
> kvm_pmu_{enable/disable}_counter"_single() functions below, which makes
> better sense for what they do.

Thanks for this suggestion. I like this.

> 
> > +static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
> > +						      u64 select_idx);
> 
> Could we simply keep kvm_pmu_counter_create_event() and add a comment above
> the function explaining that the events are enabled as they are
> created lazily ?

OK.

> 
> > +static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> > +
> >   /**
> >    * kvm_pmu_get_counter_value - get PMU counter value
> >    * @vcpu: The vcpu pointer
> > @@ -59,18 +62,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >    */
> >   void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> >   {
> > -	u64 reg, data;
> > +	u64 reg;
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> >   	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> >   	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> >   	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> > -	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> > -	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> > -	data = __vcpu_sys_reg(vcpu, reg + select_idx);
> > -
> > -	/* Recreate the perf event to reflect the updated sample_period */
> > -	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> > +	kvm_pmu_stop_counter(vcpu, pmc);
> > +	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
> >   }
> >   /**
> > @@ -88,6 +89,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
> >   /**
> >    * kvm_pmu_stop_counter - stop PMU counter
> > + * @vcpu: The vcpu pointer
> >    * @pmc: The PMU counter pointer
> >    *
> >    * If this counter has been configured to monitor some event, release it here.
> > @@ -150,6 +152,25 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> >   }
> >   /**
> > + * kvm_pmu_enable_counter_single - create/enable a unpaired counter
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +
> > +	if (!pmc->perf_event) {
> > +		kvm_pmu_counter_create_enabled_perf_event(vcpu, select_idx);
> > +	} else if (pmc->perf_event) {
> > +		perf_event_enable(pmc->perf_event);
> > +		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> > +			kvm_debug("fail to enable perf event\n");
> 
> nit: failed
> 
> > +	}
> > +}
> > +
> > +/**
> >    * kvm_pmu_enable_counter - enable selected PMU counter
> 
> nit: This is a bit misleading. We could be enabling a set of counters.
> Please could we update the comment.

No problem.

> 
> >    * @vcpu: The vcpu pointer
> >    * @val: the value guest writes to PMCNTENSET register
> > @@ -159,8 +180,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> >   void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> >   {
> >   	int i;
> > -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > -	struct kvm_pmc *pmc;
> >   	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
> >   		return;
> > @@ -169,16 +188,44 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> >   		if (!(val & BIT(i)))
> >   			continue;
> > -		pmc = &pmu->pmc[i];
> > -		if (pmc->perf_event) {
> > -			perf_event_enable(pmc->perf_event);
> > -			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> > -				kvm_debug("fail to enable perf event\n");
> > -		}
> > +		kvm_pmu_enable_counter_single(vcpu, i);
> >   	}
> >   }
> >   /**
> > + * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
> > +					    u64 select_idx)
> > +{
> > +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> > +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> > +
> > +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> > +		return;
> > +
> > +	if (set & BIT(select_idx))
> > +		kvm_pmu_enable_counter_single(vcpu, select_idx);
> 
> Could we not reuse kvm_pmu_enable_counter() here :
> 	i.e,
> static inline void kvm_pmu_reenable_counter(struct kvm_vcpu *vcpu, u64
> 						select_idx)
> {
> 	kvm_pmu_enable_counter(vcpu, BIT(select_idx));
> }
> 

Not quite - when we call kvm_pmu_reenable_enabled_single the individual
counter may or may not be enabled. We only want to recreate the perf event
if it was previously enabled.

But we can do better, e.g.

static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
                                            u64 select_idx)
{
        u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);

        if (set & BIT(select_idx))
                kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
}

(The kvm_pmu_valid_counter_mask also wasn't needed here, but is needed
later when we may attempt to enable a counter that we don't have).

> > +}
> > +
> > +/**
> > + * kvm_pmu_disable_counter - disable selected PMU counter
> 
> Stale comment
> 
> > + * @vcpu: The vcpu pointer
> > + * @pmc: The counter to dissable
> 
> nit: s/dissable/disable/
> 
> > + */
> > +static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
> > +					   u64 select_idx)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +
> > +	if (pmc->perf_event)
> > +		perf_event_disable(pmc->perf_event);
> > +}
> > +
> > +/**
> >    * kvm_pmu_disable_counter - disable selected PMU counter
> 
> While you are at this, please could you make the comment a bit more
> clear. i.e, we disable a set of PMU counters not a single one.

Yes sure.

Thanks for the review.

Andrew Murray

> 
> Suzuki

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 3/4] KVM: arm/arm64: lazily create perf events on enable
@ 2019-01-28 14:28       ` Andrew Murray
  0 siblings, 0 replies; 38+ messages in thread
From: Andrew Murray @ 2019-01-28 14:28 UTC (permalink / raw)
  To: Suzuki K Poulose; +Cc: marc.zyngier, christoffer.dall, linux-arm-kernel, kvmarm

On Tue, Jan 22, 2019 at 10:12:22PM +0000, Suzuki K Poulose wrote:
> Hi Andrew,
> 
> On 01/22/2019 10:49 AM, Andrew Murray wrote:
> > To prevent re-creating perf events everytime the counter registers
> > are changed, let's instead lazily create the event when the event
> > is first enabled and destroy it when it changes.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> 
> 
> > ---
> >   virt/kvm/arm/pmu.c | 114 ++++++++++++++++++++++++++++++++++++-----------------
> >   1 file changed, 78 insertions(+), 36 deletions(-)
> > 
> > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > index 4464899..1921ca9 100644
> > --- a/virt/kvm/arm/pmu.c
> > +++ b/virt/kvm/arm/pmu.c
> > @@ -24,8 +24,11 @@
> >   #include <kvm/arm_pmu.h>
> >   #include <kvm/arm_vgic.h>
> > -static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> > -				      u64 select_idx);
> > +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
> 
> I find the approach good. However the function names are a bit odd and
> it makes the code read a bit difficult.

Thanks - the odd naming probably came about as I started with a patch that
added chained PMU support and worked backward to split it into smaller patches
that each made individual sense. The _single suffix was the counterpart of
_pair.

> 
> I think we could :
> 
> 1) Rename the existing
>  kvm_pmu_{enable/disable}_counter => kvm_pmu_{enable/disable}_[mask or
> counters ]
> as they operate on a set of counters (as a mask) instead of a single
> counter.
> And then you may be able to drop "_single" from
> kvm_pmu_{enable/disable}_counter"_single() functions below, which makes
> better sense for what they do.

Thanks for this suggestion. I like this.

> 
> > +static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
> > +						      u64 select_idx);
> 
> Could we simply keep kvm_pmu_counter_create_event() and add a comment above
> the function explaining that the events are enabled as they are
> created lazily ?

OK.

> 
> > +static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> > +
> >   /**
> >    * kvm_pmu_get_counter_value - get PMU counter value
> >    * @vcpu: The vcpu pointer
> > @@ -59,18 +62,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >    */
> >   void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> >   {
> > -	u64 reg, data;
> > +	u64 reg;
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> >   	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> >   	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> >   	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> > -	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> > -	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> > -	data = __vcpu_sys_reg(vcpu, reg + select_idx);
> > -
> > -	/* Recreate the perf event to reflect the updated sample_period */
> > -	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> > +	kvm_pmu_stop_counter(vcpu, pmc);
> > +	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
> >   }
> >   /**
> > @@ -88,6 +89,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
> >   /**
> >    * kvm_pmu_stop_counter - stop PMU counter
> > + * @vcpu: The vcpu pointer
> >    * @pmc: The PMU counter pointer
> >    *
> >    * If this counter has been configured to monitor some event, release it here.
> > @@ -150,6 +152,25 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> >   }
> >   /**
> > + * kvm_pmu_enable_counter_single - create/enable a unpaired counter
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +
> > +	if (!pmc->perf_event) {
> > +		kvm_pmu_counter_create_enabled_perf_event(vcpu, select_idx);
> > +	} else if (pmc->perf_event) {
> > +		perf_event_enable(pmc->perf_event);
> > +		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> > +			kvm_debug("fail to enable perf event\n");
> 
> nit: failed
> 
> > +	}
> > +}
> > +
> > +/**
> >    * kvm_pmu_enable_counter - enable selected PMU counter
> 
> nit: This is a bit misleading. We could be enabling a set of counters.
> Please could we update the comment.

No problem.

> 
> >    * @vcpu: The vcpu pointer
> >    * @val: the value guest writes to PMCNTENSET register
> > @@ -159,8 +180,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> >   void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> >   {
> >   	int i;
> > -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > -	struct kvm_pmc *pmc;
> >   	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
> >   		return;
> > @@ -169,16 +188,44 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> >   		if (!(val & BIT(i)))
> >   			continue;
> > -		pmc = &pmu->pmc[i];
> > -		if (pmc->perf_event) {
> > -			perf_event_enable(pmc->perf_event);
> > -			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> > -				kvm_debug("fail to enable perf event\n");
> > -		}
> > +		kvm_pmu_enable_counter_single(vcpu, i);
> >   	}
> >   }
> >   /**
> > + * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
> > +					    u64 select_idx)
> > +{
> > +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> > +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> > +
> > +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> > +		return;
> > +
> > +	if (set & BIT(select_idx))
> > +		kvm_pmu_enable_counter_single(vcpu, select_idx);
> 
> Could we not reuse kvm_pmu_enable_counter() here :
> 	i.e,
> static inline void kvm_pmu_reenable_counter(struct kvm_vcpu *vcpu, u64
> 						select_idx)
> {
> 	kvm_pmu_enable_counter(vcpu, BIT(select_idx));
> }
> 

Not quite - when we call kvm_pmu_reenable_enabled_single the individual
counter may or may not be enabled. We only want to recreate the perf event
if it was previously enabled.

But we can do better, e.g.

static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
                                            u64 select_idx)
{
        u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);

        if (set & BIT(select_idx))
                kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
}

(The kvm_pmu_valid_counter_mask also wasn't needed here, but is needed
later when we may attempt to enable a counter that we don't have).

> > +}
> > +
> > +/**
> > + * kvm_pmu_disable_counter - disable selected PMU counter
> 
> Stale comment
> 
> > + * @vcpu: The vcpu pointer
> > + * @pmc: The counter to dissable
> 
> nit: s/dissable/disable/
> 
> > + */
> > +static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
> > +					   u64 select_idx)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +
> > +	if (pmc->perf_event)
> > +		perf_event_disable(pmc->perf_event);
> > +}
> > +
> > +/**
> >    * kvm_pmu_disable_counter - disable selected PMU counter
> 
> While you are at this, please could you make the comment a bit more
> clear. i.e, we disable a set of PMU counters not a single one.

Yes sure.

Thanks for the review.

Andrew Murray

> 
> Suzuki

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 3/4] KVM: arm/arm64: lazily create perf events on enable
  2019-01-22 13:41     ` Julien Thierry
@ 2019-01-28 17:02       ` Andrew Murray
  -1 siblings, 0 replies; 38+ messages in thread
From: Andrew Murray @ 2019-01-28 17:02 UTC (permalink / raw)
  To: Julien Thierry; +Cc: Marc Zyngier, Christoffer Dall, linux-arm-kernel, kvmarm

On Tue, Jan 22, 2019 at 01:41:49PM +0000, Julien Thierry wrote:
> Hi Andrew,
> 
> On 22/01/2019 10:49, Andrew Murray wrote:
> > To prevent re-creating perf events everytime the counter registers
> > are changed, let's instead lazily create the event when the event
> > is first enabled and destroy it when it changes.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > ---
> >  virt/kvm/arm/pmu.c | 114 ++++++++++++++++++++++++++++++++++++-----------------
> >  1 file changed, 78 insertions(+), 36 deletions(-)
> > 
> > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > index 4464899..1921ca9 100644
> > --- a/virt/kvm/arm/pmu.c
> > +++ b/virt/kvm/arm/pmu.c
> > @@ -24,8 +24,11 @@
> >  #include <kvm/arm_pmu.h>
> >  #include <kvm/arm_vgic.h>
> >  
> > -static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> > -				      u64 select_idx);
> > +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
> > +static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
> > +						      u64 select_idx);
> > +static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> > +
> >  /**
> >   * kvm_pmu_get_counter_value - get PMU counter value
> >   * @vcpu: The vcpu pointer
> > @@ -59,18 +62,16 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >   */
> >  void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> >  {
> > -	u64 reg, data;
> > +	u64 reg;
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> >  
> >  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> >  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> >  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> >  
> > -	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> > -	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> > -	data = __vcpu_sys_reg(vcpu, reg + select_idx);
> > -
> > -	/* Recreate the perf event to reflect the updated sample_period */
> > -	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> > +	kvm_pmu_stop_counter(vcpu, pmc);
> 
> Shouldn't this be before we do the write to __vcpu_sys_reg()?

I don't think we need to. It's the user's choice to set a counter value whilst
it's still counting. In fact, the later we leave it the better, as there is then
a smaller window in which we're not counting when we should be.

> 
> > +	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
> >  }
> >  
> >  /**
> > @@ -88,6 +89,7 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
> >  
> >  /**
> >   * kvm_pmu_stop_counter - stop PMU counter
> > + * @vcpu: The vcpu pointer
> >   * @pmc: The PMU counter pointer
> >   *
> >   * If this counter has been configured to monitor some event, release it here.
> > @@ -150,6 +152,25 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> >  }
> >  
> >  /**
> > + * kvm_pmu_enable_counter_single - create/enable a unpaired counter
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +
> > +	if (!pmc->perf_event) {
> > +		kvm_pmu_counter_create_enabled_perf_event(vcpu, select_idx);
> > +	} else if (pmc->perf_event) {
> 
> "else" is enough here, no need for "else if" :) .

Not sure where that came from!

> 
> 
> Actually, after we call kvm_pmu_counter_create_enabled_perf_event() we
> know that pmc->perf_event != NULL.
> 
> Shouldn't we execute the code below unconditionally?

I guess I wanted to avoid calling perf_event_enable on an event that was
already enabled (in the case where pmc->perf_event is NULL on entry, the
event has just been created in an enabled state).

Along with Suzuki's feedback, I'll take your suggestion here, but also
update kvm_pmu_counter_create_enabled_perf_event so that it no longer
enables the event by default. That's clearer all around.
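
e.g. something like this - just a sketch of what I have in mind, assuming
the create helper no longer enables the event itself (keeping the names
from this version of the series):

static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
{
	struct kvm_pmu *pmu = &vcpu->arch.pmu;
	struct kvm_pmc *pmc = &pmu->pmc[select_idx];

	/* Lazily create the (initially disabled) event on first enable */
	if (!pmc->perf_event)
		kvm_pmu_counter_create_enabled_perf_event(vcpu, select_idx);

	/* The software increment event still has no perf event backing it */
	if (!pmc->perf_event)
		return;

	perf_event_enable(pmc->perf_event);
	if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
		kvm_debug("failed to enable perf event\n");
}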

> 
> > +		perf_event_enable(pmc->perf_event);
> > +		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> > +			kvm_debug("fail to enable perf event\n");
> > +	}
> > +}
> > +
> > +/**
> >   * kvm_pmu_enable_counter - enable selected PMU counter
> >   * @vcpu: The vcpu pointer
> >   * @val: the value guest writes to PMCNTENSET register
> > @@ -159,8 +180,6 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> >  void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> >  {
> >  	int i;
> > -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > -	struct kvm_pmc *pmc;
> >  
> >  	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
> >  		return;
> > @@ -169,16 +188,44 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> >  		if (!(val & BIT(i)))
> >  			continue;
> >  
> > -		pmc = &pmu->pmc[i];
> > -		if (pmc->perf_event) {
> > -			perf_event_enable(pmc->perf_event);
> > -			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> > -				kvm_debug("fail to enable perf event\n");
> > -		}
> > +		kvm_pmu_enable_counter_single(vcpu, i);
> >  	}
> >  }
> >  
> >  /**
> > + * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
> > +					    u64 select_idx)
> 
> Not completely convinced by the name. kvm_pmu_sync_counter_status() ?
> 
> Or maybe have the callers check whether they actually need to
> disable/enable and not have this function.

I don't think checking in the callers is the right approach.

Though perhaps kvm_pmu_sync_counter_enable is more understandable.

> 
> > +{
> > +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> > +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> > +
> > +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> > +		return;
> > +
> > +	if (set & BIT(select_idx))
> > +		kvm_pmu_enable_counter_single(vcpu, select_idx);
> > +}
> > +
> > +/**
> > + * kvm_pmu_disable_counter - disable selected PMU counter
> > + * @vcpu: The vcpu pointer
> > + * @pmc: The counter to dissable
> > + */
> > +static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
> > +					   u64 select_idx)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +
> > +	if (pmc->perf_event)
> > +		perf_event_disable(pmc->perf_event);
> > +}
> > +
> > +/**
> >   * kvm_pmu_disable_counter - disable selected PMU counter
> >   * @vcpu: The vcpu pointer
> >   * @val: the value guest writes to PMCNTENCLR register
> > @@ -188,8 +235,6 @@ void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> >  void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
> >  {
> >  	int i;
> > -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > -	struct kvm_pmc *pmc;
> >  
> >  	if (!val)
> >  		return;
> > @@ -198,9 +243,7 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
> >  		if (!(val & BIT(i)))
> >  			continue;
> >  
> > -		pmc = &pmu->pmc[i];
> > -		if (pmc->perf_event)
> > -			perf_event_disable(pmc->perf_event);
> > +		kvm_pmu_disable_counter_single(vcpu, i);
> >  	}
> >  }
> >  
> > @@ -382,28 +425,22 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
> >  	}
> >  }
> >  
> > -static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> > -{
> > -	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
> > -	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
> > -}
> > -
> >  /**
> > - * kvm_pmu_create_perf_event - create a perf event for a counter
> > + * kvm_pmu_counter_create_enabled_perf_event - create a perf event for a counter
> >   * @vcpu: The vcpu pointer
> > - * @data: Type of event as per PMXEVTYPER_EL0 format
> >   * @select_idx: The number of selected counter
> >   */
> > -static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> > -				      u64 select_idx)
> > +static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
> > +						u64 select_idx)
> >  {
> >  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> >  	struct perf_event *event;
> >  	struct perf_event_attr attr;
> > -	u64 eventsel, counter;
> > +	u64 eventsel, counter, data;
> > +
> > +	data = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx);
> 
> Should we worry about the case select_idx == ARMV8_PMU_CYCLE_IDX?
> 
> >  
> > -	kvm_pmu_stop_counter(vcpu, pmc);
> >  	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
> >  
> >  	/* Software increment event does't need to be backed by a perf event */
> > @@ -415,7 +452,6 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> >  	attr.type = PERF_TYPE_RAW;
> >  	attr.size = sizeof(attr);
> >  	attr.pinned = 1;
> > -	attr.disabled = !kvm_pmu_counter_is_enabled(vcpu, select_idx);
> >  	attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 1 : 0;
> >  	attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0;
> >  	attr.exclude_hv = 1; /* Don't count EL2 events */
> > @@ -451,7 +487,13 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
> >  void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> >  				    u64 select_idx)
> >  {
> > -	kvm_pmu_create_perf_event(vcpu, data, select_idx);
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +	u64 event_type = data & ARMV8_PMU_EVTYPE_MASK;
> > +
> > +	kvm_pmu_stop_counter(vcpu, pmc);
> > +	__vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + select_idx) = event_type;
> 
> Why don't we take into account the select_idx == ARMV8_PMU_CYCLE_IDX
> case into account anymore?
> 

We should - I'd missed this, thanks for spotting it.
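
i.e. roughly something like this in kvm_pmu_set_counter_event_type (only a
sketch, reusing the PMCCFILTR_EL0 selection from the existing code, not
tested):

	u64 reg, event_type = data & ARMV8_PMU_EVTYPE_MASK;

	kvm_pmu_stop_counter(vcpu, pmc);
	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
	__vcpu_sys_reg(vcpu, reg) = event_type;
	kvm_pmu_reenable_enabled_single(vcpu, select_idx);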

Thanks,

Andrew Murray

> > +	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
> >  }
> >  
> >  bool kvm_arm_support_pmu_v3(void)
> > 
> 
> Cheers,
> 
> -- 
> Julien Thierry

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 4/4] KVM: arm/arm64: support chained PMU counters
  2019-01-22 14:59     ` Julien Thierry
@ 2019-01-28 17:13       ` Andrew Murray
  -1 siblings, 0 replies; 38+ messages in thread
From: Andrew Murray @ 2019-01-28 17:13 UTC (permalink / raw)
  To: Julien Thierry; +Cc: Marc Zyngier, linux-arm-kernel, kvmarm

On Tue, Jan 22, 2019 at 02:59:48PM +0000, Julien Thierry wrote:
> Hi Andrew
> 
> On 22/01/2019 10:49, Andrew Murray wrote:
> > Emulate chained PMU counters by creating a single 64 bit event counter
> > for a pair of chained KVM counters.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > ---
> >  include/kvm/arm_pmu.h |   2 +
> >  virt/kvm/arm/pmu.c    | 308 +++++++++++++++++++++++++++++++++++++++++---------
> >  2 files changed, 258 insertions(+), 52 deletions(-)
> > 
> > diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> > index f87fe20..d4f3b28 100644
> > --- a/include/kvm/arm_pmu.h
> > +++ b/include/kvm/arm_pmu.h
> > @@ -29,6 +29,8 @@ struct kvm_pmc {
> >  	u8 idx;	/* index into the pmu->pmc array */
> >  	struct perf_event *perf_event;
> >  	u64 bitmask;
> > +	u64 sample_period;
> > +	u64 left;
> >  };
> >  
> >  struct kvm_pmu {
> > diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> > index 1921ca9..d111d5b 100644
> > --- a/virt/kvm/arm/pmu.c
> > +++ b/virt/kvm/arm/pmu.c
> > @@ -24,10 +24,26 @@
> >  #include <kvm/arm_pmu.h>
> >  #include <kvm/arm_vgic.h>
> >  
> > +#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
> > +static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
> > +					    u64 pair_low);
> > +static void kvm_pmu_stop_release_perf_event_single(struct kvm_vcpu *vcpu,
> > +					      u64 select_idx);
> > +static void kvm_pmu_reenable_enabled_pair(struct kvm_vcpu *vcpu, u64 pair_low);
> >  static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
> >  static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
> >  						      u64 select_idx);
> > -static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> > +
> > +/**
> > + * kvm_pmu_counter_is_high_word - is select_idx high counter of 64bit event
> > + * @pmc: The PMU counter pointer
> > + * @select_idx: The counter index
> > + */
> > +static inline bool kvm_pmu_counter_is_high_word(struct kvm_pmc *pmc)
> > +{
> > +	return ((pmc->perf_event->attr.config1 & 0x1)
> > +		&& (pmc->idx % 2));
> > +}
> >  
> >  /**
> >   * kvm_pmu_get_counter_value - get PMU counter value
> > @@ -36,7 +52,7 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
> >   */
> >  u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >  {
> > -	u64 counter, reg, enabled, running;
> > +	u64 counter, reg, enabled, running, incr;
> >  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> >  
> > @@ -47,14 +63,53 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >  	/* The real counter value is equal to the value of counter register plus
> >  	 * the value perf event counts.
> >  	 */
> > -	if (pmc->perf_event)
> > -		counter += perf_event_read_value(pmc->perf_event, &enabled,
> > +	if (pmc->perf_event) {
> > +		incr = perf_event_read_value(pmc->perf_event, &enabled,
> >  						 &running);
> >  
> > +		if (kvm_pmu_counter_is_high_word(pmc))
> > +			incr = upper_32_bits(incr);
> > +		counter += incr;
> > +	}
> > +
> >  	return counter & pmc->bitmask;
> >  }
> >  
> >  /**
> > + * kvm_pmu_counter_is_enabled - is a counter active
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> > +{
> > +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> > +
> > +	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
> > +	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask & BIT(select_idx));
> > +}
> > +
> > +/**
> > + * kvnm_pmu_event_is_chained - is a pair of counters chained and enabled
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The low counter index
> > + */
> > +static bool kvm_pmu_event_is_chained(struct kvm_vcpu *vcpu, u64 pair_low)
> > +{
> > +	u64 eventsel;
> > +
> > +	eventsel = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + pair_low + 1) &
> > +			ARMV8_PMU_EVTYPE_EVENT;
> > +	if (eventsel != ARMV8_PMUV3_PERFCTR_CHAIN)
> > +		return false;
> > +
> > +	if (kvm_pmu_counter_is_enabled(vcpu, pair_low) !=
> > +	    kvm_pmu_counter_is_enabled(vcpu, pair_low + 1))
> > +		return false;
> > +
> > +	return true;
> > +}
> > +
> > +/**
> >   * kvm_pmu_set_counter_value - set PMU counter value
> >   * @vcpu: The vcpu pointer
> >   * @select_idx: The counter index
> > @@ -62,29 +117,45 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> >   */
> >  void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
> >  {
> > -	u64 reg;
> > -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > -	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +	u64 reg, pair_low;
> >  
> >  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> >  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> >  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
> >  
> > -	kvm_pmu_stop_counter(vcpu, pmc);
> > -	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
> > +	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
> 
> Don't really know if it's better but you can write it as:
> 
> 	pair_low = select_idx & ~(1ULL);
> 
> But the compiler might already optimize it.
> 

That's quite neat, though I'll leave it as it is; the modulo form seems
clearer to me.

> > +
> > +	/* Recreate the perf event to reflect the updated sample_period */
> > +	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
> > +		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
> > +		kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
> > +	} else {
> > +		kvm_pmu_stop_release_perf_event_single(vcpu, select_idx);
> > +		kvm_pmu_reenable_enabled_single(vcpu, select_idx);
> > +	}
> >  }
> >  
> >  /**
> >   * kvm_pmu_release_perf_event - remove the perf event
> >   * @pmc: The PMU counter pointer
> >   */
> > -static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
> > +static void kvm_pmu_release_perf_event(struct kvm_vcpu *vcpu,
> > +				       struct kvm_pmc *pmc)
> >  {
> > -	if (pmc->perf_event) {
> > -		perf_event_disable(pmc->perf_event);
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc_alt;
> > +	u64 pair_alt;
> > +
> > +	pair_alt = (pmc->idx % 2) ? pmc->idx - 1 : pmc->idx + 1;
> > +	pmc_alt = &pmu->pmc[pair_alt];
> > +
> > +	if (pmc->perf_event)
> >  		perf_event_release_kernel(pmc->perf_event);
> > -		pmc->perf_event = NULL;
> > -	}
> > +
> > +	if (pmc->perf_event == pmc_alt->perf_event)
> > +		pmc_alt->perf_event = NULL;
> 
> Shouldn't we release pmc_alt->perf_event before setting it to NULL?

No - because these are the same event, we don't want to free it twice.

> 
> > +
> > +	pmc->perf_event = NULL;
> >  }
> >  
> >  /**
> > @@ -92,22 +163,60 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
> >   * @vcpu: The vcpu pointer
> >   * @pmc: The PMU counter pointer
> >   *
> > - * If this counter has been configured to monitor some event, release it here.
> > + * If this counter has been configured to monitor some event, stop it here.
> >   */
> >  static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
> >  {
> >  	u64 counter, reg;
> >  
> >  	if (pmc->perf_event) {
> > +		perf_event_disable(pmc->perf_event);
> >  		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
> >  		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
> >  		       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
> >  		__vcpu_sys_reg(vcpu, reg) = counter;
> > -		kvm_pmu_release_perf_event(pmc);
> >  	}
> >  }
> >  
> >  /**
> > + * kvm_pmu_stop_release_perf_event_pair - stop and release a pair of counters
> > + * @vcpu: The vcpu pointer
> > + * @pmc_low: The PMU counter pointer for lower word
> > + * @pmc_high: The PMU counter pointer for higher word
> > + *
> > + * As chained counters share the underlying perf event, we stop them
> > + * both first before discarding the underlying perf event
> > + */
> > +static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
> > +					    u64 idx_low)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc_low = &pmu->pmc[idx_low];
> > +	struct kvm_pmc *pmc_high = &pmu->pmc[idx_low + 1];
> > +
> > +	kvm_pmu_stop_counter(vcpu, pmc_low);
> > +	kvm_pmu_stop_counter(vcpu, pmc_high);
> > +
> > +	kvm_pmu_release_perf_event(vcpu, pmc_low);
> > +	kvm_pmu_release_perf_event(vcpu, pmc_high);
> 
> Hmmm, I think there is some confusion between what this function and
> kvm_pmu_release_perf_event() should do, at this point
> pmc_high->perf_event == NULL and we can't release it.
>

Don't forget that for paired events, both pmc_{low,high} refer to the
same perf_event. (This is why we stop the counters individually before
releasing them - so that we can use the event information whilst working
out the counter value of pmc_high).
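
To illustrate, the intended sequence for a chained pair is roughly as below
(comments are mine, describing what this version of the patch does):

	/* pmc_low->perf_event == pmc_high->perf_event for a chained pair */
	kvm_pmu_stop_counter(vcpu, pmc_low);
	kvm_pmu_stop_counter(vcpu, pmc_high);	/* can still read the shared event */

	kvm_pmu_release_perf_event(vcpu, pmc_low);	/* frees the kernel event and
							 * NULLs both pmc pointers */
	kvm_pmu_release_perf_event(vcpu, pmc_high);	/* perf_event is already NULL,
							 * so nothing is freed twice */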

Or is there something I'm missing?
 
> > +}
> > +
> > +/**
> > + * kvm_pmu_stop_release_perf_event_single - stop and release a counter
> > + * @vcpu: The vcpu pointer
> > + * @select_idx: The counter index
> > + */
> > +static void kvm_pmu_stop_release_perf_event_single(struct kvm_vcpu *vcpu,
> > +					      u64 select_idx)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> > +
> > +	kvm_pmu_stop_counter(vcpu, pmc);
> > +	kvm_pmu_release_perf_event(vcpu, pmc);
> > +}
> > +
> > +/**
> >   * kvm_pmu_vcpu_reset - reset pmu state for cpu
> >   * @vcpu: The vcpu pointer
> >   *
> > @@ -118,7 +227,7 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
> >  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >  
> >  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> > -		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
> > +		kvm_pmu_stop_release_perf_event_single(vcpu, i);
> >  		pmu->pmc[i].idx = i;
> >  		pmu->pmc[i].bitmask = 0xffffffffUL;
> >  	}
> > @@ -136,7 +245,7 @@ void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
> >  
> >  	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> >  		struct kvm_pmc *pmc = &pmu->pmc[i];
> > -		kvm_pmu_release_perf_event(pmc);
> > +		kvm_pmu_release_perf_event(vcpu, pmc);
> >  	}
> >  }
> >  
> > @@ -171,49 +280,81 @@ static void kvm_pmu_enable_counter_single(struct kvm_vcpu *vcpu, u64 select_idx)
> >  }
> >  
> >  /**
> > - * kvm_pmu_enable_counter - enable selected PMU counter
> > + * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
> >   * @vcpu: The vcpu pointer
> > - * @val: the value guest writes to PMCNTENSET register
> > - *
> > - * Call perf_event_enable to start counting the perf event
> > + * @select_idx: The counter index
> >   */
> > -void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> > +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
> > +					    u64 select_idx)
> >  {
> > -	int i;
> > +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> > +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> >  
> > -	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
> > +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> >  		return;
> >  
> > -	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> > -		if (!(val & BIT(i)))
> > -			continue;
> > +	if (set & BIT(select_idx))
> > +		kvm_pmu_enable_counter_single(vcpu, select_idx);
> > +}
> >  
> > -		kvm_pmu_enable_counter_single(vcpu, i);
> > +/**
> > + * kvm_pmu_reenable_enabled_pair - reenable a pair if they should be enabled
> > + * @vcpu: The vcpu pointer
> > + * @pair_low: The low counter index
> > + */
> > +static void kvm_pmu_reenable_enabled_pair(struct kvm_vcpu *vcpu, u64 pair_low)
> > +{
> > +	kvm_pmu_reenable_enabled_single(vcpu, pair_low);
> > +	kvm_pmu_reenable_enabled_single(vcpu, pair_low+1);
> > +}
> > +
> > +/**
> > + * kvm_pmu_enable_counter_pair - enable counters pair at a time
> > + * @vcpu: The vcpu pointer
> > + * @val: counters to enable
> > + * @pair_low: The low counter index
> > + */
> > +static void kvm_pmu_enable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
> > +					u64 pair_low)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
> > +	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
> > +
> > +	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
> > +		if (pmc_low->perf_event != pmc_high->perf_event)
> > +			kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
> >  	}
> > +
> > +	if (val & BIT(pair_low))
> > +		kvm_pmu_enable_counter_single(vcpu, pair_low);
> > +
> > +	if (val & BIT(pair_low+1))
> > +		kvm_pmu_enable_counter_single(vcpu, pair_low + 1);
> >  }
> >  
> >  /**
> > - * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
> > + * kvm_pmu_enable_counter - enable selected PMU counter
> >   * @vcpu: The vcpu pointer
> > - * @select_idx: The counter index
> > + * @val: the value guest writes to PMCNTENSET register
> > + *
> > + * Call perf_event_enable to start counting the perf event
> >   */
> > -static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
> > -					    u64 select_idx)
> > +void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> >  {
> > -	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> > -	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> > +	int i;
> >  
> > -	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
> > +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
> >  		return;
> >  
> > -	if (set & BIT(select_idx))
> > -		kvm_pmu_enable_counter_single(vcpu, select_idx);
> > +	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
> > +		kvm_pmu_enable_counter_pair(vcpu, val, i);
> >  }
> >  
> >  /**
> >   * kvm_pmu_disable_counter - disable selected PMU counter
> >   * @vcpu: The vcpu pointer
> > - * @pmc: The counter to dissable
> > + * @select_idx: The counter index
> >   */
> >  static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
> >  					   u64 select_idx)
> > @@ -221,8 +362,40 @@ static void kvm_pmu_disable_counter_single(struct kvm_vcpu *vcpu,
> >  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> >  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> >  
> > -	if (pmc->perf_event)
> > +	if (pmc->perf_event) {
> >  		perf_event_disable(pmc->perf_event);
> > +		if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> > +			kvm_debug("fail to enable perf event\n");
> > +	}
> > +}
> > +
> > +/**
> > + * kvm_pmu_disable_counter_pair - disable counters pair at a time
> > + * @val: counters to disable
> > + * @pair_low: The low counter index
> > + */
> > +static void kvm_pmu_disable_counter_pair(struct kvm_vcpu *vcpu, u64 val,
> > +					 u64 pair_low)
> > +{
> > +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> > +	struct kvm_pmc *pmc_low = &pmu->pmc[pair_low];
> > +	struct kvm_pmc *pmc_high = &pmu->pmc[pair_low + 1];
> > +
> > +	if (!kvm_pmu_event_is_chained(vcpu, pair_low)) {
> > +		if (pmc_low->perf_event == pmc_high->perf_event) {
> > +			if (pmc_low->perf_event) {
> > +				kvm_pmu_stop_release_perf_event_pair(vcpu,
> > +								pair_low);
> > +				kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
> > +			}
> > +		}
> > +	}
> > +
> > +	if (val & BIT(pair_low))
> > +		kvm_pmu_disable_counter_single(vcpu, pair_low);
> > +
> > +	if (val & BIT(pair_low + 1))
> > +		kvm_pmu_disable_counter_single(vcpu, pair_low + 1);
> >  }
> >  
> >  /**
> > @@ -239,12 +412,8 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
> >  	if (!val)
> >  		return;
> >  
> > -	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
> > -		if (!(val & BIT(i)))
> > -			continue;
> > -
> > -		kvm_pmu_disable_counter_single(vcpu, i);
> > -	}
> > +	for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i += 2)
> > +		kvm_pmu_disable_counter_pair(vcpu, val, i);
> >  }
> >  
> >  static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
> > @@ -355,6 +524,17 @@ static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
> >  
> >  	__vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= BIT(idx);
> >  
> > +	if (kvm_pmu_event_is_chained(vcpu, idx + 1)) {
> 
> Doesn't kvm_pmu_event_is_chained() expect the low part of the counter pair?

Ah yes, good catch.
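
The check should be keyed off the even (low) index of the pair, e.g.
something like this in the overflow handler (an untested sketch, assuming
idx may be either half of the pair):

	u64 pair_low = (idx % 2) ? idx - 1 : idx;

	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
		/* ... existing chained overflow handling ... */
	}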

Thanks,

Andrew Murray

> 
> Cheers,
> 
> -- 
> Julien Thierry

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 4/4] KVM: arm/arm64: support chained PMU counters
  2019-01-28 17:13       ` Andrew Murray
@ 2019-01-29  9:07         ` Julien Thierry
  -1 siblings, 0 replies; 38+ messages in thread
From: Julien Thierry @ 2019-01-29  9:07 UTC (permalink / raw)
  To: Andrew Murray; +Cc: Marc Zyngier, linux-arm-kernel, kvmarm



On 28/01/2019 17:13, Andrew Murray wrote:
> On Tue, Jan 22, 2019 at 02:59:48PM +0000, Julien Thierry wrote:
>> Hi Andrew
>>
>> On 22/01/2019 10:49, Andrew Murray wrote:
>>> Emulate chained PMU counters by creating a single 64 bit event counter
>>> for a pair of chained KVM counters.
>>>
>>> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
>>> ---
>>>  include/kvm/arm_pmu.h |   2 +
>>>  virt/kvm/arm/pmu.c    | 308 +++++++++++++++++++++++++++++++++++++++++---------
>>>  2 files changed, 258 insertions(+), 52 deletions(-)
>>>
>>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>>> index f87fe20..d4f3b28 100644
>>> --- a/include/kvm/arm_pmu.h
>>> +++ b/include/kvm/arm_pmu.h
>>> @@ -29,6 +29,8 @@ struct kvm_pmc {
>>>  	u8 idx;	/* index into the pmu->pmc array */
>>>  	struct perf_event *perf_event;
>>>  	u64 bitmask;
>>> +	u64 sample_period;
>>> +	u64 left;
>>>  };
>>>  
>>>  struct kvm_pmu {
>>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>>> index 1921ca9..d111d5b 100644
>>> --- a/virt/kvm/arm/pmu.c
>>> +++ b/virt/kvm/arm/pmu.c
>>> @@ -24,10 +24,26 @@
>>>  #include <kvm/arm_pmu.h>
>>>  #include <kvm/arm_vgic.h>
>>>  
>>> +#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
>>> +static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
>>> +					    u64 pair_low);
>>> +static void kvm_pmu_stop_release_perf_event_single(struct kvm_vcpu *vcpu,
>>> +					      u64 select_idx);
>>> +static void kvm_pmu_reenable_enabled_pair(struct kvm_vcpu *vcpu, u64 pair_low);
>>>  static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
>>>  static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
>>>  						      u64 select_idx);
>>> -static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
>>> +
>>> +/**
>>> + * kvm_pmu_counter_is_high_word - is select_idx high counter of 64bit event
>>> + * @pmc: The PMU counter pointer
>>> + * @select_idx: The counter index
>>> + */
>>> +static inline bool kvm_pmu_counter_is_high_word(struct kvm_pmc *pmc)
>>> +{
>>> +	return ((pmc->perf_event->attr.config1 & 0x1)
>>> +		&& (pmc->idx % 2));
>>> +}
>>>  
>>>  /**
>>>   * kvm_pmu_get_counter_value - get PMU counter value
>>> @@ -36,7 +52,7 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc);
>>>   */
>>>  u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>>>  {
>>> -	u64 counter, reg, enabled, running;
>>> +	u64 counter, reg, enabled, running, incr;
>>>  	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>>>  	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>>>  
>>> @@ -47,14 +63,53 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>>>  	/* The real counter value is equal to the value of counter register plus
>>>  	 * the value perf event counts.
>>>  	 */
>>> -	if (pmc->perf_event)
>>> -		counter += perf_event_read_value(pmc->perf_event, &enabled,
>>> +	if (pmc->perf_event) {
>>> +		incr = perf_event_read_value(pmc->perf_event, &enabled,
>>>  						 &running);
>>>  
>>> +		if (kvm_pmu_counter_is_high_word(pmc))
>>> +			incr = upper_32_bits(incr);
>>> +		counter += incr;
>>> +	}
>>> +
>>>  	return counter & pmc->bitmask;
>>>  }
>>>  
>>>  /**
>>> + * kvm_pmu_counter_is_enabled - is a counter active
>>> + * @vcpu: The vcpu pointer
>>> + * @select_idx: The counter index
>>> + */
>>> +static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
>>> +{
>>> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
>>> +
>>> +	return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) &&
>>> +	       (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask & BIT(select_idx));
>>> +}
>>> +
>>> +/**
>>> + * kvm_pmu_event_is_chained - is a pair of counters chained and enabled
>>> + * @vcpu: The vcpu pointer
>>> + * @select_idx: The low counter index
>>> + */
>>> +static bool kvm_pmu_event_is_chained(struct kvm_vcpu *vcpu, u64 pair_low)
>>> +{
>>> +	u64 eventsel;
>>> +
>>> +	eventsel = __vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + pair_low + 1) &
>>> +			ARMV8_PMU_EVTYPE_EVENT;
>>> +	if (eventsel != ARMV8_PMUV3_PERFCTR_CHAIN)
>>> +		return false;
>>> +
>>> +	if (kvm_pmu_counter_is_enabled(vcpu, pair_low) !=
>>> +	    kvm_pmu_counter_is_enabled(vcpu, pair_low + 1))
>>> +		return false;
>>> +
>>> +	return true;
>>> +}
>>> +
>>> +/**
>>>   * kvm_pmu_set_counter_value - set PMU counter value
>>>   * @vcpu: The vcpu pointer
>>>   * @select_idx: The counter index
>>> @@ -62,29 +117,45 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
>>>   */
>>>  void kvm_pmu_set_counter_value(struct kvm_vcpu *vcpu, u64 select_idx, u64 val)
>>>  {
>>> -	u64 reg;
>>> -	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>>> -	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>>> +	u64 reg, pair_low;
>>>  
>>>  	reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
>>>  	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
>>>  	__vcpu_sys_reg(vcpu, reg) += (s64)val - kvm_pmu_get_counter_value(vcpu, select_idx);
>>>  
>>> -	kvm_pmu_stop_counter(vcpu, pmc);
>>> -	kvm_pmu_reenable_enabled_single(vcpu, select_idx);
>>> +	pair_low = (select_idx % 2) ? select_idx - 1 : select_idx;
>>
>> I don't really know if it's better, but you can write it as:
>>
>> 	pair_low = select_idx & ~(1ULL);
>>
>> But the compiler might already optimize it.
>>
> 
> That's quite neat, though I'll leave it as it is; that seems clearer to me.
> 
>>> +
>>> +	/* Recreate the perf event to reflect the updated sample_period */
>>> +	if (kvm_pmu_event_is_chained(vcpu, pair_low)) {
>>> +		kvm_pmu_stop_release_perf_event_pair(vcpu, pair_low);
>>> +		kvm_pmu_reenable_enabled_pair(vcpu, pair_low);
>>> +	} else {
>>> +		kvm_pmu_stop_release_perf_event_single(vcpu, select_idx);
>>> +		kvm_pmu_reenable_enabled_single(vcpu, select_idx);
>>> +	}
>>>  }
>>>  
>>>  /**
>>>   * kvm_pmu_release_perf_event - remove the perf event
>>>   * @pmc: The PMU counter pointer
>>>   */
>>> -static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
>>> +static void kvm_pmu_release_perf_event(struct kvm_vcpu *vcpu,
>>> +				       struct kvm_pmc *pmc)
>>>  {
>>> -	if (pmc->perf_event) {
>>> -		perf_event_disable(pmc->perf_event);
>>> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>>> +	struct kvm_pmc *pmc_alt;
>>> +	u64 pair_alt;
>>> +
>>> +	pair_alt = (pmc->idx % 2) ? pmc->idx - 1 : pmc->idx + 1;
>>> +	pmc_alt = &pmu->pmc[pair_alt];
>>> +
>>> +	if (pmc->perf_event)
>>>  		perf_event_release_kernel(pmc->perf_event);
>>> -		pmc->perf_event = NULL;
>>> -	}
>>> +
>>> +	if (pmc->perf_event == pmc_alt->perf_event)
>>> +		pmc_alt->perf_event = NULL;
>>
>> Shouldn't we release pmc_alt->perf_event before setting it to NULL?
> 
> No, because these are the same event, we don't want to free the same
> event twice.
> 

Irrefutable logic, I don't know what I was thinking.

>>
>>> +
>>> +	pmc->perf_event = NULL;
>>>  }
>>>  
>>>  /**
>>> @@ -92,22 +163,60 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
>>>   * @vcpu: The vcpu pointer
>>>   * @pmc: The PMU counter pointer
>>>   *
>>> - * If this counter has been configured to monitor some event, release it here.
>>> + * If this counter has been configured to monitor some event, stop it here.
>>>   */
>>>  static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
>>>  {
>>>  	u64 counter, reg;
>>>  
>>>  	if (pmc->perf_event) {
>>> +		perf_event_disable(pmc->perf_event);
>>>  		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
>>>  		reg = (pmc->idx == ARMV8_PMU_CYCLE_IDX)
>>>  		       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
>>>  		__vcpu_sys_reg(vcpu, reg) = counter;
>>> -		kvm_pmu_release_perf_event(pmc);
>>>  	}
>>>  }
>>>  
>>>  /**
>>> + * kvm_pmu_stop_release_perf_event_pair - stop and release a pair of counters
>>> + * @vcpu: The vcpu pointer
>>> + * @pmc_low: The PMU counter pointer for lower word
>>> + * @pmc_high: The PMU counter pointer for higher word
>>> + *
>>> + * As chained counters share the underlying perf event, we stop them
>>> + * both first before discarding the underlying perf event
>>> + */
>>> +static void kvm_pmu_stop_release_perf_event_pair(struct kvm_vcpu *vcpu,
>>> +					    u64 idx_low)
>>> +{
>>> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>>> +	struct kvm_pmc *pmc_low = &pmu->pmc[idx_low];
>>> +	struct kvm_pmc *pmc_high = &pmu->pmc[idx_low + 1];
>>> +
>>> +	kvm_pmu_stop_counter(vcpu, pmc_low);
>>> +	kvm_pmu_stop_counter(vcpu, pmc_high);
>>> +
>>> +	kvm_pmu_release_perf_event(vcpu, pmc_low);
>>> +	kvm_pmu_release_perf_event(vcpu, pmc_high);
>>
>> Hmmm, I think there is some confusion between what this function and
>> kvm_pmu_release_perf_event() should do; at this point
>> pmc_high->perf_event == NULL and we can't release it.
>>
> 
> Don't forget that for paired events, both pmc_{low,high} refer to the
> same perf_event. (This is why we stop the counters individually before
> releasing them - so that we can use the event information whilst working
> out the counter value of pmc_high).
> 
> Or is there something I'm missing?
>  

No, I think I'm the one who got confused between chained registers and
the fact that we try to enable counters two by two.

It's probably best to ignore my comments concerning the release event
stuff :).

Thanks for explaining.

Cheers,

-- 
Julien Thierry

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 2/4] KVM: arm/arm64: re-create event when setting counter value
  2019-01-28 11:47       ` Andrew Murray
@ 2019-01-29 10:56         ` Suzuki K Poulose
  -1 siblings, 0 replies; 38+ messages in thread
From: Suzuki K Poulose @ 2019-01-29 10:56 UTC (permalink / raw)
  To: andrew.murray; +Cc: marc.zyngier, christoffer.dall, linux-arm-kernel, kvmarm

Hi Andrew,

On 28/01/2019 11:47, Andrew Murray wrote:
> On Tue, Jan 22, 2019 at 02:18:17PM +0000, Suzuki K Poulose wrote:
>> Hi Andrew
>>
>> On 01/22/2019 10:49 AM, Andrew Murray wrote:
>>> The perf event sample_period is currently set based upon the current
>>> counter value, when PMXEVTYPER is written to and the perf event is created.
>>> However the user may choose to write the type before the counter value in
>>> which case sample_period will be set incorrectly. Let's instead decouple
>>> event creation from PMXEVTYPER and (re)create the event in either
>>> situation.
>>>
>>> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
>>
>> The approach looks fine to me. However this patch seems to introduce a
>> memory leak, see below, which you may be addressing in a later patch in the
>> series. But this will affect bisecting issues.
> 
> See below, I don't think this is true.

You're right. Sorry for the noise.

> 
>>
>>> ---
>>>    virt/kvm/arm/pmu.c | 39 ++++++++++++++++++++++++++++++---------
>>>    1 file changed, 30 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>>> index 531d27f..4464899 100644
>>> --- a/virt/kvm/arm/pmu.c
>>> +++ b/virt/kvm/arm/pmu.c
>>> @@ -24,6 +24,8 @@
>>>    #include <kvm/arm_pmu.h>
>>>    #include <kvm/arm_vgic.h>
>>> +static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
>>> +				      u64 select_idx);
>>
>> Could we just pass the counter index (i.e, select_idx) after updating
>> the event_type/counter value in the respective functions.
> 
> Unless I misunderstand we need the value of 'data' as it is used to
> populate the function local perf_event_attr structure.

Yes, we do program the "data" (which is the event code) in attr.config.
So, since this is now a helper routine, the name "data" doesn't make any
sense.

> 
> However it is possible to read 'data' from __vcpu_sys_reg in
> kvm_pmu_create_perf_event instead of at the call site. However
> kvm_pmu_set_counter_event_type would have to set the value of
> __vcpu_sys_reg from its data argument (as __vcpu_sys_reg normally gets
> set after kvm_pmu_set_counter_event_type returns). This is OK as we
> do this in the next patch in this series anyway - so perhaps I can
> bring that forward to this patch?

Yes, please.
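
i.e., roughly along these lines (untested sketch, ignoring the
cycle-counter special case):

	void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
					    u64 select_idx)
	{
		u64 reg = PMEVTYPER0_EL0 + select_idx;

		/* Write the type first, then (re)create the event from it */
		__vcpu_sys_reg(vcpu, reg) = data & ARMV8_PMU_EVTYPE_MASK;
		kvm_pmu_create_perf_event(vcpu, select_idx);
	}

with kvm_pmu_create_perf_event() reading the event code back from
__vcpu_sys_reg() rather than taking it as an argument.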

Cheers
Suzuki

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH 3/4] KVM: arm/arm64: lazily create perf events on enable
  2019-01-28 14:28       ` Andrew Murray
@ 2019-01-29 11:11         ` Suzuki K Poulose
  -1 siblings, 0 replies; 38+ messages in thread
From: Suzuki K Poulose @ 2019-01-29 11:11 UTC (permalink / raw)
  To: andrew.murray; +Cc: marc.zyngier, linux-arm-kernel, kvmarm

On 28/01/2019 14:28, Andrew Murray wrote:
> On Tue, Jan 22, 2019 at 10:12:22PM +0000, Suzuki K Poulose wrote:
>> Hi Andrew,
>>
>> On 01/22/2019 10:49 AM, Andrew Murray wrote:
>>> To prevent re-creating perf events every time the counter registers
>>> are changed, let's instead lazily create the event when the event
>>> is first enabled and destroy it when it changes.
>>>
>>> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
>>
>>
>>> ---
>>>    virt/kvm/arm/pmu.c | 114 ++++++++++++++++++++++++++++++++++++-----------------
>>>    1 file changed, 78 insertions(+), 36 deletions(-)
>>>
>>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>>> index 4464899..1921ca9 100644
>>> --- a/virt/kvm/arm/pmu.c
>>> +++ b/virt/kvm/arm/pmu.c
>>> @@ -24,8 +24,11 @@
>>>    #include <kvm/arm_pmu.h>
>>>    #include <kvm/arm_vgic.h>
>>> -static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 data,
>>> -				      u64 select_idx);
>>> +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu, u64 pair);
>>
>> I find the approach good. However, the function names are a bit odd,
>> which makes the code a bit difficult to read.
> 
> Thanks - the odd naming probably came about as I started with a patch that
> added chained PMU support and worked backward to split it into smaller patches
> that each made individual sense. The _single suffix was the counterpart of
> _pair.
> 
>>
>> I think we could :
>>
>> 1) Rename the existing
>>   kvm_pmu_{enable/disable}_counter => kvm_pmu_{enable/disable}_[mask or
>> counters ]
>> as they operate on a set of counters (as a mask) instead of a single
>> counter.
>> And then you may be able to drop "_single" from
>> kvm_pmu_{enable/disable}_counter_single() functions below, which makes
>> better sense for what they do.
> 
> Thanks for this suggestion. I like this.
> 
>>
>>> +static void kvm_pmu_counter_create_enabled_perf_event(struct kvm_vcpu *vcpu,
>>> +						      u64 select_idx);
>>
>> Could we simply keep kvm_pmu_counter_create_event() and add a comment above
>> the function explaining that the events are enabled as they are
>> created lazily ?
> 
> OK.
> 



>>> + * kvm_pmu_reenable_enabled_single - reenable a counter if it should be enabled
>>> + * @vcpu: The vcpu pointer
>>> + * @select_idx: The counter index
>>> + */
>>> +static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
>>> +					    u64 select_idx)
>>> +{
>>> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
>>> +	u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
>>> +
>>> +	if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E))
>>> +		return;
>>> +
>>> +	if (set & BIT(select_idx))
>>> +		kvm_pmu_enable_counter_single(vcpu, select_idx);
>>
>> Could we not reuse kvm_pmu_enable_counter() here :
>> 	i.e,
>> static inline void kvm_pmu_reenable_counter(struct kvm_vcpu *vcpu, u64
>> 						select_idx)
>> {
>> 	kvm_pmu_enable_counter(vcpu, BIT(select_idx));
>> }
>>
> 
> Not quite - when we call kvm_pmu_reenable_enabled_single the individual
> counter may or may not be enabled. We only want to recreate the perf event
> if it was previously enabled.
> 
> But we can do better, e.g.
> 
> static void kvm_pmu_reenable_enabled_single(struct kvm_vcpu *vcpu,
>                                              u64 select_idx)

nit: could we use the name kvm_pmu_reenable_counter()?

> {
>          u64 set = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
> 
>          if (set & BIT(select_idx))
>                  kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx));
> }

Yep, that's correct. Alternatively, you could move the CNTENSET check
into kvm_pmu_enable_counter_mask().
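
i.e., something along these lines (untested sketch, using the proposed
_mask naming):

	void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val)
	{
		int i;

		if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val)
			return;

		/* Only act on counters the guest has actually enabled */
		val &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);

		for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) {
			if (val & BIT(i))
				kvm_pmu_enable_counter_single(vcpu, i);
		}
	}

which would let kvm_pmu_reenable_counter() simply call
kvm_pmu_enable_counter_mask(vcpu, BIT(select_idx)).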

Suzuki

^ permalink raw reply	[flat|nested] 38+ messages in thread

Thread overview:
2019-01-22 10:49 [PATCH 0/4] KVM: arm/arm64: add support for chained counters Andrew Murray
2019-01-22 10:49 ` [PATCH 1/4] KVM: arm/arm64: extract duplicated code to own function Andrew Murray
2019-01-22 14:20   ` Suzuki K Poulose
2019-01-22 10:49 ` [PATCH 2/4] KVM: arm/arm64: re-create event when setting counter value Andrew Murray
2019-01-22 12:12   ` Julien Thierry
2019-01-22 12:42     ` Andrew Murray
2019-01-22 14:18   ` Suzuki K Poulose
2019-01-28 11:47     ` Andrew Murray
2019-01-29 10:56       ` Suzuki K Poulose
2019-01-22 10:49 ` [PATCH 3/4] KVM: arm/arm64: lazily create perf events on enable Andrew Murray
2019-01-22 13:41   ` Julien Thierry
2019-01-28 17:02     ` Andrew Murray
2019-01-22 22:12   ` Suzuki K Poulose
2019-01-28 14:28     ` Andrew Murray
2019-01-29 11:11       ` Suzuki K Poulose
2019-01-22 10:49 ` [PATCH 4/4] KVM: arm/arm64: support chained PMU counters Andrew Murray
2019-01-22 14:59   ` Julien Thierry
2019-01-28 17:13     ` Andrew Murray
2019-01-29  9:07       ` Julien Thierry