From: Marc Zyngier <maz@kernel.org>
To: Jinank Jain <jinankj@amazon.de>
Cc: <linux-arm-kernel@lists.infradead.org>,
	<kvmarm@lists.cs.columbia.edu>, <linux-kernel@vger.kernel.org>,
	Alexander Graf <graf@amazon.de>,
	James Morse <james.morse@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>
Subject: Re: [PATCH] KVM: arm64: Properly restore PMU state during live-migration
Date: Thu, 03 Jun 2021 17:03:56 +0100	[thread overview]
Message-ID: <87wnrbylxv.wl-maz@kernel.org> (raw)
In-Reply-To: <20210603110554.13643-1-jinankj@amazon.de>

Hi Jinank,

On Thu, 03 Jun 2021 12:05:54 +0100,
Jinank Jain <jinankj@amazon.de> wrote:
> 
> Currently if a guest is live-migrated while it is actively using perf
> counters, then after live-migrate it will notice that all counters would
> suddenly start reporting 0s. This is due to the fact we are not
> re-creating the relevant perf events inside the kernel.
> 
> Usually on live-migration guest state is restored using KVM_SET_ONE_REG
> ioctl interface, which simply restores the value of PMU registers
> values but does not re-program the perf events so that the guest can seamlessly
> use these counters even after live-migration like it was doing before
> live-migration.
> 
> Instead there are two completely different code path between guest
> accessing PMU registers and VMM restoring counters on
> live-migration.
> 
> In case of KVM_SET_ONE_REG:
> 
> kvm_arm_set_reg()
> ...... kvm_arm_sys_reg_set_reg()
> ........... reg_from_user()
> 
> but in case when guest tries to access these counters:
> 
> handle_exit()
> ..... kvm_handle_sys_reg()
> ..........perform_access()
> ...............access_pmu_evcntr()
> ...................kvm_pmu_set_counter_value()
> .......................kvm_pmu_create_perf_event()
> 
> The drawback of using the KVM_SET_ONE_REG interface is that the host pmu
> events which were registered for the source instance and not present for
> the destination instance.

I can't parse this sentence. Do you mean "are not present"?

> Thus passively restoring PMCR_EL0 using
> KVM_SET_ONE_REG interface would not create the necessary host pmu events
> which are crucial for seamless guest experience across live migration.
> 
> In ordet to fix the situation, on first vcpu load we should restore
> PMCR_EL0 in the same exact way like the guest was trying to access
> these counters. And then we will also recreate the relevant host pmu
> events.
> 
> Signed-off-by: Jinank Jain <jinankj@amazon.de>
> Cc: Alexander Graf (AWS) <graf@amazon.de>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: James Morse <james.morse@arm.com>
> Cc: Alexandru Elisei <alexandru.elisei@arm.com>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h |  1 +
>  arch/arm64/kvm/arm.c              |  1 +
>  arch/arm64/kvm/pmu-emul.c         | 10 ++++++++--
>  arch/arm64/kvm/pmu.c              | 15 +++++++++++++++
>  include/kvm/arm_pmu.h             |  3 +++
>  5 files changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 7cd7d5c8c4bc..2376ad3c2fc2 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -745,6 +745,7 @@ static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
>  void kvm_set_pmu_events(u32 set, struct perf_event_attr *attr);
>  void kvm_clr_pmu_events(u32 clr);
>  
> +void kvm_vcpu_pmu_restore(struct kvm_vcpu *vcpu);
>  void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
>  void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
>  #else
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index e720148232a0..c66f6d16ec06 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -408,6 +408,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	if (has_vhe())
>  		kvm_vcpu_load_sysregs_vhe(vcpu);
>  	kvm_arch_vcpu_load_fp(vcpu);
> +	kvm_vcpu_pmu_restore(vcpu);

If this only needs to be run once per vcpu, why not trigger it from
kvm_arm_pmu_v3_enable(), which is also called once per vcpu?

This can be done on the back of a request, saving most of the overhead
and not requiring any extra field. Essentially, something like the
(untested) patch below.

>  	kvm_vcpu_pmu_restore_guest(vcpu);
>  	if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
>  		kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index fd167d4f4215..12a40f4b5f0d 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -574,10 +574,16 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
>  		kvm_pmu_disable_counter_mask(vcpu, mask);
>  	}
>  
> -	if (val & ARMV8_PMU_PMCR_C)
> +	/*
> +	 * Cycle counter needs to reset in case of first vcpu load.
> +	 */
> +	if (val & ARMV8_PMU_PMCR_C || !kvm_arm_pmu_v3_restored(vcpu))

Why? There is no architectural guarantee that a counter resets to 0
without writing PMCR_EL0.C. And if you want the guest to continue
counting where it left off, resetting the counter is at best
counter-productive.

So I must be missing something...

>  		kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0);
>  
> -	if (val & ARMV8_PMU_PMCR_P) {
> +	/*
> +	 * All the counters needs to reset in case of first vcpu load.
> +	 */
> +	if (val & ARMV8_PMU_PMCR_P || !kvm_arm_pmu_v3_restored(vcpu)) {

Same thing here.

>  		for_each_set_bit(i, &mask, 32)
>  			kvm_pmu_set_counter_value(vcpu, i, 0);
>  	}

The rest of the changes should be unnecessary with the patch below.

Thanks,

	M.

From 1be188ab71867632a8c17384be6e55f47f42aa8b Mon Sep 17 00:00:00 2001
From: Marc Zyngier <maz@kernel.org>
Date: Thu, 3 Jun 2021 16:50:02 +0100
Subject: [PATCH] KVM: arm64: Restore PMU configuration on first run

Restoring a guest with an active virtual PMU results in no perf
counters being instantiated on the host side. Not quite what
you'd expect from a restore.

In order to fix this, force a writeback of PMCR_EL0 on the first
run of a vcpu (using a new request so that it happens once the
vcpu has been loaded). This will in turn create all the host-side
counters that were missing.

Reported-by: Jinank Jain <jinankj@amazon.de>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 1 +
 arch/arm64/kvm/arm.c              | 4 ++++
 arch/arm64/kvm/pmu-emul.c         | 3 +++
 3 files changed, 8 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7cd7d5c8c4bc..6336b4309114 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -46,6 +46,7 @@
 #define KVM_REQ_VCPU_RESET	KVM_ARCH_REQ(2)
 #define KVM_REQ_RECORD_STEAL	KVM_ARCH_REQ(3)
 #define KVM_REQ_RELOAD_GICv4	KVM_ARCH_REQ(4)
+#define KVM_REQ_RELOAD_PMU	KVM_ARCH_REQ(5)
 
 #define KVM_DIRTY_LOG_MANUAL_CAPS   (KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | \
 				     KVM_DIRTY_LOG_INITIALLY_SET)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e720148232a0..facf4d41d32a 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -689,6 +689,10 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
 			vgic_v4_load(vcpu);
 			preempt_enable();
 		}
+
+		if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu))
+			kvm_pmu_handle_pmcr(vcpu,
+					    __vcpu_sys_reg(vcpu, PMCR_EL0));
 	}
 }
 
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index fd167d4f4215..a0bbb7111f57 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -850,6 +850,9 @@ int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
 		   return -EINVAL;
 	}
 
+	/* One-off reload of the PMU on first run */
+	kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu);
+
 	return 0;
 }
 
-- 
2.30.2


-- 
Without deviation from the norm, progress is not possible.

Thread overview: 26+ messages
2021-06-03 11:05 [PATCH] KVM: arm64: Properly restore PMU state during live-migration Jinank Jain
2021-06-03 15:20 ` kernel test robot
2021-06-03 16:03 ` Marc Zyngier [this message]
2021-06-07 16:05   ` Jain, Jinank
2021-06-07 16:35     ` Marc Zyngier
2021-06-07 18:34       ` Jain, Jinank
2021-06-08  8:18         ` Marc Zyngier
2021-06-08  8:24           ` Jain, Jinank
2021-06-07 18:58 ` [PATCH v2] " Jinank Jain
