From: Andrew Murray <andrew.murray@arm.com>
To: Marc Zyngier <maz@kernel.org>
Cc: Catalin Marinas <Catalin.Marinas@arm.com>,
	Mark Rutland <Mark.Rutland@arm.com>,
	will@kernel.org, Sudeep Holla <Sudeep.Holla@arm.com>,
	kvm@vger.kernel.org, kvmarm <kvmarm@lists.cs.columbia.edu>,
	linux-arm-kernel <linux-arm-kernel@lists.infradead.org>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 09/18] arm64: KVM: enable conditional save/restore full SPE profiling buffer controls
Date: Tue, 7 Jan 2020 15:13:29 +0000
Message-ID: <20200107151328.GW42593@e119886-lin.cambridge.arm.com>
In-Reply-To: <20191221141325.5a177343@why>

On Sat, Dec 21, 2019 at 02:13:25PM +0000, Marc Zyngier wrote:
> On Fri, 20 Dec 2019 14:30:16 +0000
> Andrew Murray <andrew.murray@arm.com> wrote:
> 
> [somehow managed not to do a reply all, re-sending]
> 
> > From: Sudeep Holla <sudeep.holla@arm.com>
> > 
> > Now that we can save/restore the full SPE controls, we can enable it
> > if SPE is set up and ready to use in KVM. It's supported in KVM only
> > if all the CPUs in the system support SPE.
> > 
> > However, to support heterogeneous systems, we need to move the check
> > of whether the host supports SPE and do a partial save/restore.
> 
> No. Let's just not go down that path. For now, KVM on heterogeneous
> systems does not get SPE.

At present these patches only offer the SPE feature to VCPUs when the
sanitised AA64DFR0 register indicates that all CPUs have this support
(kvm_arm_support_spe_v1) at the time the attribute is set
(KVM_SET_DEVICE_ATTR).
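
For illustration, that system-wide check amounts to something like the
sketch below, built on the sanitised-register helpers (not necessarily
the exact code in the series):

	/* Sketch only: offer SPE to guests iff *all* CPUs implement it,
	 * as reported by the sanitised copy of ID_AA64DFR0_EL1. */
	static inline bool kvm_arm_support_spe_v1(void)
	{
		u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);

		return !!cpuid_feature_extract_unsigned_field(dfr0,
						ID_AA64DFR0_PMSVER_SHIFT);
	}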

Therefore, if a new CPU comes online without SPE support and an existing
VCPU is scheduled onto it, bad things happen - which I guess must have
been the motivation behind this patch.


> If SPE has been enabled on a guest and a CPU
> comes up without SPE, this CPU should fail to boot (same as exposing a
> feature to userspace).

I'm unclear as to how to prevent this. We can set the FTR_STRICT flag on
the sanitised register - thus tainting the kernel if such a non-SPE CPU
comes online - though that doesn't prevent KVM from blowing up. Nor do I
believe we can prevent a CPU from coming up. At the moment this is my
preferred approach.
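
(For reference, that would mean marking the PMSVer field of
ID_AA64DFR0_EL1 as strict in the cpufeature.c table - roughly the entry
below, written from memory rather than copied from the tree:)

	/* Sketch: taint on mismatch instead of quietly tolerating it. */
	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE,
		       ID_AA64DFR0_PMSVER_SHIFT, 4, 0),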

Looking at the vcpu_load and related code, I don't see a way of saying
'don't schedule this VCPU on this CPU' or bailing in any way.
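
(The load hook itself is the crux - it returns void, so there is no
error path to bail through:)

	/* From the generic KVM interface: no way to refuse the CPU here. */
	void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu);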

One solution could be to allow SPE-enabled VCPUs to be scheduled onto
non-SPE CPUs, but wrap the SPE save/restore code in a macro (much like
kvm_arm_spe_v1_ready) that reads the non-sanitised feature register.
That way we don't go bang, but we also increase the size of any black
holes in SPE capture. Though this feels like something that will cause
grief down the line.
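
(That guard is essentially the per-CPU check this series removes from
__debug_save_spe_nvhe(), e.g. something like:)

	/* Sketch: test the *current* CPU's ID register, not the
	 * sanitised system-wide view. */
	#define cpu_has_spe()						\
		cpuid_feature_extract_unsigned_field(			\
			read_sysreg(id_aa64dfr0_el1),			\
			ID_AA64DFR0_PMSVER_SHIFT)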

Is there something else that can be done?

Thanks,

Andrew Murray

> 
> > 
> > Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > ---
> >  arch/arm64/kvm/hyp/debug-sr.c | 33 ++++++++++++++++-----------------
> >  include/kvm/arm_spe.h         |  6 ++++++
> >  2 files changed, 22 insertions(+), 17 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
> > index 12429b212a3a..d8d857067e6d 100644
> > --- a/arch/arm64/kvm/hyp/debug-sr.c
> > +++ b/arch/arm64/kvm/hyp/debug-sr.c
> > @@ -86,18 +86,13 @@
> >  	}
> >  
> >  static void __hyp_text
> > -__debug_save_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
> > +__debug_save_spe_context(struct kvm_cpu_context *ctxt, bool full_ctxt)
> >  {
> >  	u64 reg;
> >  
> >  	/* Clear pmscr in case of early return */
> >  	ctxt->sys_regs[PMSCR_EL1] = 0;
> >  
> > -	/* SPE present on this CPU? */
> > -	if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
> > -						  ID_AA64DFR0_PMSVER_SHIFT))
> > -		return;
> > -
> >  	/* Yes; is it owned by higher EL? */
> >  	reg = read_sysreg_s(SYS_PMBIDR_EL1);
> >  	if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT))
> > @@ -142,7 +137,7 @@ __debug_save_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
> >  }
> >  
> >  static void __hyp_text
> > -__debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
> > +__debug_restore_spe_context(struct kvm_cpu_context *ctxt, bool full_ctxt)
> >  {
> >  	if (!ctxt->sys_regs[PMSCR_EL1])
> >  		return;
> > @@ -210,11 +205,14 @@ void __hyp_text __debug_restore_guest_context(struct kvm_vcpu *vcpu)
> >  	struct kvm_guest_debug_arch *host_dbg;
> >  	struct kvm_guest_debug_arch *guest_dbg;
> >  
> > +	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
> > +	guest_ctxt = &vcpu->arch.ctxt;
> > +
> > +	__debug_restore_spe_context(guest_ctxt, kvm_arm_spe_v1_ready(vcpu));
> > +
> >  	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
> >  		return;
> >  
> > -	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
> > -	guest_ctxt = &vcpu->arch.ctxt;
> >  	host_dbg = &vcpu->arch.host_debug_state.regs;
> >  	guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr);
> >  
> > @@ -232,8 +230,7 @@ void __hyp_text __debug_restore_host_context(struct kvm_vcpu *vcpu)
> >  	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
> >  	guest_ctxt = &vcpu->arch.ctxt;
> >  
> > -	if (!has_vhe())
> > -		__debug_restore_spe_nvhe(host_ctxt, false);
> > +	__debug_restore_spe_context(host_ctxt, kvm_arm_spe_v1_ready(vcpu));
> 
> So you now do an unconditional save/restore on the exit path for VHE as
> well? Even if the host isn't using the SPE HW? That's not acceptable
> as, in most cases, only the host /or/ the guest will use SPE. Here, you
> put a measurable overhead on each exit.
> 
> If the host is not using SPE, then the restore/save should happen in
> vcpu_load/vcpu_put. Only if the host is using SPE should you do
> something in the run loop. Of course, this only applies to VHE and
> non-VHE must switch eagerly.
> 
> >  
> >  	if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
> >  		return;
> > @@ -249,19 +246,21 @@ void __hyp_text __debug_restore_host_context(struct kvm_vcpu *vcpu)
> >  
> >  void __hyp_text __debug_save_host_context(struct kvm_vcpu *vcpu)
> >  {
> > -	/*
> > -	 * Non-VHE: Disable and flush SPE data generation
> > -	 * VHE: The vcpu can run, but it can't hide.
> > -	 */
> >  	struct kvm_cpu_context *host_ctxt;
> >  
> >  	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
> > -	if (!has_vhe())
> > -		__debug_save_spe_nvhe(host_ctxt, false);
> > +	if (cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
> > +						 ID_AA64DFR0_PMSVER_SHIFT))
> > +		__debug_save_spe_context(host_ctxt, kvm_arm_spe_v1_ready(vcpu));
> >  }
> >  
> >  void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
> >  {
> > +	bool kvm_spe_ready = kvm_arm_spe_v1_ready(vcpu);
> > +
> > +	/* SPE present on this vCPU? */
> > +	if (kvm_spe_ready)
> > +		__debug_save_spe_context(&vcpu->arch.ctxt, kvm_spe_ready);
> >  }
> >  
> >  u32 __hyp_text __kvm_get_mdcr_el2(void)
> > diff --git a/include/kvm/arm_spe.h b/include/kvm/arm_spe.h
> > index 48d118fdb174..30c40b1bc385 100644
> > --- a/include/kvm/arm_spe.h
> > +++ b/include/kvm/arm_spe.h
> > @@ -16,4 +16,10 @@ struct kvm_spe {
> >  	bool irq_level;
> >  };
> >  
> > +#ifdef CONFIG_KVM_ARM_SPE
> > +#define kvm_arm_spe_v1_ready(v)		((v)->arch.spe.ready)
> > +#else
> > +#define kvm_arm_spe_v1_ready(v)		(false)
> > +#endif /* CONFIG_KVM_ARM_SPE */
> > +
> >  #endif /* __ASM_ARM_KVM_SPE_H */
> 
> Thanks,
> 
> 	M.
> -- 
> Jazz is not dead. It just smells funny...
