From: Shannon Zhao <zhaoshenglong@huawei.com>
To: Marc Zyngier <marc.zyngier@arm.com>,
	<kvmarm@lists.cs.columbia.edu>, <christoffer.dall@linaro.org>
Cc: <linux-arm-kernel@lists.infradead.org>, <kvm@vger.kernel.org>,
	<will.deacon@arm.com>, <wei@redhat.com>, <cov@codeaurora.org>,
	<shannon.zhao@linaro.org>, <peter.huangpeng@huawei.com>,
	<hangaohuai@huawei.com>
Subject: Re: [PATCH v8 16/20] KVM: ARM64: Add access handler for PMUSERENR register
Date: Thu, 7 Jan 2016 19:15:05 +0800	[thread overview]
Message-ID: <568E48B9.8040006@huawei.com> (raw)
In-Reply-To: <568E3A6C.2010404@arm.com>



On 2016/1/7 18:14, Marc Zyngier wrote:
> On 22/12/15 08:08, Shannon Zhao wrote:
>> > From: Shannon Zhao <shannon.zhao@linaro.org>
>> > 
>> > This register resets to an UNKNOWN value in 64-bit mode, while it resets
>> > to zero in 32-bit mode. Here we choose to reset it to zero for consistency.
>> > 
>> > PMUSERENR_EL0 holds some bits which decide whether PMU registers can be
>> > accessed from EL0. Add some check helpers to handle the access from EL0.
>> > 
>> > When these bits are zero, only reads of PMUSERENR trap to EL2, while
>> > writes to PMUSERENR and reads/writes of the other PMU registers trap to
>> > EL1 rather than EL2 when HCR.TGE==0. With the current KVM configuration
>> > (HCR.TGE==0) there is no way to receive these traps at EL2. So we write
>> > 0xf to the physical PMUSERENR register on VM entry, which traps all PMU
>> > accesses from EL0 to EL2. Within the register access handler we check
>> > the real value of the guest's PMUSERENR register to decide whether the
>> > access is allowed; if it is not, we forward the trap to EL1.
>> > 
>> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> > ---
>> >  arch/arm64/include/asm/pmu.h |   9 ++++
>> >  arch/arm64/kvm/hyp/switch.c  |   3 ++
>> >  arch/arm64/kvm/sys_regs.c    | 122 +++++++++++++++++++++++++++++++++++++++++--
>> >  3 files changed, 129 insertions(+), 5 deletions(-)
>> > 
>> > diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
>> > index 2588f9c..1238ade 100644
>> > --- a/arch/arm64/include/asm/pmu.h
>> > +++ b/arch/arm64/include/asm/pmu.h
>> > @@ -67,4 +67,13 @@
>> >  #define	ARMV8_EXCLUDE_EL0	(1 << 30)
>> >  #define	ARMV8_INCLUDE_EL2	(1 << 27)
>> >  
>> > +/*
>> > + * PMUSERENR: user enable reg
>> > + */
>> > +#define ARMV8_USERENR_MASK	0xf		/* Mask for writable bits */
>> > +#define ARMV8_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
>> > +#define ARMV8_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
>> > +#define ARMV8_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
>> > +#define ARMV8_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
>> > +
>> >  #endif /* __ASM_PMU_H */
>> > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
>> > index ca8f5a5..a85375f 100644
>> > --- a/arch/arm64/kvm/hyp/switch.c
>> > +++ b/arch/arm64/kvm/hyp/switch.c
>> > @@ -37,6 +37,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
>> >  	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
>> >  	write_sysreg(1 << 15, hstr_el2);
>> >  	write_sysreg(CPTR_EL2_TTA | CPTR_EL2_TFP, cptr_el2);
>> > +	/* Make sure we trap PMU access from EL0 to EL2 */
>> > +	write_sysreg(15, pmuserenr_el0);
> Please use the ARMV8_USERENR_* constants here instead of a magic number
> (since you went through the hassle of defining them!).
> 
Ok.
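
Something like this, then (just a sketch of what I intend to do, reusing
the constants defined in pmu.h above; the value written is still 0xf):

	/* Make sure we trap PMU access from EL0 to EL2 */
	write_sysreg(ARMV8_USERENR_ER | ARMV8_USERENR_CR |
		     ARMV8_USERENR_SW | ARMV8_USERENR_EN, pmuserenr_el0);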

>> >  	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
>> >  }
>> >  
>> > @@ -45,6 +47,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
>> >  	write_sysreg(HCR_RW, hcr_el2);
>> >  	write_sysreg(0, hstr_el2);
>> >  	write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
>> > +	write_sysreg(0, pmuserenr_el0);
>> >  	write_sysreg(0, cptr_el2);
>> >  }
>> >  
>> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> > index 04281f1..ac0cbf8 100644
>> > --- a/arch/arm64/kvm/sys_regs.c
>> > +++ b/arch/arm64/kvm/sys_regs.c
>> > @@ -453,11 +453,47 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>> >  	vcpu_sys_reg(vcpu, r->reg) = val;
>> >  }
>> >  
>> > +static inline bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
> Please drop all the inline attributes. The compiler knows its stuff well
> enough to do it automagically, and this is hardly a fast path...
> 
>> > +{
>> > +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>> > +
>> > +	return !((reg & ARMV8_USERENR_EN) || vcpu_mode_priv(vcpu));
>> > +}
>> > +
>> > +static inline bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
>> > +{
>> > +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>> > +
>> > +	return !((reg & (ARMV8_USERENR_SW | ARMV8_USERENR_EN))
>> > +		 || vcpu_mode_priv(vcpu));
>> > +}
>> > +
>> > +static inline bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu)
>> > +{
>> > +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>> > +
>> > +	return !((reg & (ARMV8_USERENR_CR | ARMV8_USERENR_EN))
>> > +		 || vcpu_mode_priv(vcpu));
>> > +}
>> > +
>> > +static inline bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
>> > +{
>> > +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>> > +
>> > +	return !((reg & (ARMV8_USERENR_ER | ARMV8_USERENR_EN))
>> > +		 || vcpu_mode_priv(vcpu));
>> > +}
>> > +
>> >  static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>> >  			const struct sys_reg_desc *r)
>> >  {
>> >  	u64 val;
>> >  
>> > +	if (pmu_access_el0_disabled(vcpu)) {
>> > +		kvm_forward_trap_to_el1(vcpu);
>> > +		return true;
>> > +	}
> So with the patch I posted earlier
> (http://www.spinics.net/lists/arm-kernel/msg472693.html), all the
> instances similar to that code can be rewritten as
> 
> +       if (pmu_access_el0_disabled(vcpu))
> +               return false;
> 
> You can then completely drop both patch 15 and my original patch to fix
> the PC stuff (which is far from being perfect, as noted by Peter).
Yeah, will fix this.
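
With that, the EL0 check in access_pmcr (and in the other handlers that
follow the same pattern) would reduce to something like this (a sketch
only, assuming your earlier patch injects the exception into the guest
when a handler returns false):

	static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
				const struct sys_reg_desc *r)
	{
		if (pmu_access_el0_disabled(vcpu))
			return false;

		/* ... existing PMCR read/write emulation, unchanged ... */

		return true;
	}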

Thanks,
-- 
Shannon


