* [PATCH 0/2] arm64: PMU: Sanitize usage of PMSELR_EL0.SEL
@ 2016-12-02 15:50 ` Marc Zyngier
  0 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2016-12-02 15:50 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland
  Cc: Itaru Kitayama, Catalin Marinas, Hoan Tran, linux-arm-kernel,
	kvmarm, Tai Tri Nguyen

An ugly interaction between the use of PMSELR_EL0 in a host kernel and
the use of PMXEVCNTR_EL0 in a guest has recently come to light [1],
leading to the guest taking an UNDEF exception in EL1 when using the
PMU.

The fix is pretty simple ("don't do that!"), making the PMU usable on
X-Gene, which seems to have a stricter (but nonetheless valid)
interpretation of the architecture.

Patches against 4.9-rc6.

[1] https://lists.cs.columbia.edu/pipermail/kvmarm/2016-November/022545.html

Marc Zyngier (2):
  arm64: PMU: Do not use PMSELR_EL0 to access PMCCFILTR_EL0
  arm64: PMU: Reset PMSELR_EL0 to a sane value at boot time

 arch/arm64/kernel/perf_event.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

-- 
2.1.4


* [PATCH 1/2] arm64: PMU: Do not use PMSELR_EL0 to access PMCCFILTR_EL0
  2016-12-02 15:50 ` Marc Zyngier
@ 2016-12-02 15:50   ` Marc Zyngier
  -1 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2016-12-02 15:50 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland
  Cc: Itaru Kitayama, Catalin Marinas, Hoan Tran, linux-arm-kernel,
	kvmarm, Tai Tri Nguyen

The ARMv8 architecture allows the cycle counter to be configured
by setting PMSELR_EL0.SEL==0x1f and then accessing PMXEVTYPER_EL0,
hence accessing PMCCFILTR_EL0. But it disallows the use of
PMSELR_EL0.SEL==0x1f to access the cycle counter itself through
PMXEVCNTR_EL0.

Linux itself doesn't violate this rule, but we may end up with
PMSELR_EL0.SEL being set to 0x1f when we enter a guest. If that
guest accesses PMXEVCNTR_EL0, the access may UNDEF at EL1,
despite the guest not having done anything wrong.

In order to avoid this unfortunate course of events (haha!), let's
apply the same method armv8pmu_write_counter and co are using,
explicitly checking for the cycle counter and writing to
PMCCFILTR_EL0 directly. This prevents writing 0x1f to PMSELR_EL0,
and saves a Linux guest an extra trap.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/perf_event.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 57ae9d9..a65b757 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -632,7 +632,10 @@ static inline void armv8pmu_write_counter(struct perf_event *event, u32 value)
 
 static inline void armv8pmu_write_evtype(int idx, u32 val)
 {
-	if (armv8pmu_select_counter(idx) == idx) {
+	if (idx == ARMV8_IDX_CYCLE_COUNTER) {
+		val &= ARMV8_PMU_EVTYPE_MASK & ~ARMV8_PMU_EVTYPE_EVENT;
+		write_sysreg(val, pmccfiltr_el0);
+	} else if (armv8pmu_select_counter(idx) == idx) {
 		val &= ARMV8_PMU_EVTYPE_MASK;
 		write_sysreg(val, pmxevtyper_el0);
 	}
-- 
2.1.4


* [PATCH 2/2] arm64: PMU: Reset PMSELR_EL0 to a sane value at boot time
  2016-12-02 15:50 ` Marc Zyngier
@ 2016-12-02 15:50   ` Marc Zyngier
  -1 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2016-12-02 15:50 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland
  Cc: Itaru Kitayama, Catalin Marinas, Hoan Tran, linux-arm-kernel,
	kvmarm, Tai Tri Nguyen

In order to avoid any ugly surprise, let's reset PMSELR_EL0 to the
first valid value (avoiding the cycle counter which has been proven
to be troublesome) at CPU boot time. This ensures that no guest will
be faced with some odd value which it cannot modify (due to
MDCR_EL2.TPM being set).

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/perf_event.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index a65b757..42d1840 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -910,6 +910,14 @@ static void armv8pmu_reset(void *info)
 	 */
 	armv8pmu_pmcr_write(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C |
 			    ARMV8_PMU_PMCR_LC);
+
+	/*
+	 * If we have at least one available counter, reset to that
+	 * one so that no illegal value is left in PMSELR_EL0, which
+	 * could have an impact on a guest.
+	 */
+	if (armv8pmu_counter_valid(cpu_pmu, ARMV8_IDX_COUNTER0))
+		armv8pmu_select_counter(ARMV8_IDX_COUNTER0);
 }
 
 static int armv8_pmuv3_map_event(struct perf_event *event)
-- 
2.1.4


* Re: [PATCH 0/2] arm64: PMU: Sanitize usage of PMSELR_EL0.SEL
  2016-12-02 15:50 ` Marc Zyngier
@ 2016-12-02 21:28   ` Itaru Kitayama
  -1 siblings, 0 replies; 12+ messages in thread
From: Itaru Kitayama @ 2016-12-02 21:28 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Hoan Tran, Will Deacon, linux-arm-kernel,
	kvmarm, Tai Tri Nguyen

Successfully verified them on a Rev B0 CPU. Thank you for quickly 
providing the fix.

On 2016/12/03 0:50, Marc Zyngier wrote:
> An ugly interaction between the use of PMSELR_EL0 in a host kernel and
> the use of PMXEVCNTR_EL0 in a guest has recently come to light [1],
> leading to the guest taking an UNDEF exception in EL1 when using the
> PMU.
>
> The fix is pretty simple ("don't do that!"), making the PMU usable on
> X-Gene, which seems to have a stricter (but nonetheless valid)
> interpretation of the architecture.
>
> Patches against 4.9-rc6.
>
> [1] https://lists.cs.columbia.edu/pipermail/kvmarm/2016-November/022545.html
>
> Marc Zyngier (2):
>   arm64: PMU: Do not use PMSELR_EL0 to access PMCCFILTR_EL0
>   arm64: PMU: Reset PMSELR_EL0 to a sane value at boot time
>
>  arch/arm64/kernel/perf_event.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>


* Re: [PATCH 1/2] arm64: PMU: Do not use PMSELR_EL0 to access PMCCFILTR_EL0
  2016-12-02 15:50   ` Marc Zyngier
@ 2016-12-06 13:50     ` Will Deacon
  -1 siblings, 0 replies; 12+ messages in thread
From: Will Deacon @ 2016-12-06 13:50 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Itaru Kitayama, Catalin Marinas, Hoan Tran, linux-arm-kernel,
	kvmarm, Tai Tri Nguyen

On Fri, Dec 02, 2016 at 03:50:58PM +0000, Marc Zyngier wrote:
> The ARMv8 architecture allows the cycle counter to be configured
> by setting PMSELR_EL0.SEL==0x1f and then accessing PMXEVTYPER_EL0,
> hence accessing PMCCFILTR_EL0. But it disallows the use of
> PMSELR_EL0.SEL==0x1f to access the cycle counter itself through
> PMXEVCNTR_EL0.
> 
> Linux itself doesn't violate this rule, but we may end up with
> PMSELR_EL0.SEL being set to 0x1f when we enter a guest. If that
> guest accesses PMXEVCNTR_EL0, the access may UNDEF at EL1,
> despite the guest not having done anything wrong.
> 
> In order to avoid this unfortunate course of events (haha!), let's
> apply the same method armv8pmu_write_counter and co are using,
> explicitly checking for the cycle counter and writing to
> PMCCFILTR_EL0 directly. This prevents writing 0x1f to PMSELR_EL0,
> and saves a Linux guest an extra trap.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/perf_event.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
> index 57ae9d9..a65b757 100644
> --- a/arch/arm64/kernel/perf_event.c
> +++ b/arch/arm64/kernel/perf_event.c
> @@ -632,7 +632,10 @@ static inline void armv8pmu_write_counter(struct perf_event *event, u32 value)
>  
>  static inline void armv8pmu_write_evtype(int idx, u32 val)
>  {
> -	if (armv8pmu_select_counter(idx) == idx) {
> +	if (idx == ARMV8_IDX_CYCLE_COUNTER) {
> +		val &= ARMV8_PMU_EVTYPE_MASK & ~ARMV8_PMU_EVTYPE_EVENT;
> +		write_sysreg(val, pmccfiltr_el0);
> +	} else if (armv8pmu_select_counter(idx) == idx) {

If we go down this route, then we also have to "fix" the 32-bit code,
which uses PMSELR in a similar way. However, neither of the perf drivers
are actually doing anything wrong here -- the problem comes about because
the architecture doesn't guarantee that PMU accesses trap to EL2 unless
both MDCR.TPM=1 *and* PMSELR_EL0 is valid. So I think that this should
be handled together, in the KVM code that enables PMU traps.

Given that the perf callbacks tend to run with preemption disabled, I
think you should be fine nuking PMSELR_EL0 to zero (i.e. no need to
save/restore).

Will


* Re: [PATCH 1/2] arm64: PMU: Do not use PMSELR_EL0 to access PMCCFILTR_EL0
  2016-12-06 13:50     ` Will Deacon
@ 2016-12-06 14:32       ` Marc Zyngier
  -1 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2016-12-06 14:32 UTC (permalink / raw)
  To: Will Deacon
  Cc: Itaru Kitayama, Catalin Marinas, Hoan Tran, linux-arm-kernel,
	kvmarm, Tai Tri Nguyen

On 06/12/16 13:50, Will Deacon wrote:
> On Fri, Dec 02, 2016 at 03:50:58PM +0000, Marc Zyngier wrote:
>> The ARMv8 architecture allows the cycle counter to be configured
>> by setting PMSELR_EL0.SEL==0x1f and then accessing PMXEVTYPER_EL0,
>> hence accessing PMCCFILTR_EL0. But it disallows the use of
>> PMSELR_EL0.SEL==0x1f to access the cycle counter itself through
>> PMXEVCNTR_EL0.
>>
>> Linux itself doesn't violate this rule, but we may end up with
>> PMSELR_EL0.SEL being set to 0x1f when we enter a guest. If that
>> guest accesses PMXEVCNTR_EL0, the access may UNDEF at EL1,
>> despite the guest not having done anything wrong.
>>
>> In order to avoid this unfortunate course of events (haha!), let's
>> apply the same method armv8pmu_write_counter and co are using,
>> explicitly checking for the cycle counter and writing to
>> PMCCFILTR_EL0 directly. This prevents writing 0x1f to PMSELR_EL0,
>> and saves a Linux guest an extra trap.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  arch/arm64/kernel/perf_event.c | 5 ++++-
>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
>> index 57ae9d9..a65b757 100644
>> --- a/arch/arm64/kernel/perf_event.c
>> +++ b/arch/arm64/kernel/perf_event.c
>> @@ -632,7 +632,10 @@ static inline void armv8pmu_write_counter(struct perf_event *event, u32 value)
>>  
>>  static inline void armv8pmu_write_evtype(int idx, u32 val)
>>  {
>> -	if (armv8pmu_select_counter(idx) == idx) {
>> +	if (idx == ARMV8_IDX_CYCLE_COUNTER) {
>> +		val &= ARMV8_PMU_EVTYPE_MASK & ~ARMV8_PMU_EVTYPE_EVENT;
>> +		write_sysreg(val, pmccfiltr_el0);
>> +	} else if (armv8pmu_select_counter(idx) == idx) {
> 
> If we go down this route, then we also have to "fix" the 32-bit code,
> which uses PMSELR in a similar way. However, neither of the perf drivers
> are actually doing anything wrong here -- the problem comes about because
> the architecture doesn't guarantee that PMU accesses trap to EL2 unless
> both MDCR.TPM=1 *and* PMSELR_EL0 is valid. So I think that this should
> be handled together, in the KVM code that enables PMU traps.
> 
> Given that the perf callbacks tend to run with preemption disabled, I
> think you should be fine nuking PMSELR_EL0 to zero (i.e. no need to
> save/restore).

Fair enough. I'll respin another patch in a bit.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

