* [PATCH] arm64: use sanitized feature registers for conditional ZCR_EL1 and SMCR_EL1 reads
@ 2022-07-13 18:18 Peter Collingbourne
2022-07-13 18:41 ` Mark Brown
2022-07-14 6:47 ` Marc Zyngier
0 siblings, 2 replies; 5+ messages in thread
From: Peter Collingbourne @ 2022-07-13 18:18 UTC (permalink / raw)
To: Marc Zyngier
Cc: Peter Collingbourne, linux-arm-kernel, Will Deacon,
Catalin Marinas, Mark Rutland, Ard Biesheuvel, broonie,
danielmentz, saravanak
With arm64.nosve we would still read ZCR_EL1 in __cpuinfo_store_cpu
because the condition for reading it was based on the unsanitized feature
register value info->reg_id_aa64pfr0. Fix the problem by moving the reads
to init_cpu_features after we have computed the sanitized value. Fix
the SMCR_EL1 read for SME similarly.
Fixes: 504ee23611c4 ("arm64: Add the arm64.nosve command line option")
Signed-off-by: Peter Collingbourne <pcc@google.com>
---
arch/arm64/kernel/cpufeature.c | 11 +++++++----
arch/arm64/kernel/cpuinfo.c | 8 --------
2 files changed, 7 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e5afa9eba85d..71d290fec36f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -999,15 +999,18 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
init_32bit_cpu_features(&info->aarch32);
- if (id_aa64pfr0_sve(info->reg_id_aa64pfr0)) {
+ if (IS_ENABLED(CONFIG_ARM64_SVE) &&
+ id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
+ info->reg_zcr = read_zcr_features();
init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr);
vec_init_vq_map(ARM64_VEC_SVE);
}
- if (id_aa64pfr1_sme(info->reg_id_aa64pfr1)) {
+ if (IS_ENABLED(CONFIG_ARM64_SME) &&
+ id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
+ info->reg_smcr = read_smcr_features();
init_cpu_ftr_reg(SYS_SMCR_EL1, info->reg_smcr);
- if (IS_ENABLED(CONFIG_ARM64_SME))
- vec_init_vq_map(ARM64_VEC_SME);
+ vec_init_vq_map(ARM64_VEC_SME);
}
if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 8eff0a34ffd4..66bc6f25d3b4 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -418,14 +418,6 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
__cpuinfo_store_cpu_32bit(&info->aarch32);
- if (IS_ENABLED(CONFIG_ARM64_SVE) &&
- id_aa64pfr0_sve(info->reg_id_aa64pfr0))
- info->reg_zcr = read_zcr_features();
-
- if (IS_ENABLED(CONFIG_ARM64_SME) &&
- id_aa64pfr1_sme(info->reg_id_aa64pfr1))
- info->reg_smcr = read_smcr_features();
-
cpuinfo_detect_icache_policy(info);
}
--
2.37.0.144.g8ac04bfd2-goog
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

* Re: [PATCH] arm64: use sanitized feature registers for conditional ZCR_EL1 and SMCR_EL1 reads
2022-07-13 18:18 [PATCH] arm64: use sanitized feature registers for conditional ZCR_EL1 and SMCR_EL1 reads Peter Collingbourne
@ 2022-07-13 18:41 ` Mark Brown
2022-07-14 6:47 ` Marc Zyngier
1 sibling, 0 replies; 5+ messages in thread
From: Mark Brown @ 2022-07-13 18:41 UTC (permalink / raw)
To: Peter Collingbourne
Cc: Marc Zyngier, linux-arm-kernel, Will Deacon, Catalin Marinas,
Mark Rutland, Ard Biesheuvel, danielmentz, saravanak
On Wed, Jul 13, 2022 at 11:18:52AM -0700, Peter Collingbourne wrote:
> With arm64.nosve we would still read ZCR_EL1 in __cpuinfo_store_cpu
> because the condition for reading it was based on the unsanitized feature
> register value info->reg_id_aa64pfr0. Fix the problem by moving the reads
> to init_cpu_features after we have computed the sanitized value. Fix
> the SMCR_EL1 read for SME similarly.
Reviewed-by: Mark Brown <broonie@kernel.org>
* Re: [PATCH] arm64: use sanitized feature registers for conditional ZCR_EL1 and SMCR_EL1 reads
2022-07-13 18:18 [PATCH] arm64: use sanitized feature registers for conditional ZCR_EL1 and SMCR_EL1 reads Peter Collingbourne
2022-07-13 18:41 ` Mark Brown
@ 2022-07-14 6:47 ` Marc Zyngier
2022-07-14 13:25 ` Mark Brown
2022-07-14 17:47 ` Peter Collingbourne
1 sibling, 2 replies; 5+ messages in thread
From: Marc Zyngier @ 2022-07-14 6:47 UTC (permalink / raw)
To: Peter Collingbourne
Cc: linux-arm-kernel, Will Deacon, Catalin Marinas, Mark Rutland,
Ard Biesheuvel, broonie, danielmentz, saravanak
On Wed, 13 Jul 2022 19:18:52 +0100,
Peter Collingbourne <pcc@google.com> wrote:
>
> With arm64.nosve we would still read ZCR_EL1 in __cpuinfo_store_cpu
> because the condition for reading it was based on the unsanitized feature
> register value info->reg_id_aa64pfr0. Fix the problem by moving the reads
> to init_cpu_features after we have computed the sanitized value. Fix
> the SMCR_EL1 read for SME similarly.
>
> Fixes: 504ee23611c4 ("arm64: Add the arm64.nosve command line option")
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> ---
> arch/arm64/kernel/cpufeature.c | 11 +++++++----
> arch/arm64/kernel/cpuinfo.c | 8 --------
> 2 files changed, 7 insertions(+), 12 deletions(-)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index e5afa9eba85d..71d290fec36f 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -999,15 +999,18 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
> if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
> init_32bit_cpu_features(&info->aarch32);
>
> - if (id_aa64pfr0_sve(info->reg_id_aa64pfr0)) {
> + if (IS_ENABLED(CONFIG_ARM64_SVE) &&
> + id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
> + info->reg_zcr = read_zcr_features();
> init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr);
> vec_init_vq_map(ARM64_VEC_SVE);
> }
>
> - if (id_aa64pfr1_sme(info->reg_id_aa64pfr1)) {
> + if (IS_ENABLED(CONFIG_ARM64_SME) &&
> + id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
> + info->reg_smcr = read_smcr_features();
> init_cpu_ftr_reg(SYS_SMCR_EL1, info->reg_smcr);
> - if (IS_ENABLED(CONFIG_ARM64_SME))
> - vec_init_vq_map(ARM64_VEC_SME);
> + vec_init_vq_map(ARM64_VEC_SME);
> }
>
> if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
> diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
> index 8eff0a34ffd4..66bc6f25d3b4 100644
> --- a/arch/arm64/kernel/cpuinfo.c
> +++ b/arch/arm64/kernel/cpuinfo.c
> @@ -418,14 +418,6 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
> if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
> __cpuinfo_store_cpu_32bit(&info->aarch32);
>
> - if (IS_ENABLED(CONFIG_ARM64_SVE) &&
> - id_aa64pfr0_sve(info->reg_id_aa64pfr0))
> - info->reg_zcr = read_zcr_features();
> -
> - if (IS_ENABLED(CONFIG_ARM64_SME) &&
> - id_aa64pfr1_sme(info->reg_id_aa64pfr1))
> - info->reg_smcr = read_smcr_features();
Not sure what this applies on, but that's a kernel that doesn't
contain d69d564964872 ("arm64/sme: Expose SMIDR through sysfs").
> -
> cpuinfo_detect_icache_policy(info);
> }
>
This looks wrong to me. With this change, a secondary CPU never
initialises its own view of reg_{zcr,smcr}. I came up with the
following patch instead, also updating the cpuinfo_arm64 structure
when updating the capabilities on secondary CPU boot (on top of
arm64/boot):
From ee8d6a3741229336acedf13aa1304e05ed630ff0 Mon Sep 17 00:00:00 2001
From: Marc Zyngier <maz@kernel.org>
Date: Mon, 11 Jul 2022 16:43:36 +0100
Subject: [PATCH] arm64: Delay initialisation of cpuinfo_arm64::reg_{zcr,smcr}
Even if we are now able to tell the kernel to avoid exposing SVE/SME
from the command line, we still have a couple of places where we
unconditionally access the ZCR_EL1 (resp. SMCR_EL1) registers.
On systems with broken firmwares, this results in a crash even if
arm64.nosve (resp. arm64.nosme) was passed on the command-line.
To avoid this, only update cpuinfo_arm64::reg_{zcr,smcr} once
we have computed the sanitised version for the corresponding
feature registers (ID_AA64PFR0 for SVE, and ID_AA64PFR1 for
SME). This results in some minor refactoring.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/cpufeature.c | 41 ++++++++++++++++++++++++----------
arch/arm64/kernel/cpuinfo.c | 16 -------------
2 files changed, 29 insertions(+), 28 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 1707576b7ca0..6adca2f337e8 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1000,15 +1000,24 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
init_32bit_cpu_features(&info->aarch32);
- if (id_aa64pfr0_sve(info->reg_id_aa64pfr0)) {
+ if (IS_ENABLED(CONFIG_ARM64_SVE) &&
+ id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
+ info->reg_zcr = read_zcr_features();
init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr);
vec_init_vq_map(ARM64_VEC_SVE);
}
- if (id_aa64pfr1_sme(info->reg_id_aa64pfr1)) {
+ if (IS_ENABLED(CONFIG_ARM64_SME) &&
+ id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
+ info->reg_smcr = read_smcr_features();
+ /*
+ * We mask out SMPS since even if the hardware
+ * supports priorities the kernel does not at present
+ * and we block access to them.
+ */
+ info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
init_cpu_ftr_reg(SYS_SMCR_EL1, info->reg_smcr);
- if (IS_ENABLED(CONFIG_ARM64_SME))
- vec_init_vq_map(ARM64_VEC_SME);
+ vec_init_vq_map(ARM64_VEC_SME);
}
if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
@@ -1240,23 +1249,31 @@ void update_cpu_features(int cpu,
taint |= check_update_ftr_reg(SYS_ID_AA64SMFR0_EL1, cpu,
info->reg_id_aa64smfr0, boot->reg_id_aa64smfr0);
- if (id_aa64pfr0_sve(info->reg_id_aa64pfr0)) {
+ if (IS_ENABLED(CONFIG_ARM64_SVE) &&
+ id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
+ info->reg_zcr = read_zcr_features();
taint |= check_update_ftr_reg(SYS_ZCR_EL1, cpu,
info->reg_zcr, boot->reg_zcr);
- /* Probe vector lengths, unless we already gave up on SVE */
- if (id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1)) &&
- !system_capabilities_finalized())
+ /* Probe vector lengths */
+ if (!system_capabilities_finalized())
vec_update_vq_map(ARM64_VEC_SVE);
}
- if (id_aa64pfr1_sme(info->reg_id_aa64pfr1)) {
+ if (IS_ENABLED(CONFIG_ARM64_SME) &&
+ id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
+ info->reg_smcr = read_smcr_features();
+ /*
+ * We mask out SMPS since even if the hardware
+ * supports priorities the kernel does not at present
+ * and we block access to them.
+ */
+ info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
taint |= check_update_ftr_reg(SYS_SMCR_EL1, cpu,
info->reg_smcr, boot->reg_smcr);
- /* Probe vector lengths, unless we already gave up on SME */
- if (id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1)) &&
- !system_capabilities_finalized())
+ /* Probe vector lengths */
+ if (!system_capabilities_finalized())
vec_update_vq_map(ARM64_VEC_SME);
}
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 3628b8a28e92..5b53d275adc1 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -438,22 +438,6 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
__cpuinfo_store_cpu_32bit(&info->aarch32);
- if (IS_ENABLED(CONFIG_ARM64_SVE) &&
- id_aa64pfr0_sve(info->reg_id_aa64pfr0))
- info->reg_zcr = read_zcr_features();
-
- if (IS_ENABLED(CONFIG_ARM64_SME) &&
- id_aa64pfr1_sme(info->reg_id_aa64pfr1)) {
- info->reg_smcr = read_smcr_features();
-
- /*
- * We mask out SMPS since even if the hardware
- * supports priorities the kernel does not at present
- * and we block access to them.
- */
- info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
- }
-
cpuinfo_detect_icache_policy(info);
}
--
2.34.1
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH] arm64: use sanitized feature registers for conditional ZCR_EL1 and SMCR_EL1 reads
2022-07-14 6:47 ` Marc Zyngier
@ 2022-07-14 13:25 ` Mark Brown
2022-07-14 17:47 ` Peter Collingbourne
1 sibling, 0 replies; 5+ messages in thread
From: Mark Brown @ 2022-07-14 13:25 UTC (permalink / raw)
To: Marc Zyngier
Cc: Peter Collingbourne, linux-arm-kernel, Will Deacon,
Catalin Marinas, Mark Rutland, Ard Biesheuvel, danielmentz,
saravanak
On Thu, Jul 14, 2022 at 07:47:51AM +0100, Marc Zyngier wrote:
> Peter Collingbourne <pcc@google.com> wrote:
> > cpuinfo_detect_icache_policy(info);
> > }
> >
>
> This looks wrong to me. With this change, a secondary CPU never
> initialises its own view of reg_{zcr,smcr}. I came up with the
> following patch instead, also updating the cpuinfo_arm64 structure
> when updating the capabilities on secondary CPU boot (on top of
> arm64/boot):
Ugh, clearly I'm not up to speed enough with how all this stuff
bootstraps :/
> From ee8d6a3741229336acedf13aa1304e05ed630ff0 Mon Sep 17 00:00:00 2001
> From: Marc Zyngier <maz@kernel.org>
> Date: Mon, 11 Jul 2022 16:43:36 +0100
> Subject: [PATCH] arm64: Delay initialisation of cpuinfo_arm64::reg_{zcr,smcr}
>
> Even if we are now able to tell the kernel to avoid exposing SVE/SME
> from the command line, we still have a couple of places where we
> unconditionally access the ZCR_EL1 (resp. SMCR_EL1) registers.
>
> On systems with broken firmwares, this results in a crash even if
> arm64.nosve (resp. arm64.nosme) was passed on the command-line.
>
> To avoid this, only update cpuinfo_arm64::reg_{zcr,smcr} once
> we have computed the sanitised version for the corresponding
> feature registers (ID_AA64PFR0 for SVE, and ID_AA64PFR1 for
> SME). This results in some minor refactoring.
FWIW
Reviewed-by: Mark Brown <broonie@kernel.org>
but that's more from a SVE/SME point of view than from an ID register
point of view.
* Re: [PATCH] arm64: use sanitized feature registers for conditional ZCR_EL1 and SMCR_EL1 reads
2022-07-14 6:47 ` Marc Zyngier
2022-07-14 13:25 ` Mark Brown
@ 2022-07-14 17:47 ` Peter Collingbourne
1 sibling, 0 replies; 5+ messages in thread
From: Peter Collingbourne @ 2022-07-14 17:47 UTC (permalink / raw)
To: Marc Zyngier
Cc: Linux ARM, Will Deacon, Catalin Marinas, Mark Rutland,
Ard Biesheuvel, Mark Brown, Daniel Mentz, Saravana Kannan
On Wed, Jul 13, 2022 at 11:48 PM Marc Zyngier <maz@kernel.org> wrote:
>
> On Wed, 13 Jul 2022 19:18:52 +0100,
> Peter Collingbourne <pcc@google.com> wrote:
> >
> > With arm64.nosve we would still read ZCR_EL1 in __cpuinfo_store_cpu
> > because the condition for reading it was based on the unsanitized feature
> > register value info->reg_id_aa64pfr0. Fix the problem by moving the reads
> > to init_cpu_features after we have computed the sanitized value. Fix
> > the SMCR_EL1 read for SME similarly.
> >
> > Fixes: 504ee23611c4 ("arm64: Add the arm64.nosve command line option")
> > Signed-off-by: Peter Collingbourne <pcc@google.com>
> > ---
> > arch/arm64/kernel/cpufeature.c | 11 +++++++----
> > arch/arm64/kernel/cpuinfo.c | 8 --------
> > 2 files changed, 7 insertions(+), 12 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > index e5afa9eba85d..71d290fec36f 100644
> > --- a/arch/arm64/kernel/cpufeature.c
> > +++ b/arch/arm64/kernel/cpufeature.c
> > @@ -999,15 +999,18 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
> > if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
> > init_32bit_cpu_features(&info->aarch32);
> >
> > - if (id_aa64pfr0_sve(info->reg_id_aa64pfr0)) {
> > + if (IS_ENABLED(CONFIG_ARM64_SVE) &&
> > + id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
> > + info->reg_zcr = read_zcr_features();
> > init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr);
> > vec_init_vq_map(ARM64_VEC_SVE);
> > }
> >
> > - if (id_aa64pfr1_sme(info->reg_id_aa64pfr1)) {
> > + if (IS_ENABLED(CONFIG_ARM64_SME) &&
> > + id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
> > + info->reg_smcr = read_smcr_features();
> > init_cpu_ftr_reg(SYS_SMCR_EL1, info->reg_smcr);
> > - if (IS_ENABLED(CONFIG_ARM64_SME))
> > - vec_init_vq_map(ARM64_VEC_SME);
> > + vec_init_vq_map(ARM64_VEC_SME);
> > }
> >
> > if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
> > diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
> > index 8eff0a34ffd4..66bc6f25d3b4 100644
> > --- a/arch/arm64/kernel/cpuinfo.c
> > +++ b/arch/arm64/kernel/cpuinfo.c
> > @@ -418,14 +418,6 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
> > if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
> > __cpuinfo_store_cpu_32bit(&info->aarch32);
> >
> > - if (IS_ENABLED(CONFIG_ARM64_SVE) &&
> > - id_aa64pfr0_sve(info->reg_id_aa64pfr0))
> > - info->reg_zcr = read_zcr_features();
> > -
> > - if (IS_ENABLED(CONFIG_ARM64_SME) &&
> > - id_aa64pfr1_sme(info->reg_id_aa64pfr1))
> > - info->reg_smcr = read_smcr_features();
>
> Not sure what this applies on, but that's a kernel that doesn't
> contain d69d564964872 ("arm64/sme: Expose SMIDR through sysfs").
Right, I based this on for-next/boot, which didn't contain that patch.
> > -
> > cpuinfo_detect_icache_policy(info);
> > }
> >
>
> This looks wrong to me. With this change, a secondary CPU never
> initialises its own view of reg_{zcr,smcr}.
I see. I guess I managed to confuse myself about the flow for secondary CPUs.
> I came up with the
> following patch instead, also updating the cpuinfo_arm64 structure
> when updating the capabilities on secondary CPU boot (on top of
> arm64/boot):
Yes, that looks better.
> From ee8d6a3741229336acedf13aa1304e05ed630ff0 Mon Sep 17 00:00:00 2001
> From: Marc Zyngier <maz@kernel.org>
> Date: Mon, 11 Jul 2022 16:43:36 +0100
> Subject: [PATCH] arm64: Delay initialisation of cpuinfo_arm64::reg_{zcr,smcr}
>
> Even if we are now able to tell the kernel to avoid exposing SVE/SME
> from the command line, we still have a couple of places where we
> unconditionally access the ZCR_EL1 (resp. SMCR_EL1) registers.
>
> On systems with broken firmwares, this results in a crash even if
> arm64.nosve (resp. arm64.nosme) was passed on the command-line.
>
> To avoid this, only update cpuinfo_arm64::reg_{zcr,smcr} once
> we have computed the sanitised version for the corresponding
> feature registers (ID_AA64PFR0 for SVE, and ID_AA64PFR1 for
> SME). This results in some minor refactoring.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Peter Collingbourne <pcc@google.com>
Tested-by: Peter Collingbourne <pcc@google.com>
Peter