* [PATCH v2 0/6] An alternative series for asymmetric AArch32 systems
From: Will Deacon @ 2020-11-09 21:30 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-arch, Will Deacon, Catalin Marinas, Marc Zyngier,
	Greg Kroah-Hartman, Peter Zijlstra, Morten Rasmussen,
	Qais Yousef, Suren Baghdasaryan, Quentin Perret, kernel-team

Hello again,

This is version two of the ravingly popular series I previously posted
here:

https://lore.kernel.org/r/20201021104611.2744565-1-qais.yousef@arm.com

Changes since v1 include:

  * Fix setting of compat hwcaps
  * Simplify sysfs code to use DEVICE_ATTR_RO()
  * Allow onlining of late CPUs in face of mismatch
  * Use a static key in the context-switch path
  * Avoid printing that we detected 32-bit EL0 support when we didn't

This has unfortunately introduced more complexity than I would've liked,
but it also seems to be free of any rough edges.

I haven't yet tackled the execve() problem raised by Suren. I've got some
local hacks, but nothing I'm happy with yet. They will come as follow-up
patches in any case.

As before, I don't think we should merge this stuff until we've figured
out what's going on in Android, but hopefully we can reach some agreement
on the basics before then.

Cheers,

Will

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Qais Yousef <qais.yousef@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Quentin Perret <qperret@google.com>
Cc: kernel-team@android.com

--->8

Will Deacon (6):
  arm64: cpuinfo: Split AArch32 registers out into a separate struct
  arm64: Allow mismatched 32-bit EL0 support
  KVM: arm64: Kill 32-bit vCPUs on systems with mismatched EL0 support
  arm64: Kill 32-bit applications scheduled on 64-bit-only CPUs
  arm64: Advertise CPUs capable of running 32-bit applications in sysfs
  arm64: Hook up cmdline parameter to allow mismatched 32-bit EL0

 .../ABI/testing/sysfs-devices-system-cpu      |   9 +
 .../admin-guide/kernel-parameters.txt         |   7 +
 arch/arm64/include/asm/cpu.h                  |  44 ++--
 arch/arm64/include/asm/cpucaps.h              |   2 +-
 arch/arm64/include/asm/cpufeature.h           |   8 +-
 arch/arm64/kernel/cpufeature.c                | 198 ++++++++++++++----
 arch/arm64/kernel/cpuinfo.c                   |  53 ++---
 arch/arm64/kernel/process.c                   |  19 +-
 arch/arm64/kernel/signal.c                    |  26 +++
 arch/arm64/kvm/arm.c                          |  11 +-
 10 files changed, 288 insertions(+), 89 deletions(-)

-- 
2.29.2.222.g5d2a92d10f8-goog


* [PATCH v2 1/6] arm64: cpuinfo: Split AArch32 registers out into a separate struct
From: Will Deacon @ 2020-11-09 21:30 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-arch, Will Deacon, Catalin Marinas, Marc Zyngier,
	Greg Kroah-Hartman, Peter Zijlstra, Morten Rasmussen,
	Qais Yousef, Suren Baghdasaryan, Quentin Perret, kernel-team

In preparation for late initialisation of the "sanitised" AArch32 register
state, move the AArch32 registers out of 'struct cpuinfo_arm64' and into
their own struct definition.
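
For readers who prefer not to dig through the diff, the end result is
roughly the following nesting (abridged from the hunks below):

struct cpuinfo_32bit {
	u32	reg_id_dfr0;
	/* ... the remaining AArch32 ID and MVFR registers ... */
};

struct cpuinfo_arm64 {
	/* ... AArch64 ID registers ... */
	struct cpuinfo_32bit	aarch32;
	/* ... */
};

This allows a later patch to (re-)initialise the aarch32 member
independently of the 64-bit state.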

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpu.h   | 44 +++++++++++----------
 arch/arm64/kernel/cpufeature.c | 71 ++++++++++++++++++----------------
 arch/arm64/kernel/cpuinfo.c    | 53 +++++++++++++------------
 3 files changed, 89 insertions(+), 79 deletions(-)

diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
index 7faae6ff3ab4..f4e01aa0f442 100644
--- a/arch/arm64/include/asm/cpu.h
+++ b/arch/arm64/include/asm/cpu.h
@@ -12,26 +12,7 @@
 /*
  * Records attributes of an individual CPU.
  */
-struct cpuinfo_arm64 {
-	struct cpu	cpu;
-	struct kobject	kobj;
-	u32		reg_ctr;
-	u32		reg_cntfrq;
-	u32		reg_dczid;
-	u32		reg_midr;
-	u32		reg_revidr;
-
-	u64		reg_id_aa64dfr0;
-	u64		reg_id_aa64dfr1;
-	u64		reg_id_aa64isar0;
-	u64		reg_id_aa64isar1;
-	u64		reg_id_aa64mmfr0;
-	u64		reg_id_aa64mmfr1;
-	u64		reg_id_aa64mmfr2;
-	u64		reg_id_aa64pfr0;
-	u64		reg_id_aa64pfr1;
-	u64		reg_id_aa64zfr0;
-
+struct cpuinfo_32bit {
 	u32		reg_id_dfr0;
 	u32		reg_id_dfr1;
 	u32		reg_id_isar0;
@@ -54,6 +35,29 @@ struct cpuinfo_arm64 {
 	u32		reg_mvfr0;
 	u32		reg_mvfr1;
 	u32		reg_mvfr2;
+};
+
+struct cpuinfo_arm64 {
+	struct cpu	cpu;
+	struct kobject	kobj;
+	u32		reg_ctr;
+	u32		reg_cntfrq;
+	u32		reg_dczid;
+	u32		reg_midr;
+	u32		reg_revidr;
+
+	u64		reg_id_aa64dfr0;
+	u64		reg_id_aa64dfr1;
+	u64		reg_id_aa64isar0;
+	u64		reg_id_aa64isar1;
+	u64		reg_id_aa64mmfr0;
+	u64		reg_id_aa64mmfr1;
+	u64		reg_id_aa64mmfr2;
+	u64		reg_id_aa64pfr0;
+	u64		reg_id_aa64pfr1;
+	u64		reg_id_aa64zfr0;
+
+	struct cpuinfo_32bit	aarch32;
 
 	/* pseudo-ZCR for recording maximum ZCR_EL1 LEN value: */
 	u64		reg_zcr;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index dcc165b3fc04..d4a7e84b1513 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -819,6 +819,31 @@ static void __init init_cpu_hwcaps_indirect_list(void)
 
 static void __init setup_boot_cpu_capabilities(void);
 
+static void __init init_32bit_cpu_features(struct cpuinfo_32bit *info)
+{
+	init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
+	init_cpu_ftr_reg(SYS_ID_DFR1_EL1, info->reg_id_dfr1);
+	init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0);
+	init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1);
+	init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2);
+	init_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3);
+	init_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4);
+	init_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5);
+	init_cpu_ftr_reg(SYS_ID_ISAR6_EL1, info->reg_id_isar6);
+	init_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0);
+	init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
+	init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
+	init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
+	init_cpu_ftr_reg(SYS_ID_MMFR4_EL1, info->reg_id_mmfr4);
+	init_cpu_ftr_reg(SYS_ID_MMFR5_EL1, info->reg_id_mmfr5);
+	init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
+	init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
+	init_cpu_ftr_reg(SYS_ID_PFR2_EL1, info->reg_id_pfr2);
+	init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
+	init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
+	init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
+}
+
 void __init init_cpu_features(struct cpuinfo_arm64 *info)
 {
 	/* Before we start using the tables, make sure it is sorted */
@@ -838,29 +863,8 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 	init_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1);
 	init_cpu_ftr_reg(SYS_ID_AA64ZFR0_EL1, info->reg_id_aa64zfr0);
 
-	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
-		init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
-		init_cpu_ftr_reg(SYS_ID_DFR1_EL1, info->reg_id_dfr1);
-		init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0);
-		init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1);
-		init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2);
-		init_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3);
-		init_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4);
-		init_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5);
-		init_cpu_ftr_reg(SYS_ID_ISAR6_EL1, info->reg_id_isar6);
-		init_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0);
-		init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
-		init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
-		init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
-		init_cpu_ftr_reg(SYS_ID_MMFR4_EL1, info->reg_id_mmfr4);
-		init_cpu_ftr_reg(SYS_ID_MMFR5_EL1, info->reg_id_mmfr5);
-		init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
-		init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
-		init_cpu_ftr_reg(SYS_ID_PFR2_EL1, info->reg_id_pfr2);
-		init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
-		init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
-		init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
-	}
+	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
+		init_32bit_cpu_features(&info->aarch32);
 
 	if (id_aa64pfr0_sve(info->reg_id_aa64pfr0)) {
 		init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr);
@@ -931,20 +935,12 @@ static void relax_cpu_ftr_reg(u32 sys_id, int field)
 	WARN_ON(!ftrp->width);
 }
 
-static int update_32bit_cpu_features(int cpu, struct cpuinfo_arm64 *info,
-				     struct cpuinfo_arm64 *boot)
+static int update_32bit_cpu_features(int cpu, struct cpuinfo_32bit *info,
+				     struct cpuinfo_32bit *boot)
 {
 	int taint = 0;
 	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
 
-	/*
-	 * If we don't have AArch32 at all then skip the checks entirely
-	 * as the register values may be UNKNOWN and we're not going to be
-	 * using them for anything.
-	 */
-	if (!id_aa64pfr0_32bit_el0(pfr0))
-		return taint;
-
 	/*
 	 * If we don't have AArch32 at EL1, then relax the strictness of
 	 * EL1-dependent register fields to avoid spurious sanity check fails.
@@ -1091,10 +1087,17 @@ void update_cpu_features(int cpu,
 	}
 
 	/*
+	 * If we don't have AArch32 at all then skip the checks entirely
+	 * as the register values may be UNKNOWN and we're not going to be
+	 * using them for anything.
+	 *
 	 * This relies on a sanitised view of the AArch64 ID registers
 	 * (e.g. SYS_ID_AA64PFR0_EL1), so we call it last.
 	 */
-	taint |= update_32bit_cpu_features(cpu, info, boot);
+	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
+		taint |= update_32bit_cpu_features(cpu, &info->aarch32,
+						   &boot->aarch32);
+	}
 
 	/*
 	 * Mismatched CPU features are a recipe for disaster. Don't even
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 77605aec25fe..8ce33742ad6a 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -344,6 +344,32 @@ static void cpuinfo_detect_icache_policy(struct cpuinfo_arm64 *info)
 	pr_info("Detected %s I-cache on CPU%d\n", icache_policy_str[l1ip], cpu);
 }
 
+static void __cpuinfo_store_cpu_32bit(struct cpuinfo_32bit *info)
+{
+	info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
+	info->reg_id_dfr1 = read_cpuid(ID_DFR1_EL1);
+	info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
+	info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
+	info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
+	info->reg_id_isar3 = read_cpuid(ID_ISAR3_EL1);
+	info->reg_id_isar4 = read_cpuid(ID_ISAR4_EL1);
+	info->reg_id_isar5 = read_cpuid(ID_ISAR5_EL1);
+	info->reg_id_isar6 = read_cpuid(ID_ISAR6_EL1);
+	info->reg_id_mmfr0 = read_cpuid(ID_MMFR0_EL1);
+	info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1);
+	info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1);
+	info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1);
+	info->reg_id_mmfr4 = read_cpuid(ID_MMFR4_EL1);
+	info->reg_id_mmfr5 = read_cpuid(ID_MMFR5_EL1);
+	info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
+	info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
+	info->reg_id_pfr2 = read_cpuid(ID_PFR2_EL1);
+
+	info->reg_mvfr0 = read_cpuid(MVFR0_EL1);
+	info->reg_mvfr1 = read_cpuid(MVFR1_EL1);
+	info->reg_mvfr2 = read_cpuid(MVFR2_EL1);
+}
+
 static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 {
 	info->reg_cntfrq = arch_timer_get_cntfrq();
@@ -371,31 +397,8 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 	info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1);
 	info->reg_id_aa64zfr0 = read_cpuid(ID_AA64ZFR0_EL1);
 
-	/* Update the 32bit ID registers only if AArch32 is implemented */
-	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
-		info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
-		info->reg_id_dfr1 = read_cpuid(ID_DFR1_EL1);
-		info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
-		info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
-		info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
-		info->reg_id_isar3 = read_cpuid(ID_ISAR3_EL1);
-		info->reg_id_isar4 = read_cpuid(ID_ISAR4_EL1);
-		info->reg_id_isar5 = read_cpuid(ID_ISAR5_EL1);
-		info->reg_id_isar6 = read_cpuid(ID_ISAR6_EL1);
-		info->reg_id_mmfr0 = read_cpuid(ID_MMFR0_EL1);
-		info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1);
-		info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1);
-		info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1);
-		info->reg_id_mmfr4 = read_cpuid(ID_MMFR4_EL1);
-		info->reg_id_mmfr5 = read_cpuid(ID_MMFR5_EL1);
-		info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
-		info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
-		info->reg_id_pfr2 = read_cpuid(ID_PFR2_EL1);
-
-		info->reg_mvfr0 = read_cpuid(MVFR0_EL1);
-		info->reg_mvfr1 = read_cpuid(MVFR1_EL1);
-		info->reg_mvfr2 = read_cpuid(MVFR2_EL1);
-	}
+	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
+		__cpuinfo_store_cpu_32bit(&info->aarch32);
 
 	if (IS_ENABLED(CONFIG_ARM64_SVE) &&
 	    id_aa64pfr0_sve(info->reg_id_aa64pfr0))
-- 
2.29.2.222.g5d2a92d10f8-goog


* [PATCH v2 2/6] arm64: Allow mismatched 32-bit EL0 support
From: Will Deacon @ 2020-11-09 21:30 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-arch, Will Deacon, Catalin Marinas, Marc Zyngier,
	Greg Kroah-Hartman, Peter Zijlstra, Morten Rasmussen,
	Qais Yousef, Suren Baghdasaryan, Quentin Perret, kernel-team

When confronted with a mixture of CPUs, some of which support 32-bit
applications and others which don't, we quite sensibly treat the system
as 64-bit only for userspace and prevent execve() of 32-bit binaries.

Unfortunately, some crazy folks have decided to build systems like this
with the intention of running 32-bit applications, so relax our
sanitisation logic to continue advertising 32-bit support to userspace
on these systems and instead track the real 32-bit-capable cores in a
cpumask. For now, the default behaviour remains unchanged, but it will
be tied to a command-line option in a later patch.
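
As a rough illustration of how the new interface is intended to be
consumed, a hypothetical caller (not part of this patch; a real user is
added on the signal-return path later in the series) might look like:

/*
 * Does the CPU we are currently running on support AArch32 at EL0?
 * system_32bit_el0_cpumask() returns cpu_present_mask when every CPU
 * is 32-bit capable, the tracked mask when support is mismatched and
 * cpu_none_mask when there is no 32-bit EL0 support at all.
 */
static bool this_cpu_has_32bit_el0(void)
{
	return cpumask_test_cpu(smp_processor_id(),
				system_32bit_el0_cpumask());
}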

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cpucaps.h    |   2 +-
 arch/arm64/include/asm/cpufeature.h |   8 ++-
 arch/arm64/kernel/cpufeature.c      | 103 ++++++++++++++++++++++++++--
 3 files changed, 104 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index e7d98997c09c..e6f0eb4643a0 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -20,7 +20,7 @@
 #define ARM64_ALT_PAN_NOT_UAO			10
 #define ARM64_HAS_VIRT_HOST_EXTN		11
 #define ARM64_WORKAROUND_CAVIUM_27456		12
-#define ARM64_HAS_32BIT_EL0			13
+#define ARM64_HAS_32BIT_EL0_DO_NOT_USE		13
 #define ARM64_HARDEN_EL2_VECTORS		14
 #define ARM64_HAS_CNP				15
 #define ARM64_HAS_NO_FPSIMD			16
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 97244d4feca9..f447d313a9c5 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -604,9 +604,15 @@ static inline bool cpu_supports_mixed_endian_el0(void)
 	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
 }
 
+const struct cpumask *system_32bit_el0_cpumask(void);
+DECLARE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);
+
 static inline bool system_supports_32bit_el0(void)
 {
-	return cpus_have_const_cap(ARM64_HAS_32BIT_EL0);
+	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+
+	return id_aa64pfr0_32bit_el0(pfr0) ||
+	       static_branch_unlikely(&arm64_mismatched_32bit_el0);
 }
 
 static inline bool system_supports_4kb_granule(void)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d4a7e84b1513..264998972627 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -104,6 +104,24 @@ DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
 bool arm64_use_ng_mappings = false;
 EXPORT_SYMBOL(arm64_use_ng_mappings);
 
+/*
+ * Permit PER_LINUX32 and execve() of 32-bit binaries even if not all CPUs
+ * support it?
+ */
+static bool __read_mostly allow_mismatched_32bit_el0;
+
+/*
+ * Static branch enabled only if allow_mismatched_32bit_el0 is set and we have
+ * seen at least one CPU capable of 32-bit EL0.
+ */
+DEFINE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);
+
+/*
+ * Mask of CPUs supporting 32-bit EL0.
+ * Only valid if arm64_mismatched_32bit_el0 is enabled.
+ */
+static cpumask_var_t cpu_32bit_el0_mask __cpumask_var_read_mostly;
+
 /*
  * Flag to indicate if we have computed the system wide
  * capabilities based on the boot time active CPUs. This
@@ -756,7 +774,7 @@ static void __init sort_ftr_regs(void)
  * Any bits that are not covered by an arm64_ftr_bits entry are considered
  * RES0 for the system-wide value, and must strictly match.
  */
-static void __init init_cpu_ftr_reg(u32 sys_reg, u64 new)
+static void init_cpu_ftr_reg(u32 sys_reg, u64 new)
 {
 	u64 val = 0;
 	u64 strict_mask = ~0x0ULL;
@@ -819,7 +837,7 @@ static void __init init_cpu_hwcaps_indirect_list(void)
 
 static void __init setup_boot_cpu_capabilities(void);
 
-static void __init init_32bit_cpu_features(struct cpuinfo_32bit *info)
+static void init_32bit_cpu_features(struct cpuinfo_32bit *info)
 {
 	init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
 	init_cpu_ftr_reg(SYS_ID_DFR1_EL1, info->reg_id_dfr1);
@@ -935,6 +953,25 @@ static void relax_cpu_ftr_reg(u32 sys_id, int field)
 	WARN_ON(!ftrp->width);
 }
 
+static void update_compat_elf_hwcaps(void);
+
+static void update_mismatched_32bit_el0_cpu_features(struct cpuinfo_arm64 *info,
+						     struct cpuinfo_arm64 *boot)
+{
+	static bool boot_cpu_32bit_regs_overridden = false;
+
+	if (!allow_mismatched_32bit_el0 || boot_cpu_32bit_regs_overridden)
+		return;
+
+	if (id_aa64pfr0_32bit_el0(boot->reg_id_aa64pfr0))
+		return;
+
+	boot->aarch32 = info->aarch32;
+	init_32bit_cpu_features(&boot->aarch32);
+	update_compat_elf_hwcaps();
+	boot_cpu_32bit_regs_overridden = true;
+}
+
 static int update_32bit_cpu_features(int cpu, struct cpuinfo_32bit *info,
 				     struct cpuinfo_32bit *boot)
 {
@@ -1095,6 +1132,7 @@ void update_cpu_features(int cpu,
 	 * (e.g. SYS_ID_AA64PFR0_EL1), so we call it last.
 	 */
 	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
+		update_mismatched_32bit_el0_cpu_features(info, boot);
 		taint |= update_32bit_cpu_features(cpu, &info->aarch32,
 						   &boot->aarch32);
 	}
@@ -1196,6 +1234,52 @@ has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
 	return feature_matches(val, entry);
 }
 
+static int enable_mismatched_32bit_el0(unsigned int cpu)
+{
+	if (id_aa64pfr0_32bit_el0(per_cpu(cpu_data, cpu).reg_id_aa64pfr0)) {
+		cpumask_set_cpu(cpu, cpu_32bit_el0_mask);
+		static_branch_enable_cpuslocked(&arm64_mismatched_32bit_el0);
+	}
+
+	return 0;
+}
+
+static int __init init_32bit_el0_mask(void)
+{
+	if (!allow_mismatched_32bit_el0)
+		return 0;
+
+	if (!alloc_cpumask_var(&cpu_32bit_el0_mask, GFP_KERNEL))
+		return -ENOMEM;
+
+	return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
+				 "arm64/mismatched_32bit_el0:online",
+				 enable_mismatched_32bit_el0, NULL);
+}
+early_initcall(init_32bit_el0_mask);
+
+const struct cpumask *system_32bit_el0_cpumask(void)
+{
+	if (!system_supports_32bit_el0())
+		return cpu_none_mask;
+
+	if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
+		return cpu_32bit_el0_mask;
+
+	return cpu_present_mask;
+}
+
+static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	if (!has_cpuid_feature(entry, scope))
+		return allow_mismatched_32bit_el0;
+
+	if (scope == SCOPE_SYSTEM)
+		pr_info("detected: 32-bit EL0 Support\n");
+
+	return true;
+}
+
 static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry, int scope)
 {
 	bool has_sre;
@@ -1803,10 +1887,9 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	},
 #endif	/* CONFIG_ARM64_VHE */
 	{
-		.desc = "32-bit EL0 Support",
-		.capability = ARM64_HAS_32BIT_EL0,
+		.capability = ARM64_HAS_32BIT_EL0_DO_NOT_USE,
 		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
-		.matches = has_cpuid_feature,
+		.matches = has_32bit_el0,
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,
@@ -2299,7 +2382,7 @@ static const struct arm64_cpu_capabilities compat_elf_hwcaps[] = {
 	{},
 };
 
-static void __init cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap)
+static void cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap)
 {
 	switch (cap->hwcap_type) {
 	case CAP_HWCAP:
@@ -2344,7 +2427,7 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap)
 	return rc;
 }
 
-static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
+static void setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
 {
 	/* We support emulation of accesses to CPU ID feature registers */
 	cpu_set_named_feature(CPUID);
@@ -2353,6 +2436,12 @@ static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
 			cap_set_elf_hwcap(hwcaps);
 }
 
+static void update_compat_elf_hwcaps(void)
+{
+	if (system_capabilities_finalized())
+		setup_elf_hwcaps(compat_elf_hwcaps);
+}
+
 static void update_cpu_capabilities(u16 scope_mask)
 {
 	int i;
-- 
2.29.2.222.g5d2a92d10f8-goog


* [PATCH v2 3/6] KVM: arm64: Kill 32-bit vCPUs on systems with mismatched EL0 support
From: Will Deacon @ 2020-11-09 21:30 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-arch, Will Deacon, Catalin Marinas, Marc Zyngier,
	Greg Kroah-Hartman, Peter Zijlstra, Morten Rasmussen,
	Qais Yousef, Suren Baghdasaryan, Quentin Perret, kernel-team

If a vCPU tries to run 32-bit code on a system with mismatched support
at EL0, then we should kill it: the host cannot guarantee that such a
vCPU will only ever be scheduled on a core with AArch32 support, so
32-bit guest code cannot be run reliably.

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/arm.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 5750ec34960e..d322ac0f4a8e 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -633,6 +633,15 @@ static void check_vcpu_requests(struct kvm_vcpu *vcpu)
 	}
 }
 
+static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *vcpu)
+{
+	if (likely(!vcpu_mode_is_32bit(vcpu)))
+		return false;
+
+	return !system_supports_32bit_el0() ||
+		static_branch_unlikely(&arm64_mismatched_32bit_el0);
+}
+
 /**
  * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code
  * @vcpu:	The VCPU pointer
@@ -816,7 +825,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		 * with the asymmetric AArch32 case), return to userspace with
 		 * a fatal error.
 		 */
-		if (!system_supports_32bit_el0() && vcpu_mode_is_32bit(vcpu)) {
+		if (vcpu_mode_is_bad_32bit(vcpu)) {
 			/*
 			 * As we have caught the guest red-handed, decide that
 			 * it isn't fit for purpose anymore by making the vcpu
-- 
2.29.2.222.g5d2a92d10f8-goog


* [PATCH v2 4/6] arm64: Kill 32-bit applications scheduled on 64-bit-only CPUs
From: Will Deacon @ 2020-11-09 21:30 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-arch, Will Deacon, Catalin Marinas, Marc Zyngier,
	Greg Kroah-Hartman, Peter Zijlstra, Morten Rasmussen,
	Qais Yousef, Suren Baghdasaryan, Quentin Perret, kernel-team

Scheduling a 32-bit application on a 64-bit-only CPU is a bad idea.

Ensure that 32-bit applications always take the slow-path when returning
to userspace on a system with mismatched support at EL0, so that we can
avoid trying to run on a 64-bit-only CPU and force a SIGKILL instead.
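
For readers skimming the diff, the enforcement amounts to roughly the
following check on the return-to-user path (simplified; the barrier()
and the re-check after a reschedule are handled by the real code below):

	/* In do_notify_resume(), after handling TIF_NOTIFY_RESUME: */
	if (compat_user_mode(regs) &&
	    !cpumask_test_cpu(raw_smp_processor_id(),
			      system_32bit_el0_cpumask()))
		force_sig(SIGKILL);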

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/process.c | 19 ++++++++++++++++++-
 arch/arm64/kernel/signal.c  | 26 ++++++++++++++++++++++++++
 2 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 4784011cecac..1540ab0fbf23 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -542,6 +542,15 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
 	write_sysreg(val, cntkctl_el1);
 }
 
+static void compat_thread_switch(struct task_struct *next)
+{
+	if (!is_compat_thread(task_thread_info(next)))
+		return;
+
+	if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
+		set_tsk_thread_flag(next, TIF_NOTIFY_RESUME);
+}
+
 /*
  * Thread switching.
  */
@@ -558,6 +567,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	uao_thread_switch(next);
 	ssbs_thread_switch(next);
 	erratum_1418040_thread_switch(prev, next);
+	compat_thread_switch(next);
 
 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
@@ -620,8 +630,15 @@ unsigned long arch_align_stack(unsigned long sp)
  */
 void arch_setup_new_exec(void)
 {
-	current->mm->context.flags = is_compat_task() ? MMCF_AARCH32 : 0;
+	unsigned long mmflags = 0;
+
+	if (is_compat_task()) {
+		mmflags = MMCF_AARCH32;
+		if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
+			set_tsk_thread_flag(current, TIF_NOTIFY_RESUME);
+	}
 
+	current->mm->context.flags = mmflags;
 	ptrauth_thread_init_user(current);
 
 	if (task_spec_ssb_noexec(current)) {
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index a8184cad8890..bcb6ca2d9a7c 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -911,6 +911,19 @@ static void do_signal(struct pt_regs *regs)
 	restore_saved_sigmask();
 }
 
+static bool cpu_affinity_invalid(struct pt_regs *regs)
+{
+	if (!compat_user_mode(regs))
+		return false;
+
+	/*
+	 * We're preemptible, but a reschedule will cause us to check the
+	 * affinity again.
+	 */
+	return !cpumask_test_cpu(raw_smp_processor_id(),
+				 system_32bit_el0_cpumask());
+}
+
 asmlinkage void do_notify_resume(struct pt_regs *regs,
 				 unsigned long thread_flags)
 {
@@ -948,6 +961,19 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
 			if (thread_flags & _TIF_NOTIFY_RESUME) {
 				tracehook_notify_resume(regs);
 				rseq_handle_notify_resume(NULL, regs);
+
+				/*
+				 * If we reschedule after checking the affinity
+				 * then we must ensure that TIF_NOTIFY_RESUME
+				 * is set so that we check the affinity again.
+				 * Since tracehook_notify_resume() clears the
+				 * flag, ensure that the compiler doesn't move
+				 * it after the affinity check.
+				 */
+				barrier();
+
+				if (cpu_affinity_invalid(regs))
+					force_sig(SIGKILL);
 			}
 
 			if (thread_flags & _TIF_FOREIGN_FPSTATE)
-- 
2.29.2.222.g5d2a92d10f8-goog


* [PATCH v2 5/6] arm64: Advertise CPUs capable of running 32-bit applications in sysfs
From: Will Deacon @ 2020-11-09 21:30 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-arch, Will Deacon, Catalin Marinas, Marc Zyngier,
	Greg Kroah-Hartman, Peter Zijlstra, Morten Rasmussen,
	Qais Yousef, Suren Baghdasaryan, Quentin Perret, kernel-team

Since 32-bit applications will be killed if they are caught trying to
execute on a 64-bit-only CPU in a mismatched system, advertise the set
of 32-bit capable CPUs to userspace in sysfs.
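
By way of example, userspace could consume the new file with something
like the program below (illustrative only; the file is absent unless
the mismatched mode is enabled):

#include <stdio.h>

int main(void)
{
	char buf[256];
	FILE *f = fopen("/sys/devices/system/cpu/aarch32_el0", "r");

	/* Absent file: all or none of the CPUs can run AArch32. */
	if (!f)
		return 1;

	/* Same list format as e.g. .../cpu/online, such as "0-3". */
	if (fgets(buf, sizeof(buf), f))
		printf("32-bit capable CPUs: %s", buf);

	fclose(f);
	return 0;
}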

Signed-off-by: Will Deacon <will@kernel.org>
---
 .../ABI/testing/sysfs-devices-system-cpu      |  9 +++++++++
 arch/arm64/kernel/cpufeature.c                | 19 +++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 1a04ca8162ad..8a2e377b0dde 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -493,6 +493,15 @@ Description:	AArch64 CPU registers
 		'identification' directory exposes the CPU ID registers for
 		identifying model and revision of the CPU.
 
+What:		/sys/devices/system/cpu/aarch32_el0
+Date:		November 2020
+Contact:	Linux ARM Kernel Mailing list <linux-arm-kernel@lists.infradead.org>
+Description:	Identifies the subset of CPUs in the system that can execute
+		AArch32 (32-bit ARM) applications. If present, the same format as
+		/sys/devices/system/cpu/{offline,online,possible,present} is used.
+		If absent, then all or none of the CPUs can execute AArch32
+		applications and execve() will behave accordingly.
+
 What:		/sys/devices/system/cpu/cpu#/cpu_capacity
 Date:		December 2016
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 264998972627..c90f4a18768c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -67,6 +67,7 @@
 #include <linux/crash_dump.h>
 #include <linux/sort.h>
 #include <linux/stop_machine.h>
+#include <linux/sysfs.h>
 #include <linux/types.h>
 #include <linux/mm.h>
 #include <linux/cpu.h>
@@ -1269,6 +1270,24 @@ const struct cpumask *system_32bit_el0_cpumask(void)
 	return cpu_present_mask;
 }
 
+static ssize_t aarch32_el0_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	const struct cpumask *mask = system_32bit_el0_cpumask();
+
+	return sysfs_emit(buf, "%*pbl\n", cpumask_pr_args(mask));
+}
+static const DEVICE_ATTR_RO(aarch32_el0);
+
+static int __init aarch32_el0_sysfs_init(void)
+{
+	if (!allow_mismatched_32bit_el0)
+		return 0;
+
+	return device_create_file(cpu_subsys.dev_root, &dev_attr_aarch32_el0);
+}
+device_initcall(aarch32_el0_sysfs_init);
+
 static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope)
 {
 	if (!has_cpuid_feature(entry, scope))
-- 
2.29.2.222.g5d2a92d10f8-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread
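
For illustration only (not from the series): reading the new attribute from
user space. The path and the cpulist format ("0-3,6" style, the same as
/sys/devices/system/cpu/online) come from the ABI text above; the parser is
a deliberately minimal sketch that assumes a well-formed list.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int read_aarch32_cpus(cpu_set_t *set)
{
	FILE *f = fopen("/sys/devices/system/cpu/aarch32_el0", "r");
	char buf[256], *tok, *save;

	CPU_ZERO(set);
	if (!f)
		return -1;	/* absent: all or none of the CPUs are 32-bit capable */

	if (!fgets(buf, sizeof(buf), f)) {
		fclose(f);
		return -1;
	}
	fclose(f);

	/* An empty list means no CPU can run AArch32 tasks. */
	for (tok = strtok_r(buf, ",\n", &save); tok;
	     tok = strtok_r(NULL, ",\n", &save)) {
		unsigned int lo, hi, cpu;

		if (sscanf(tok, "%u-%u", &lo, &hi) != 2)
			lo = hi = strtoul(tok, NULL, 10);
		for (cpu = lo; cpu <= hi; cpu++)
			CPU_SET(cpu, set);
	}
	return 0;
}

int main(void)
{
	cpu_set_t set;

	if (read_aarch32_cpus(&set))
		printf("aarch32_el0 not present (no mismatch detected)\n");
	else
		printf("%d CPUs can run 32-bit tasks\n", CPU_COUNT(&set));
	return 0;
}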


* [PATCH v2 6/6] arm64: Hook up cmdline parameter to allow mismatched 32-bit EL0
  2020-11-09 21:30 ` Will Deacon
@ 2020-11-09 21:30   ` Will Deacon
  -1 siblings, 0 replies; 36+ messages in thread
From: Will Deacon @ 2020-11-09 21:30 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-arch, Will Deacon, Catalin Marinas, Marc Zyngier,
	Greg Kroah-Hartman, Peter Zijlstra, Morten Rasmussen,
	Qais Yousef, Suren Baghdasaryan, Quentin Perret, kernel-team

Allow systems with mismatched 32-bit support at EL0 to run 32-bit
applications based on a new kernel parameter.

Signed-off-by: Will Deacon <will@kernel.org>
---
 Documentation/admin-guide/kernel-parameters.txt | 7 +++++++
 arch/arm64/kernel/cpufeature.c                  | 7 +++++++
 2 files changed, 14 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 526d65d8573a..f20188c44d83 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -289,6 +289,13 @@
 			do not want to use tracing_snapshot_alloc() as it needs
 			to be done where GFP_KERNEL allocations are allowed.
 
+	allow_mismatched_32bit_el0 [ARM64]
+			Allow execve() of 32-bit applications and setting of the
+			PER_LINUX32 personality on systems where only a strict
+			subset of the CPUs support 32-bit EL0. When this
+			parameter is present, the set of CPUs supporting 32-bit
+			EL0 is indicated by /sys/devices/system/cpu/aarch32_el0.
+
 	amd_iommu=	[HW,X86-64]
 			Pass parameters to the AMD IOMMU driver in the system.
 			Possible values are:
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c90f4a18768c..cd106267408a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1270,6 +1270,13 @@ const struct cpumask *system_32bit_el0_cpumask(void)
 	return cpu_present_mask;
 }
 
+static int __init parse_32bit_el0_param(char *str)
+{
+	allow_mismatched_32bit_el0 = true;
+	return 0;
+}
+early_param("allow_mismatched_32bit_el0", parse_32bit_el0_param);
+
 static ssize_t aarch32_el0_show(struct device *dev,
 				struct device_attribute *attr, char *buf)
 {
-- 
2.29.2.222.g5d2a92d10f8-goog


^ permalink raw reply related	[flat|nested] 36+ messages in thread


* Re: [PATCH v2 5/6] arm64: Advertise CPUs capable of running 32-bit applications in sysfs
  2020-11-09 21:30   ` Will Deacon
@ 2020-11-10  7:04     ` Greg Kroah-Hartman
  -1 siblings, 0 replies; 36+ messages in thread
From: Greg Kroah-Hartman @ 2020-11-10  7:04 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-arch, Catalin Marinas, Marc Zyngier,
	Peter Zijlstra, Morten Rasmussen, Qais Yousef,
	Suren Baghdasaryan, Quentin Perret, kernel-team

On Mon, Nov 09, 2020 at 09:30:21PM +0000, Will Deacon wrote:
> Since 32-bit applications will be killed if they are caught trying to
> execute on a 64-bit-only CPU in a mismatched system, advertise the set
> of 32-bit capable CPUs to userspace in sysfs.
> 
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  .../ABI/testing/sysfs-devices-system-cpu      |  9 +++++++++
>  arch/arm64/kernel/cpufeature.c                | 19 +++++++++++++++++++
>  2 files changed, 28 insertions(+)

I still think the "kill processes that can not run on this CPU" is crazy
but that has nothing to do with this sysfs file patch, which looks good
to me:

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH v2 5/6] arm64: Advertise CPUs capable of running 32-bit applications in sysfs
  2020-11-10  7:04     ` Greg Kroah-Hartman
@ 2020-11-10  9:28       ` Catalin Marinas
  -1 siblings, 0 replies; 36+ messages in thread
From: Catalin Marinas @ 2020-11-10  9:28 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Will Deacon, linux-arm-kernel, linux-arch, Marc Zyngier,
	Peter Zijlstra, Morten Rasmussen, Qais Yousef,
	Suren Baghdasaryan, Quentin Perret, kernel-team

On Tue, Nov 10, 2020 at 08:04:26AM +0100, Greg Kroah-Hartman wrote:
> On Mon, Nov 09, 2020 at 09:30:21PM +0000, Will Deacon wrote:
> > Since 32-bit applications will be killed if they are caught trying to
> > execute on a 64-bit-only CPU in a mismatched system, advertise the set
> > of 32-bit capable CPUs to userspace in sysfs.
> > 
> > Signed-off-by: Will Deacon <will@kernel.org>
> > ---
> >  .../ABI/testing/sysfs-devices-system-cpu      |  9 +++++++++
> >  arch/arm64/kernel/cpufeature.c                | 19 +++++++++++++++++++
> >  2 files changed, 28 insertions(+)
> 
> I still think the "kill processes that can not run on this CPU" is crazy

I agree it's crazy, though we try to keep the kernel support simple
while making it a user-space problem. The alternative is to
force-migrate such process to a more capable CPU, potentially against
the desired user cpumask. In addition, we'd have to block CPU hot-unplug
in case the last 32-bit capable CPU disappears.

The only sane thing is not to allow 32-bit processes at all on such
hardware but I think we lost that battle ;).

-- 
Catalin

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH v2 3/6] KVM: arm64: Kill 32-bit vCPUs on systems with mismatched EL0 support
  2020-11-09 21:30   ` Will Deacon
@ 2020-11-10  9:33     ` Marc Zyngier
  -1 siblings, 0 replies; 36+ messages in thread
From: Marc Zyngier @ 2020-11-10  9:33 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-arch, Catalin Marinas,
	Greg Kroah-Hartman, Peter Zijlstra, Morten Rasmussen,
	Qais Yousef, Suren Baghdasaryan, Quentin Perret, kernel-team

On 2020-11-09 21:30, Will Deacon wrote:
> If a vCPU tries to run 32-bit code on a system with mismatched support

nit: s/tries to run/is caught running/

> at EL0, then we should kill it.
> 
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/kvm/arm.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 5750ec34960e..d322ac0f4a8e 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -633,6 +633,15 @@ static void check_vcpu_requests(struct kvm_vcpu 
> *vcpu)
>  	}
>  }
> 
> +static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *vcpu)
> +{
> +	if (likely(!vcpu_mode_is_32bit(vcpu)))
> +		return false;
> +
> +	return !system_supports_32bit_el0() ||
> +		static_branch_unlikely(&arm64_mismatched_32bit_el0);
> +}
> +
>  /**
>   * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute 
> guest code
>   * @vcpu:	The VCPU pointer
> @@ -816,7 +825,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  		 * with the asymmetric AArch32 case), return to userspace with
>  		 * a fatal error.
>  		 */
> -		if (!system_supports_32bit_el0() && vcpu_mode_is_32bit(vcpu)) {
> +		if (vcpu_mode_is_bad_32bit(vcpu)) {
>  			/*
>  			 * As we have caught the guest red-handed, decide that
>  			 * it isn't fit for purpose anymore by making the vcpu

Acked-by: Marc Zyngier <maz@kernel.org>

         M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH v2 5/6] arm64: Advertise CPUs capable of running 32-bit applications in sysfs
  2020-11-10  9:28       ` Catalin Marinas
@ 2020-11-10  9:36         ` Greg Kroah-Hartman
  -1 siblings, 0 replies; 36+ messages in thread
From: Greg Kroah-Hartman @ 2020-11-10  9:36 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Will Deacon, linux-arm-kernel, linux-arch, Marc Zyngier,
	Peter Zijlstra, Morten Rasmussen, Qais Yousef,
	Suren Baghdasaryan, Quentin Perret, kernel-team

On Tue, Nov 10, 2020 at 09:28:43AM +0000, Catalin Marinas wrote:
> On Tue, Nov 10, 2020 at 08:04:26AM +0100, Greg Kroah-Hartman wrote:
> > On Mon, Nov 09, 2020 at 09:30:21PM +0000, Will Deacon wrote:
> > > Since 32-bit applications will be killed if they are caught trying to
> > > execute on a 64-bit-only CPU in a mismatched system, advertise the set
> > > of 32-bit capable CPUs to userspace in sysfs.
> > > 
> > > Signed-off-by: Will Deacon <will@kernel.org>
> > > ---
> > >  .../ABI/testing/sysfs-devices-system-cpu      |  9 +++++++++
> > >  arch/arm64/kernel/cpufeature.c                | 19 +++++++++++++++++++
> > >  2 files changed, 28 insertions(+)
> > 
> > I still think the "kill processes that can not run on this CPU" is crazy
> 
> I agree it's crazy, though we try to keep the kernel support simple
> while making it a user-space problem. The alternative is to
> force-migrate such process to a more capable CPU, potentially against
> the desired user cpumask. In addition, we'd have to block CPU hot-unplug
> in case the last 32-bit capable CPU disappears.

You should block CPU hot-unplug for the last 32bit capable CPU, why
would you not want that if there are any active 32bit processes running?

And how is userspace going to know that it is creating a 32bit process?
Are you now going to force all calls to exec() to be mediated somehow by
putting an ELF parser in the init process?

> The only sane thing is not to allow 32-bit processes at all on such
> hardware but I think we lost that battle ;).

That was a hardware decision that was made for some specific reason, so
supporting it in the best way seems to be our best option, given that
people obviously must want this crazy type of system; otherwise they
wouldn't be paying for it!

While punting the logic out to userspace is simple for the kernel, and
of course my first option, I think this isn't going to work in the
long-run and the kernel will have to "know" what type of process it is
scheduling in order to correctly deal with this nightmare as userspace
can't do that well, if at all.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH v2 5/6] arm64: Advertise CPUs capable of running 32-bit applications in sysfs
  2020-11-10  9:36         ` Greg Kroah-Hartman
@ 2020-11-10  9:53           ` Marc Zyngier
  -1 siblings, 0 replies; 36+ messages in thread
From: Marc Zyngier @ 2020-11-10  9:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-arch,
	Peter Zijlstra, Morten Rasmussen, Qais Yousef,
	Suren Baghdasaryan, Quentin Perret, kernel-team

On 2020-11-10 09:36, Greg Kroah-Hartman wrote:

[...]

> While punting the logic out to userspace is simple for the kernel, and
> of course my first option, I think this isn't going to work in the
> long-run and the kernel will have to "know" what type of process it is
> scheduling in order to correctly deal with this nightmare as userspace
> can't do that well, if at all.

For that to happen, we must first decide which part of the userspace
ABI we are prepared to compromise on. The most obvious one would be to
allow overriding the affinity on exec, but I'm pretty sure it has bad
interactions with cgroups, on top of violating the existing ABI which
mandates that the affinity is inherited across exec.

There may be other options (always make at least one 32bit-capable CPU
part of the process affinity?), but they all imply some form of
userspace visible regressions.

Pick your poison... :-/

         M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 36+ messages in thread
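
To make the trade-off above concrete, a hypothetical kernel-side sketch of
the "always make at least one 32bit-capable CPU part of the process
affinity" option (not part of the posted series; the function and its call
site are invented, system_32bit_el0_cpumask() comes from earlier in this
series, and the cgroup/sched_setaffinity() ABI questions are ignored):

#include <linux/cpumask.h>
#include <linux/sched.h>

static void keep_a_32bit_capable_cpu(struct task_struct *p)
{
	const struct cpumask *aarch32 = system_32bit_el0_cpumask();

	/* The current affinity already contains a 32-bit-capable CPU. */
	if (cpumask_intersects(p->cpus_ptr, aarch32))
		return;

	/*
	 * Override the user-requested mask; this is exactly the kind of
	 * userspace visible regression being debated in this thread.
	 */
	set_cpus_allowed_ptr(p, aarch32);
}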


* Re: [PATCH v2 5/6] arm64: Advertise CPUs capable of running 32-bit applications in sysfs
  2020-11-10  9:53           ` Marc Zyngier
@ 2020-11-10 10:10             ` Greg Kroah-Hartman
  -1 siblings, 0 replies; 36+ messages in thread
From: Greg Kroah-Hartman @ 2020-11-10 10:10 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-arch,
	Peter Zijlstra, Morten Rasmussen, Qais Yousef,
	Suren Baghdasaryan, Quentin Perret, kernel-team

On Tue, Nov 10, 2020 at 09:53:53AM +0000, Marc Zyngier wrote:
> On 2020-11-10 09:36, Greg Kroah-Hartman wrote:
> 
> [...]
> 
> > While punting the logic out to userspace is simple for the kernel, and
> > of course my first option, I think this isn't going to work in the
> > long-run and the kernel will have to "know" what type of process it is
> > scheduling in order to correctly deal with this nightmare as userspace
> > can't do that well, if at all.
> 
> For that to happen, we must first decide which part of the userspace
> ABI we are prepared to compromise on. The most obvious one would be to
> allow overriding the affinity on exec, but I'm pretty sure it has bad
> interactions with cgroups, on top of violating the existing ABI which
> mandates that the affinity is inherited across exec.

So you are saying that you have to violate this today with this patch
set?  Or would have to violate that if the scheduler got involved?

How is userspace going to "know" how to do all of this properly?  Who is
going to write that code?

> There may be other options (always make at least one 32bit-capable CPU
> part of the process affinity?), but they all imply some form of userspace
> visible regressions.
> 
> Pick your poison... :-/

What do the system designers and users of this hardware recommend?  They
are the ones that dreamed up this, and seem to somehow want this.  What
do they think the correct solution is here as obviously they must have
thought this through when designing such a beast, right?

And if they didn't think any of this through then why are they wanting
to run Linux on this thing?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH v2 5/6] arm64: Advertise CPUs capable of running 32-bit applications in sysfs
  2020-11-10 10:10             ` Greg Kroah-Hartman
@ 2020-11-10 10:46               ` Marc Zyngier
  -1 siblings, 0 replies; 36+ messages in thread
From: Marc Zyngier @ 2020-11-10 10:46 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-arch,
	Peter Zijlstra, Morten Rasmussen, Qais Yousef,
	Suren Baghdasaryan, Quentin Perret, kernel-team

On 2020-11-10 10:10, Greg Kroah-Hartman wrote:
> On Tue, Nov 10, 2020 at 09:53:53AM +0000, Marc Zyngier wrote:
>> On 2020-11-10 09:36, Greg Kroah-Hartman wrote:
>> 
>> [...]
>> 
>> > While punting the logic out to userspace is simple for the kernel, and
>> > of course my first option, I think this isn't going to work in the
>> > long-run and the kernel will have to "know" what type of process it is
>> > scheduling in order to correctly deal with this nightmare as userspace
>> > can't do that well, if at all.
>> 
>> For that to happen, we must first decide which part of the userspace
>> ABI we are prepared to compromise on. The most obvious one would be to
>> allow overriding the affinity on exec, but I'm pretty sure it has bad
>> interactions with cgroups, on top of violating the existing ABI which
>> mandates that the affinity is inherited across exec.
> 
> So you are saying that you have to violate this today with this patch
> set?  Or would have to violate that if the scheduler got involved?

Doing nothing (as with this series) doesn't result in an ABI breakage.
It "only" results in an unreliable system. If you start making decisions
behind userspace's back for the sake of making things reliable (which
is a commendable goal), you break the ABI.

Rock, please meet A Hard Place.

And that's the real issue: there is no good solution to this problem.
Only a different set of ugly compromises.

> How is userspace going to "know" how to do all of this properly?  Who 
> is
> going to write that code?
> 
>> There may be other options (always make at least one 32bit-capable CPU
>> part of the process affinity?), but they all imply some form of 
>> userspace
>> visible regressions.
>> 
>> Pick your poison... :-/
> 
> What do the system designers and users of this hardware recommend?  
> They
> are the ones that dreamed up this, and seem to somehow want this.  What
> do they think the correct solution is here as obviously they must have
> thought this through when designing such a beast, right?
> 
> And if they didn't think any of this through then why are they wanting
> to run Linux on this thing?

At this stage, I'll reach out for some Bombay mix (I dislike popcorn).

         M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH v2 5/6] arm64: Advertise CPUs capable of running 32-bit applications in sysfs
  2020-11-10  9:36         ` Greg Kroah-Hartman
@ 2020-11-10 10:57           ` Catalin Marinas
  -1 siblings, 0 replies; 36+ messages in thread
From: Catalin Marinas @ 2020-11-10 10:57 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Will Deacon, linux-arm-kernel, linux-arch, Marc Zyngier,
	Peter Zijlstra, Morten Rasmussen, Qais Yousef,
	Suren Baghdasaryan, Quentin Perret, kernel-team

On Tue, Nov 10, 2020 at 10:36:33AM +0100, Greg Kroah-Hartman wrote:
> On Tue, Nov 10, 2020 at 09:28:43AM +0000, Catalin Marinas wrote:
> > On Tue, Nov 10, 2020 at 08:04:26AM +0100, Greg Kroah-Hartman wrote:
> > > On Mon, Nov 09, 2020 at 09:30:21PM +0000, Will Deacon wrote:
> > > > Since 32-bit applications will be killed if they are caught trying to
> > > > execute on a 64-bit-only CPU in a mismatched system, advertise the set
> > > > of 32-bit capable CPUs to userspace in sysfs.
> > > > 
> > > > Signed-off-by: Will Deacon <will@kernel.org>
> > > > ---
> > > >  .../ABI/testing/sysfs-devices-system-cpu      |  9 +++++++++
> > > >  arch/arm64/kernel/cpufeature.c                | 19 +++++++++++++++++++
> > > >  2 files changed, 28 insertions(+)
> > > 
> > > I still think the "kill processes that can not run on this CPU" is crazy
> > 
> > I agree it's crazy, though we try to keep the kernel support simple
> > while making it a user-space problem. The alternative is to
> > force-migrate such process to a more capable CPU, potentially against
> > the desired user cpumask. In addition, we'd have to block CPU hot-unplug
> > in case the last 32-bit capable CPU disappears.
> 
> You should block CPU hot-unplug for the last 32bit capable CPU, why
> would you not want that if there are any active 32bit processes running?

That's been done in one of the versions submitted to the Android kernel
(which also handles automatic task placement, though by overriding the
user cpumask):

https://android-review.googlesource.com/c/kernel/common/+/1437100/7

I think preventing CPU offlining makes sense only together with forcing
32-bit task placement from the kernel. If we decide to go for the
SIGKILL approach with user-driven affinity, careful CPU offlining should
also be handled by user-space.

-- 
Catalin

^ permalink raw reply	[flat|nested] 36+ messages in thread
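
As a sketch of what "handled by user-space" could look like (not part of the
series): a launcher that restricts its affinity to the advertised
32-bit-capable CPUs before exec()ing a 32-bit binary, relying on the fact
that the affinity is inherited across execve(). read_aarch32_cpus() is the
hypothetical helper sketched after patch 5/6 above.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

extern int read_aarch32_cpus(cpu_set_t *set);

int main(int argc, char **argv)
{
	cpu_set_t aarch32;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <32-bit binary> [args...]\n", argv[0]);
		return 1;
	}

	/* Only restrict the affinity on a mismatched system. */
	if (read_aarch32_cpus(&aarch32) == 0 && CPU_COUNT(&aarch32) > 0) {
		if (sched_setaffinity(0, sizeof(aarch32), &aarch32))
			perror("sched_setaffinity");
	}

	execvp(argv[1], &argv[1]);
	perror("execvp");
	return 1;
}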


* Re: [PATCH v2 2/6] arm64: Allow mismatched 32-bit EL0 support
  2020-11-09 21:30   ` Will Deacon
@ 2020-11-11 19:10     ` Catalin Marinas
  -1 siblings, 0 replies; 36+ messages in thread
From: Catalin Marinas @ 2020-11-11 19:10 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-arch, Marc Zyngier, Greg Kroah-Hartman,
	Peter Zijlstra, Morten Rasmussen, Qais Yousef,
	Suren Baghdasaryan, Quentin Perret, kernel-team

Hi Will,

On Mon, Nov 09, 2020 at 09:30:18PM +0000, Will Deacon wrote:
> +static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope)
> +{
> +	if (!has_cpuid_feature(entry, scope))
> +		return allow_mismatched_32bit_el0;

I still don't like overriding the cpufeature mechanism in this way. What about
something like below? It still doesn't fit perfectly but at least the
capability represents what was detected in the system. We then decide in
system_supports_32bit_el0() whether to allow asymmetry. There is an
extra trick to park a non-AArch32 capable CPU in has_32bit_el0() if it
comes up late and the feature has already been advertised with
!allow_mismatched_32bit_el0.

I find it clearer, though I probably stared at it more than at your
patch ;).

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 97244d4feca9..0e0427997063 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -604,9 +604,13 @@ static inline bool cpu_supports_mixed_endian_el0(void)
 	return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
 }
 
+extern bool allow_mismatched_32bit_el0;
+extern bool mismatched_32bit_el0;
+
 static inline bool system_supports_32bit_el0(void)
 {
-	return cpus_have_const_cap(ARM64_HAS_32BIT_EL0);
+	return cpus_have_const_cap(ARM64_HAS_32BIT_EL0) &&
+		(!mismatched_32bit_el0 || allow_mismatched_32bit_el0);
 }
 
 static inline bool system_supports_4kb_granule(void)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b7b6804cb931..67534327f92b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -104,6 +104,13 @@ DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
 bool arm64_use_ng_mappings = false;
 EXPORT_SYMBOL(arm64_use_ng_mappings);
 
+/*
+ * Permit PER_LINUX32 and execve() of 32-bit binaries even if not all CPUs
+ * support it?
+ */
+bool __read_mostly allow_mismatched_32bit_el0;
+bool mismatched_32bit_el0;
+
 /*
  * Flag to indicate if we have computed the system wide
  * capabilities based on the boot time active CPUs. This
@@ -1193,6 +1200,35 @@ has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
 	return feature_matches(val, entry);
 }
 
+static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	if (has_cpuid_feature(entry, scope))
+		return true;
+
+	if (system_capabilities_finalized() && !allow_mismatched_32bit_el0) {
+		pr_crit("CPU%d: Asymmetric AArch32 not supported\n",
+			smp_processor_id());
+		cpu_die_early();
+	}
+
+	mismatched_32bit_el0 = true;
+	return false;
+}
+
+static int __init report_32bit_el0(void)
+{
+	if (!system_supports_32bit_el0())
+		return 0;
+
+	if (mismatched_32bit_el0)
+		pr_info("detected: asymmetric 32-bit EL0 support\n");
+	else
+		pr_info("detected: 32-bit EL0 support\n");
+
+	return 0;
+}
+core_initcall(report_32bit_el0);
+
 static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry, int scope)
 {
 	bool has_sre;
@@ -1800,10 +1836,9 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 	},
 #endif	/* CONFIG_ARM64_VHE */
 	{
-		.desc = "32-bit EL0 Support",
 		.capability = ARM64_HAS_32BIT_EL0,
-		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
-		.matches = has_cpuid_feature,
+		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
+		.matches = has_32bit_el0,
 		.sys_reg = SYS_ID_AA64PFR0_EL1,
 		.sign = FTR_UNSIGNED,
 		.field_pos = ID_AA64PFR0_EL0_SHIFT,


-- 
Catalin

^ permalink raw reply related	[flat|nested] 36+ messages in thread


* Re: [PATCH v2 2/6] arm64: Allow mismatched 32-bit EL0 support
  2020-11-11 19:10     ` Catalin Marinas
@ 2020-11-13  9:36       ` Will Deacon
  -1 siblings, 0 replies; 36+ messages in thread
From: Will Deacon @ 2020-11-13  9:36 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-arm-kernel, linux-arch, Marc Zyngier, Greg Kroah-Hartman,
	Peter Zijlstra, Morten Rasmussen, Qais Yousef,
	Suren Baghdasaryan, Quentin Perret, kernel-team

On Wed, Nov 11, 2020 at 07:10:44PM +0000, Catalin Marinas wrote:
> On Mon, Nov 09, 2020 at 09:30:18PM +0000, Will Deacon wrote:
> > +static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope)
> > +{
> > +	if (!has_cpuid_feature(entry, scope))
> > +		return allow_mismatched_32bit_el0;
> 
> I still don't like overriding the cpufeature mechanism in this way. What about
> something like below? It still doesn't fit perfectly but at least the
> capability represents what was detected in the system. We then decide in
> system_supports_32bit_el0() whether to allow asymmetry. There is an
> extra trick to park a non-AArch32 capable CPU in has_32bit_el0() if it
> comes up late and the feature has already been advertised with
> !allow_mismatched_32bit_el0.

I deliberately allow late onlining of 64-bit-only cores and I don't think
this is something we should forbid (although it's not clear from your patch
when allow_mismatched_32bit_el0 gets set). Furthermore, killing CPUs from
the matches callback feels _very_ dodgy to me, as it's invoked indirectly
by things such as this_cpu_has_cap().

> I find it clearer, though I probably stared at it more than at your
> patch ;).

Yeah, swings and roundabouts...

I think we're quibbling on implementation details a bit here whereas we
should probably be focussing on what to do about execve() and CPU hotplug.
Your patch doesn't apply on top of my series or replace this one, so there's
not an awful lot I can do with it. I'm about to post a v3 with a tentative
solution for execve(), so please could you demonstrate your idea on top of
that so I can see how it fits together?

I'd like to move on from the "I don't like this" (none of us do) discussion
and figure out the functional aspects, if possible. We can always paint it
a different colour later on, but we don't even have a full solution yet.

Thanks,

Will

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [PATCH v2 2/6] arm64: Allow mismatched 32-bit EL0 support
  2020-11-13  9:36       ` Will Deacon
@ 2020-11-13 10:26         ` Catalin Marinas
  -1 siblings, 0 replies; 36+ messages in thread
From: Catalin Marinas @ 2020-11-13 10:26 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-arm-kernel, linux-arch, Marc Zyngier, Greg Kroah-Hartman,
	Peter Zijlstra, Morten Rasmussen, Qais Yousef,
	Suren Baghdasaryan, Quentin Perret, kernel-team

On Fri, Nov 13, 2020 at 09:36:30AM +0000, Will Deacon wrote:
> On Wed, Nov 11, 2020 at 07:10:44PM +0000, Catalin Marinas wrote:
> > On Mon, Nov 09, 2020 at 09:30:18PM +0000, Will Deacon wrote:
> > > +static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope)
> > > +{
> > > +	if (!has_cpuid_feature(entry, scope))
> > > +		return allow_mismatched_32bit_el0;
> > 
> > I still don't like overriding the cpufeature mechanism in this way. What about
> > something like below? It still doesn't fit perfectly but at least the
> > capability represents what was detected in the system. We then decide in
> > system_supports_32bit_el0() whether to allow asymmetry. There is an
> > extra trick to park a non-AArch32 capable CPU in has_32bit_el0() if it
> > comes up late and the feature has already been advertised with
> > !allow_mismatched_32bit_el0.
> 
> I deliberately allow late onlining of 64-bit-only cores and I don't think
> this is something we should forbid

I agree and my patch allows this. That's a property of
WEAK_LOCAL_CPUFEATURE.
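
To make that concrete, the table entry would end up looking roughly
like this (a sketch only; the .type change is the point, and the other
field names are from memory, so don't take them literally):

	{
		.desc = "32-bit EL0 Support",
		.capability = ARM64_HAS_32BIT_EL0,
		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
		.matches = has_cpuid_feature,
		.sys_reg = SYS_ID_AA64PFR0_EL1,
		.sign = FTR_UNSIGNED,
		.field_pos = ID_AA64PFR0_EL0_SHIFT,
		.min_field_value = ID_AA64PFR0_ELx_32BIT_64BIT,
	},

A weak local CPU feature is detected from the early CPUs and any
conflict on a late CPU is ignored, which gives exactly the late
onlining behaviour you want.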

> (although it's not clear from your patch when
> allow_mismatched_32bit_el0 gets set).

It doesn't; the patch was meant as a discussion point around the
cpufeature framework, while still relying on your other patches for
the cpumask and the cmdline argument.

> Furthermore, killing CPUs from the matches callback feels _very_ dodgy
> to me, as it's invoked indirectly by things such as
> this_cpu_has_cap().

Yeah, this part is not nice. What we want here is for a late
64-bit-only CPU to be parked if !allow_mismatched_32bit_el0 and 32-bit
support was already detected (i.e. the symmetric case).
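
Something along these lines in the late-CPU verification path would
express that directly instead of hiding it in .matches (the helper is
hypothetical and the exact hook point is hand-waved; it's only meant
to illustrate the intent):

static void verify_32bit_el0(void)
{
	bool cpu_32bit = id_aa64pfr0_32bit_el0(read_cpuid(ID_AA64PFR0_EL1));

	/*
	 * A late 64-bit-only CPU is only a problem if 32-bit EL0 has
	 * already been advertised and asymmetry was not requested.
	 */
	if (!cpu_32bit && !allow_mismatched_32bit_el0 &&
	    system_supports_32bit_el0()) {
		pr_crit("CPU%d: missing AArch32 EL0 support\n",
			smp_processor_id());
		cpu_die_early();
	}
}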

> > I find it clearer, though I probably stared at it more than at your
> > patch ;).
> 
> Yeah, swings and roundabouts...
> 
> I think we're quibbling on implementation details a bit here whereas we
> should probably be focussing on what to do about execve() and CPU hotplug.

Do we need to solve the execve() problem? If yes, we have to get the
scheduler involved. The hotplug case is simpler, I think: just make
sure at least one 32-bit-capable CPU stays online, and there are no
ABI changes.

For execve(), arguably we don't break the ABI at all, since the
affinity change could be made later, when the task is scheduled. But
given that the feature is a user opt-in anyway, we can simply document
the ABI changes.
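
If we did decide to touch the task's affinity at execve() time, I'd
expect it to have roughly this shape (purely illustrative; I'm
assuming a cpumask helper along the lines of system_32bit_el0_cpumask()
from your series, and the exact hook point is a guess):

static void adjust_compat_task_affinity(struct task_struct *p)
{
	if (!allow_mismatched_32bit_el0)
		return;

	/*
	 * Restrict a task that has just become 32-bit to the
	 * 32-bit-capable CPUs.
	 */
	set_cpus_allowed_ptr(p, system_32bit_el0_cpumask());
}

That's where the scheduler interaction gets interesting, e.g. what to
do when the resulting mask conflicts with the task's existing affinity
or its cpuset.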

> Your patch doesn't apply on top of my series or replace this one, so there's
> not an awful lot I can do with it. I'm about to post a v3 with a tentative
> solution for execve(), so please could you demonstrate your idea on top of
> that so I can see how it fits together?

I'll give it a go and if it looks any nicer, I'll post something. As you
say, it's more like personal preference on the implementation details.

On the functional aspects, in the !allow_mismatched_32bit_el0 case we
must preserve the current behaviour: symmetric detection, and parking
late CPUs that don't comply. In the allow_mismatched_32bit_el0 case,
we relax this to the equivalent of a weak feature (the capability may
be missing or present on early/late CPUs), but we only report 32-bit
support if we found an early 32-bit-capable CPU (I ended up drawing a
truth table on a piece of paper).
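
In code terms, the reporting rule is just the existing helper; the
capability is finalised from the early CPUs, so a late 32-bit-capable
CPU can't switch it on after the fact (sketch only):

/*
 * Rough summary of the truth table:
 *  - 32-bit EL0 is advertised iff an early CPU had it;
 *  - !allow_mismatched_32bit_el0: a late 64-bit-only CPU is parked
 *    when 32-bit EL0 is advertised (current behaviour);
 *  - allow_mismatched_32bit_el0: late CPUs may come up either way.
 */
static inline bool system_supports_32bit_el0(void)
{
	return cpus_have_const_cap(ARM64_HAS_32BIT_EL0);
}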

-- 
Catalin

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2020-11-13 10:27 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-09 21:30 [PATCH v2 0/6] An alternative series for asymmetric AArch32 systems Will Deacon
2020-11-09 21:30 ` [PATCH v2 1/6] arm64: cpuinfo: Split AArch32 registers out into a separate struct Will Deacon
2020-11-09 21:30 ` [PATCH v2 2/6] arm64: Allow mismatched 32-bit EL0 support Will Deacon
2020-11-11 19:10   ` Catalin Marinas
2020-11-13  9:36     ` Will Deacon
2020-11-13 10:26       ` Catalin Marinas
2020-11-09 21:30 ` [PATCH v2 3/6] KVM: arm64: Kill 32-bit vCPUs on systems with mismatched " Will Deacon
2020-11-10  9:33   ` Marc Zyngier
2020-11-09 21:30 ` [PATCH v2 4/6] arm64: Kill 32-bit applications scheduled on 64-bit-only CPUs Will Deacon
2020-11-09 21:30 ` [PATCH v2 5/6] arm64: Advertise CPUs capable of running 32-bit applications in sysfs Will Deacon
2020-11-10  7:04   ` Greg Kroah-Hartman
2020-11-10  9:28     ` Catalin Marinas
2020-11-10  9:36       ` Greg Kroah-Hartman
2020-11-10  9:53         ` Marc Zyngier
2020-11-10 10:10           ` Greg Kroah-Hartman
2020-11-10 10:46             ` Marc Zyngier
2020-11-10 10:57         ` Catalin Marinas
2020-11-09 21:30 ` [PATCH v2 6/6] arm64: Hook up cmdline parameter to allow mismatched 32-bit EL0 Will Deacon
