From mboxrd@z Thu Jan  1 00:00:00 1970
From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
	Will Deacon, Catalin Marinas, Marc Zyngier, Greg Kroah-Hartman,
	Peter Zijlstra, Morten Rasmussen, Qais Yousef, Suren Baghdasaryan,
	Quentin Perret, Tejun Heo, Li Zefan, Johannes Weiner, Ingo Molnar,
	Juri Lelli, Vincent Guittot, kernel-team@android.com
Subject: [PATCH v5 13/15] arm64: Prevent offlining first CPU with 32-bit EL0 on mismatched system
Date: Tue, 8 Dec 2020 13:28:33 +0000
Message-Id: <20201208132835.6151-14-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201208132835.6151-1-will@kernel.org>
References: <20201208132835.6151-1-will@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

If we want to support 32-bit applications, then when we identify a CPU
with mismatched 32-bit EL0 support we must ensure that we will always
have an active 32-bit CPU available to us from then on. This is
important for the scheduler, because is_cpu_allowed() will be
constrained to 32-bit CPUs for compat tasks and forced migration due to
a hotplug event will hang if no 32-bit CPUs are available.

On detecting a mismatch, prevent offlining of either the mismatching
CPU if it is 32-bit capable, or find the first active 32-bit capable
CPU otherwise.

Signed-off-by: Will Deacon
---
 arch/arm64/kernel/cpufeature.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 088bf668cbe7..08b558a221b7 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1237,6 +1237,8 @@ has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
 
 static int enable_mismatched_32bit_el0(unsigned int cpu)
 {
+	static int lucky_winner = -1;
+
 	struct cpuinfo_arm64 *info = &per_cpu(cpu_data, cpu);
 	bool cpu_32bit = id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0);
 
@@ -1245,6 +1247,22 @@ static int enable_mismatched_32bit_el0(unsigned int cpu)
 		static_branch_enable_cpuslocked(&arm64_mismatched_32bit_el0);
 	}
 
+	if (cpumask_test_cpu(0, cpu_32bit_el0_mask) == cpu_32bit)
+		return 0;
+
+	if (lucky_winner >= 0)
+		return 0;
+
+	/*
+	 * We've detected a mismatch. We need to keep one of our CPUs with
+	 * 32-bit EL0 online so that is_cpu_allowed() doesn't end up rejecting
+	 * every CPU in the system for a 32-bit task.
+	 */
+	lucky_winner = cpu_32bit ? cpu : cpumask_any_and(cpu_32bit_el0_mask,
+							 cpu_active_mask);
+	get_cpu_device(lucky_winner)->offline_disabled = true;
+	pr_info("Asymmetric 32-bit EL0 support detected on CPU %u; CPU hot-unplug disabled on CPU %u\n",
+		cpu, lucky_winner);
 	return 0;
 }
 
-- 
2.29.2.576.ga3fc446d84-goog
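
The hunk above relies on kernel-internal helpers (cpumask_any_and(),
get_cpu_device()), so the stand-alone sketch below models the same
"lucky winner" selection in plain user-space C. It is only an
illustration, not kernel code: the names simulate_cpu_online(),
disable_offlining() and the fixed NR_CPUS are invented for the example,
and CPU masks are ordinary bitmasks rather than struct cpumask.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define NR_CPUS 8

static uint32_t cpu_32bit_el0_mask;	/* CPUs that can run 32-bit EL0 */
static uint32_t cpu_active_mask;	/* CPUs currently active */
static int lucky_winner = -1;		/* CPU whose hot-unplug is blocked */

static bool test_cpu(uint32_t mask, int cpu)
{
	return mask & (1u << cpu);
}

/* Stand-in for get_cpu_device(cpu)->offline_disabled = true. */
static void disable_offlining(int cpu)
{
	printf("CPU%d: hot-unplug disabled\n", cpu);
}

/* Rough model of enable_mismatched_32bit_el0() from the patch. */
static void simulate_cpu_online(int cpu, bool cpu_32bit)
{
	cpu_active_mask |= 1u << cpu;
	if (cpu_32bit)
		cpu_32bit_el0_mask |= 1u << cpu;

	/* This CPU agrees with CPU 0 about 32-bit EL0: no new mismatch. */
	if (test_cpu(cpu_32bit_el0_mask, 0) == cpu_32bit)
		return;

	/* A winner was already pinned by an earlier mismatch. */
	if (lucky_winner >= 0)
		return;

	/*
	 * Keep one 32-bit-capable CPU online: this CPU if it is the
	 * 32-bit one, otherwise any active CPU from the 32-bit mask
	 * (non-empty in this branch, since CPU 0 must be in it).
	 */
	if (cpu_32bit) {
		lucky_winner = cpu;
	} else {
		for (lucky_winner = 0; lucky_winner < NR_CPUS; lucky_winner++)
			if (test_cpu(cpu_32bit_el0_mask & cpu_active_mask,
				     lucky_winner))
				break;
	}
	disable_offlining(lucky_winner);
}

int main(void)
{
	simulate_cpu_online(0, false);	/* boot CPU: 64-bit only */
	simulate_cpu_online(1, true);	/* mismatch: CPU1 becomes the winner */
	simulate_cpu_online(2, true);	/* winner already chosen: no change */
	return 0;
}

Running this with CPU 0 reported as 64-bit-only and CPU 1 as
32-bit-capable pins CPU 1; later 32-bit CPUs leave the choice alone,
mirroring the early return once lucky_winner >= 0.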
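
The user-visible effect is that the chosen CPU can no longer be
hot-unplugged through sysfs: with offline_disabled set on its device,
device_offline() refuses the request, so writing 0 to that CPU's
online node should fail, typically with EPERM. A rough user-space
check, assuming CPU 1 is the one named in the pr_info message above:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* cpu1 is only an example; use the CPU named in the kernel log. */
	const char *path = "/sys/devices/system/cpu/cpu1/online";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror(path);		/* needs root */
		return 1;
	}

	/* Ask the kernel to offline the CPU by writing "0". */
	if (write(fd, "0", 1) < 0)
		printf("offline refused: %s\n", strerror(errno));
	else
		printf("CPU went offline (no mismatch protection here)\n");

	close(fd);
	return 0;
}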