From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
    Will Deacon, Catalin Marinas, Marc Zyngier, Greg Kroah-Hartman,
    Peter Zijlstra, Morten Rasmussen, Qais Yousef, Suren Baghdasaryan,
    Quentin Perret, Tejun Heo, Johannes Weiner, Ingo Molnar, Juri Lelli,
    Vincent Guittot, "Rafael J. Wysocki", Dietmar Eggemann,
    Daniel Bristot de Oliveira, kernel-team@android.com
Subject: [PATCH v7 07/22] sched: Introduce task_cpu_possible_mask() to limit fallback rq selection
Date: Tue, 25 May 2021 16:14:17 +0100
Message-Id: <20210525151432.16875-8-will@kernel.org>
In-Reply-To: <20210525151432.16875-1-will@kernel.org>
References: <20210525151432.16875-1-will@kernel.org>

Asymmetric systems may not offer the same level of userspace ISA support
across all CPUs, meaning that some applications cannot be executed by
some CPUs. As a concrete example, upcoming arm64 big.LITTLE designs do
not feature support for 32-bit applications on both clusters.

On such a system, we must take care not to migrate a task to an
unsupported CPU when forcefully moving tasks in select_fallback_rq() in
response to a CPU hot-unplug operation.

Introduce a task_cpu_possible_mask() hook which, given a task argument,
allows an architecture to return a cpumask of CPUs that are capable of
executing that task. The default implementation returns the
cpu_possible_mask, since sane machines do not suffer from per-cpu ISA
limitations that affect scheduling. The new mask is used when selecting
the fallback runqueue as a last resort before forcing a migration to the
first active CPU.
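For illustration only (this is not part of the patch): an asymmetric
architecture could override the hook from its <asm/mmu_context.h>
roughly as sketched below. Here arch_task_is_32bit() and
cpu_32bit_capable_mask are hypothetical placeholder names for whatever
per-task and per-CPU feature state the architecture actually tracks:

/*
 * Hypothetical arch override (sketch only). arch_task_is_32bit() and
 * cpu_32bit_capable_mask are invented placeholders, not real kernel
 * symbols.
 */
extern const struct cpumask *cpu_32bit_capable_mask;
extern bool arch_task_is_32bit(struct task_struct *p);

/*
 * 32-bit tasks may only run on the subset of CPUs that implement the
 * 32-bit ISA; all other tasks can run anywhere.
 */
#define task_cpu_possible_mask(p)					\
	(arch_task_is_32bit(p) ? cpu_32bit_capable_mask			\
			       : cpu_possible_mask)

Because the generic header defines task_cpu_possible() in terms of
task_cpu_possible_mask() whenever the latter is overridden, an
architecture only has to provide the mask itself.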
Reviewed-by: Quentin Perret
Signed-off-by: Will Deacon
---
 include/linux/mmu_context.h | 11 +++++++++++
 kernel/sched/core.c         |  5 ++---
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmu_context.h b/include/linux/mmu_context.h
index 03dee12d2b61..1a599ba3524f 100644
--- a/include/linux/mmu_context.h
+++ b/include/linux/mmu_context.h
@@ -14,4 +14,15 @@ static inline void leave_mm(int cpu) { }
 #endif
 
+/*
+ * CPUs that are capable of running task @p. By default, we assume a sane,
+ * homogeneous system. Must contain at least one active CPU.
+ */
+#ifndef task_cpu_possible_mask
+# define task_cpu_possible_mask(p)	cpu_possible_mask
+# define task_cpu_possible(cpu, p)	true
+#else
+# define task_cpu_possible(cpu, p)	cpumask_test_cpu((cpu), task_cpu_possible_mask(p))
+#endif
+
 #endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1702a60d178d..00ed51528c70 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1814,7 +1814,7 @@ static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
 
 	/* Non kernel threads are not allowed during either online or offline. */
 	if (!(p->flags & PF_KTHREAD))
-		return cpu_active(cpu);
+		return cpu_active(cpu) && task_cpu_possible(cpu, p);
 
 	/* KTHREAD_IS_PER_CPU is always allowed. */
 	if (kthread_is_per_cpu(p))
@@ -2791,10 +2791,9 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
 		 *
 		 * More yuck to audit.
		 */
-		do_set_cpus_allowed(p, cpu_possible_mask);
+		do_set_cpus_allowed(p, task_cpu_possible_mask(p));
 		state = fail;
 		break;
-
 	case fail:
 		BUG();
 		break;
-- 
2.31.1.818.g46aad6cb9e-goog
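For the wider picture, here is a condensed paraphrase (illustrative
only, not the verbatim kernel code: the real select_fallback_rq() also
handles the cpuset fallback state and per-CPU kthreads) of the fallback
path after this change. The key point is that the last-resort affinity
widening is now bounded by task_cpu_possible_mask(p) rather than by
cpu_possible_mask:

/*
 * Condensed sketch of the fallback path. select_fallback_rq_sketch()
 * is an invented name for illustration.
 */
static int select_fallback_rq_sketch(int cpu, struct task_struct *p)
{
	int dest_cpu;

	/* Any active CPU the task is both allowed and able to run on? */
	for_each_cpu(dest_cpu, p->cpus_ptr) {
		if (is_cpu_allowed(p, dest_cpu))
			return dest_cpu;
	}

	/*
	 * Last resort: widen the affinity, but never beyond the CPUs
	 * that can actually execute this task.
	 */
	do_set_cpus_allowed(p, task_cpu_possible_mask(p));

	for_each_cpu(dest_cpu, p->cpus_ptr) {
		if (is_cpu_allowed(p, dest_cpu))
			return dest_cpu;
	}

	/* The architecture's mask must contain at least one active CPU. */
	BUG();
}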