From: Will Deacon
To: linux-arm-kernel@lists.infradead.org
Cc: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Will Deacon,
    Catalin Marinas, Marc Zyngier, Greg Kroah-Hartman, Peter Zijlstra,
    Morten Rasmussen, Qais Yousef, Suren Baghdasaryan, Quentin Perret,
    Tejun Heo, Li Zefan, Johannes Weiner, Ingo Molnar, Juri Lelli,
    Vincent Guittot, kernel-team@android.com
Subject: [PATCH v3 07/14] sched: Introduce restrict_cpus_allowed_ptr() to limit task CPU affinity
Date: Fri, 13 Nov 2020 09:37:12 +0000
Message-Id: <20201113093720.21106-8-will@kernel.org>
In-Reply-To: <20201113093720.21106-1-will@kernel.org>
References: <20201113093720.21106-1-will@kernel.org>

Asymmetric systems may not offer the same level of userspace ISA support
across all CPUs, meaning that some applications cannot be executed by
some CPUs. As a concrete example, upcoming arm64 big.LITTLE designs do
not feature support for 32-bit applications on both clusters.

Although userspace can carefully manage the affinity masks for such
tasks, one place where it is particularly problematic is execve()
because the CPU on which the execve() is occurring may be incompatible
with the new application image. In such a situation, it is desirable to
restrict the affinity mask of the task and ensure that the new image is
entered on a compatible CPU. From userspace's point of view, this looks
the same as if the incompatible CPUs have been hotplugged off in its
affinity mask.
In preparation for restricting the affinity mask for compat tasks on
arm64 systems without uniform support for 32-bit applications, introduce
restrict_cpus_allowed_ptr(), which allows the current affinity mask for
a task to be shrunk to its intersection with a parameter mask.

Signed-off-by: Will Deacon
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   | 73 ++++++++++++++++++++++++++++++++++---------
 2 files changed, 59 insertions(+), 15 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 063cd120b459..1cd12c3ce9ee 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1631,6 +1631,7 @@ extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_cpus_
 #ifdef CONFIG_SMP
 extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask);
 extern int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask);
+extern int restrict_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *mask);
 #else
 static inline void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 {
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d2003a7d5ab5..818c8f7bdf2a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1860,24 +1860,18 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
 }
 
 /*
- * Change a given task's CPU affinity. Migrate the thread to a
- * proper CPU and schedule it away if the CPU it's executing on
- * is removed from the allowed bitmask.
- *
- * NOTE: the caller must have a valid reference to the task, the
- * task must not exit() & deallocate itself prematurely. The
- * call is not atomic; no spinlocks may be held.
+ * Called with both p->pi_lock and rq->lock held; drops both before returning.
  */
-static int __set_cpus_allowed_ptr(struct task_struct *p,
-				  const struct cpumask *new_mask, bool check)
+static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
+					 const struct cpumask *new_mask,
+					 bool check,
+					 struct rq *rq,
+					 struct rq_flags *rf)
 {
 	const struct cpumask *cpu_valid_mask = cpu_active_mask;
 	unsigned int dest_cpu;
-	struct rq_flags rf;
-	struct rq *rq;
 	int ret = 0;
 
-	rq = task_rq_lock(p, &rf);
 	update_rq_clock(rq);
 
 	if (p->flags & PF_KTHREAD) {
@@ -1929,7 +1923,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 	if (task_running(rq, p) || p->state == TASK_WAKING) {
 		struct migration_arg arg = { p, dest_cpu };
 		/* Need help from migration thread: drop lock and wait. */
-		task_rq_unlock(rq, p, &rf);
+		task_rq_unlock(rq, p, rf);
 		stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
 		return 0;
 	} else if (task_on_rq_queued(p)) {
@@ -1937,20 +1931,69 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
 		 * OK, since we're going to drop the lock immediately
 		 * afterwards anyway.
 		 */
-		rq = move_queued_task(rq, &rf, p, dest_cpu);
+		rq = move_queued_task(rq, rf, p, dest_cpu);
 	}
 out:
-	task_rq_unlock(rq, p, &rf);
+	task_rq_unlock(rq, p, rf);
 
 	return ret;
 }
 
+/*
+ * Change a given task's CPU affinity. Migrate the thread to a
+ * proper CPU and schedule it away if the CPU it's executing on
+ * is removed from the allowed bitmask.
+ *
+ * NOTE: the caller must have a valid reference to the task, the
+ * task must not exit() & deallocate itself prematurely. The
+ * call is not atomic; no spinlocks may be held.
+ */
+static int __set_cpus_allowed_ptr(struct task_struct *p,
+				  const struct cpumask *new_mask, bool check)
+{
+	struct rq_flags rf;
+	struct rq *rq;
+
+	rq = task_rq_lock(p, &rf);
+	return __set_cpus_allowed_ptr_locked(p, new_mask, check, rq, &rf);
+}
+
 int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
 {
 	return __set_cpus_allowed_ptr(p, new_mask, false);
 }
 EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
 
+/*
+ * Change a given task's CPU affinity to the intersection of its current
+ * affinity mask and @subset_mask. If the resulting mask is empty, leave
+ * the affinity unchanged and return -EINVAL.
+ */
+int restrict_cpus_allowed_ptr(struct task_struct *p,
+			      const struct cpumask *subset_mask)
+{
+	struct rq_flags rf;
+	struct rq *rq;
+	cpumask_var_t new_mask;
+	int retval;
+
+	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
+		return -ENOMEM;
+
+	rq = task_rq_lock(p, &rf);
+	if (!cpumask_and(new_mask, &p->cpus_mask, subset_mask)) {
+		task_rq_unlock(rq, p, &rf);
+		retval = -EINVAL;
+		goto out_free_new_mask;
+	}
+
+	retval = __set_cpus_allowed_ptr_locked(p, new_mask, false, rq, &rf);
+
+out_free_new_mask:
+	free_cpumask_var(new_mask);
+	return retval;
+}
+
 void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
 {
 #ifdef CONFIG_SCHED_DEBUG
-- 
2.29.2.299.gdc1121823c-goog
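
For illustration only, and not part of the patch above: a minimal sketch of
how an architecture might call the new interface from its execve() path once
it knows the incoming image can only run on a subset of CPUs. The helper
arch_cpus_for_new_image() is hypothetical and stands in for whatever
compatible-CPU mask the architecture computes; only
restrict_cpus_allowed_ptr() itself is introduced by this patch.

/*
 * Hypothetical caller sketch -- not part of this patch.
 * arch_cpus_for_new_image() is a made-up helper that would return the
 * mask of CPUs able to execute the new binary.
 */
static int hypothetical_restrict_for_exec(struct task_struct *p)
{
	const struct cpumask *compatible = arch_cpus_for_new_image();

	/*
	 * Shrink p->cpus_mask to its intersection with 'compatible'.
	 * Returns -EINVAL (affinity left untouched) if the intersection
	 * is empty, or -ENOMEM if the temporary mask cannot be allocated.
	 */
	return restrict_cpus_allowed_ptr(p, compatible);
}

Note that, unlike set_cpus_allowed_ptr(), the new call restricts relative to
the task's current mask rather than installing an absolute replacement, so
repeated calls can only ever narrow the affinity further.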