Message-ID: <20210709220017.448985284@goodmis.org>
User-Agent: quilt/0.66
Date: Fri, 09 Jul 2021 17:59:55 -0400
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior, John Kacur,
 Daniel Wagner, Tom Zanussi, "Srivatsa S. Bhat", stable@kernel.org,
 "Peter Zijlstra (Intel)", Ingo Molnar, Valentin Schneider, Paul Gortmaker
Subject: [PATCH RT 2/8] sched: Simplify migration_cpu_stop()
References: <20210709215953.122804544@goodmis.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

5.10.47-rt46-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra

commit c20cf065d4a619d394d23290093b1002e27dff86 upstream.

When affine_move_task() issues a migration_cpu_stop(), the purpose of
that function is to complete that @pending, not any random other
p->migration_pending that might have gotten installed since.

This realization much simplifies migration_cpu_stop() and allows
further necessary steps to fix all this as it provides the guarantee
that @pending's stopper will complete @pending (and not some random
other @pending).
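[Editor's aside, not part of the patch: the guarantee described above can be
modeled in a few lines of plain C. The names below (struct pending,
struct task, complete_own_pending) are simplified stand-ins for the kernel's
set_affinity_pending, task_struct and arg->pending, invented for this sketch
only.]

	#include <stdbool.h>
	#include <stddef.h>

	struct pending {
		bool done;	/* models completion of pending->done */
	};

	struct task {
		/* may be replaced concurrently by a newer request */
		struct pending *migration_pending;
	};

	/*
	 * Dispatched on behalf of @own (models arg->pending): always
	 * complete @own, but detach p->migration_pending only when it
	 * still points at @own. A newer pending installed in the
	 * meantime belongs to its own stopper and is left alone.
	 */
	static void complete_own_pending(struct task *p, struct pending *own)
	{
		if (p->migration_pending == own)
			p->migration_pending = NULL;
		own->done = true;
	}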
Fixes: 6d337eab041d ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
Cc: stable@kernel.org
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Reviewed-by: Valentin Schneider
Link: https://lkml.kernel.org/r/20210224131355.430014682@infradead.org
Signed-off-by: Paul Gortmaker
Signed-off-by: Steven Rostedt (VMware)
---
 kernel/sched/core.c | 56 +++++++--------------------------------------
 1 file changed, 8 insertions(+), 48 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a3dea38f410a..d497b13efb53 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1934,8 +1934,8 @@ static struct rq *__migrate_task(struct rq *rq, struct rq_flags *rf,
  */
 static int migration_cpu_stop(void *data)
 {
-	struct set_affinity_pending *pending;
 	struct migration_arg *arg = data;
+	struct set_affinity_pending *pending = arg->pending;
 	struct task_struct *p = arg->task;
 	int dest_cpu = arg->dest_cpu;
 	struct rq *rq = this_rq();
@@ -1957,25 +1957,6 @@ static int migration_cpu_stop(void *data)
 	raw_spin_lock(&p->pi_lock);
 	rq_lock(rq, &rf);
 
-	pending = p->migration_pending;
-	if (pending && !arg->pending) {
-		/*
-		 * This happens from sched_exec() and migrate_task_to(),
-		 * neither of them care about pending and just want a task to
-		 * maybe move about.
-		 *
-		 * Even if there is a pending, we can ignore it, since
-		 * affine_move_task() will have it's own stop_work's in flight
-		 * which will manage the completion.
-		 *
-		 * Notably, pending doesn't need to match arg->pending. This can
-		 * happen when tripple concurrent affine_move_task() first sets
-		 * pending, then clears pending and eventually sets another
-		 * pending.
-		 */
-		pending = NULL;
-	}
-
 	/*
 	 * If task_rq(p) != rq, it cannot be migrated here, because we're
 	 * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
@@ -1986,31 +1967,20 @@ static int migration_cpu_stop(void *data)
 			goto out;
 
 		if (pending) {
-			p->migration_pending = NULL;
+			if (p->migration_pending == pending)
+				p->migration_pending = NULL;
 			complete = true;
 		}
 
-		/* migrate_enable() -- we must not race against SCA */
-		if (dest_cpu < 0) {
-			/*
-			 * When this was migrate_enable() but we no longer
-			 * have a @pending, a concurrent SCA 'fixed' things
-			 * and we should be valid again. Nothing to do.
-			 */
-			if (!pending) {
-				WARN_ON_ONCE(!cpumask_test_cpu(task_cpu(p), &p->cpus_mask));
-				goto out;
-			}
-
+		if (dest_cpu < 0)
 			dest_cpu = cpumask_any_distribute(&p->cpus_mask);
-		}
 
 		if (task_on_rq_queued(p))
 			rq = __migrate_task(rq, &rf, p, dest_cpu);
 		else
 			p->wake_cpu = dest_cpu;
 
-	} else if (dest_cpu < 0 || pending) {
+	} else if (pending) {
 		/*
 		 * This happens when we get migrated between migrate_enable()'s
 		 * preempt_enable() and scheduling the stopper task. At that
@@ -2025,22 +1995,13 @@ static int migration_cpu_stop(void *data)
 		 * ->pi_lock, so the allowed mask is stable - if it got
 		 * somewhere allowed, we're done.
 		 */
-		if (pending && cpumask_test_cpu(task_cpu(p), p->cpus_ptr)) {
-			p->migration_pending = NULL;
+		if (cpumask_test_cpu(task_cpu(p), p->cpus_ptr)) {
+			if (p->migration_pending == pending)
+				p->migration_pending = NULL;
 			complete = true;
 			goto out;
 		}
 
-		/*
-		 * When this was migrate_enable() but we no longer have an
-		 * @pending, a concurrent SCA 'fixed' things and we should be
-		 * valid again. Nothing to do.
-		 */
-		if (!pending) {
-			WARN_ON_ONCE(!cpumask_test_cpu(task_cpu(p), &p->cpus_mask));
-			goto out;
-		}
-
 		/*
 		 * When migrate_enable() hits a rq mis-match we can't reliably
 		 * determine is_migration_disabled() and so have to chase after
@@ -2058,7 +2019,6 @@ static int migration_cpu_stop(void *data)
 		complete_all(&pending->done);
 
 	/* For pending->{arg,stop_work} */
-	pending = arg->pending;
 	if (pending && refcount_dec_and_test(&pending->refs))
 		wake_up_var(&pending->refs);
 
-- 
2.30.2
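[Editor's aside on the final hunk: dropping the stale "pending = arg->pending;"
reload is safe because pending is now initialized from arg->pending at entry;
the tail of the function simply drops the reference that was taken for
pending->{arg,stop_work} when the stop work was queued, waking the waiter on
the final drop. A minimal model of that drop, using C11 atomics in place of
the kernel's refcount_t (names invented for this sketch):]

	#include <stdatomic.h>
	#include <stdbool.h>

	struct pending_ref {
		atomic_int refs;	/* models pending->refs (kernel: refcount_t) */
	};

	/*
	 * Drop the reference held for pending->{arg,stop_work}; returns
	 * true on the final drop, at which point the waiter may be woken
	 * (the kernel uses wake_up_var(&pending->refs)).
	 */
	static bool put_pending(struct pending_ref *p)
	{
		return atomic_fetch_sub_explicit(&p->refs, 1,
						 memory_order_acq_rel) == 1;
	}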