Message-Id: <20190218173514.299003758@infradead.org>
User-Agent: quilt/0.65
Date: Mon, 18 Feb 2019 17:56:27 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: mingo@kernel.org, tglx@linutronix.de, pjt@google.com,
 tim.c.chen@linux.intel.com, torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, subhra.mazumdar@oracle.com,
 fweisbec@gmail.com, keescook@chromium.org, kerrnel@google.com,
 "Peter Zijlstra (Intel)" <peterz@infradead.org>
Subject: [RFC][PATCH 07/16] sched: Allow put_prev_task() to drop rq->lock
References: <20190218165620.383905466@infradead.org>

Currently the pick_next_task() loop is convoluted and ugly because of
how it can drop the rq->lock and then needs to restart the picking.

For the RT and Deadline classes, the balancing is done in
put_prev_task(), and we could do that before the picking loop. Make
this possible.
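To illustrate where this is headed (an editor's sketch, not part of the
patch): once put_prev_task() takes an rq_flags pointer, a pre-pick path
can invoke it with a valid rf and let the RT/DL callbacks drop rq->lock
to pull tasks before any picking happens. The loop shape and the
function name pick_next_task_sketch() below are hypothetical; only the
callback signature comes from this patch.

	/*
	 * Sketch, assuming kernel context: balance via put_prev_task()
	 * up front, then run a pick loop that never needs restarting.
	 */
	static struct task_struct *
	pick_next_task_sketch(struct rq *rq, struct task_struct *prev,
			      struct rq_flags *rf)
	{
		const struct sched_class *class;
		struct task_struct *p;

		/* May briefly drop and re-acquire rq->lock (RT/DL pull). */
		prev->sched_class->put_prev_task(rq, prev, rf);

		for_each_class(class) {
			p = class->pick_next_task(rq, NULL, NULL);
			if (p)
				return p;
		}

		BUG(); /* The idle class should always have a runnable task. */
	}
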
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/sched/core.c      |    2 +-
 kernel/sched/deadline.c  |   14 +++++++++++++-
 kernel/sched/fair.c      |    2 +-
 kernel/sched/idle.c      |    2 +-
 kernel/sched/rt.c        |   14 +++++++++++++-
 kernel/sched/sched.h     |    4 ++--
 kernel/sched/stop_task.c |    2 +-
 7 files changed, 32 insertions(+), 8 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5592,7 +5592,7 @@ static void calc_load_migrate(struct rq
 	atomic_long_add(delta, &calc_load_tasks);
 }
 
-static void put_prev_task_fake(struct rq *rq, struct task_struct *prev)
+static void put_prev_task_fake(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 {
 }
 
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1773,13 +1773,25 @@ pick_next_task_dl(struct rq *rq, struct
 	return p;
 }
 
-static void put_prev_task_dl(struct rq *rq, struct task_struct *p)
+static void put_prev_task_dl(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
 {
 	update_curr_dl(rq);
 
 	update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 1);
 	if (on_dl_rq(&p->dl) && p->nr_cpus_allowed > 1)
 		enqueue_pushable_dl_task(rq, p);
+
+	if (rf && !on_dl_rq(&p->dl) && need_pull_dl_task(rq, p)) {
+		/*
+		 * This is OK, because current is on_cpu, which avoids it being
+		 * picked for load-balance and preemption/IRQs are still
+		 * disabled avoiding further scheduler activity on it and we've
+		 * not yet started the picking loop.
+		 */
+		rq_unpin_lock(rq, rf);
+		pull_dl_task(rq);
+		rq_repin_lock(rq, rf);
+	}
 }
 
 /*
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7080,7 +7080,7 @@ done: __maybe_unused;
 /*
  * Account for a descheduled task:
  */
-static void put_prev_task_fair(struct rq *rq, struct task_struct *prev)
+static void put_prev_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 {
 	struct sched_entity *se = &prev->se;
 	struct cfs_rq *cfs_rq;
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -373,7 +373,7 @@ static void check_preempt_curr_idle(stru
 	resched_curr(rq);
 }
 
-static void put_prev_task_idle(struct rq *rq, struct task_struct *prev)
+static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 {
 }
 
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1593,7 +1593,7 @@ pick_next_task_rt(struct rq *rq, struct
 	return p;
 }
 
-static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
+static void put_prev_task_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
 {
 	update_curr_rt(rq);
 
@@ -1605,6 +1605,18 @@ static void put_prev_task_rt(struct rq *
 	 */
 	if (on_rt_rq(&p->rt) && p->nr_cpus_allowed > 1)
 		enqueue_pushable_task(rq, p);
+
+	if (rf && !on_rt_rq(&p->rt) && need_pull_rt_task(rq, p)) {
+		/*
+		 * This is OK, because current is on_cpu, which avoids it being
+		 * picked for load-balance and preemption/IRQs are still
+		 * disabled avoiding further scheduler activity on it and we've
+		 * not yet started the picking loop.
+		 */
+		rq_unpin_lock(rq, rf);
+		pull_rt_task(rq);
+		rq_repin_lock(rq, rf);
+	}
 }
 
 #ifdef CONFIG_SMP
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1675,7 +1675,7 @@ struct sched_class {
 	struct task_struct * (*pick_next_task)(struct rq *rq,
 					       struct task_struct *prev,
 					       struct rq_flags *rf);
-	void (*put_prev_task)(struct rq *rq, struct task_struct *p);
+	void (*put_prev_task)(struct rq *rq, struct task_struct *p, struct rq_flags *rf);
 	void (*set_next_task)(struct rq *rq, struct task_struct *p);
 
 #ifdef CONFIG_SMP
@@ -1721,7 +1721,7 @@ struct sched_class {
 static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
 {
 	WARN_ON_ONCE(rq->curr != prev);
-	prev->sched_class->put_prev_task(rq, prev);
+	prev->sched_class->put_prev_task(rq, prev, NULL);
 }
 
 static inline void set_next_task(struct rq *rq, struct task_struct *next)
--- a/kernel/sched/stop_task.c
+++ b/kernel/sched/stop_task.c
@@ -59,7 +59,7 @@ static void yield_task_stop(struct rq *r
 	BUG(); /* the stop task should never yield, its pointless. */
 }
 
-static void put_prev_task_stop(struct rq *rq, struct task_struct *prev)
+static void put_prev_task_stop(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 {
 	struct task_struct *curr = rq->curr;
 	u64 delta_exec;
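A usage note (editor's addition, not part of the patch): the
put_prev_task() wrapper in sched.h is why every existing caller keeps
its old semantics. Passing rf == NULL makes the RT/DL callbacks skip
the pull, so only a caller that explicitly passes a valid rf can
observe rq->lock being dropped:

	put_prev_task(rq, prev);				/* rf == NULL: never drops rq->lock */
	prev->sched_class->put_prev_task(rq, prev, rf);		/* may drop/re-take rq->lock */

The second form is hypothetical at this point: nothing in this patch
passes a non-NULL rf yet; per the changelog, a later patch in the
series is expected to do so from the pre-pick path.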