From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753769AbaIOJsG (ORCPT ); Mon, 15 Sep 2014 05:48:06 -0400
Received: from mail-oa0-f50.google.com ([209.85.219.50]:44978 "EHLO
	mail-oa0-f50.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753493AbaIOJsE (ORCPT );
	Mon, 15 Sep 2014 05:48:04 -0400
MIME-Version: 1.0
In-Reply-To: <1410519814.3569.7.camel@tkhai>
References: <1410519814.3569.7.camel@tkhai>
Date: Mon, 15 Sep 2014 15:18:02 +0530
Message-ID:
Subject: Re: [PATCH] sched: Do not stop cpu in set_cpus_allowed_ptr() if task
	is not running
From: Preeti Murthy
To: Kirill Tkhai
Cc: LKML, Peter Zijlstra, Ingo Molnar, Kirill Tkhai, Preeti U Murthy
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Kirill,

Which tree is this patch based on? __migrate_task() does a
double_rq_lock/unlock() today in mainline, doesn't it? I don't see
that in your patch, however.

Regards
Preeti U Murthy

On Fri, Sep 12, 2014 at 4:33 PM, Kirill Tkhai wrote:
>
> If a task is queued but not running on its rq, we can simply migrate
> it without involving the migration thread or a context switch.
>
> Signed-off-by: Kirill Tkhai
> ---
>  kernel/sched/core.c | 47 ++++++++++++++++++++++++++++++++---------------
>  1 file changed, 32 insertions(+), 15 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index d4399b4..dbbba26 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4594,6 +4594,33 @@ void init_idle(struct task_struct *idle, int cpu)
>  }
>
>  #ifdef CONFIG_SMP
> +/*
> + * move_queued_task - move a queued task to a new rq.
> + *
> + * Returns (locked) new rq. Old rq's lock is released.
> + */
> +static struct rq *move_queued_task(struct task_struct *p, int new_cpu)
> +{
> +	struct rq *rq = task_rq(p);
> +
> +	lockdep_assert_held(&rq->lock);
> +
> +	dequeue_task(rq, p, 0);
> +	p->on_rq = TASK_ON_RQ_MIGRATING;
> +	set_task_cpu(p, new_cpu);
> +	raw_spin_unlock(&rq->lock);
> +
> +	rq = cpu_rq(new_cpu);
> +
> +	raw_spin_lock(&rq->lock);
> +	BUG_ON(task_cpu(p) != new_cpu);
> +	p->on_rq = TASK_ON_RQ_QUEUED;
> +	enqueue_task(rq, p, 0);
> +	check_preempt_curr(rq, p, 0);
> +
> +	return rq;
> +}
> +
>  void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
>  {
>  	if (p->sched_class && p->sched_class->set_cpus_allowed)
> @@ -4650,14 +4677,15 @@ int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
>  		goto out;
>
>  	dest_cpu = cpumask_any_and(cpu_active_mask, new_mask);
> -	if (task_on_rq_queued(p) || p->state == TASK_WAKING) {
> +	if (task_running(rq, p) || p->state == TASK_WAKING) {
>  		struct migration_arg arg = { p, dest_cpu };
>  		/* Need help from migration thread: drop lock and wait. */
>  		task_rq_unlock(rq, p, &flags);
>  		stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
>  		tlb_migrate_finish(p->mm);
>  		return 0;
> -	}
> +	} else if (task_on_rq_queued(p))
> +		rq = move_queued_task(p, dest_cpu);
>  out:
>  	task_rq_unlock(rq, p, &flags);
>
> @@ -4700,19 +4728,8 @@ static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu)
>  	 * If we're not on a rq, the next wake-up will ensure we're
>  	 * placed properly.
>  	 */
> -	if (task_on_rq_queued(p)) {
> -		dequeue_task(rq, p, 0);
> -		p->on_rq = TASK_ON_RQ_MIGRATING;
> -		set_task_cpu(p, dest_cpu);
> -		raw_spin_unlock(&rq->lock);
> -
> -		rq = cpu_rq(dest_cpu);
> -		raw_spin_lock(&rq->lock);
> -		BUG_ON(task_rq(p) != rq);
> -		p->on_rq = TASK_ON_RQ_QUEUED;
> -		enqueue_task(rq, p, 0);
> -		check_preempt_curr(rq, p, 0);
> -	}
> +	if (task_on_rq_queued(p))
> +		rq = move_queued_task(p, dest_cpu);
>  done:
>  	ret = 1;
>  fail:
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/