* [PATCH v5 0/5] sched: Add on_rq states and remove several double rq locks
@ 2014-08-20 9:47 Kirill Tkhai
2014-08-20 12:54 ` Ingo Molnar
From: Kirill Tkhai @ 2014-08-20 9:47 UTC (permalink / raw)
To: linux-kernel
Cc: Peter Zijlstra, Paul Turner, Oleg Nesterov, Steven Rostedt,
Mike Galbraith, Kirill Tkhai, Tim Chen, Ingo Molnar,
Nicolas Pitre
v5: New names: TASK_ON_RQ_QUEUED, TASK_ON_RQ_MIGRATING, task_on_rq_migrating()
and task_on_rq_queued().
I've pulled the latest version from peterz/queue.git, and Peter's changes
are included.
This series aims to get rid of some places where locks of two RQs are held
at the same time.
Patch [1/5] is a preparation/cleanup. It replaces the old check (task_struct::on_rq == 1)
with the new one (task_struct::on_rq == TASK_ON_RQ_QUEUED) everywhere. No functional changes.
Patch [2/5] is the main one in the series. It introduces the new TASK_ON_RQ_MIGRATING
state and teaches the scheduler to understand it (small changes are needed in
try_to_wake_up() and the task_rq_lock() family). It will be used in the following way:
(we are changing the task's rq)

	raw_spin_lock(&src_rq->lock);

	p = ...; /* Some src_rq task */

	dequeue_task(src_rq, p, 0);
	p->on_rq = TASK_ON_RQ_MIGRATING;
	set_task_cpu(p, dst_cpu);
	raw_spin_unlock(&src_rq->lock);

	/*
	 * Now p is dequeued and both RQ locks
	 * are unlocked, but its on_rq is not
	 * zero. Nobody can manipulate p while
	 * it is migrating, even when the
	 * spinlocks are unlocked.
	 */

	raw_spin_lock(&dst_rq->lock);
	p->on_rq = TASK_ON_RQ_QUEUED;
	enqueue_task(dst_rq, p, 0);
	raw_spin_unlock(&dst_rq->lock);
Patches [3,4,5/5] remove the double locks and use the new TASK_ON_RQ_MIGRATING state.
They allow three or four functions to be used unlocked, which looks safe to me.
The benefit is that double_rq_lock() is no longer needed in several places, reducing
the total time during which RQs are locked.
---
Kirill Tkhai (5):
sched: Wrapper for checking task_struct::on_rq
sched: Teach scheduler to understand TASK_ON_RQ_MIGRATING state
sched: Remove double_rq_lock() from __migrate_task()
sched/fair: Remove double_lock_balance() from active_load_balance_cpu_stop()
sched/fair: Remove double_lock_balance() from load_balance()
kernel/sched/core.c | 113 +++++++++++++++------------
kernel/sched/deadline.c | 15 ++--
kernel/sched/fair.c | 195 ++++++++++++++++++++++++++++++++--------------
kernel/sched/rt.c | 16 ++--
kernel/sched/sched.h | 13 +++
kernel/sched/stop_task.c | 2
6 files changed, 228 insertions(+), 126 deletions(-)
--
Signed-off-by: Kirill Tkhai <ktkhai@parallels.com>
* Re: [PATCH v5 0/5] sched: Add on_rq states and remove several double rq locks
2014-08-20 9:47 [PATCH v5 0/5] sched: Add on_rq states and remove several double rq locks Kirill Tkhai
@ 2014-08-20 12:54 ` Ingo Molnar
From: Ingo Molnar @ 2014-08-20 12:54 UTC (permalink / raw)
To: Kirill Tkhai
Cc: linux-kernel, Peter Zijlstra, Paul Turner, Oleg Nesterov,
Steven Rostedt, Mike Galbraith, Kirill Tkhai, Tim Chen,
Nicolas Pitre
* Kirill Tkhai <ktkhai@parallels.com> wrote:
> v5: New names: TASK_ON_RQ_QUEUED, TASK_ON_RQ_MIGRATING, task_on_rq_migrating()
> and task_on_rq_queued().
>
> I've pulled the latest version from peterz/queue.git, and Peter's changes
> are included.
>
> This series aims to get rid of some places where locks of two RQs are held
> at the same time.
>
> Patch [1/5] is a preparation/cleanup. It replaces the old check (task_struct::on_rq == 1)
> with the new one (task_struct::on_rq == TASK_ON_RQ_QUEUED) everywhere. No functional changes.
>
> Patch [2/5] is the main one in the series. It introduces the new TASK_ON_RQ_MIGRATING
> state and teaches the scheduler to understand it (small changes are needed in
> try_to_wake_up() and the task_rq_lock() family). It will be used in the following way:
>
> (we are changing the task's rq)
>
> 	raw_spin_lock(&src_rq->lock);
>
> 	p = ...; /* Some src_rq task */
>
> 	dequeue_task(src_rq, p, 0);
> 	p->on_rq = TASK_ON_RQ_MIGRATING;
> 	set_task_cpu(p, dst_cpu);
> 	raw_spin_unlock(&src_rq->lock);
>
> 	/*
> 	 * Now p is dequeued and both RQ locks
> 	 * are unlocked, but its on_rq is not
> 	 * zero. Nobody can manipulate p while
> 	 * it is migrating, even when the
> 	 * spinlocks are unlocked.
> 	 */
>
> 	raw_spin_lock(&dst_rq->lock);
> 	p->on_rq = TASK_ON_RQ_QUEUED;
> 	enqueue_task(dst_rq, p, 0);
> 	raw_spin_unlock(&dst_rq->lock);
>
> Patches [3,4,5/5] remove the double locks and use the new TASK_ON_RQ_MIGRATING state.
> They allow three or four functions to be used unlocked, which looks safe to me.
>
> The benefit is that double_rq_lock() is no longer needed in several places, reducing
> the total time during which RQs are locked.
>
> ---
>
> Kirill Tkhai (5):
> sched: Wrapper for checking task_struct::on_rq
> sched: Teach scheduler to understand TASK_ON_RQ_MIGRATING state
> sched: Remove double_rq_lock() from __migrate_task()
> sched/fair: Remove double_lock_balance() from active_load_balance_cpu_stop()
> sched/fair: Remove double_lock_balance() from load_balance()
>
>
> kernel/sched/core.c | 113 +++++++++++++++------------
> kernel/sched/deadline.c | 15 ++--
> kernel/sched/fair.c | 195 ++++++++++++++++++++++++++++++++--------------
> kernel/sched/rt.c | 16 ++--
> kernel/sched/sched.h | 13 +++
> kernel/sched/stop_task.c | 2
> 6 files changed, 228 insertions(+), 126 deletions(-)
>
> --
> Signed-off-by: Kirill Tkhai <ktkhai@parallels.com>
Ok, looks good. I picked up this version, with a few minor
tweaks and fixes to the changelogs.
Thanks,
Ingo