From: Namhyung Kim <namhyung@kernel.org>
To: Paul Turner <pjt@google.com>
Cc: linux-kernel@vger.kernel.org, Venki Pallipadi <venki@google.com>,
	Srivatsa Vaddagiri <vatsa@in.ibm.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>,
	Mike Galbraith <efault@gmx.de>,
	Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>,
	Ben Segall <bsegall@google.com>, Ingo Molnar <mingo@elte.hu>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Morten Rasmussen <Morten.Rasmussen@arm.com>,
	Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Subject: Re: [PATCH 05/16] sched: add an rq migration call-back to sched_class
Date: Fri, 29 Jun 2012 10:32:30 +0900	[thread overview]
Message-ID: <87395eykb5.fsf@sejong.aot.lge.com> (raw)
In-Reply-To: <20120628022414.30496.73413.stgit@kitami.mtv.corp.google.com> (Paul Turner's message of "Wed, 27 Jun 2012 19:24:14 -0700")

On Wed, 27 Jun 2012 19:24:14 -0700, Paul Turner wrote:
> Since we are now doing bottom up load accumulation we need explicit
> notification when a task has been re-parented so that the old hierarchy can be
> updated.
>
> Adds task_migrate_rq(struct rq *prev, struct *rq new_rq);

The changelog doesn't match the code below; it should be:
	migrate_task_rq(struct task_struct *p, int next_cpu);


>
> (The alternative is to do this out of __set_task_cpu, but it was suggested that
> this would be a cleaner encapsulation.)
>
> Signed-off-by: Paul Turner <pjt@google.com>
> ---
>  include/linux/sched.h |    1 +
>  kernel/sched/core.c   |    2 ++
>  kernel/sched/fair.c   |   12 ++++++++++++
>  3 files changed, 15 insertions(+), 0 deletions(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 842c4df..fdfdfab 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1102,6 +1102,7 @@ struct sched_class {
>  
>  #ifdef CONFIG_SMP
>  	int  (*select_task_rq)(struct task_struct *p, int sd_flag, int flags);
> +	void (*migrate_task_rq)(struct task_struct *p, int next_cpu);
>  
>  	void (*pre_schedule) (struct rq *this_rq, struct task_struct *task);
>  	void (*post_schedule) (struct rq *this_rq);
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index aeb8e56..c3686eb 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1109,6 +1109,8 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
>  	trace_sched_migrate_task(p, new_cpu);
>  
>  	if (task_cpu(p) != new_cpu) {
> +		if (p->sched_class->migrate_task_rq)
> +			p->sched_class->migrate_task_rq(p, new_cpu);
>  		p->se.nr_migrations++;
>  		perf_sw_event(PERF_COUNT_SW_CPU_MIGRATIONS, 1, NULL, 0);
>  	}
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6200d20..33f582a 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3089,6 +3089,17 @@ unlock:
>  
>  	return new_cpu;
>  }
> +
> +/*
> + * Called immediately before a task is migrated to a new cpu; task_cpu(p) and
> + * cfs_rq_of(p) references at time of call are still valid and identify the
> + * previous cpu.  However, the caller only guarantees p->pi_lock is held; no
> + * other assumptions, including rq->lock state, should be made.
> + * Caller guarantees p->pi_lock held, but nothing else.

Duplicate sentence?
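The last line only restates the previous sentence; dropping it would
leave, e.g.:

	/*
	 * Called immediately before a task is migrated to a new cpu; task_cpu(p)
	 * and cfs_rq_of(p) references at time of call are still valid and
	 * identify the previous cpu.  However, the caller only guarantees
	 * p->pi_lock is held; no other assumptions, including rq->lock state,
	 * should be made.
	 */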


> + */
> +static void
> +migrate_task_rq_fair(struct task_struct *p, int next_cpu) {

The opening brace should start on the next line.
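I.e., per Documentation/CodingStyle, something like:

	static void
	migrate_task_rq_fair(struct task_struct *p, int next_cpu)
	{
	}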

Thanks,
Namhyung

> +}
>  #endif /* CONFIG_SMP */
>  
>  static unsigned long
> @@ -5754,6 +5765,7 @@ const struct sched_class fair_sched_class = {
>  
>  #ifdef CONFIG_SMP
>  	.select_task_rq		= select_task_rq_fair,
> +	.migrate_task_rq	= migrate_task_rq_fair,
>  
>  	.rq_online		= rq_online_fair,
>  	.rq_offline		= rq_offline_fair,
