From: Vincent Guittot <vincent.guittot@linaro.org>
To: Alex Shi <alex.shi@intel.com>
Cc: "mingo@redhat.com" <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Andrew Morton <akpm@linux-foundation.org>,
	Arjan van de Ven <arjan@linux.intel.com>,
	Borislav Petkov <bp@alien8.de>, Paul Turner <pjt@google.com>,
	Namhyung Kim <namhyung@kernel.org>,
	Mike Galbraith <efault@gmx.de>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	gregkh@linuxfoundation.org,
	Preeti U Murthy <preeti@linux.vnet.ibm.com>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	Len Brown <len.brown@intel.com>,
	rafael.j.wysocki@intel.com, jkosina@suse.cz,
	Clark Williams <clark.williams@gmail.com>,
	"tony.luck@intel.com" <tony.luck@intel.com>,
	keescook@chromium.org, Mel Gorman <mgorman@suse.de>,
	riel@redhat.com
Subject: Re: [Resend patch v8 06/13] sched: compute runnable load avg in cpu_load and cpu_avg_load_per_task
Date: Mon, 24 Jun 2013 13:04:24 +0200	[thread overview]
Message-ID: <CAKfTPtCz4bP05sqS7VBuZ_iDYBNGyb8sNUcoSTGbOkZoZU2D9g@mail.gmail.com> (raw)
In-Reply-To: <51C80C33.4050606@intel.com>

On 24 June 2013 11:06, Alex Shi <alex.shi@intel.com> wrote:
> On 06/20/2013 10:18 AM, Alex Shi wrote:
>> They are the base values used in load balancing; update them with the rq
>> runnable load average, and the load balancer will consider the runnable
>> load avg naturally.
>>
>> We also tried to include blocked_load_avg as cpu load in balancing, but
>> that caused a 6% kbuild performance drop on every Intel machine, and
>> aim7/oltp drops on some 4-CPU-socket machines.
>> Even adding blocked_load_avg only into get_rq_runnable_load still made
>> hackbench drop a little on NHM EX.
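
To make the two alternatives above concrete, here is a minimal standalone
sketch in C (the struct and helper names are invented for the example and
are not the kernel's; the rejected variant presumably just sums in the
blocked load):

	/* simplified model of the per-rq cfs load-tracking fields */
	struct cfs_load_model {
		unsigned long runnable_load_avg; /* load of currently runnable tasks */
		unsigned long blocked_load_avg;  /* decayed load of sleeping tasks */
	};

	/* what the series ends up feeding to the balancer */
	static unsigned long load_runnable_only(const struct cfs_load_model *cfs)
	{
		return cfs->runnable_load_avg;
	}

	/* the variant that was tried and rejected: also count blocked load */
	static unsigned long load_with_blocked(const struct cfs_load_model *cfs)
	{
		return cfs->runnable_load_avg + cfs->blocked_load_avg;
	}
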
>>
>> Signed-off-by: Alex Shi <alex.shi@intel.com>
>> Reviewed-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
>
>
> I am sorry that I am still wavering on how the cfs and rt task load should be considered.
> So here is an extra RFC patch that takes RT load into account in balancing.
> With or without this patch, my test results show no change, since there are
> not many RT tasks in my testing.
>
> I am not familiar with the RT scheduler, so I just rely on PeterZ, who is the expert on this. :)
>
> ---
>
> From b9ed5363b0a579a87256b589278c8c66500c7db3 Mon Sep 17 00:00:00 2001
> From: Alex Shi <alex.shi@intel.com>
> Date: Mon, 24 Jun 2013 16:12:29 +0800
> Subject: [PATCH 08/16] sched: recover the whole rq load, including rt tasks' load
>
> The patch 'sched: compute runnable load avg in cpu_load and
> cpu_avg_load_per_task' weights the rq's load on cfs.runnable_load_avg
> instead of rq->load.weight. That is fine when the system has little RT load.
>
> But if there is a lot of RT load on the rq, that code will weight only the
> cfs tasks in load balancing without considering RT, which

AFAICT, RT task activity is already taken into account by decreasing the
cpu_power that is used during load balancing, e.g. in find_busiest_queue,
where weighted_cpuload is divided by cpu_power.
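
A rough standalone sketch of that mechanism (the helper names and numbers
are simplified for illustration, not the actual kernel code):

	#include <stdio.h>

	#define SCHED_POWER_SCALE	1024UL

	/* capacity left for cfs after RT activity; never report zero */
	static unsigned long remaining_power(unsigned long rt_share)
	{
		if (rt_share >= SCHED_POWER_SCALE)
			return 1;
		return SCHED_POWER_SCALE - rt_share;
	}

	/* load as the balancer compares it: cfs load scaled by remaining capacity */
	static unsigned long scaled_load(unsigned long cfs_load, unsigned long rt_share)
	{
		return (cfs_load * SCHED_POWER_SCALE) / remaining_power(rt_share);
	}

	int main(void)
	{
		/* same cfs load, but the second CPU runs RT tasks half the time */
		printf("no RT:   %lu\n", scaled_load(2048, 0));		/* 2048 */
		printf("half RT: %lu\n", scaled_load(2048, 512));	/* 4096 */
		return 0;
	}

So a CPU that spends part of its time in RT tasks looks proportionally more
loaded to find_busiest_queue, even though its cfs load is unchanged.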

Vincent

> may cause a load imbalance if heavy RT load is not evenly balanced among
> the CPUs. Using rq->avg.load_avg_contrib can resolve this problem and keep
> the advantages of runnable load balancing.
>
> BTW, this patch may increase the number of failed balance attempts if
> move_tasks cannot balance the load between CPUs, e.g. when there is only RT
> load on the CPUs.
>
> Signed-off-by: Alex Shi <alex.shi@intel.com>
> ---
>  kernel/sched/fair.c | 4 ++--
>  kernel/sched/proc.c | 2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 37a5720..6979906 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2968,7 +2968,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  /* Used instead of source_load when we know the type == 0 */
>  static unsigned long weighted_cpuload(const int cpu)
>  {
> -       return cpu_rq(cpu)->cfs.runnable_load_avg;
> +       return cpu_rq(cpu)->avg.load_avg_contrib;
>  }
>
>  /*
> @@ -3013,7 +3013,7 @@ static unsigned long cpu_avg_load_per_task(int cpu)
>  {
>         struct rq *rq = cpu_rq(cpu);
>         unsigned long nr_running = ACCESS_ONCE(rq->nr_running);
> -       unsigned long load_avg = rq->cfs.runnable_load_avg;
> +       unsigned long load_avg = rq->avg.load_avg_contrib;
>
>         if (nr_running)
>                 return load_avg / nr_running;
> diff --git a/kernel/sched/proc.c b/kernel/sched/proc.c
> index ce5cd48..4f2490c 100644
> --- a/kernel/sched/proc.c
> +++ b/kernel/sched/proc.c
> @@ -504,7 +504,7 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
>  #ifdef CONFIG_SMP
>  unsigned long get_rq_runnable_load(struct rq *rq)
>  {
> -       return rq->cfs.runnable_load_avg;
> +       return rq->avg.load_avg_contrib;
>  }
>  #else
>  unsigned long get_rq_runnable_load(struct rq *rq)
> --
> 1.7.12
>
>
