From: Dietmar Eggemann <dietmar.eggemann@arm.com>
To: Tejun Heo <tj@kernel.org>, Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org,
Linus Torvalds <torvalds@linux-foundation.org>,
Vincent Guittot <vincent.guittot@linaro.org>,
Mike Galbraith <efault@gmx.de>, Paul Turner <pjt@google.com>,
Chris Mason <clm@fb.com>,
kernel-team@fb.com
Subject: Re: [PATCH 2/3] sched/fair: Add load_weight->runnable_load_{sum|avg}
Date: Fri, 5 May 2017 14:22:05 +0100
Message-ID: <f3344dc1-37ea-8a77-8d0f-f4c4bdd3710f@arm.com>
In-Reply-To: <20170504202948.GC2647@htj.duckdns.org>
Hi Tejun,
On 04/05/17 21:29, Tejun Heo wrote:
> Currently, runnable_load_avg, which represents the portion of load avg
> only from tasks which are currently active, is tracked by cfs_rq but
> not by sched_entity. We want to propagate runnable_load_avg of a
> nested cfs_rq without affecting load_avg propagation. To implement an
> equivalent propagation channel, sched_entity needs to track
> runnable_load_avg too.
>
> This patch moves cfs_rq->runnable_load_{sum|avg} into struct
> load_weight which is already used to track load_avg and shared by both
> cfs_rq and sched_entity.
>
> This patch only changes where runnable_load_{sum|avg} are located and
> doesn't cause any actual behavior changes. The fields are still only
> used for cfs_rqs.
This one doesn't apply cleanly on tip/sched/core. There have been a lot
of changes in the actual PELT code.
e.g.:

  a481db34b9be sched/fair: Optimize ___update_sched_avg() (2017-03-30, Yuyang Du)
  0ccb977f4c80 sched/fair: Explicitly generate __update_load_avg() instances (2017-03-30, Peter Zijlstra)
I stitched this up locally to be able to run some tests.
[...]
Thread overview: 20+ messages
2017-05-04 20:28 [RFC PATCHSET v2] sched/fair: fix load balancer behavior when cgroup is in use Tejun Heo
2017-05-04 20:29 ` [PATCH 1/3] sched/fair: Peter's shares_type patch Tejun Heo
2017-05-05 10:40 ` Vincent Guittot
2017-05-05 15:30 ` Tejun Heo
2017-05-10 15:09 ` Tejun Heo
2017-05-10 16:07 ` Vincent Guittot
2017-05-11 6:59 ` Peter Zijlstra
2017-05-05 15:41 ` Peter Zijlstra
2017-05-04 20:29 ` [PATCH 2/3] sched/fair: Add load_weight->runnable_load_{sum|avg} Tejun Heo
2017-05-05 13:22 ` Dietmar Eggemann [this message]
2017-05-05 13:26 ` Tejun Heo
2017-05-05 13:37 ` Dietmar Eggemann
2017-05-04 20:30 ` [PATCH 3/3] sched/fair: Propagate runnable_load_avg independently from load_avg Tejun Heo
2017-05-05 10:42 ` Vincent Guittot
2017-05-05 12:18 ` Vincent Guittot
2017-05-05 13:26 ` Tejun Heo
2017-05-05 16:51 ` Vincent Guittot
2017-05-05 8:46 ` [RFC PATCHSET v2] sched/fair: fix load balancer behavior when cgroup is in use Vincent Guittot
2017-05-05 13:28 ` Tejun Heo
2017-05-05 13:32 ` Vincent Guittot