linux-kernel.vger.kernel.org archive mirror
From: Rik van Riel <riel@surriel.com>
To: Dietmar Eggemann <dietmar.eggemann@arm.com>, peterz@infradead.org
Cc: mingo@redhat.com, linux-kernel@vger.kernel.org,
	kernel-team@fb.com, morten.rasmussen@arm.com, tglx@linutronix.de,
	Mel Gorman <mgorman@techsingularity.net>,
	vincent.guittot@linaro.org
Subject: Re: [PATCH 8/8] sched,fair: flatten hierarchical runqueues
Date: Tue, 25 Jun 2019 09:51:30 -0400	[thread overview]
Message-ID: <ab58d07361198e555e4b8278a4264c8dafa54b93.camel@surriel.com> (raw)
In-Reply-To: <960c2571-7a32-f7aa-08ca-07f1136e835d@arm.com>


On Tue, 2019-06-25 at 11:50 +0200, Dietmar Eggemann wrote:
> On 6/12/19 9:32 PM, Rik van Riel wrote:
> 
> [...]
> 
> > @@ -410,6 +412,11 @@ static inline struct sched_entity *parent_entity(struct sched_entity *se)
> >  	return se->parent;
> >  }
> >  
> > +static inline bool task_se_in_cgroup(struct sched_entity *se)
> > +{
> > +	return parent_entity(se);
> > +}
> 
> IMHO, s/in_cgroup/not_in_root_tg/ reads easier. "/", i.e. the root
> tg, is still a cgroup, I guess. But you could use the existing
> parent_entity(se) as well.

I agree my name is not the prettiest, but I am not
entirely convinced your idea is an improvement.

I'll hold out for better ideas from other reviewers :)

> > @@ -679,22 +710,16 @@ static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se)
> >  static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
> >  {
> >  	u64 slice = sysctl_sched_latency;
> > +	struct load_weight *load = &cfs_rq->load;
> > +	struct load_weight lw;
> >  
> > -	for_each_sched_entity(se) {
> > -		struct load_weight *load;
> > -		struct load_weight lw;
> > +	if (unlikely(!se->on_rq)) {
> > +		lw = cfs_rq->load;
> >  
> > -		cfs_rq = cfs_rq_of(se);
> > -		load = &cfs_rq->load;
> > -
> > -		if (unlikely(!se->on_rq)) {
> > -			lw = cfs_rq->load;
> > -
> > -			update_load_add(&lw, se->load.weight);
> > -			load = &lw;
> > -		}
> > -		slice = __calc_delta(slice, se->load.weight, load);
> > +		update_load_add(&lw, task_se_h_load(se));
> > +		load = &lw;
> >  	}
> > +	slice = __calc_delta(slice, task_se_h_load(se), load);
> 
> task_se_h_load(se) and se->load.weight are off by a factor of >= 1024
> on 64bit.

Oh indeed they are!

I wonder if this is the root cause of that
performance regression I have been hunting for
the past few weeks :)

Let me go test some things...

> ...
>     bash pid=3250: task_se_h_load(se)=1023 se->load.weight=1048576
>     sysctl_sched_latency=18000000 slice=0 old_slice=17999995
> ...
> 
> [...]
> 
-- 
All Rights Reversed.



Thread overview: 24+ messages
2019-06-12 19:32 [RFC] sched,cfs: flatten CPU controller runqueues Rik van Riel
2019-06-12 19:32 ` [PATCH 1/8] sched: introduce task_se_h_load helper Rik van Riel
2019-06-19 12:52   ` Dietmar Eggemann
2019-06-19 13:57     ` Rik van Riel
2019-06-19 15:18       ` Dietmar Eggemann
2019-06-19 15:55         ` Rik van Riel
2019-06-12 19:32 ` [PATCH 2/8] sched: change /proc/sched_debug fields Rik van Riel
2019-06-12 19:32 ` [PATCH 3/8] sched,fair: redefine runnable_load_avg as the sum of task_h_load Rik van Riel
2019-06-18  9:08   ` Dietmar Eggemann
2019-06-26 14:34   ` Dietmar Eggemann
2019-06-12 19:32 ` [PATCH 4/8] sched,fair: remove cfs rqs from leaf_cfs_rq_list bottom up Rik van Riel
2019-06-12 19:32 ` [PATCH 5/8] sched,cfs: use explicit cfs_rq of parent se helper Rik van Riel
2019-06-20 16:23   ` Dietmar Eggemann
2019-06-20 16:29     ` Rik van Riel
2019-06-24 11:24       ` Dietmar Eggemann
2019-06-26 15:58   ` Dietmar Eggemann
2019-06-26 16:15     ` Rik van Riel
2019-06-12 19:32 ` [PATCH 6/8] sched,cfs: fix zero length timeslice calculation Rik van Riel
2019-06-12 19:32 ` [PATCH 7/8] sched,fair: refactor enqueue/dequeue_entity Rik van Riel
2019-06-12 19:32 ` [PATCH 8/8] sched,fair: flatten hierarchical runqueues Rik van Riel
2019-06-25  9:50   ` Dietmar Eggemann
2019-06-25 13:51     ` Rik van Riel [this message]
2019-06-28 10:26   ` Dietmar Eggemann
2019-06-28 19:36     ` Rik van Riel
