From: Vincent Guittot <firstname.lastname@example.org>
To: Quentin Perret <email@example.com>
Cc: Peter Zijlstra <firstname.lastname@example.org>, "Rafael J. Wysocki" <email@example.com>, linux-kernel <firstname.lastname@example.org>, "open list:THERMAL" <email@example.com>, "firstname.lastname@example.org" <email@example.com>, Ingo Molnar <firstname.lastname@example.org>, Dietmar Eggemann <email@example.com>, Morten Rasmussen <firstname.lastname@example.org>, Chris Redpath <email@example.com>, Patrick Bellasi <firstname.lastname@example.org>, Valentin Schneider <email@example.com>, Thara Gopinath <firstname.lastname@example.org>, Viresh Kumar <email@example.com>, Todd Kjos <firstname.lastname@example.org>, Joel Fernandes <email@example.com>, Steve Muckle <firstname.lastname@example.org>, email@example.com, "Kannan, Saravana" <firstname.lastname@example.org>, email@example.com, Juri Lelli <firstname.lastname@example.org>, Eduardo Valentin <email@example.com>, Srinivas Pandruvada <firstname.lastname@example.org>, email@example.com, Javi Merino <firstname.lastname@example.org>
Subject: Re: [PATCH v5 09/14] sched: Add over-utilization/tipping point indicator
Date: Mon, 6 Aug 2018 10:40:46 +0200
Message-ID: <CAKfTPtBT-p-Z0EneirfOTwUw=5jEeDGa+4_EE4ogi2Ht7GU9Bg@mail.gmail.com>
In-Reply-To: <20180803155547.sxlhxpmhwcoappit@queper01-lin>

On Fri, 3 Aug 2018 at 17:55, Quentin Perret <email@example.com> wrote:
>
> On Friday 03 Aug 2018 at 15:49:24 (+0200), Vincent Guittot wrote:
> > On Fri, 3 Aug 2018 at 10:18, Quentin Perret <firstname.lastname@example.org> wrote:
> > >
> > > On Friday 03 Aug 2018 at 09:48:47 (+0200), Vincent Guittot wrote:
> > > > On Thu, 2 Aug 2018 at 18:59, Quentin Perret <email@example.com> wrote:
> > > > I'm not really concerned about re-enabling load balance, but more
> > > > that the effort EAS makes to pack tasks on a few CPUs/clusters can
> > > > be broken for every new task.
> > >
> > > Well, re-enabling load balance immediately would break the nice placement
> > > that EAS found, because it would shuffle all tasks around and break the
> > > packing strategy. Letting that sole new task go in find_idlest_cpu()
> >
> > Sorry, I was not clear in my explanation. Re-enabling load balance would
> > be a problem of course. I wanted to say that there is little chance that
> > this will re-enable the load balance immediately and break EAS, so I'm
> > not worried by that case. I'm only concerned about the new task being
> > placed outside the EAS policy.
> >
> > For example, if you run on hikey960 the simple script below, which can't
> > really be seen as a fork bomb IMHO, you will see threads scheduled on
> > big cores every 0.5 seconds, whereas everything should be packed on the
> > little cores.
>
> I guess that also depends on what's running on the little cores, but I
> see your point.

In my case, the system was idle and nothing other than this script was
running.

> I think we're discussing two different things right now:
> 1. Should forkees go in find_energy_efficient_cpu() ?
> 2. Should forkees have an initial util_avg of 0 when EAS is enabled ?

It's the same topic: how should EAS treat a newly created task?

For now, we let the "performance" mode select a CPU. This CPU will most
probably be the worst CPU from an EAS PoV, because it's the idlest CPU in
the idlest group, which is the opposite of what EAS tries to do.

The current behavior is: for every new task, the CPU selection is done
assuming it's a heavy task with the maximum possible load_avg, and it looks
for the idlest CPU. This means that if the system is lightly loaded, the
scheduler will most probably select an idle big core. The utilization of
this new task is then set to half of the remaining capacity of the selected
CPU, which means that the idler the CPU, the bigger the task's initial
utilization. This can easily be half a big core, which can be bigger than
the max capacity of a little core, as on hikey960.
util_est will then keep track of this value for a while, which will make
the task look like a big one.

> For 1, that would mean all forkees go on prev_cpu. I can see how that
> can be more energy-efficient in some use-cases (the one you described
> for example), but that also has drawbacks. Placing the task on a big
> CPU can have an energy cost, but that should also help the task build
> its utilization faster, which is what we want to make smart decisions

With the current behavior, little tasks are seen as big for a long time,
which doesn't really help them build their utilization faster IMHO.

> with EAS. Also, it isn't always true that going on the little CPUs is
> more energy efficient, only the Energy Model can tell. There is just no

Selecting big or little is not the problem here. The problem is that we
don't use the Energy Model, so we will most probably make the wrong choice.
Nevertheless, putting the task on a big core is clearly the wrong choice in
the case I mentioned above (the shell script on hikey960).

> perfect solution, so I'm still not fully decided on that one ...
>
> For 2, I'm a little bit more reluctant, because that has more
> implications ... That could probably harm some fairly standard use
> cases (a simple app-launch for example). Enqueueing something new on a
> CPU would go unnoticed, which might be fine for a very small task, but
> probably a major issue if the task is actually big. I'd be more
> comfortable with 2 only if we also speed up the PELT half-life TBH ...
>
> Is there a 3 that I missed ?

There is something in the middle, like taking into account the load and/or
utilization of the parent, in order to mitigate both big tasks starting
with a small utilization and small tasks starting with a big utilization.
It's probably not perfect, because big tasks can create small ones and
vice versa, but if big tasks are already running, assuming that the new
one is also big should have less power impact, as we are already consuming
power for the current bigs.
Conversely, if only little tasks are running, assuming that the new task is
little will not increase power consumption unnecessarily.

My main concern is that by making no choice, you effectively make the most
power-consuming choice, which is a bit awkward for a policy that aims to
minimize power consumption.

Regards,
Vincent

> Thanks,
> Quentin