linux-kernel.vger.kernel.org archive mirror
From: Qais Yousef <qais.yousef@arm.com>
To: Pavan Kondeti <pkondeti@codeaurora.org>
Cc: Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2] sched: rt: Make RT capacity aware
Date: Mon, 3 Feb 2020 14:27:14 +0000	[thread overview]
Message-ID: <20200203142712.a7yvlyo2y3le5cpn@e107158-lin> (raw)
In-Reply-To: <CAEU1=PnYryM26F-tNAT0JVUoFcygRgE374JiBeJPQeTEoZpANg@mail.gmail.com>

On 02/01/20 07:08, Pavan Kondeti wrote:
> Thanks for the results. I see that tasks are indeed spreading on to silver.
> However it is not clear to me from the code how tasks would get spread,
> because cpupri_find() never returns a little CPU in the lowest_mask, as the
> RT task does not fit on a little CPU. So from the wakeup path, we place the
> task on the previous CPU or a BIG CPU. The push logic also has the RT
> capacity checks, so an overloaded BIG CPU may not push tasks to an idle
> (RT free) little CPU.

Okay I see what you mean.

Sorry I had to cache this back in again as it's been a while.

So yes, you're right, we won't force a move to the little CPUs; but if a task
was already running there, we won't force it onto an overloaded big CPU either.
IIRC I didn't see more than 2 tasks packed on a big core, though maybe I needed
to try more combinations of things. The 2-task packing can be seen in my
results, and I don't think it's necessarily a bad thing.

I tried to call out that in the overloaded case we don't give any guarantees.
I'll expand below, but if a user asks for more RT tasks to run on big cores
than there are big cores, then there's very little we can do.

If the tasks really want to be on a big core but the user is asking for more
than the system can deliver, then the end result isn't the best whether we pack
or spread. Packing might end up good if the 2 tasks aren't compute-intensive,
or could end up being bad if they are.

Similarly, if the 2 tasks are computationally intensive, then spreading to the
littles is bad. Most likely the request to run at a certain performance level
is there to guarantee predictability, i.e. finishing execution within a certain
window of time. But don't quote me on this :)

I don't see one right answer here. The current mechanism could certainly do
better, but it's not clear what "better" means without delving into
system-specific details. I am open to any suggestions to improve it.

As it stands, it allows the admin to boost RT tasks and guarantee they end up
running on the correct CPU. But it doesn't protect against bad RT planning.

> Yes, we do search within the affined CPUs. However, in cpupri_find() we
> clear the CPUs on which the task does not fit, so the lowest_mask will
> always be empty and we return -1. There is no fallback.

The question here is: if a task has its uclamp_min set to 1024 but is affined
to the wrong type of CPUs, is that a user error or a kernel failure to deal
with this case?

The default p->uclamp_min = 1024 is not the right default to use in these
systems. I am pushing for a patch [1] to allow modifying this default behavior
at runtime. AFAIU Android has always disabled max RT boosting.

The common use case we are trying to cater for is that most tasks are happy to
run anywhere, but a few might need to be boosted, and that boost value can only
be guaranteed by running on a particular set of CPUs; in that case we give this
guarantee.

Again I agree the logic could be improved, but I prefer to see a real use case
first where this improvement helps.

[1] https://lore.kernel.org/lkml/20191220164838.31619-1-qais.yousef@arm.com/

Cheers

--
Qais Yousef

