LKML Archive on lore.kernel.org
From: Joel Fernandes <joel@joelfernandes.org>
To: Qais Yousef <qais.yousef@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Patrick Bellasi <patrick.bellasi@arm.com>,
	Subhra Mazumdar <subhra.mazumdar@oracle.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	steven.sistare@oracle.com, Dhaval Giani <dhaval.giani@oracle.com>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	parth@linux.ibm.com
Subject: Re: [RFC PATCH 1/9] sched,cgroup: Add interface for latency-nice
Date: Mon, 20 Apr 2020 15:10:47 -0400
Message-ID: <20200420191047.GA67569@google.com>
In-Reply-To: <20200420114728.iy6w34khketplxvk@e107158-lin.cambridge.arm.com>

Hi Qais!

On Mon, Apr 20, 2020 at 12:47:29PM +0100, Qais Yousef wrote:
> On 04/18/20 12:01, Joel Fernandes wrote:
> > > What's missing is the per-taskgroup implementation, at least from the
> > > standpoint of ACK.
> > >
> > > The (mainline) EAS use-case for latency nice is already in ACK
> > > (android-5.4):
> > >
> > > https://android.googlesource.com/kernel/common/+/760b82c9b88d2c8125abfc5f732cc3cd460b2a54
> > 
> > Yes, I was aware of this. But if we use task groups, then the
> > transition from schedtune -> uclamp means now the tasks that use
> > uclamp would also be subjected to cpu.shares. That's why we were
> > looking into the per-task interface and glad there's some work on this
> > already done.
> 
> Hmm uclamp doesn't do anything with cpu.shares. I assume this is some
> implementation detail at your end? IOW, you don't have to use cpu.shares to use
> uclamp.

Right, it is a ChromeOS-specific issue. We have CONFIG_FAIR_GROUP_SCHED
enabled in the kernel for container workloads. However, there are CGroups
of tasks that previously used the "schedtune" CGroup interface to get
util-clamping-like behavior. We are now migrating these to the upstream
util clamp.

We can't disable CONFIG_FAIR_GROUP_SCHED because that would break the
container workloads.

So we have to use the per-task interface of util clamp.

If we used the CGroup interface of util clamping, we would get cpu.shares
as well, since the CGroup interface comes with shares. There's no way to
avoid being subject to cpu.shares (that I'm aware of, anyway).
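
To make that concrete, here is a minimal sketch of the per-task path we
mean (assuming a v5.3+ kernel with CONFIG_UCLAMP_TASK=y). glibc has no
sched_setattr() wrapper, so it goes through syscall(2) with the uapi
struct spelled out by hand; the clamp values in the comments are made-up
illustrations, not our actual tuning:

#define _GNU_SOURCE
#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* From include/uapi/linux/sched.h (v5.3+) */
#define SCHED_FLAG_KEEP_POLICY    0x08
#define SCHED_FLAG_KEEP_PARAMS    0x10
#define SCHED_FLAG_UTIL_CLAMP_MIN 0x20
#define SCHED_FLAG_UTIL_CLAMP_MAX 0x40

struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;  /* 0..1024 (SCHED_CAPACITY_SCALE) */
	uint32_t sched_util_max;  /* 0..1024 */
};

/* Set only the clamps; KEEP_POLICY/KEEP_PARAMS leave everything else
 * (policy, nice, RT priority) untouched. */
static int set_util_clamp(pid_t pid, uint32_t umin, uint32_t umax)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size           = sizeof(attr);
	attr.sched_flags    = SCHED_FLAG_KEEP_POLICY |
			      SCHED_FLAG_KEEP_PARAMS |
			      SCHED_FLAG_UTIL_CLAMP_MIN |
			      SCHED_FLAG_UTIL_CLAMP_MAX;
	attr.sched_util_min = umin;  /* e.g. 256: boost floor */
	attr.sched_util_max = umax;  /* e.g. 1024: no cap */

	return syscall(SYS_sched_setattr, pid, &attr, 0);
}

And crucially, none of that touches cpu.shares, since no CGroup is
involved.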

> There should only be a few tasks in the system that need latency-nice,
> so I prefer the per-task interface rather than lumping everything in a
> cgroup. Though there could be valid use cases for the latter.

Yes, with either interface, we need something like latency_nice to indicate
that the task is low-latency (something we used for a number of years with
the out-of-tree schedtune).
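
If a per-task latency_nice knob ends up following the same sched_setattr()
pattern, I'd imagine it looking roughly like the below (reusing the defines
and includes from the util-clamp sketch earlier in this mail). To be clear,
this is purely a hypothetical sketch: the flag value and the
sched_latency_nice field are made up here and not merged anywhere:

#define SCHED_FLAG_LATENCY_NICE 0x80  /* hypothetical flag value */

/* Hypothetical extension of the uapi sched_attr with one field. */
struct sched_attr_ln {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
	int32_t  sched_latency_nice;  /* hypothetical: -20..19, nice-like */
};

static int set_latency_nice(pid_t pid, int32_t latency_nice)
{
	struct sched_attr_ln attr;

	memset(&attr, 0, sizeof(attr));
	attr.size               = sizeof(attr);
	attr.sched_flags        = SCHED_FLAG_KEEP_POLICY |
				  SCHED_FLAG_KEEP_PARAMS |
				  SCHED_FLAG_LATENCY_NICE;
	attr.sched_latency_nice = latency_nice;  /* e.g. -20: latency critical */

	return syscall(SYS_sched_setattr, pid, &attr, 0);
}

That would only work against a kernel carrying such a patch, of course,
but it shows why the per-task shape composes well with what we already do
for the clamps.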

thanks!

 - Joel


> 
> Thanks
> 
> --
> Qais Yousef


Thread overview: 55+ messages
2019-08-30 17:49 [RFC PATCH 0/9] Task latency-nice subhra mazumdar
2019-08-30 17:49 ` [RFC PATCH 1/9] sched,cgroup: Add interface for latency-nice subhra mazumdar
2019-09-04 17:32   ` Tim Chen
2019-09-05  6:15     ` Parth Shah
2019-09-05 10:11       ` Patrick Bellasi
2019-09-06 12:22         ` Parth Shah
2019-09-05  8:31   ` Peter Zijlstra
2019-09-05  9:45     ` Patrick Bellasi
2019-09-05 10:46       ` Peter Zijlstra
2019-09-05 11:13         ` Qais Yousef
2019-09-05 11:30           ` Peter Zijlstra
2019-09-05 11:40             ` Patrick Bellasi
2019-09-05 11:48               ` Peter Zijlstra
2019-09-05 13:32                 ` Qais Yousef
2019-09-05 11:47             ` Qais Yousef
2020-04-16  0:02               ` Joel Fernandes
2020-04-16 17:23                 ` Dietmar Eggemann
2020-04-18 16:01                   ` Joel Fernandes
2020-04-20 11:26                     ` Parth Shah
2020-04-20 19:14                       ` Joel Fernandes
2020-04-20 11:47                     ` Qais Yousef
2020-04-20 19:10                       ` Joel Fernandes [this message]
2019-09-05 11:30           ` Patrick Bellasi
2019-09-05 11:47             ` Peter Zijlstra
2019-09-05 11:18         ` Patrick Bellasi
2019-09-05 11:40           ` Peter Zijlstra
2019-09-05 11:46             ` Patrick Bellasi
2019-09-05 11:46           ` Valentin Schneider
2019-09-05 13:07             ` Patrick Bellasi
2019-09-05 14:48               ` Valentin Schneider
2019-09-06 12:45               ` Parth Shah
2019-09-06 14:13                 ` Valentin Schneider
2019-09-06 14:32                   ` Vincent Guittot
2019-09-06 17:10                   ` Parth Shah
2019-09-06 22:50                     ` Valentin Schneider
2019-09-06 12:31       ` Parth Shah
2019-09-05 10:05   ` Patrick Bellasi
2019-09-05 10:48     ` Peter Zijlstra
2019-08-30 17:49 ` [RFC PATCH 2/9] sched: add search limit as per latency-nice subhra mazumdar
2019-09-05  6:22   ` Parth Shah
2019-08-30 17:49 ` [RFC PATCH 3/9] sched: add sched feature to disable idle core search subhra mazumdar
2019-09-05 10:17   ` Patrick Bellasi
2019-09-05 22:02     ` Subhra Mazumdar
2019-08-30 17:49 ` [RFC PATCH 4/9] sched: SIS_CORE " subhra mazumdar
2019-09-05 10:19   ` Patrick Bellasi
2019-08-30 17:49 ` [RFC PATCH 5/9] sched: Define macro for number of CPUs in core subhra mazumdar
2019-08-30 17:49 ` [RFC PATCH 6/9] x86/smpboot: Optimize cpumask_weight_sibling macro for x86 subhra mazumdar
2019-08-30 17:49 ` [RFC PATCH 7/9] sched: search SMT before LLC domain subhra mazumdar
2019-09-05  9:31   ` Peter Zijlstra
2019-09-05 20:40     ` Subhra Mazumdar
2019-08-30 17:49 ` [RFC PATCH 8/9] sched: introduce per-cpu var next_cpu to track search limit subhra mazumdar
2019-08-30 17:49 ` [RFC PATCH 9/9] sched: rotate the cpu search window for better spread subhra mazumdar
2019-09-05  6:37   ` Parth Shah
2019-09-05  5:55 ` [RFC PATCH 0/9] Task latency-nice Parth Shah
2019-09-05 10:31 ` Patrick Bellasi
