From: Josh Don <joshdon@google.com>
To: Tejun Heo <tj@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Valentin Schneider <vschneid@redhat.com>,
	linux-kernel@vger.kernel.org,
	Joel Fernandes <joel@joelfernandes.org>
Subject: Re: [PATCH v2] sched: async unthrottling for cfs bandwidth
Date: Mon, 31 Oct 2022 18:01:19 -0700
Message-ID: <CABk29Nvqv-T1JuAq2cf9=AwRu=y1+YOR4xS2qnVo6+XpWd2UNQ@mail.gmail.com>
In-Reply-To: <Y2Bf+CeQ8x2jKQ3S@slm.duckdns.org>

On Mon, Oct 31, 2022 at 4:53 PM Tejun Heo <tj@kernel.org> wrote:
>
> Hello,
>
> On Mon, Oct 31, 2022 at 04:15:54PM -0700, Josh Don wrote:
> > On Mon, Oct 31, 2022 at 2:50 PM Tejun Heo <tj@kernel.org> wrote:
> > Yes, but schemes such as shares and idle can still end up creating
> > some severe inversions. For example, consider a SCHED_IDLE thread on
> > a cpu with many other threads. Eventually the SCHED_IDLE thread will
> > get run, but the round robin times can easily get pushed out to
> > several hundred ms (or even into the seconds range), due to min
> > granularity. cpusets, combined with the load balancer's struggle to
> > find low weight tasks, exacerbate such situations.
>
> Yeah, especially with narrow cpuset (or task cpu affinity)
> configurations, it can get pretty bad. Outside that though, at least I
> haven't seen a lot of problematic cases as long as the low priority one
> isn't tightly entangled with high priority tasks, mostly because 1. if
> the resource the low pri one is holding affects a large part of the
> system, the problem is self-solving as the system quickly runs out of
> other things to do, and 2. if the resource isn't affecting a large part
> of the system, its blast radius is usually reasonably confined to things
> tightly coupled with it. I'm sure there are exceptions and we definitely
> wanna improve the situation where it makes sense.

cgroup_mutex and the kernfs rwsem beg to differ :) These are shared with
control-plane threads, so it is pretty easy to starve those out even
while the system has plenty of other work to do.
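
To make the failure mode concrete, here's a minimal userspace sketch.
A pthread mutex stands in for cgroup_mutex, and a sleep stands in for
a throttled period; this illustrates the dependency chain only, it is
not kernel code:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Low-priority/throttled task: takes the shared lock, then loses
 * the cpu while still holding it. */
static void *throttled_holder(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock);
        sleep(1);       /* stand-in for sitting out a throttled period */
        pthread_mutex_unlock(&lock);
        return NULL;
}

/* Control-plane task: blocks behind the holder for the whole
 * throttled period, regardless of its own priority. */
static void *control_plane(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock);
        puts("control plane finally ran");
        pthread_mutex_unlock(&lock);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, throttled_holder, NULL);
        usleep(10000);  /* let the holder grab the lock first */
        pthread_create(&b, NULL, control_plane, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}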

> > > > chatted with the folks working on the proxy execution patch series,
> > > > and it seems like that could be a better generic solution to these
> > > > types of issues.
> > >
> > > Care to elaborate?
> >
> > https://lwn.net/Articles/793502/ gives some historical context, see
> > also https://lwn.net/Articles/910302/.
>
> Ah, full-blown priority inheritance. They're great to pursue, but I
> think we wanna fix cpu bw control regardless. It's such an obvious and
> basic issue, and given how much trouble we have actually understanding
> resource and control dependencies with all the custom synchronization
> constructs in the kernel, fixing it will be useful even in a future
> where we have a better priority inheritance mechanism.

Sure. Even something like not throttling while there are threads in
kernel mode (while not a complete solution) helps get some of the way
toward improving that case.
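
Roughly the shape that could take, as a compilable stand-in sketch.
The type, fields, and helpers here are hypothetical (not the real
cfs_rq), and a real implementation would need to track "in kernel
mode" across the hierarchy; this just shows the decision point:

#include <stdbool.h>

struct cfs_rq_sketch {
        int  nr_kernel_mode;    /* queued tasks currently in kernel mode */
        long runtime_remaining; /* bandwidth left in this period */
        bool throttled;
};

static bool safe_to_throttle(const struct cfs_rq_sketch *cfs_rq)
{
        /* Don't throttle while any queued task is inside the kernel,
         * where it may hold things like cgroup_mutex or kernfs rwsem. */
        return cfs_rq->nr_kernel_mode == 0;
}

static void account_runtime(struct cfs_rq_sketch *cfs_rq, long delta)
{
        cfs_rq->runtime_remaining -= delta;
        if (cfs_rq->runtime_remaining <= 0 && safe_to_throttle(cfs_rq))
                cfs_rq->throttled = true;
        /* Otherwise keep running: runtime_remaining going further
         * negative is the "force charge" paid back out of the next
         * period's quota. */
}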

> > > I don't follow. If you only throttle at predefined safe spots, the easiest
> > > place being the kernel-user boundary, you cannot get system-wide stalls from
> > > BW restrictions, which is something the kernel shouldn't allow userspace to
> > > cause. In your example, a thread holding a kernel mutex waking back up into
> > > a hierarchy that is currently throttled should keep running in the kernel
> > > until it encounters such safe throttling point where it would have released
> > > the kernel mutex and then throttle.
> >
> > Agreed, except that the task waking back up isn't on cpu, so there
> > is no "wait to throttle it until it returns to user"; throttling
> > happens in the context of the entire cfs_rq. We'd have to
>
> Oh yeah, we'd have to be able to allow threads to keep running in the
> kernel regardless of the cfs_rq's throttled state and then force charge
> the cpu cycles to be paid back later. It would definitely require quite
> a bit of work.
>
> > treat threads in a bandwidth hierarchy that are also in kernel mode
> > specially. Mechanically, it is more straightforward to wait to
> > throttle until the cfs_rq has no more threads in kernel mode than it
> > is to exclude a woken task from its cfs_rq's current throttled
> > period, though the former is incomplete.
>
> My hunch is that bunching them together is likely gonna create too many
> escape scenarios and control artifacts, and that it'd be better to
> always push throttling decisions to the leaves (tasks) so that each
> task can be controlled separately. That'd involve architectural
> changes, but the eventual behavior would be a lot better.

That's also a tradeoff, since handling throttling individually at the
leaf (task) level adds overhead compared to dequeuing a single cfs_rq.
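
As a rough illustration of that cost difference (hypothetical
stand-in types, not the real scheduler code):

#include <stdbool.h>

struct task_sketch {
        struct task_sketch *next;
};

struct group_sketch {
        struct task_sketch *queued;     /* tasks queued in this group */
        bool on_rq;
};

/* Today's model: one dequeue of the group entity, O(1) in the
 * number of queued tasks. */
static void throttle_whole_group(struct group_sketch *grp)
{
        grp->on_rq = false;
}

/* Throttling at the leaves: visit every queued task individually,
 * O(nr_tasks) at throttle (and unthrottle) time. */
static void throttle_at_leaves(struct group_sketch *grp,
                               void (*dequeue)(struct task_sketch *))
{
        for (struct task_sketch *p = grp->queued; p; p = p->next)
                dequeue(p);
}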

> > What you're suggesting would also require that we find a way to
> > preempt the current thread in order to start running the thread that
> > woke up in the kernel (and this becomes more complex when the current
> > thread is also in the kernel, or when there are n other waiting
> > threads that are also in the kernel).
>
> I don't think it needs that. What allows userspace to easily trigger
> pathological scenarios is the ability to force the machine idle when
> there's something ready to run in the kernel. If you take that away,
> most of the problems disappear. It's not perfect, but it's reasonable
> enough and no worse than a system without cpu bw control.
>
> Thanks.
>
> --
> tejun
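
P.S. For anyone following along, here's a compilable stand-in for the
"throttle only at the kernel-user boundary" idea upthread. All names
are hypothetical, loosely following the thread-flag pattern the
kernel uses for other return-to-user work:

#include <stdbool.h>

struct task_sketch {
        bool need_throttle;     /* group ran out of quota */
        bool in_kernel;
};

/* Called when the group's quota expires: don't stop the task here,
 * just mark it. It keeps running while it is in the kernel. */
static void quota_expired(struct task_sketch *t)
{
        t->need_throttle = true;
}

/* The safe spot: on the way back to userspace no kernel locks can
 * still be held, so parking the task here can't block anyone else. */
static void exit_to_user_mode(struct task_sketch *t)
{
        t->in_kernel = false;
        if (t->need_throttle) {
                t->need_throttle = false;
                /* park the task until the next bandwidth period */
        }
}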

Thread overview: 25+ messages
2022-10-26 22:44 [PATCH v2] sched: async unthrottling for cfs bandwidth Josh Don
2022-10-31 13:04 ` Peter Zijlstra
2022-10-31 21:22   ` Josh Don
2022-10-31 21:50     ` Tejun Heo
2022-10-31 23:15       ` Josh Don
2022-10-31 23:53         ` Tejun Heo
2022-11-01  1:01           ` Josh Don [this message]
2022-11-01  1:45             ` Tejun Heo
2022-11-01 19:11               ` Josh Don
2022-11-01 19:15                 ` Tejun Heo
2022-11-01 20:56                   ` Josh Don
2022-11-01 21:49                     ` Tejun Heo
2022-11-01 21:59                       ` Josh Don
2022-11-01 22:38                         ` Tejun Heo
2022-11-02 17:10                           ` Michal Koutný
2022-11-02 17:18                             ` Tejun Heo
2022-10-31 21:56   ` Benjamin Segall
2022-11-02  8:40     ` Peter Zijlstra
2022-11-11  0:14       ` Josh Don
2022-11-02 16:59 ` Michal Koutný
2022-11-03  0:10   ` Josh Don
2022-11-03 10:11     ` Michal Koutný
2022-11-16  3:01   ` Josh Don
2022-11-16  9:57     ` Michal Koutný
2022-11-16 21:45       ` Josh Don
