From: Josh Don <joshdon@google.com>
To: "Michal Koutný" <mkoutny@suse.com>
Cc: Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Valentin Schneider <vschneid@redhat.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] sched: async unthrottling for cfs bandwidth
Date: Wed, 2 Nov 2022 17:10:08 -0700
Message-ID: <CABk29Nsjbex9VYw01HQN4Bgvrf66w2YDfpRLuns2nDt5UxCjUg@mail.gmail.com>
In-Reply-To: <20221102165922.GA31833@blackbody.suse.cz>

Hi Michal,

Thanks for taking a look.

On Wed, Nov 2, 2022 at 9:59 AM Michal Koutný <mkoutny@suse.com> wrote:
>
> Hello.
>
> On Wed, Oct 26, 2022 at 03:44:49PM -0700, Josh Don <joshdon@google.com> wrote:
> > To fix this, we can instead unthrottle cfs_rq's asynchronously via a
> > CSD. Each cpu is responsible for unthrottling itself, thus sharding the
> > total work more fairly across the system, and avoiding hard lockups.
>
> FIFO behavior of the cfs_b->throttled_cfs_rq list is quite important to
> ensure fairness of throttling (historically, when FIFO wasn't honored,
> it caused some cfs_rq starvation issues).
>
> Despite its name, distribute_cfs_runtime() doesn't distribute the
> runtime; the time is pulled inside assign_cfs_rq_runtime() (but that
> already runs on the target cpu).
> Currently, it's all synchronized under cfs_b->lock, but with your change
> the throttled cfs_rq's would be dispersed among cpus that run
> concurrently (assign_cfs_rq_runtime() still takes cfs_b->lock, but not
> necessarily in the unthrottling order).

I don't think my patch meaningfully regresses this; the prior behavior
was already potentially unfair in much the same way.

Without my patch, distribute_cfs_runtime() unthrottles the cfs_rq's,
and as you point out, it doesn't actually give them any real quota; it
lets assign_cfs_rq_runtime() take care of that. But that happens
asynchronously on those cpus: if they are idle, they wait for an IPI
from the resched_curr() in unthrottle_cfs_rq(), otherwise they simply
wait until their next rescheduling point. So we are already far from
any guarantee that the order in which cpus pull actual quota via
assign_cfs_rq_runtime() matches the order they were unthrottled from
the list.
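
To make that concrete, the pull side looks roughly like the below
(paraphrased from memory and heavily simplified; the real
__assign_cfs_rq_runtime() also handles RUNTIME_INF, restarts the period
timer, etc., so treat this as illustrative rather than exact).
Whichever cpu reaches this first after being unthrottled gets quota
first, independent of its position on cfs_b->throttled_cfs_rq:

/*
 * Illustrative sketch only: paraphrases the mainline pull path, not
 * the exact upstream code.
 */
static int assign_cfs_rq_runtime(struct cfs_rq *cfs_rq)
{
	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
	u64 min_amount, amount = 0;

	raw_spin_lock(&cfs_b->lock);
	/* cover the accumulated debt plus one slice of fresh runtime */
	min_amount = sched_cfs_bandwidth_slice() - cfs_rq->runtime_remaining;
	if (cfs_b->runtime > 0) {
		amount = min(cfs_b->runtime, min_amount);
		cfs_b->runtime -= amount;
	}
	raw_spin_unlock(&cfs_b->lock);

	cfs_rq->runtime_remaining += amount;

	/* positive remaining runtime means we can keep running */
	return cfs_rq->runtime_remaining > 0;
}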

> > +static inline void __unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
> > [...]
> > +     if (rq == this_rq()) {
> > +             unthrottle_cfs_rq(cfs_rq);
> > +             return;
> > +     }
>
> It was pointed out to me that generic_exec_single() does something
> similar.
> Wouldn't the bandwidth control code flow be simpler if it relied on that?

We already hold the rq lock, so we can't rely on the
generic_exec_single() special case for the local cpu; having it invoke
the callback inline would double-lock the rq.
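
For reference, the async path in the patch is shaped roughly like this
(simplified from the v2 posting; the SCHED_WARN_ON guards and list
splicing are elided, so this isn't the literal diff). The caller holds
the rq lock of the cfs_rq being unthrottled, and the CSD callback takes
that same lock, which is why the local-cpu case short-circuits instead
of letting generic_exec_single() run the callback inline:

static inline void __unthrottle_cfs_rq_async(struct cfs_rq *cfs_rq)
{
	struct rq *rq = rq_of(cfs_rq);

	/* rq lock already held by the caller */
	if (rq == this_rq()) {
		unthrottle_cfs_rq(cfs_rq);
		return;
	}

	/* hand the cfs_rq to its own cpu; it unthrottles itself */
	list_add_tail(&cfs_rq->throttled_csd_list, &rq->cfsb_csd_list);
	smp_call_function_single_async(cpu_of(rq), &rq->cfsb_csd);
}

/* runs on the target cpu, in IPI context */
static void __cfsb_csd_unthrottle(void *arg)
{
	struct rq *rq = arg;
	struct cfs_rq *cursor, *tmp;
	struct rq_flags rf;

	rq_lock(rq, &rf);
	list_for_each_entry_safe(cursor, tmp, &rq->cfsb_csd_list,
				 throttled_csd_list) {
		list_del_init(&cursor->throttled_csd_list);
		if (cfs_rq_throttled(cursor))
			unthrottle_cfs_rq(cursor);
	}
	rq_unlock(rq, &rf);
}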

> Also, can a particular cfs_rq be on both cfs_b->throttled_csd_list and
> cfs_b->throttled_cfs_rq lists at any moment?
> I wonder if having a single list_head node in cfs_rq would be feasible
> (and hence enforcing this constraint in data).

That's an interesting idea; this could be rewritten so that
distribute_cfs_runtime() pulls the entity off the throttled list and
moves it to the throttled_csd_list. We never actually need an entity to
be on both lists at the same time.
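
Very roughly, the distribute side could then do something like the
following (purely hypothetical and untested, reusing the cfsb_csd
naming from v2), which would keep each cfs_rq on exactly one list at a
time:

	/* hypothetical sketch of that direction, not actual patch code */
	struct cfs_rq *cfs_rq, *tmp;
	struct rq_flags rf;

	list_for_each_entry_safe(cfs_rq, tmp, &cfs_b->throttled_cfs_rq,
				 throttled_list) {
		struct rq *rq = rq_of(cfs_rq);

		rq_lock_irqsave(rq, &rf);
		/* move rather than copy, so the cfs_rq only sits on one list */
		list_del_init(&cfs_rq->throttled_list);
		list_add_tail(&cfs_rq->throttled_list, &rq->cfsb_csd_list);
		smp_call_function_single_async(cpu_of(rq), &rq->cfsb_csd);
		rq_unlock_irqrestore(rq, &rf);
	}

The csd callback would then walk rq->cfsb_csd_list via that same
throttled_list node, so the second list_head could be dropped from
struct cfs_rq entirely.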

I'll wait to see if Peter has any comments, but that change could go
into a v3 of this patch.

Best,
Josh

Thread overview: 25+ messages
2022-10-26 22:44 [PATCH v2] sched: async unthrottling for cfs bandwidth Josh Don
2022-10-31 13:04 ` Peter Zijlstra
2022-10-31 21:22   ` Josh Don
2022-10-31 21:50     ` Tejun Heo
2022-10-31 23:15       ` Josh Don
2022-10-31 23:53         ` Tejun Heo
2022-11-01  1:01           ` Josh Don
2022-11-01  1:45             ` Tejun Heo
2022-11-01 19:11               ` Josh Don
2022-11-01 19:15                 ` Tejun Heo
2022-11-01 20:56                   ` Josh Don
2022-11-01 21:49                     ` Tejun Heo
2022-11-01 21:59                       ` Josh Don
2022-11-01 22:38                         ` Tejun Heo
2022-11-02 17:10                           ` Michal Koutný
2022-11-02 17:18                             ` Tejun Heo
2022-10-31 21:56   ` Benjamin Segall
2022-11-02  8:40     ` Peter Zijlstra
2022-11-11  0:14       ` Josh Don
2022-11-02 16:59 ` Michal Koutný
2022-11-03  0:10   ` Josh Don [this message]
2022-11-03 10:11     ` Michal Koutný
2022-11-16  3:01   ` Josh Don
2022-11-16  9:57     ` Michal Koutný
2022-11-16 21:45       ` Josh Don
