From: bsegall@google.com
To: Dave Chiluk <chiluk+linux@indeed.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Phil Auld <pauld@redhat.com>, Peter Oskolkov <posk@posk.io>,
	Ingo Molnar <mingo@redhat.com>,
	cgroups@vger.kernel.org,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Brendan Gregg <bgregg@netflix.com>, Kyle Anderson <kwa@yelp.com>,
	Gabriel Munoz <gmunoz@netflix.com>,
	John Hammond <jhammond@indeed.com>,
	Cong Wang <xiyou.wangcong@gmail.com>,
	Jonathan Corbet <corbet@lwn.net>,
	linux-doc@vger.kernel.org, Paul Turner <pjt@google.com>
Subject: Re: [PATCH v5 1/1] sched/fair: Fix low cpu usage with high throttling by removing expiration of cpu-local slices
Date: Fri, 12 Jul 2019 11:01:34 -0700	[thread overview]
Message-ID: <xm26r26vjfnl.fsf@bsegall-linux.svl.corp.google.com> (raw)
In-Reply-To: <CAC=E7cV4sO50NpYOZ06n_BkZTcBqf1KQp83prc+oave3ircBrw@mail.gmail.com> (Dave Chiluk's message of "Thu, 11 Jul 2019 18:48:24 -0500")

Dave Chiluk <chiluk+linux@indeed.com> writes:

> So I spent some more time testing this new patch as-is (interrupts disabled).  I know I probably should have fixed the patch, but it's hard to get time on big test hardware sometimes, and I was already well along in my testing.
>
> In regards to the quota usage overage I was seeing earlier: I have a
> theory as to what might be happening here, and I'm pretty sure it's
> related to the IRQs being disabled during the rq->lock walk. I think
> that the main fast thread was able to use an excess amount of quota
> because the timer interrupt meant to stop it wasn't being handled
> promptly due to interrupts being disabled. On my 8-core machine this
> resulted in what looked like simply improved usage of the quota, but
> when I ran the test on an 80 core machine I saw a massive overage of
> cpu usage when running fibtest. Specifically, running fibtest for
> 5 seconds with a 50ms quota/100ms period should give ~2500ms of quota
> usage; instead I saw 3731ms of cpu usage, an unexpected overage of
> 1231ms. Is that a reasonable theory?

That doesn't seem likely - taking 1ms would be way longer than I'd expect
to begin with, and runtime_remaining can go negative for that sort of
reason anyway, assuming the irq time is even counted towards the task.
Also I don't think the enable-irqs version will help for the scheduler
tick, at least without the rt patchsets.

That is still too much for what I was thinking of, though. I'll have
to look into this more.
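
For reference, the reason runtime_remaining going negative is normal is
that the charging happens after the fact - very roughly (paraphrasing
the existing __account_cfs_rq_runtime() logic from memory, not
verbatim):

	static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq,
					     u64 delta_exec)
	{
		/* Charge whatever actually ran since the last update... */
		cfs_rq->runtime_remaining -= delta_exec;

		if (likely(cfs_rq->runtime_remaining > 0))
			return;

		/*
		 * ...and only once we notice we're out of quota do we try
		 * to get more or, failing that, resched so the hierarchy
		 * can be throttled.  Whatever ran in the meantime has
		 * already been charged, so runtime_remaining can end up
		 * below zero.
		 */
		if (!assign_cfs_rq_runtime(cfs_rq) && likely(cfs_rq->curr))
			resched_curr(rq_of(cfs_rq));
	}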

>
> I'll try to get some time again tomorrow to test with IRQs disabled before the walk.  Ben, if you have a chance to fix and resend the patch, that'd help.
>
> I'm really starting to think that simply removing the quota expiration
> may be the best solution here.  Mathematically it works out, it makes
> the code simpler, it doesn't have any of the lock walk issues, it
> doesn't add extra latency or overhead due to the slack timer,

It works out _for the job that is supposed to be throttled_. If the job
then gets a burst of actually-expensive work on many threads, it can
use an extra NCPUS * 1ms of runtime, adding latency to any other job on
the system. Given that it's still only 1ms on each runqueue, maybe this
isn't the end of the world, but the failure case does exist.
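
Just to put rough numbers on that (illustrative only;
worst_case_burst_ns() is a made-up helper, not anything in the tree):
each runqueue keeps at most min_cfs_rq_runtime of slack behind when the
group's threads block, so with no expiration the stale runtime a burst
can pick up is bounded by roughly nr_cpus * 1ms on top of the
configured quota:

	/* minimum remaining period time to redistribute slack quota */
	static const u64 min_cfs_rq_runtime = 1 * NSEC_PER_MSEC;

	/* worst-case stale runtime one burst can pick up across all rqs */
	static u64 worst_case_burst_ns(unsigned int nr_cpus)
	{
		return nr_cpus * min_cfs_rq_runtime;	/* 80 cpus -> ~80ms */
	}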

(We already have to do exactly the same per-rq locking in distribute,
both - more rarely - from the period timer, and from the currently
existing slack timer.)

> and that behavior is exactly what the kernel was doing for 5 years with few complaints about overage afaik.
>
> Either way, I'm very glad that we are getting to the end of this one, and all solutions appear to solve the core of the problem.  Thank you all for the work you guys have put into this.
>
> On Thu, Jul 11, 2019 at 12:46 PM <bsegall@google.com> wrote:
>
>  Peter Zijlstra <peterz@infradead.org> writes:
>
>  > FWIW, good to see progress, still waiting for you guys to agree :-)
>  >
>  > On Mon, Jul 01, 2019 at 01:15:44PM -0700, bsegall@google.com wrote:
>  >
>  >> - Taking up-to-every rq->lock is bad and expensive and 5ms may be too
>  >>   short a delay for this. I haven't tried microbenchmarks on the cost of
>  >>   this vs min_cfs_rq_runtime = 0 vs baseline.
>  >
>  > Yes, that's tricky, SGI/HPE have definite ideas about that.
>  >
>  >> @@ -4781,12 +4790,41 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq)
>  >>   */
>  >>  static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
>  >>  {
>  >> -    u64 runtime = 0, slice = sched_cfs_bandwidth_slice();
>  >> +    u64 runtime = 0;
>  >>      unsigned long flags;
>  >>      u64 expires;
>  >> +    struct cfs_rq *cfs_rq, *temp;
>  >> +    LIST_HEAD(temp_head);
>  >> +
>  >> +    local_irq_save(flags);
>  >> +
>  >> +    raw_spin_lock(&cfs_b->lock);
>  >> +    cfs_b->slack_started = false;
>  >> +    list_splice_init(&cfs_b->slack_cfs_rq, &temp_head);
>  >> +    raw_spin_unlock(&cfs_b->lock);
>  >> +
>  >> +
>  >> +    /* Gather all left over runtime from all rqs */
>  >> +    list_for_each_entry_safe(cfs_rq, temp, &temp_head, slack_list) {
>  >> +            struct rq *rq = rq_of(cfs_rq);
>  >> +            struct rq_flags rf;
>  >> +
>  >> +            rq_lock(rq, &rf);
>  >> +
>  >> +            raw_spin_lock(&cfs_b->lock);
>  >> +            list_del_init(&cfs_rq->slack_list);
>  >> +            if (!cfs_rq->nr_running && cfs_rq->runtime_remaining > 0 &&
>  >> +                cfs_rq->runtime_expires == cfs_b->runtime_expires) {
>  >> +                    cfs_b->runtime += cfs_rq->runtime_remaining;
>  >> +                    cfs_rq->runtime_remaining = 0;
>  >> +            }
>  >> +            raw_spin_unlock(&cfs_b->lock);
>  >> +
>  >> +            rq_unlock(rq, &rf);
>  >> +    }
>  >
>  > But worse still, you take possibly every rq->lock without ever
>  > re-enabling IRQs.
>  >
>
>  Yeah, I'm not sure why I did that; it isn't needed for correctness.
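
For what it's worth, one way to address that is just to use the irqsave
rq lock helpers per iteration instead of disabling IRQs around the whole
walk - something like the below (untested, just a sketch against the
same patch rather than a proper resend):

	raw_spin_lock_irqsave(&cfs_b->lock, flags);
	cfs_b->slack_started = false;
	list_splice_init(&cfs_b->slack_cfs_rq, &temp_head);
	raw_spin_unlock_irqrestore(&cfs_b->lock, flags);

	/*
	 * Gather all left over runtime from all rqs; IRQs are only
	 * disabled while each rq->lock is actually held.
	 */
	list_for_each_entry_safe(cfs_rq, temp, &temp_head, slack_list) {
		struct rq *rq = rq_of(cfs_rq);
		struct rq_flags rf;

		rq_lock_irqsave(rq, &rf);

		raw_spin_lock(&cfs_b->lock);
		list_del_init(&cfs_rq->slack_list);
		if (!cfs_rq->nr_running && cfs_rq->runtime_remaining > 0 &&
		    cfs_rq->runtime_expires == cfs_b->runtime_expires) {
			cfs_b->runtime += cfs_rq->runtime_remaining;
			cfs_rq->runtime_remaining = 0;
		}
		raw_spin_unlock(&cfs_b->lock);

		rq_unlock_irqrestore(rq, &rf);
	}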

Thread overview: 40+ messages
2019-05-17 19:30 [PATCH] sched/fair: Fix low cpu usage with high throttling by removing expiration of cpu slices Dave Chiluk
2019-05-23 18:44 ` [PATCH v2 0/1] sched/fair: Fix low cpu usage with high throttling by removing expiration of cpu-local slices Dave Chiluk
2019-05-23 18:44   ` [PATCH v2 1/1] " Dave Chiluk
2019-05-23 21:01     ` Peter Oskolkov
2019-05-24 14:32       ` Phil Auld
2019-05-24 15:14         ` Dave Chiluk
2019-05-24 15:59           ` Phil Auld
2019-05-24 16:28           ` Peter Oskolkov
2019-05-24 21:35             ` Dave Chiluk
2019-05-24 22:07               ` Peter Oskolkov
2019-05-28 22:25                 ` Dave Chiluk
2019-05-24  8:55     ` Peter Zijlstra
2019-05-29 19:08 ` [PATCH v3 0/1] " Dave Chiluk
2019-05-29 19:08   ` [PATCH v3 1/1] " Dave Chiluk
2019-05-29 19:28     ` Phil Auld
2019-05-29 19:50     ` bsegall
2019-05-29 21:05     ` bsegall
2019-05-30 17:53       ` Dave Chiluk
2019-05-30 20:44         ` bsegall
     [not found] ` <1561391404-14450-1-git-send-email-chiluk+linux@indeed.com>
2019-06-24 15:50   ` [PATCH v4 1/1] sched/fair: Return all runtime when cfs_b has very little remaining Dave Chiluk
2019-06-24 17:33     ` bsegall
2019-06-26 22:10       ` Dave Chiluk
2019-06-27 20:18         ` bsegall
2019-06-27 19:09 ` [PATCH] sched/fair: Fix low cpu usage with high throttling by removing expiration of cpu-local slices Dave Chiluk
2019-06-27 19:49 ` [PATCH v5 0/1] " Dave Chiluk
2019-06-27 19:49   ` [PATCH v5 1/1] " Dave Chiluk
2019-07-01 20:15     ` bsegall
2019-07-11  9:51       ` Peter Zijlstra
2019-07-11 17:46         ` bsegall
     [not found]           ` <CAC=E7cV4sO50NpYOZ06n_BkZTcBqf1KQp83prc+oave3ircBrw@mail.gmail.com>
2019-07-12 18:01             ` bsegall [this message]
2019-07-12 22:09             ` bsegall
2019-07-15 15:44               ` Dave Chiluk
2019-07-16 19:58     ` bsegall
2019-07-23 16:44 ` [PATCH v6 0/1] " Dave Chiluk
2019-07-23 16:44   ` [PATCH v6 1/1] " Dave Chiluk
2019-07-23 17:13     ` Phil Auld
2019-07-23 22:12       ` Dave Chiluk
2019-07-23 23:26         ` Phil Auld
2019-07-26 18:14       ` Peter Zijlstra
2019-08-08 10:53     ` [tip:sched/core] " tip-bot for Dave Chiluk
