From: Phil Auld <pauld@redhat.com>
To: Dave Chiluk <chiluk+linux@indeed.com>
Cc: Ben Segall <bsegall@google.com>, Peter Oskolkov <posk@posk.io>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
Brendan Gregg <bgregg@netflix.com>, Kyle Anderson <kwa@yelp.com>,
Gabriel Munos <gmunoz@netflix.com>,
John Hammond <jhammond@indeed.com>,
Cong Wang <xiyou.wangcong@gmail.com>,
Jonathan Corbet <corbet@lwn.net>,
linux-doc@vger.kernel.org
Subject: Re: [PATCH v3 1/1] sched/fair: Fix low cpu usage with high throttling by removing expiration of cpu-local slices
Date: Wed, 29 May 2019 15:28:33 -0400 [thread overview]
Message-ID: <20190529192833.GF26206@pauld.bos.csb> (raw)
In-Reply-To: <1559156926-31336-2-git-send-email-chiluk+linux@indeed.com>
On Wed, May 29, 2019 at 02:08:46PM -0500 Dave Chiluk wrote:
> It has been observed that highly-threaded, non-cpu-bound applications
> running under cpu.cfs_quota_us constraints can hit a high percentage of
> periods throttled while simultaneously not consuming the allocated
> amount of quota. This use case is typical of user-interactive,
> non-cpu-bound applications, such as those running in kubernetes or
> mesos when run on multiple cpu cores.
>
> This has been root-caused to threads being allocated per-cpu bandwidth
> slices and then not fully using that slice within the period, at which
> point the slice and quota expire. This expiration of unused slice
> results in applications not being able to utilize the quota allocated
> to them.
>
> The expiration of per-cpu slices was recently fixed by
> 'commit 512ac999d275 ("sched/fair: Fix bandwidth timer clock drift
> condition")'. Prior to that it appears that this has been broken since
> at least 'commit 51f2176d74ac ("sched/fair: Fix unlocked reads of some
> cfs_b->quota/period")' which was introduced in v3.16-rc1 in 2014. That
> commit added the following conditional, which resulted in slices never
> being expired:
>
> 	if (cfs_rq->runtime_expires != cfs_b->runtime_expires) {
> 		/* extend local deadline, drift is bounded above by 2 ticks */
> 		cfs_rq->runtime_expires += TICK_NSEC;
>
> Because this behavior was broken for nearly 5 years, and its recent fix
> is now causing the throttling to be noticed by many users running
> kubernetes (https://github.com/kubernetes/kubernetes/issues/67577), it
> is my opinion that the mechanisms around expiring runtime should be
> removed altogether.
>
> This change allows per-cpu slices to live longer than the period
> boundary. Threads on runqueues that do not use much CPU can continue
> to use their remaining slice over a longer period of time than
> cpu.cfs_period_us. This helps prevent the above condition of hitting
> throttling while not fully utilizing the allocated cpu quota.
>
> This theoretically allows a machine to use slightly more than its
> allotted quota in some periods. This overflow would be bounded by the
> remaining per-cpu slice that was left unused in the previous period.
> For CPU bound tasks this will change nothing, as they should
> theoretically fully utilize all of their quota and slices in each
> period. For user-interactive tasks as described above this provides a
> much better user/application experience, as their cpu utilization will
> more closely match the amount they requested instead of being throttled.
>
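As an aside, the worst-case burst is easy to bound. A rough sketch in Python with made-up numbers (the slice size comes from the sysctl kernel.sched_cfs_bandwidth_slice_us, 5ms by default; the function name here is just for illustration):

```python
# Back-of-the-envelope bound on the one-period burst enabled by
# non-expiring slices: each cpu the group ran on may carry at most one
# partially-unused slice into the next period.
def max_burst_us(nr_cpus, slice_us, unused_us_per_cpu):
    # The carry-over per cpu is limited both by the slice size and by
    # how much of the slice actually went unused last period.
    return nr_cpus * min(slice_us, unused_us_per_cpu)

# Hypothetical: 8 cpus, default 5000us slice, 3000us left unused on each.
print(max_burst_us(8, 5000, 3000))  # 24000
```

So even in the worst case the overrun stays proportional to the number of cpus the group actually ran on, not to the quota itself.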
> This greatly improves performance of high-thread-count, non-cpu bound
> applications with low cfs_quota_us allocation on high-core-count
> machines. With an artificial testcase, an almost 30x performance
> improvement has been observed, while still maintaining correct cpu
> quota restrictions, albeit over longer time intervals than
> cpu.cfs_period_us. That testcase is
> available at https://github.com/indeedeng/fibtest.
>
> Fixes: 512ac999d275 ("sched/fair: Fix bandwidth timer clock drift condition")
> Signed-off-by: Dave Chiluk <chiluk+linux@indeed.com>
> ---
> Documentation/scheduler/sched-bwc.txt | 56 ++++++++++++++++++++++-----
> kernel/sched/fair.c | 71 +++--------------------------------
> kernel/sched/sched.h | 4 --
> 3 files changed, 53 insertions(+), 78 deletions(-)
>
> diff --git a/Documentation/scheduler/sched-bwc.txt b/Documentation/scheduler/sched-bwc.txt
> index f6b1873..260fd65 100644
> --- a/Documentation/scheduler/sched-bwc.txt
> +++ b/Documentation/scheduler/sched-bwc.txt
> @@ -8,15 +8,16 @@ CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the
> specification of the maximum CPU bandwidth available to a group or hierarchy.
>
> The bandwidth allowed for a group is specified using a quota and period. Within
> -each given "period" (microseconds), a group is allowed to consume only up to
> -"quota" microseconds of CPU time. When the CPU bandwidth consumption of a
> -group exceeds this limit (for that period), the tasks belonging to its
> -hierarchy will be throttled and are not allowed to run again until the next
> -period.
> -
> -A group's unused runtime is globally tracked, being refreshed with quota units
> -above at each period boundary. As threads consume this bandwidth it is
> -transferred to cpu-local "silos" on a demand basis. The amount transferred
> +each given "period" (microseconds), a task group is allocated up to "quota"
> +microseconds of CPU time. That quota is assigned to per-cpu run queues in
> +slices as threads in the cgroup become runnable. Once all quota has been
> +assigned, any additional requests for quota will result in those threads
> +being throttled. Throttled threads will not be able to run again until the
> +next period when the quota is replenished.
> +
> +A group's unassigned quota is globally tracked, being refreshed back to
> +cfs_quota units at each period boundary. As threads consume this bandwidth it
> +is transferred to cpu-local "silos" on a demand basis. The amount transferred
> within each of these updates is tunable and described as the "slice".
>
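For readers less familiar with the mechanism, the global-pool/local-silo hand-off described above can be sketched in a few lines. This is a toy model, not the kernel's implementation; the class and method names are made up:

```python
# Toy model of CFS bandwidth control: a global quota pool, refilled at
# each period boundary, hands out slice-sized chunks to cpu-local
# "silos" on demand. An empty grant means the requester is throttled.
class BandwidthPool:
    def __init__(self, quota_us, slice_us):
        self.quota_us = quota_us
        self.slice_us = slice_us
        self.remaining = quota_us   # global, unassigned quota

    def refill(self):
        # Period boundary: replenish back to the configured quota.
        self.remaining = self.quota_us

    def acquire_slice(self):
        # Transfer up to one slice to a cpu-local silo; 0 means throttle.
        grant = min(self.slice_us, self.remaining)
        self.remaining -= grant
        return grant

pool = BandwidthPool(quota_us=20000, slice_us=5000)
grants = [pool.acquire_slice() for _ in range(5)]
# The first four requests each get a full slice; the fifth is throttled.
print(grants)           # [5000, 5000, 5000, 5000, 0]
pool.refill()
print(pool.remaining)   # 20000
```

The point of contention in this patch is only what happens to a granted slice that goes unused before the refill.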
> Management
> @@ -90,6 +91,43 @@ There are two ways in which a group may become throttled:
> In case b) above, even though the child may have runtime remaining it will not
> be allowed to until the parent's runtime is refreshed.
>
> +Real-world behavior of slice non-expiration
> +-------------------------------------------
> +The fact that cpu-local slices do not expire results in some interesting corner
> +cases that should be understood.
> +
> +For applications that are cpu-bound this is a relatively moot point
> +because they will naturally consume the entirety of their quota as well
> +as the entirety of each cpu-local slice in each period. As a result it
> +is expected that nr_periods will roughly equal nr_throttled, and that
> +cpuacct.usage will increase by roughly cfs_quota_us in each period.
> +
> +However, in the worst case of highly-threaded, interactive, non-cpu-bound
> +applications, this non-expiration nuance allows applications to briefly
> +burst past their quota limits by the amount of unused slice on each cpu
> +that the task group is running on. This slight burst requires that quota
> +had been assigned and then not fully used in previous periods. The burst
> +amount will not be transferred between cores. As a result, this mechanism
> +still strictly limits the task group to its quota on average, albeit over
> +a longer time window than the period. This provides a better, more
> +predictable user experience for highly-threaded applications with small
> +quota limits on high-core-count machines. It also eliminates the
> +propensity to throttle these applications while simultaneously using
> +less than their quota of cpu. Put another way, by letting the unused
> +portion of a slice remain valid across periods we have decreased the
> +possibility of wasting quota on cpu-local silos that need less cpu time.
> +
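The "quota on average, over a longer window" point might be worth an example. With made-up numbers (and remembering the carried-over amount is bounded by the unused per-cpu slices, as above):

```python
# Two-period illustration: quota left unused in period 1 can be burned
# in period 2, but the two-period average still equals the quota.
quota_us = 20000
used_p1 = 15000                  # interactive period: part of quota unused
carry_us = quota_us - used_p1    # runtime parked in cpu-local silos
used_p2 = quota_us + carry_us    # busy period: fresh quota plus carry-over
print(used_p2)                   # 25000
print((used_p1 + used_p2) / 2)   # 20000.0
```

So the burst is never new bandwidth, just bandwidth shifted later in time.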
> +The interaction between cpu-bound and non-cpu-bound-interactive applications
> +should also be considered, especially when single core usage hits 100%. If you
> +gave each of these applications half of a cpu-core and they both got scheduled
> +on the same CPU it is theoretically possible that the non-cpu bound application
> +will use up to sched_cfs_bandwidth_slice_us additional quota in some periods,
> +thereby preventing the cpu-bound application from fully using it's quota by
"its quota"
> +that same amount. In these instances it will be up to the CFS algorithm (see
> +sched-design-CFS.txt) to decide which application is chosen to run, as they
> +will both be runnable and have remaining quota. This runtime discrepancy will
> +should made up in the following periods when the interactive application idles.
> +
"discrepancy will be made" or "descrepancy should be made" but not both :)
Otherwise, fwiw,
Acked-by: Phil Auld <pauld@redhat.com>
Cheers,
Phil
--