From: bsegall@google.com
To: Phil Auld <pauld@redhat.com>
Cc: mingo@redhat.com, peterz@infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC]  sched/fair: hard lockup in sched_cfs_period_timer
Date: Tue, 12 Mar 2019 10:29:37 -0700	[thread overview]
Message-ID: <xm26o96gxapq.fsf@bsegall-linux.svl.corp.google.com> (raw)
In-Reply-To: <20190311202536.GK25201@pauld.bos.csb> (Phil Auld's message of "Mon, 11 Mar 2019 16:25:37 -0400")

Phil Auld <pauld@redhat.com> writes:

> On Mon, Mar 11, 2019 at 10:44:25AM -0700 bsegall@google.com wrote:
>> Phil Auld <pauld@redhat.com> writes:
>> 
>> > On Wed, Mar 06, 2019 at 11:25:02AM -0800 bsegall@google.com wrote:
>> >> Phil Auld <pauld@redhat.com> writes:
>> >> 
>> >> > On Tue, Mar 05, 2019 at 12:45:34PM -0800 bsegall@google.com wrote:
>> >> >> Phil Auld <pauld@redhat.com> writes:
>> >> >> 
>> >> >> > Interestingly, if I limit the number of child cgroups to the number
>> >> >> > I'm actually putting processes into (16, down from 2500), the problem
>> >> >> > does not reproduce.
>> >> >> 
>> >> >> That is indeed interesting, and definitely not something we'd want to
>> >> >> matter. (Particularly if it's not root->a->b->c...->throttled_cgroup or
>> >> >> root->throttled->a->...->thread vs root->throttled_cgroup, which is what
>> >> >> I was originally thinking of)
>> >> >> 
>> >> >
>> >> > The locking may be a red herring.
>> >> >
>> >> > The setup is root->throttled->a, where a runs from 1 to 2500. There are
>> >> > 4 threads in each of the first 16 a groups. The parent, throttled, is
>> >> > where cfs_period_us/cfs_quota_us are set.
>> >> >
>> >> > I wonder if the problem is the walk_tg_tree_from() call in unthrottle_cfs_rq(). 
>> >> >
>> >> > The distribute_cfs_runtime looks to be O(n * m), where n is the number
>> >> > of throttled cfs_rqs and m is the number of child cgroups. But I'm not
>> >> > completely clear on how the hierarchical cgroups play together here.
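>> >> >
>> >> > Roughly, the shape I see (a hand sketch from my reading of fair.c,
>> >> > so take the details with a grain of salt):
>> >> >
>> >> > 	/* period timer: O(n) pass over the throttled cfs_rqs... */
>> >> > 	list_for_each_entry_rcu(cfs_rq, &cfs_b->throttled_cfs_rq,
>> >> > 				throttled_list) {
>> >> > 		cfs_rq->runtime_remaining += runtime;
>> >> > 		if (cfs_rq->runtime_remaining > 0)
>> >> > 			/* ...and each unthrottle is O(m):
>> >> > 			 * walk_tg_tree_from() visits every
>> >> > 			 * descendant tg of this cfs_rq */
>> >> > 			unthrottle_cfs_rq(cfs_rq);
>> >> > 	}
>> >> >
>> >> > With 2500 children under the throttled parent, that's a lot of work
>> >> > inside the timer.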
>> >> >
>> >> > I'll pull on this thread some. 
>> >> >
>> >> > Thanks for your input.
>> >> >
>> >> >
>> >> > Cheers,
>> >> > Phil
>> >> 
>> >> Yeah, that isn't under the cfs_b lock, but is still part of distribute
>> >> (and under rq lock, which might also matter). I was thinking too much
>> >> about just the cfs_b regions. I'm not sure there's any good general
>> >> optimization there.
>> >>
>> >
>> > It's really an edge case, but the watchdog NMI is pretty painful.
>> >
>> >> I suppose cfs_rqs (tgs/cfs_bs?) could have "nearest
>> >> ancestor with a quota" pointer and ones with quota could have
>> >> "descendants with quota" list, parallel to the children/parent lists of
>> >> tgs. Then throttle/unthrottle would only have to visit these lists, and
>> >> child cgroups/cfs_rqs without their own quotas would just check
>> >> cfs_rq->nearest_quota_cfs_rq->throttle_count. throttled_clock_task_time
>> >> can also probably be tracked there.
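>> >> 
>> >> Very roughly, the shape I have in mind (sketch only; the field names
>> >> are made up, nothing like this exists today):
>> >> 
>> >> 	struct cfs_rq {
>> >> 		/* ... existing fields ... */
>> >> 		/* nearest ancestor cfs_rq whose tg has a quota set */
>> >> 		struct cfs_rq		*nearest_quota_cfs_rq;
>> >> 		/* if our tg has a quota: cfs_rqs that point at us */
>> >> 		struct list_head	quota_descendants;
>> >> 		struct list_head	quota_descendant_node;
>> >> 	};
>> >> 
>> >> Throttle/unthrottle would then walk quota_descendants instead of the
>> >> whole tg subtree.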
>> >
>> > That seems like it would add a lot of complexity for this edge case. Maybe
>> > it would be acceptable to use the safety valve like my first example, or
>> > something like the below, which tunes the period up until it no longer
>> > overruns. The downside of this one is that it changes the user's
>> > settings, but that could be preferable to an NMI crash.
>> 
>> Yeah, I'm not sure what solution is best here, but one of the solutions
>> should be done.
>> 
>> >
>> > Cheers,
>> > Phil
>> >
>> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> > index 310d0637fe4b..78f9e28adc7b 100644
>> > --- a/kernel/sched/fair.c
>> > +++ b/kernel/sched/fair.c
>> > @@ -4859,16 +4859,42 @@ static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
>> >  	return HRTIMER_NORESTART;
>> >  }
>> >  
>> > +extern const u64 max_cfs_quota_period;
>> > +s64 cfs_quota_period_autotune_thresh = 100 * NSEC_PER_MSEC;
>> > +int cfs_quota_period_autotune_shift  = 4; /* 100 / 16 = 6.25% */
>> 
>> Letting it spin for 100ms and then only increasing by 6% seems extremely
>> generous. If we went this route I'd probably say "after looping N
>> times, set the period to time taken / N + X%" where N is like 8 or
>> something. I think I'd probably prefer something like this to the
>> previous "just abort and let it happen again next interrupt" one.
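>> 
>> I.e., something like (untested sketch; the constants and the
>> "start"/"now" timestamps bracketing the loop are illustrative):
>> 
>> 	/* count iterations of the sched_cfs_period_timer() loop */
>> 	if (++count >= 8) {
>> 		u64 new = div64_u64(now - start, 8);	/* avg ns per loop */
>> 		new += new >> 3;			/* ~12% headroom */
>> 		cfs_b->period = ns_to_ktime(min(new, max_cfs_quota_period));
>> 		count = 0;
>> 	}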
>
> Okay. I'll try to spin something up that does this. It may be a little 
> trickier to keep the quota proportional to the new period. I think that's 
> important since we'll be changing the user's setting.
>
> Do you mean to have it break when it hits N and recalculate the period, or
> reset the counter and keep going?
>

In theory you should be fine doing it once more, I think? And yeah,
keeping the quota correct is a bit more annoying given that you have to
use fixed-point math.
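
Something like this is all I mean by fixed point (sketch only; one way
is to apply the same scale factor to both sides, so the ratio is
preserved exactly):

	/* bump period and quota by the same fixed-point factor (+12.5%),
	 * keeping quota/period constant without any floating point */
	new_period = old_period + (old_period >> 3);
	new_quota  = old_quota  + (old_quota  >> 3);

with a clamp against max_cfs_quota_period on the period side.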


Thread overview: 17+ messages
2019-03-01 14:52 [RFC] sched/fair: hard lockup in sched_cfs_period_timer Phil Auld
2019-03-04 18:13 ` bsegall
2019-03-04 19:05   ` Phil Auld
2019-03-05 18:49     ` bsegall
2019-03-05 20:05       ` Phil Auld
2019-03-05 20:45         ` bsegall
2019-03-06 16:23           ` Phil Auld
2019-03-06 19:25             ` bsegall
2019-03-09 20:33               ` Phil Auld
2019-03-11 17:44                 ` bsegall
2019-03-11 20:25                   ` Phil Auld
2019-03-12 13:57                     ` Phil Auld
2019-03-13 17:44                       ` bsegall
2019-03-13 18:50                         ` Phil Auld
2019-03-13 20:26                           ` bsegall
2019-03-13 21:10                             ` Phil Auld
2019-03-12 17:29                     ` bsegall [this message]
