From: bsegall@google.com
To: Xunlei Pang <xlpang@linux.alibaba.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>,
	linux-kernel@vger.kernel.org, Ben Segall <bsegall@google.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH] sched/fair: sync expires_seq in distribute_cfs_runtime()
Date: Mon, 30 Jul 2018 10:32:58 -0700
Message-ID: <xm26k1pc8w8l.fsf@bsegall-linux.svl.corp.google.com>
In-Reply-To: <ef0e1185-51d6-0707-cda4-6f2ea987b651@linux.alibaba.com> (Xunlei Pang's message of "Mon, 30 Jul 2018 13:28:51 +0800")

Xunlei Pang <xlpang@linux.alibaba.com> writes:

> Hi Cong,
>
> On 7/28/18 8:24 AM, Cong Wang wrote:
>> Each time we sync cfs_rq->runtime_expires with cfs_b->runtime_expires,
>> we should sync its ->expires_seq too. However, this sync is missing
>> from distribute_cfs_runtime(), notably on the slack timer call path.
>
> I don't think it's a problem, as expires_seq will get synced in
> assign_cfs_rq_runtime().
>
> Thanks,
> Xunlei

It does seem unlikely to actually come up in practice, since the cfs_rq
would have to not run at all until the period was expired locally but
not yet globally, but there's no reason not to fix it.
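
To spell out the failure mode: expire_cfs_rq_runtime() uses the seq
comparison from 512ac999d275 to tell local clock drift apart from a
real global expiry. A trimmed sketch of that check (paraphrased from
kernel/sched/fair.c of that era, not verbatim) looks like:

	static void expire_cfs_rq_runtime(struct cfs_rq *cfs_rq)
	{
		struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);

		/* nothing to do while the local deadline is still ahead */
		if (likely((s64)(rq_clock(rq_of(cfs_rq)) -
				 cfs_rq->runtime_expires) < 0))
			return;

		if (cfs_rq->runtime_remaining < 0)
			return;

		if (cfs_rq->expires_seq == cfs_b->expires_seq) {
			/* same period: the local clock merely drifted ahead */
			cfs_rq->runtime_expires += TICK_NSEC;
		} else {
			/* seq mismatch: the global deadline really advanced */
			cfs_rq->runtime_remaining = 0;
		}
	}

With the stale seq left behind by distribute_cfs_runtime(), a fast
local clock takes the else branch and throws away runtime that is
still valid globally, which is exactly the drift case 512ac999d275
was meant to handle.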


>
>> 
>> Fixes: 512ac999d275 ("sched/fair: Fix bandwidth timer clock drift condition")
>> Cc: Xunlei Pang <xlpang@linux.alibaba.com>
>> Cc: Ben Segall <bsegall@google.com>
>> Cc: Linus Torvalds <torvalds@linux-foundation.org>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Thomas Gleixner <tglx@linutronix.de>
>> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
>> ---
>>  kernel/sched/fair.c | 12 ++++++++----
>>  1 file changed, 8 insertions(+), 4 deletions(-)
>> 
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 2f0a0be4d344..910c50db3d74 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -4857,7 +4857,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
>>  }
>>  
>>  static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b,
>> -		u64 remaining, u64 expires)
>> +		u64 remaining, u64 expires, int expires_seq)
>>  {
>>  	struct cfs_rq *cfs_rq;
>>  	u64 runtime;
>> @@ -4880,6 +4880,7 @@ static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b,
>>  
>>  		cfs_rq->runtime_remaining += runtime;
>>  		cfs_rq->runtime_expires = expires;
>> +		cfs_rq->expires_seq = expires_seq;
>>  
>>  		/* we check whether we're throttled above */
>>  		if (cfs_rq->runtime_remaining > 0)
>> @@ -4905,7 +4906,7 @@ static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b,
>>  static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
>>  {
>>  	u64 runtime, runtime_expires;
>> -	int throttled;
>> +	int throttled, expires_seq;
>>  
>>  	/* no need to continue the timer with no bandwidth constraint */
>>  	if (cfs_b->quota == RUNTIME_INF)
>> @@ -4933,6 +4934,7 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
>>  	cfs_b->nr_throttled += overrun;
>>  
>>  	runtime_expires = cfs_b->runtime_expires;
>> +	expires_seq = cfs_b->expires_seq;
>>  
>>  	/*
>>  	 * This check is repeated as we are holding onto the new bandwidth while
>> @@ -4946,7 +4948,7 @@ static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun)
>>  		raw_spin_unlock(&cfs_b->lock);
>>  		/* we can't nest cfs_b->lock while distributing bandwidth */
>>  		runtime = distribute_cfs_runtime(cfs_b, runtime,
>> -						 runtime_expires);
>> +						 runtime_expires, expires_seq);
>>  		raw_spin_lock(&cfs_b->lock);
>>  
>>  		throttled = !list_empty(&cfs_b->throttled_cfs_rq);
>> @@ -5055,6 +5057,7 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq)
>>  static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
>>  {
>>  	u64 runtime = 0, slice = sched_cfs_bandwidth_slice();
>> +	int expires_seq;
>>  	u64 expires;
>>  
>>  	/* confirm we're still not at a refresh boundary */
>> @@ -5068,12 +5071,13 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
>>  		runtime = cfs_b->runtime;
>>  
>>  	expires = cfs_b->runtime_expires;
>> +	expires_seq = cfs_b->expires_seq;
>>  	raw_spin_unlock(&cfs_b->lock);
>>  
>>  	if (!runtime)
>>  		return;
>>  
>> -	runtime = distribute_cfs_runtime(cfs_b, runtime, expires);
>> +	runtime = distribute_cfs_runtime(cfs_b, runtime, expires, expires_seq);
>>  
>>  	raw_spin_lock(&cfs_b->lock);
>>  	if (expires == cfs_b->runtime_expires)
>> 
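
For reference, the sync Xunlei is pointing at happens at the tail of
assign_cfs_rq_runtime(), roughly like this (again paraphrased, not
verbatim):

	expires_seq = cfs_b->expires_seq;
	expires = cfs_b->runtime_expires;
	raw_spin_unlock(&cfs_b->lock);

	cfs_rq->runtime_remaining += amount;
	/* pick up the global deadline if it moved since we last looked */
	if (cfs_rq->expires_seq != expires_seq) {
		cfs_rq->expires_seq = expires_seq;
		cfs_rq->runtime_expires = expires;
	}

so a stale seq only survives until the cfs_rq next refills from the
global pool, which is why the window is narrow, but not closed.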

Thread overview: 11+ messages
2018-07-28  0:24 [PATCH] sched/fair: sync expires_seq in distribute_cfs_runtime() Cong Wang
2018-07-30  5:28 ` Xunlei Pang
2018-07-30 17:32   ` bsegall [this message]
2018-07-30 17:55   ` Cong Wang
2018-07-31 14:58     ` Xunlei Pang
2018-07-31 17:13       ` bsegall
2018-07-31 20:55         ` Cong Wang
2018-08-01  3:24           ` Xunlei Pang
2018-08-03 18:57             ` Cong Wang
2018-08-01 17:17           ` bsegall
2018-08-03 21:56             ` Cong Wang
