Date: Wed, 6 Jul 2016 13:53:17 +0200
From: Frederic Weisbecker
To: Peter Zijlstra
Cc: LKML, Ingo Molnar, Mike Galbraith, Thomas Gleixner
Subject: Re: [PATCH 2/3] sched: Unloop sched avg decaying
Message-ID: <20160706115316.GA10758@lerouge>
References: <1465918082-27005-1-git-send-email-fweisbec@gmail.com>
 <1465918082-27005-3-git-send-email-fweisbec@gmail.com>
 <20160614155842.GJ30921@twins.programming.kicks-ass.net>
 <20160630125225.GA32568@lerouge>
 <20160630132037.GE30921@twins.programming.kicks-ass.net>
In-Reply-To: <20160630132037.GE30921@twins.programming.kicks-ass.net>

On Thu, Jun 30, 2016 at 03:20:37PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 30, 2016 at 02:52:26PM +0200, Frederic Weisbecker wrote:
> > On Tue, Jun 14, 2016 at 05:58:42PM +0200, Peter Zijlstra wrote:
> > >
> > > Why not add the division to the nohz exit path only?
> >
> > It would be worse I think because we may exit much more often from nohz
> > than we reach a sched_avg_period().
> >
> > So the only safe optimization I can do for now is:
>
> How about something like this then?
>
> ---
>
> kernel/sched/core.c | 19 +++++++++++++++++--
> 1 file changed, 17 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 3387e4f14fc9..fd1ae4c4105f 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -665,9 +665,23 @@ bool sched_can_stop_tick(struct rq *rq)
>  
>  void sched_avg_update(struct rq *rq)
>  {
> -	s64 period = sched_avg_period();
> +	s64 delta, period = sched_avg_period();
>  
> -	while ((s64)(rq_clock(rq) - rq->age_stamp) > period) {
> +	delta = (s64)(rq_clock(rq) - rq->age_stamp);
> +	if (likely(delta < period))
> +		return;
> +
> +	if (unlikely(delta > 3*period)) {
> +		int pending;
> +		u64 rem;
> +
> +		pending = div64_u64_rem(delta, period, &rem);
> +		rq->age_stamp += delta - rem;
> +		rq->rt_avg >>= pending;
> +		return;
> +	}
> +
> +	while (delta > period) {
>  		/*
>  		 * Inline assembly required to prevent the compiler
>  		 * optimising this loop into a divmod call.
> @@ -675,6 +689,7 @@ void sched_avg_update(struct rq *rq)
>  		 */
>  		asm("" : "+rm" (rq->age_stamp));
>  		rq->age_stamp += period;
> +		delta -= period;
>  		rq->rt_avg /= 2;
>  	}
>  }

Makes sense. I'm going to run some tests. We might want to precompute 3*period, though.

Thanks.