Message-Id: <20160613075929.773295596@linutronix.de>
User-Agent: quilt/0.63-1
Date: Mon, 13 Jun 2016 08:41:05 -0000
From: Thomas Gleixner
To: LKML
Cc: Ingo Molnar, Peter Zijlstra, "Paul E. McKenney", Eric Dumazet,
    Frederic Weisbecker, Chris Mason, Arjan van de Ven, rt@linutronix.de,
    Anna-Maria Gleixner
Subject: [patch 19/20] timer: Split out index calculation
References: <20160613070440.950649741@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-15
Content-Disposition: inline; filename=timers--Split-out-index-calculation.patch
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

From: Anna-Maria Gleixner

For further optimizations we need to separate index calculation and
queueing. No functional change.

Signed-off-by: Anna-Maria Gleixner
Signed-off-by: Thomas Gleixner
---
 kernel/time/timer.c |   41 +++++++++++++++++++++++++++++++++--------
 1 file changed, 33 insertions(+), 8 deletions(-)

--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -456,12 +456,9 @@ static inline unsigned calc_index(unsign
 	return base + (((expires + gran) >> sft) & LVL_MASK);
 }
 
-static void
-__internal_add_timer(struct timer_base *base, struct timer_list *timer)
+static int calc_wheel_index(unsigned long expires, unsigned long clk)
 {
-	unsigned long expires = timer->expires;
-	unsigned long delta = expires - base->clk;
-	struct hlist_head *vec;
+	unsigned long delta = expires - clk;
 	unsigned int idx;
 
 	if (delta < LVL1_TSTART) {
@@ -475,7 +472,7 @@ static void
 	} else if (delta < LVL5_TSTART) {
 		idx = calc_index(expires, LVL4_GRAN, LVL4_SHIFT, LVL4_OFFS);
 	} else if ((long) delta < 0) {
-		idx = base->clk & LVL_MASK;
+		idx = clk & LVL_MASK;
 	} else {
 		/*
 		 * The long timeouts go into the last array level. They
@@ -485,6 +482,18 @@ static void
 		idx = calc_index(expires, LVL5_GRAN, LVL5_SHIFT, LVL5_OFFS);
 	}
 
+	return idx;
+}
+
+/*
+ * Enqueue the timer into the hash bucket, mark it pending in
+ * the bitmap and store the index in the timer flags.
+ */
+static void enqueue_timer(struct timer_base *base, struct timer_list *timer,
+			  unsigned int idx)
+{
+	struct hlist_head *vec;
+
 	/*
 	 * Enqueue the timer into the array bucket, mark it pending in
 	 * the bitmap and store the index in the timer flags.
@@ -495,10 +504,19 @@ static void
 	timer_set_idx(timer, idx);
 }
 
-static void internal_add_timer(struct timer_base *base, struct timer_list *timer)
+static void
+__internal_add_timer(struct timer_base *base, struct timer_list *timer)
 {
-	__internal_add_timer(base, timer);
+	unsigned long expires = timer->expires;
+	unsigned int idx;
+
+	idx = calc_wheel_index(expires, base->clk);
+	enqueue_timer(base, timer, idx);
+}
 
+static void
+trigger_dyntick_cpu(struct timer_base *base, struct timer_list *timer)
+{
 	/*
 	 * We might have to IPI the remote CPU if the base is idle and the
 	 * timer is not deferrable. If the other cpu is on the way to idle
@@ -523,6 +541,13 @@ static void internal_add_timer(struct ti
 		wake_up_nohz_cpu(base->cpu);
 }
 
+static void
+internal_add_timer(struct timer_base *base, struct timer_list *timer)
+{
+	__internal_add_timer(base, timer);
+	trigger_dyntick_cpu(base, timer);
+}
+
 #ifdef CONFIG_TIMER_STATS
 void __timer_stats_timer_set_start_info(struct timer_list *timer, void *addr)
 {
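
As an aside for readers following along, below is a minimal, self-contained
userspace sketch of the split the patch performs: the wheel index becomes a
pure function of the expiry value and the wheel clock, and only the separate
enqueue step touches the base. The toy_* names, the 3-level/8-bucket geometry
and the one-timer-per-bucket "wheel" are invented for the illustration; they
are not the kernel's LVL_* constants, hlist buckets or pending bitmap.

/*
 * Illustrative only: a tiny userspace model of the split done in this
 * patch.  toy_calc_wheel_index() is a pure function of (expires, clk);
 * toy_enqueue_timer() is the only step that modifies the base.
 */
#include <stdio.h>

#define TOY_LVL_BITS	3			/* 8 buckets per level */
#define TOY_LVL_SIZE	(1 << TOY_LVL_BITS)
#define TOY_LVL_MASK	(TOY_LVL_SIZE - 1)
#define TOY_LVL_DEPTH	3

struct toy_timer {
	unsigned long		expires;
	unsigned int		idx;		/* mirrors timer_set_idx() */
};

struct toy_base {
	unsigned long		clk;
	struct toy_timer	*buckets[TOY_LVL_DEPTH * TOY_LVL_SIZE];
};

/* Pure: needs only expires and the wheel clock, never the base itself. */
static unsigned int toy_calc_wheel_index(unsigned long expires, unsigned long clk)
{
	unsigned long delta = expires - clk;
	unsigned int lvl;

	/* Pick the first level whose range still covers the delta. */
	for (lvl = 0; lvl < TOY_LVL_DEPTH - 1; lvl++) {
		if (delta < ((unsigned long)TOY_LVL_SIZE << (lvl * TOY_LVL_BITS)))
			break;
	}
	return lvl * TOY_LVL_SIZE +
	       ((expires >> (lvl * TOY_LVL_BITS)) & TOY_LVL_MASK);
}

/* Queueing step: the only place the base is modified. */
static void toy_enqueue_timer(struct toy_base *base, struct toy_timer *timer,
			      unsigned int idx)
{
	base->buckets[idx] = timer;
	timer->idx = idx;
}

static void toy_add_timer(struct toy_base *base, struct toy_timer *timer)
{
	unsigned int idx = toy_calc_wheel_index(timer->expires, base->clk);

	toy_enqueue_timer(base, timer, idx);
}

int main(void)
{
	struct toy_base base = { .clk = 1000 };
	struct toy_timer t = { .expires = 1042 };

	toy_add_timer(&base, &t);
	printf("timer expiring at %lu queued in bucket %u\n", t.expires, t.idx);
	return 0;
}

Compiling and running this just prints the bucket the toy timer lands in; the
property it mirrors is that the index calculation no longer needs access to
the base, which is the separation the changelog above refers to.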