Date: Thu, 9 Apr 2015 08:38:38 +0200
From: Ingo Molnar
To: Thomas Gleixner
Cc: Viresh Kumar, Ingo Molnar, Peter Zijlstra,
	linaro-kernel@lists.linaro.org, linux-kernel@vger.kernel.org,
	Preeti U Murthy
Subject: Re: [PATCH] hrtimer: Replace cpu_base->active_bases with a direct check of the active list
Message-ID: <20150409063838.GC14259@gmail.com>
References: <20150409062841.GB14259@gmail.com>
In-Reply-To: <20150409062841.GB14259@gmail.com>

* Ingo Molnar wrote:

> This would speed up various hrtimer primitives like
> hrtimer_remove()/add and simplify the code. It would be a net code
> shrink as well.
>
> Totally untested patch below. It gives:
>
>    text    data     bss     dec     hex filename
>    7502     427       0    7929    1ef9 hrtimer.o.before
>    7422     427       0    7849    1ea9 hrtimer.o.after
>
> and half of that code removal is from hot paths.
>
> This would simplify the followup step of skipping over inactive bases
> as well.

The followup step is attached below (untested as well). Note that all
other iterations already had a check for active.next, so the patch
doesn't even add any bloat:

   text    data     bss     dec     hex filename
   7422     427       0    7849    1ea9 hrtimer.o.before
   7422     427       0    7849    1ea9 hrtimer.o.after

(I did a rename within migrate_hrtimers(), because it used 'cpu_base'
vs. 'clock_base' inconsistently, in a manner I found confusing.)

I'd also suggest removing the timerqueue_getnext() obfuscation: it
'sounds' complex, but in reality it's a simple dereference of
active.next. I suspect this helper is what triggered the rather
pointless maintenance of active_bases in the first place.

Thanks,

	Ingo

---
 kernel/time/hrtimer.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

Index: tip/kernel/time/hrtimer.c
===================================================================
--- tip.orig/kernel/time/hrtimer.c
+++ tip/kernel/time/hrtimer.c
@@ -1660,29 +1660,35 @@ static void migrate_hrtimer_list(struct
 
 static void migrate_hrtimers(int scpu)
 {
-	struct hrtimer_cpu_base *old_base, *new_base;
+	struct hrtimer_cpu_base *old_cpu_base, *new_cpu_base;
 	int i;
 
 	BUG_ON(cpu_online(scpu));
 	tick_cancel_sched_timer(scpu);
 
 	local_irq_disable();
-	old_base = &per_cpu(hrtimer_bases, scpu);
-	new_base = this_cpu_ptr(&hrtimer_bases);
+	old_cpu_base = &per_cpu(hrtimer_bases, scpu);
+	new_cpu_base = this_cpu_ptr(&hrtimer_bases);
 	/*
 	 * The caller is globally serialized and nobody else
 	 * takes two locks at once, deadlock is not possible.
 	 */
-	raw_spin_lock(&new_base->lock);
-	raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
+	raw_spin_lock(&new_cpu_base->lock);
+	raw_spin_lock_nested(&old_cpu_base->lock, SINGLE_DEPTH_NESTING);
 
 	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
-		migrate_hrtimer_list(&old_base->clock_base[i],
-				     &new_base->clock_base[i]);
+		struct hrtimer_clock_base *old_base = old_cpu_base->clock_base + i;
+		struct hrtimer_clock_base *new_base;
+
+		if (!old_base->active.next)
+			continue;
+
+		new_base = new_cpu_base->clock_base + i;
+		migrate_hrtimer_list(old_base, new_base);
 	}
 
-	raw_spin_unlock(&old_base->lock);
-	raw_spin_unlock(&new_base->lock);
+	raw_spin_unlock(&old_cpu_base->lock);
+	raw_spin_unlock(&new_cpu_base->lock);
 
 	/* Check, if we got expired work to do */
 	__hrtimer_peek_ahead_timers();
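
For reference, here is a minimal sketch of what timerqueue_getnext()
boils down to, assuming the 2015-era layout of
include/linux/timerqueue.h (structs abridged to the relevant fields,
not the verbatim header): the head caches the leftmost, i.e.
earliest-expiring, node, so the accessor is a single pointer
dereference.

#include <linux/rbtree.h>	/* struct rb_node, struct rb_root */
#include <linux/ktime.h>	/* ktime_t */

struct timerqueue_node {
	struct rb_node node;	/* entry in the rb-tree of queued timers */
	ktime_t expires;	/* absolute expiry time, the sort key */
};

struct timerqueue_head {
	struct rb_root head;		/* rb-tree of queued timers */
	struct timerqueue_node *next;	/* cached leftmost node, NULL if empty */
};

static inline
struct timerqueue_node *timerqueue_getnext(struct timerqueue_head *head)
{
	return head->next;	/* the whole 'complexity' of the helper */
}

So for a given clock base, timerqueue_getnext(&base->active) and
base->active.next are the same test, which is why the loop above can
skip empty bases with a plain !old_base->active.next check instead of
consulting the active_bases bitfield.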