From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932067AbdI0OWm (ORCPT ); Wed, 27 Sep 2017 10:22:42 -0400
Received: from Galois.linutronix.de ([146.0.238.70]:53335 "EHLO
	Galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752257AbdI0OWl (ORCPT );
	Wed, 27 Sep 2017 10:22:41 -0400
Date: Wed, 27 Sep 2017 16:22:39 +0200 (CEST)
From: Anna-Maria Gleixner
To: Peter Zijlstra
cc: LKML, Ingo Molnar, Christoph Hellwig, keescook@chromium.org,
	John Stultz, Thomas Gleixner
Subject: Re: [PATCH 17/25] hrtimer: Implementation of softirq hrtimer handling
In-Reply-To: <20170926150311.ksxokdvvyqu36sud@hirez.programming.kicks-ass.net>
Message-ID: 
References: <20170831105725.809317030@linutronix.de> <20170831105826.921969670@linutronix.de> <20170926150311.ksxokdvvyqu36sud@hirez.programming.kicks-ass.net>
User-Agent: Alpine 2.20 (DEB 67 2015-01-07)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 26 Sep 2017, Peter Zijlstra wrote:

> On Thu, Aug 31, 2017 at 12:23:42PM -0000, Anna-Maria Gleixner wrote:
> >  static void __run_hrtimer(struct hrtimer_cpu_base *cpu_base,
> >  			  struct hrtimer_clock_base *base,
> > -			  struct hrtimer *timer, ktime_t *now)
> > +			  struct hrtimer *timer, ktime_t *now,
> > +			  bool hardirq)
> >  {
> >  	enum hrtimer_restart (*fn)(struct hrtimer *);
> >  	int restart;
> > @@ -1241,11 +1298,19 @@ static void __run_hrtimer(struct hrtimer
> >  	 * protected against migration to a different CPU even if the lock
> >  	 * is dropped.
> >  	 */
> > -	raw_spin_unlock(&cpu_base->lock);
> > +	if (hardirq)
> > +		raw_spin_unlock(&cpu_base->lock);
> > +	else
> > +		raw_spin_unlock_irq(&cpu_base->lock);
> > +
> >  	trace_hrtimer_expire_entry(timer, now);
> >  	restart = fn(timer);
> >  	trace_hrtimer_expire_exit(timer);
> > -	raw_spin_lock(&cpu_base->lock);
> > +
> > +	if (hardirq)
> > +		raw_spin_lock(&cpu_base->lock);
> > +	else
> > +		raw_spin_lock_irq(&cpu_base->lock);
> 
> That's just nasty...

I know, and Thomas was unhappy about it as well, but we did not come up
with a better solution. The nasty alternative is:

 static void __run_hrtimer(struct hrtimer_cpu_base *cpu_base,
 			  struct hrtimer_clock_base *base,
-			  struct hrtimer *timer, ktime_t *now)
+			  struct hrtimer *timer, ktime_t *now,
+			  unsigned long flags)
...
-	raw_spin_unlock(&cpu_base->lock);
+	raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
...
-	raw_spin_lock(&cpu_base->lock);
+	raw_spin_lock_irq(&cpu_base->lock);

and hand in flags from the call sites via local_save_flags(). We wanted
to avoid the pointless lock_irq for the interrupt context, but yes, the
conditional is equally bad.

	Anna-Maria