From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754462AbbBBRxt (ORCPT );
	Mon, 2 Feb 2015 12:53:49 -0500
Received: from mail-wg0-f47.google.com ([74.125.82.47]:38278 "EHLO
	mail-wg0-f47.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753282AbbBBRxs (ORCPT );
	Mon, 2 Feb 2015 12:53:48 -0500
Date: Mon, 2 Feb 2015 18:53:45 +0100
From: Frederic Weisbecker
To: Peter Zijlstra
Cc: Ingo Molnar, LKML, Steven Rostedt, Linus Torvalds
Subject: Re: [PATCH 3/4] sched: Pull preemption disablement to __schedule() caller
Message-ID: <20150202175343.GD11054@lerouge>
References: <1422404652-29067-1-git-send-email-fweisbec@gmail.com>
	<1422404652-29067-4-git-send-email-fweisbec@gmail.com>
	<20150128155044.GJ23038@twins.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20150128155044.GJ23038@twins.programming.kicks-ass.net>
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jan 28, 2015 at 04:50:44PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 28, 2015 at 01:24:11AM +0100, Frederic Weisbecker wrote:
> > +++ b/kernel/sched/core.c
> > @@ -2760,7 +2760,6 @@ static void __sched __schedule(void)
> >  	struct rq *rq;
> >  	int cpu;
> >
> > -	preempt_disable();
>
> Implies barrier();
>
> >  	cpu = smp_processor_id();
> >  	rq = cpu_rq(cpu);
> >  	rcu_note_context_switch();
> > @@ -2822,8 +2821,6 @@ static void __sched __schedule(void)
> >  	raw_spin_unlock_irq(&rq->lock);
> >
> >  	post_schedule(rq);
>
> implies barrier();
>
> > -
> > -	sched_preempt_enable_no_resched();
> >  }
> >
> >  static inline void sched_submit_work(struct task_struct *tsk)
>
> > @@ -2883,9 +2882,9 @@ void __sched schedule_preempt_disabled(void)
> >  static void preempt_schedule_common(void)
> >  {
> >  	do {
> > -		preempt_count_add(PREEMPT_ACTIVE);
> > +		preempt_count_add(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
>
> Does _NOT_ imply barrier();
>
> >  		__schedule();
> > -		preempt_count_sub(PREEMPT_ACTIVE);
>
> idem.

It looks like preempt_count_add/inc() is mostly used when entering a
context that we want to be seen right away (thus wants a barrier()
after it), and preempt_count_sub/dec() mostly wants previous work to be
visible before re-enabling interrupts, preemption, etc... (thus wants a
barrier() before it).

So maybe these functions (the non-underscored ones) should imply a
barrier() themselves rather than their callers (preempt_disable() and
others). Inline functions instead of macros would do the trick (if the
header hell lets us do that).

Note the underscored implementations are all inline currently, so this
happens to work by chance for direct calls to preempt_count_add/sub()
outside preempt_disable(). If the non-underscored callers are turned
into inlines too, I don't expect performance issues.

What do you think, does it make sense?

>
> > +		preempt_count_sub(PREEMPT_ACTIVE + PREEMPT_CHECK_OFFSET);
> >
> >  		/*
> >  		 * Check again in case we missed a preemption opportunity