Date: Mon, 12 Sep 2016 14:54:03 +0200
From: Peter Zijlstra
To: Nicholas Piggin
Cc: Linus Torvalds, Will Deacon, Oleg Nesterov, Paul McKenney,
 Benjamin Herrenschmidt, Michael Ellerman, linux-kernel@vger.kernel.org,
 Ingo Molnar, Alan Stern
Subject: Re: Question on smp_mb__before_spinlock

On Mon, Sep 12, 2016 at 12:27:08PM +1000, Nicholas Piggin wrote:
> On Wed, 7 Sep 2016 15:23:54 +0200
> Peter Zijlstra wrote:
>
> > Interesting idea.
> >
> > So I'm not a fan of that raw_spin_lock wrapper, since that would end
> > up with a lot more boiler-plate code than just the one extra barrier.
>
> #ifndef sched_ctxsw_raw_spin_lock
> #define sched_ctxsw_raw_spin_lock(lock) raw_spin_lock(lock)
> #endif
>
> #define sched_ctxsw_raw_spin_lock(lock) \
> 	do { smp_mb(); raw_spin_lock(lock); } while (0)

I was thinking you wanted to avoid the lwsync in arch_spin_lock()
entirely, at which point you'll grow more layers. Because then you get
an arch_spin_lock_mb() or something, and then you'll have to do the
raw_spin_lock wrappery for that.

Or am I missing the point of having the raw_spin_lock wrapper, as
opposed to the extra barrier after it? Afaict the benefit of having
that wrapper is that you can avoid issuing multiple barriers.

> > But moving MMIO/DMA/TLB etc. barriers into this spinlock might not
> > be a good idea, since those are typically fairly heavy barriers, and
> > it's quite common to call schedule() without ending up in
> > switch_to().
>
> That's true I guess, but if we already have the arch-specific smp_mb__
> specifically for this context-switch code, and you are asking for them
> to implement a *cacheable* memory barrier vs. migration, then I see no
> reason not to allow them to implement the uncacheable case as well.
>
> You make a good point about schedule() without switch_to(), but
> architectures will still have no less flexibility than they do now.

Ah, so you're saying make it optional where they put it? I was
initially thinking you wanted to add it to the list of requirements.

Sure, optional works.
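
---

To make the wrapper/barrier trade-off above concrete, here is a minimal
userspace sketch of the proposed layering. It stands in for the kernel
primitives with C11 atomics, and every name in it (my_raw_spinlock_t,
my_raw_spin_lock, ctxsw_raw_spin_lock, HAVE_ARCH_CTXSW_LOCK) is made up
for illustration -- this is not the actual kernel API nor the exact
patch under discussion. It only models the point argued above: with a
wrapper, an architecture can take the lock with a relaxed atomic and
issue a single full fence, instead of paying for both a full barrier
and the lock's own acquire barrier (powerpc's lwsync).

/*
 * Userspace model only; all names are illustrative, not kernel API.
 * Build: cc -std=c11 ctxsw_lock.c
 */
#include <stdatomic.h>
#include <stdio.h>

typedef struct { atomic_flag locked; } my_raw_spinlock_t;

/* Models arch_spin_lock(): test-and-set with acquire ordering
 * (the lwsync-style barrier lives inside this acquire). */
static void my_raw_spin_lock(my_raw_spinlock_t *lock)
{
        while (atomic_flag_test_and_set_explicit(&lock->locked,
                                                 memory_order_acquire))
                ; /* spin */
}

static void my_raw_spin_unlock(my_raw_spinlock_t *lock)
{
        atomic_flag_clear_explicit(&lock->locked, memory_order_release);
}

#ifdef HAVE_ARCH_CTXSW_LOCK
/* "Arch" override, modelling the arch_spin_lock_mb() idea: take the
 * lock with a relaxed test-and-set, then issue one full fence. The
 * fence both acts as the acquire for the critical section and orders
 * it against everything before the lock -- one barrier total, and no
 * lwsync-equivalent in the lock itself. */
#define ctxsw_raw_spin_lock(lock) do {                                  \
        while (atomic_flag_test_and_set_explicit(&(lock)->locked,       \
                                                 memory_order_relaxed)) \
                ;                                                       \
        atomic_thread_fence(memory_order_seq_cst); /* smp_mb() */       \
} while (0)
#else
/* Generic fallback: full fence followed by the ordinary acquire lock.
 * Correct everywhere, but on weakly ordered machines it issues two
 * barriers -- the cost the wrapper lets an arch avoid. */
#define ctxsw_raw_spin_lock(lock) do {                                  \
        atomic_thread_fence(memory_order_seq_cst); /* smp_mb() */       \
        my_raw_spin_lock(lock);                                         \
} while (0)
#endif

int main(void)
{
        my_raw_spinlock_t rq_lock = { ATOMIC_FLAG_INIT };

        ctxsw_raw_spin_lock(&rq_lock);
        puts("lock taken with full-barrier semantics");
        my_raw_spin_unlock(&rq_lock);
        return 0;
}

Built with -DHAVE_ARCH_CTXSW_LOCK it models the one-barrier arch path;
without it, the portable two-barrier path -- which is also why making
the placement optional, as agreed above, costs generic code nothing.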