Date: Fri, 20 Feb 2015 21:23:19 +0100
From: Oleg Nesterov
To: Peter Zijlstra
Cc: Manfred Spraul, "Paul E. McKenney", Kirill Tkhai,
	linux-kernel@vger.kernel.org, Ingo Molnar, Josh Poimboeuf
Subject: Re: [PATCH 2/2] [PATCH] sched: Add smp_rmb() in task rq locking cycles
Message-ID: <20150220202319.GA21132@redhat.com>
In-Reply-To: <20150220184551.GQ2896@worktop.programming.kicks-ass.net>

On 02/20, Peter Zijlstra wrote:
>
> I think I agree with Oleg in that we only need the smp_rmb(); of course
> that wants a somewhat elaborate comment to go along with it. How about
> something like so:
>
>	spin_unlock_wait(&local);
>	/*
>	 * The above spin_unlock_wait() forms a control dependency with
>	 * any following stores; because we must first observe the lock
>	 * unlocked and we cannot speculate stores.
>	 *
>	 * Subsequent loads however can easily pass through the loads
>	 * represented by spin_unlock_wait() and therefore we need the
>	 * read barrier.
>	 *
>	 * This together is stronger than ACQUIRE for @local and
>	 * therefore we will observe the complete prior critical section
>	 * of @local.
>	 */
>	smp_rmb();
>
> The obvious alternative is using spin_unlock_wait() with an
> smp_load_acquire(), but that might be more expensive on some archs due
> to repeated issuing of memory barriers.

Yes, yes, thanks!

But note that we need the same comment after sem_lock()->spin_is_locked().

So perhaps we can add this comment into include/linux/spinlock.h? In this
case perhaps it makes sense to add, say,

	#define smp_mb__after_unlock_wait()	smp_rmb()

with this comment above?

Another potential user is task_work_run(). It could use rmb() too, but this
again needs the same fat comment.

What do you think?

Oleg.
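
P.S. For illustration only, a minimal sketch of the proposed helper and how a
caller might use it. Note that smp_mb__after_unlock_wait() is just the name
suggested above, not an existing helper, and wait_for_lock_owner() is a
made-up example function:

	#include <linux/spinlock.h>

	/*
	 * Pairs with a preceding spin_unlock_wait() (or spin_is_locked()
	 * loop): the control dependency from observing the lock unlocked
	 * already orders later stores, the smp_rmb() orders later loads,
	 * and together this is ACQUIRE-like for the lock we waited on,
	 * so the owner's prior critical section is fully visible.
	 */
	#define smp_mb__after_unlock_wait()	smp_rmb()

	static void wait_for_lock_owner(spinlock_t *lock)
	{
		spin_unlock_wait(lock);		/* wait for the current owner to drop it */
		smp_mb__after_unlock_wait();	/* order our later loads after its stores */
	}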