From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752928AbbKPQqn (ORCPT );
	Mon, 16 Nov 2015 11:46:43 -0500
Received: from foss.arm.com ([217.140.101.70]:51152 "EHLO foss.arm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751099AbbKPQql (ORCPT );
	Mon, 16 Nov 2015 11:46:41 -0500
Date: Mon, 16 Nov 2015 16:46:36 +0000
From: Will Deacon 
To: "Paul E. McKenney" 
Cc: Peter Zijlstra , Linus Torvalds , Boqun Feng ,
	Oleg Nesterov , Ingo Molnar , Linux Kernel Mailing List ,
	Jonathan Corbet , Michal Hocko , David Howells ,
	Michael Ellerman , Benjamin Herrenschmidt , Paul Mackerras 
Subject: Re: [PATCH 4/4] locking: Introduce smp_cond_acquire()
Message-ID: <20151116164636.GF1999@arm.com>
References: <20151103175958.GA4800@redhat.com>
 <20151111093939.GA6314@fixme-laptop.cn.ibm.com>
 <20151111121232.GN17308@twins.programming.kicks-ass.net>
 <20151111193953.GA23515@redhat.com>
 <20151112070915.GC6314@fixme-laptop.cn.ibm.com>
 <20151116155658.GW17308@twins.programming.kicks-ass.net>
 <20151116160445.GK11639@twins.programming.kicks-ass.net>
 <20151116162452.GD1999@arm.com>
 <20151116164443.GA5184@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20151116164443.GA5184@linux.vnet.ibm.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Nov 16, 2015 at 08:44:43AM -0800, Paul E. McKenney wrote:
> On Mon, Nov 16, 2015 at 04:24:53PM +0000, Will Deacon wrote:
> > On Mon, Nov 16, 2015 at 05:04:45PM +0100, Peter Zijlstra wrote:
> > > On Mon, Nov 16, 2015 at 04:56:58PM +0100, Peter Zijlstra wrote:
> > > > On Thu, Nov 12, 2015 at 10:21:39AM -0800, Linus Torvalds wrote:
> > > > > Now, the point of spin_unlock_wait() (and "spin_is_locked()") should
> > > > > generally be that you have some external ordering guarantee that
> > > > > guarantees that the lock has been taken. For example, for the IPC
> > > > > semaphores, we do either one of:
> > > > >
> > > > >  (a) get large lock, then - once you hold that lock - wait for each small lock
> > > > >
> > > > > or
> > > > >
> > > > >  (b) get small lock, then - once you hold that lock - check that the
> > > > > large lock is unlocked
> > > > >
> > > > > and that's the case we should really worry about. The other uses of
> > > > > spin_unlock_wait() should have similar "I have other reasons to know
> > > > > I've seen that the lock was taken, or will never be taken after this
> > > > > because XYZ".
> > > >
> > > > I don't think this is true for the usage in do_exit(); we have no
> > > > knowledge of whether pi_lock is taken or not. We just want to make
> > > > sure that _if_ it were taken, we wait until it is released.
> > >
> > > And unless PPC were to move to using RCsc locks with a SYNC in
> > > spin_lock(), I don't think it makes sense to add
> > > smp_mb__after_unlock_lock() to all tsk->pi_lock instances to fix this,
> > > as that is far more expensive than flipping the exit path to do
> > > spin_lock()+spin_unlock().
> >
> > ... or we upgrade spin_unlock_wait() to a LOCK operation, which might be
> > slightly cheaper than spin_lock()+spin_unlock().
>
> Or we supply a heavyweight version of spin_unlock_wait() that forces
> the cache miss. But I bet that the difference in overhead between
> spin_lock()+spin_unlock() and the heavyweight version would be down in
> the noise.

I'm not so sure.
If the lock is ticket-based, then spin_lock() has to take a ticket and
queue for its turn behind every waiter already in line, whereas
spin_unlock_wait() only has to wait for the current owner's unlock. A
rough sketch of the difference is below.
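To make that concrete, here is a minimal ticket-lock sketch. The names
(tkt_lock, tkt_spin_lock and friends) are made up for illustration, it
uses C11 atomics rather than the kernel's primitives, and the memory
ordering is deliberately glossed over (everything is seq_cst here):

#include <stdatomic.h>

struct tkt_lock {
	atomic_uint next;	/* next ticket to be handed out */
	atomic_uint owner;	/* ticket currently owning the lock */
};

static void tkt_spin_lock(struct tkt_lock *l)
{
	/* Take a ticket: this queues us behind every waiter in line. */
	unsigned int ticket = atomic_fetch_add(&l->next, 1);

	while (atomic_load(&l->owner) != ticket)
		;	/* cpu_relax() in the kernel */
}

static void tkt_spin_unlock(struct tkt_lock *l)
{
	/* Pass ownership to the next ticket in line. */
	atomic_fetch_add(&l->owner, 1);
}

static void tkt_spin_unlock_wait(struct tkt_lock *l)
{
	unsigned int owner = atomic_load(&l->owner);

	/*
	 * If the lock is currently held, wait only for the *current*
	 * owner to release it. We never take a ticket, so unlike
	 * tkt_spin_lock() we never queue behind other waiters.
	 */
	if (atomic_load(&l->next) != owner)
		while (atomic_load(&l->owner) == owner)
			;	/* cpu_relax() in the kernel */
}

The point being that tkt_spin_unlock_wait() never increments ->next, so
it never serializes behind the other contenders the way a
spin_lock()+spin_unlock() pair does; on a contended lock that
difference is precisely the queueing delay.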
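For reference, the external-ordering pattern Linus described above for
the IPC semaphores has roughly the shape below. The structures and
function names are illustrative stand-ins, not the actual ipc/sem.c
code:

#include <linux/spinlock.h>

struct sem {
	spinlock_t lock;	/* small, per-semaphore lock */
};

struct sem_array {
	spinlock_t big_lock;	/* large, array-wide lock */
	struct sem *sems;
	int nsems;
};

/* (a) take the large lock, then wait for every small-lock holder. */
static void lock_array(struct sem_array *sma)
{
	int i;

	spin_lock(&sma->big_lock);
	for (i = 0; i < sma->nsems; i++)
		spin_unlock_wait(&sma->sems[i].lock);
	/* All prior small-lock critical sections are now visible. */
}

/* (b) take a small lock, then check the large lock is not held. */
static bool lock_one(struct sem_array *sma, struct sem *sem)
{
	spin_lock(&sem->lock);
	if (spin_is_locked(&sma->big_lock)) {
		/* A large-lock holder is active: back off and retry. */
		spin_unlock(&sem->lock);
		return false;
	}
	return true;
}

It's exactly the interaction between (a) and (b) that relies on the
ordering guarantees we're arguing about in this thread.

Will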