From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753454AbbKLVxE (ORCPT );
	Thu, 12 Nov 2015 16:53:04 -0500
Received: from foss.arm.com ([217.140.101.70]:38434 "EHLO foss.arm.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752921AbbKLVxC (ORCPT );
	Thu, 12 Nov 2015 16:53:02 -0500
Date: Thu, 12 Nov 2015 21:53:04 +0000
From: Will Deacon
To: "Paul E. McKenney"
Cc: Boqun Feng, Oleg Nesterov, Peter Zijlstra, mingo@kernel.org,
	linux-kernel@vger.kernel.org, corbet@lwn.net, mhocko@kernel.org,
	dhowells@redhat.com, torvalds@linux-foundation.org,
	Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
Subject: Re: [PATCH 4/4] locking: Introduce smp_cond_acquire()
Message-ID: <20151112215304.GE23979@arm.com>
References: <20151102134941.005198372@infradead.org>
	<20151103175958.GA4800@redhat.com>
	<20151111093939.GA6314@fixme-laptop.cn.ibm.com>
	<20151111121232.GN17308@twins.programming.kicks-ass.net>
	<20151111193953.GA23515@redhat.com>
	<20151112070915.GC6314@fixme-laptop.cn.ibm.com>
	<20151112150058.GA30321@redhat.com>
	<20151112144004.GU3972@linux.vnet.ibm.com>
	<20151112144902.GA4549@fixme-laptop.cn.ibm.com>
	<20151112150251.GZ3972@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20151112150251.GZ3972@linux.vnet.ibm.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Nov 12, 2015 at 07:02:51AM -0800, Paul E. McKenney wrote:
> On Thu, Nov 12, 2015 at 10:49:02PM +0800, Boqun Feng wrote:
> > On Thu, Nov 12, 2015 at 06:40:04AM -0800, Paul E. McKenney wrote:
> 
> [snip]
> 
> > > I cannot resist suggesting that any lock that interacts with
> > > spin_unlock_wait() must have all relevant acquisitions followed by
> > > smp_mb__after_unlock_lock().
> > 
> > But
> > 
> > 1.
> >    This would expand the purpose of smp_mb__after_unlock_lock(),
> >    right? smp_mb__after_unlock_lock() is for making an UNLOCK-LOCK
> >    pair globally transitive, rather than guaranteeing that no
> >    operations can be reordered before the STORE part of a
> >    LOCK/ACQUIRE.
> 
> Indeed it would. Which might be OK.
> 
> > 2.    If ARM64 has the same problem as PPC now,
> >    smp_mb__after_unlock_lock() can't help, as it's a no-op on
> >    ARM64.
> 
> Agreed, and that is why we need Will to weigh in.

I really don't want to implement smp_mb__after_unlock_lock, because we
don't need it based on its current definition, and I think there's a
better way to fix spin_unlock_wait (see my other post).

Will