From: Linus Torvalds
Date: Wed, 28 Jun 2017 17:05:46 -0700
Subject: Re: [GIT PULL rcu/next] RCU commits for 4.13
To: Paul McKenney
Cc: Alan Stern, Andrea Parri, Linux Kernel Mailing List,
    priyalee.kushwaha@intel.com, Stanisław Drozd, Arnd Bergmann,
    ldr709@gmail.com, Thomas Gleixner, Peter Zijlstra, Josh Triplett,
    Nicolas Pitre, Krister Johansen, Vegard Nossum, dcb314@hotmail.com,
    Wu Fengguang, Frederic Weisbecker, Rik van Riel, Steven Rostedt,
    Ingo Molnar, Luc Maranget, Jade Alglave

On Wed, Jun 28, 2017 at 4:54 PM, Paul E. McKenney wrote:
>
> Linus, are you dead-set against defining spin_unlock_wait() to be
> spin_lock + spin_unlock? For example, is the current x86 implementation
> of spin_unlock_wait() really a non-negotiable hard requirement? Or
> would you be willing to live with the spin_lock + spin_unlock semantics?

So I think the "same as spin_lock + spin_unlock" semantics are kind of
insane.

One of the issues is that "same as spin_lock + spin_unlock" is basically
now architecture-dependent. Is it really the architecture-dependent
ordering you want to define this as?

So I just think it's a *bad* definition. If somebody wants something that
is exactly equivalent to spin_lock+spin_unlock, then dammit, just do
*THAT*. It's completely pointless to me to define spin_unlock_wait() in
those terms.

And if it's not equivalent to the *architecture* behavior of
spin_lock+spin_unlock, then I think it should be described in terms that
aren't about the architecture implementation (so you shouldn't describe
it as "spin_lock+spin_unlock", you should describe it in terms of memory
barrier semantics).

And if we really have to use the spin_lock+spin_unlock semantics for
this, then what is the advantage of spin_unlock_wait() at all, if it
doesn't fundamentally avoid some locking overhead of just taking the
spinlock in the first place?

And if we can't use a cheaper model, maybe we should just get rid of it
entirely?

Finally: if the memory barrier semantics are exactly the same, and it's
purely about avoiding some nasty contention case, I think the concept is
broken - contention is almost never an actual issue, and if it is, the
problem is much deeper than spin_unlock_wait().

              Linus
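
[To make the "just do *THAT*" argument concrete, here is a minimal
sketch of what a caller would write to get exactly the
spin_lock + spin_unlock semantics, rather than relying on
spin_unlock_wait(). The helper name wait_for_lock_release() is invented
purely for illustration; only spinlock_t, spin_lock() and spin_unlock()
are the real kernel API.]

	#include <linux/spinlock.h>

	/*
	 * Hypothetical helper (name made up for this sketch): obtain the
	 * "same as spin_lock + spin_unlock" behavior by literally doing
	 * that.  spin_lock() blocks until any current holder releases the
	 * lock, and spin_unlock() then drops it again immediately, so on
	 * return any critical section that was in progress has completed.
	 */
	static inline void wait_for_lock_release(spinlock_t *lock)
	{
		spin_lock(lock);
		spin_unlock(lock);
	}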