From: Davidlohr Bueso
To: Linus Torvalds
Cc: Peter Zijlstra, Boqun Feng, Manfred Spraul, Waiman Long, Ingo Molnar,
	ggherdovich@suse.com, Mel Gorman, Linux Kernel Mailing List,
	Paul McKenney, Will Deacon
Subject: Re: sem_lock() vs qspinlocks
Date: Fri, 20 May 2016 17:48:39 -0700
Message-ID: <20160521004839.GA28231@linux-uzut.site>
References: <20160520053926.GC31084@linux-uzut.site>
	<20160520115819.GF3193@twins.programming.kicks-ass.net>
	<20160520140533.GA20726@insomnia>
	<20160520152149.GH3193@twins.programming.kicks-ass.net>
	<20160520160436.GQ3205@twins.programming.kicks-ass.net>
	<20160520210618.GK3193@twins.programming.kicks-ass.net>

On Fri, 20 May 2016, Linus Torvalds wrote:

>Oh, I definitely agree on the stable part, and yes, the "split things
>up" model should come later if people agree that it's a good thing.

The backporting part is quite nice, yes, but ultimately I think I prefer
Linus' suggestion of making things explicit, as opposed to relying on the
barriers the spinlock implies. I also hate having an smp_mb() (particularly
for spin_is_locked()) given that we are not optimizing for the common case
(regular mutual exclusion). Compared to spin_is_locked(), spin_unlock_wait()
is perhaps more tempting to use for locking correctness. For example, taking
a look at nf_conntrack_all_lock(), it too gets clever with spin_unlock_wait()
-- also for finer-grained locking. While not identical to the sem_lock()
case, it goes like this:

nf_conntrack_all_lock():        nf_conntrack_lock():
spin_lock(B);                   spin_lock(A);

                                if (bar) { // false
bar = 1;                                ...
                                }
[loop ctrl-barrier]
spin_unlock_wait(A);
foo();                          foo();

If the spin_unlock_wait() does not yet see the store that makes A visibly
locked, we could end up with both threads in foo(), no? (Although I am
unsure about that ctrl-barrier and which archs could fall into it; the
point was to find in-tree examples of creative thinking with locking.)

>Should I take the patch as-is, or should I just wait for a pull
>request from the locking tree?

Either is OK by me. I can verify that this patch fixes the issue.

Thanks,
Davidlohr
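
For reference, below is a minimal userspace sketch of the pattern above,
built with C11 atomics and pthreads. It is not the in-tree netfilter code:
toy_lock_t, toy_lock(), toy_unlock(), toy_unlock_wait(), global_side() and
local_side() are made-up names, and a toy test-and-set lock stands in for
the real spinlock. The sketch only shows the shape of the race: nothing
forces global_side()'s read of A to observe local_side()'s lock acquisition,
nor local_side()'s read of bar to observe the store of 1, so the memory
model allows both threads to reach foo().

/*
 * Minimal userspace sketch of the pattern above -- NOT the in-tree
 * netfilter code.  All names here are made up for illustration.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

typedef struct { atomic_int locked; } toy_lock_t;

static void toy_lock(toy_lock_t *l)
{
        /* test-and-set acquire: spin until we own the lock */
        while (atomic_exchange_explicit(&l->locked, 1, memory_order_acquire))
                ;
}

static void toy_unlock(toy_lock_t *l)
{
        atomic_store_explicit(&l->locked, 0, memory_order_release);
}

/* analogue of spin_unlock_wait(): spin until the lock word reads as free */
static void toy_unlock_wait(toy_lock_t *l)
{
        while (atomic_load_explicit(&l->locked, memory_order_acquire))
                ;
}

static toy_lock_t A, B;         /* the A and B of the example above */
static atomic_int bar;          /* the 'bar' of the example above */

static void foo(void)           /* must not run on both sides at once */
{
        puts("in foo()");
}

static void *global_side(void *unused) /* plays nf_conntrack_all_lock() */
{
        toy_lock(&B);
        atomic_store_explicit(&bar, 1, memory_order_relaxed);
        toy_unlock_wait(&A);    /* may miss the other side's lock of A ... */
        foo();
        toy_unlock(&B);
        return NULL;
}

static void *local_side(void *unused)  /* plays nf_conntrack_lock() */
{
        toy_lock(&A);
        if (!atomic_load_explicit(&bar, memory_order_relaxed))
                foo();          /* ... while this side missed bar = 1 */
        toy_unlock(&A);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        pthread_create(&t1, NULL, global_side, NULL);
        pthread_create(&t2, NULL, local_side, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}

Build with cc -pthread sketch.c; in practice the window is tiny and the two
calls to foo() will almost never actually overlap, which is exactly why this
class of ordering bug is so hard to catch by testing.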