From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 6 Jul 2017 10:18:32 -0700
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Peter Zijlstra
Cc: David Laight, linux-kernel@vger.kernel.org, netfilter-devel@vger.kernel.org,
	netdev@vger.kernel.org, oleg@redhat.com, akpm@linux-foundation.org,
	mingo@redhat.com, dave@stgolabs.net, manfred@colorfullife.com,
	tj@kernel.org, arnd@arndb.de, linux-arch@vger.kernel.org,
	will.deacon@arm.com, stern@rowland.harvard.edu,
	parri.andrea@gmail.com, torvalds@linux-foundation.org
Subject: Re: [PATCH v2 0/9] Remove spin_unlock_wait()
Message-Id: <20170706171832.GH2393@linux.vnet.ibm.com>
In-Reply-To: <20170706165036.v4u5rbz56si4emw5@hirez.programming.kicks-ass.net>
References: <20170629235918.GA6445@linux.vnet.ibm.com>
	<20170705232955.GA15992@linux.vnet.ibm.com>
	<063D6719AE5E284EB5DD2968C1650D6DD0033F01@AcuExch.aculab.com>
	<20170706160555.xc63yydk77gmttae@hirez.programming.kicks-ass.net>
	<20170706162024.GD2393@linux.vnet.ibm.com>
	<20170706165036.v4u5rbz56si4emw5@hirez.programming.kicks-ass.net>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jul 06, 2017 at 06:50:36PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 06, 2017 at 09:20:24AM -0700, Paul E. McKenney wrote:
> > On Thu, Jul 06, 2017 at 06:05:55PM +0200, Peter Zijlstra wrote:
> > > On Thu, Jul 06, 2017 at 02:12:24PM +0000, David Laight wrote:
> > > > From: Paul E. McKenney
> > > >
[ . . . ]
> > > Now on the one hand I feel like Oleg that it would be a shame to lose
> > > the optimization, OTOH this thing is really, really tricky to use,
> > > and has led to a number of bugs already.
> >
> > I do agree, it is a bit sad to see these optimizations go.  So, should
> > this make mainline, I will be tagging the commits that remove
> > spin_unlock_wait() so that they can be easily reverted should someone
> > come up with good semantics and a compelling use case with compelling
> > performance benefits.
>
> Ha!  But what would constitute 'good semantics'?

At this point, it beats the heck out of me!  ;-)

> The current thing is something along the lines of:
>
>   "Waits for the currently observed critical section
>    to complete with ACQUIRE ordering such that it will observe
>    whatever state was left by said critical section."
>
> With the 'obvious' benefit of limited interference on those actually
> wanting to acquire the lock, and a shorter wait time on our side too,
> since we only need to wait for completion of the current section, and
> not for however many contenders are before us.
>
> Not sure I have an actual (micro) benchmark that shows a difference
> though.
>
> Is this all good enough to retain the thing?  I dunno.  Like I said, I'm
> conflicted on the whole thing.  On the one hand it's a nice optimization,
> on the other hand I don't want to have to keep fixing these bugs.

Yeah, if I had seen a compelling use case...  Oleg's task_work case
was closest, but given that it involved a task-local lock that
shouldn't be all -that- heavily contended, it is hard to see there
being all that much difference.

But maybe I am missing something here?  Wouldn't be the first time...
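For concreteness, the pattern in question looks something like the
sketch below.  The names (my_lock, my_data, and the two readers) are
made up for illustration and are not taken from the series; the sketch
contrasts the old spin_unlock_wait() usage with the lock/unlock pair
that the series substitutes for it:

	spinlock_t my_lock;	/* Protects my_data. */
	int my_data;

	/* Updater: an ordinary critical section. */
	void update_data(int v)
	{
		spin_lock(&my_lock);
		my_data = v;
		spin_unlock(&my_lock);
	}

	/*
	 * Old approach (hypothetical use, for illustration only):
	 * wait for the currently observed critical section, if any,
	 * to complete.  The ACQUIRE ordering means that the load of
	 * my_data below observes whatever state that critical section
	 * left, without contending with other would-be acquirers of
	 * my_lock.
	 */
	int read_data_old(void)
	{
		spin_unlock_wait(&my_lock);
		return my_data;
	}

	/*
	 * Replacement: an empty critical section.  Ordering is at
	 * least as strong, but we now queue behind however many
	 * contenders are ahead of us, which is the optimization
	 * being given up.
	 */
	int read_data_new(void)
	{
		spin_lock(&my_lock);
		spin_unlock(&my_lock);
		return my_data;
	}

The lock/unlock pair is easier to get right precisely because it is
just an ordinary (empty) critical section, with no one-sided ordering
subtleties to reason about.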
							Thanx, Paul