From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini
Subject: Re: [PATCH 03/11] qspinlock: Add pending bit
Date: Wed, 18 Jun 2014 13:29:48 +0200
Message-ID: <53A1782C.7040400__39158.673809334$1403091156$gmane$org@redhat.com>
References: <20140615124657.264658593@chello.nl>
	<20140615130153.196728583@chello.nl>
	<20140617203615.GA29634@laptop.dumpdata.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	id 1WxE4q-0001Zz-SX
	for xen-devel@lists.xenproject.org; Wed, 18 Jun 2014 11:30:53 +0000
In-Reply-To: <20140617203615.GA29634@laptop.dumpdata.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Konrad Rzeszutek Wilk, Peter Zijlstra
Cc: Waiman.Long@hp.com, linux-arch@vger.kernel.org, gleb@redhat.com,
	kvm@vger.kernel.org, boris.ostrovsky@oracle.com, scott.norton@hp.com,
	raghavendra.kt@linux.vnet.ibm.com, paolo.bonzini@gmail.com,
	linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Peter Zijlstra, chegu_vinod@hp.com, david.vrabel@citrix.com,
	oleg@redhat.com, xen-devel@lists.xenproject.org, tglx@linutronix.de,
	paulmck@linux.vnet.ibm.com, torvalds@linux-foundation.org,
	mingo@kernel.org
List-Id: xen-devel@lists.xenproject.org

On 17/06/2014 22:36, Konrad Rzeszutek Wilk wrote:
> +	/* One more attempt - but if we fail mark it as pending. */
> +	if (val == _Q_LOCKED_VAL) {
> +		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == _Q_LOCKED_VAL) /* YEEY! */
> +			return;
> +		val = old;
> +	}

Note that Peter's code is in a for(;;) loop:

+	for (;;) {
+		/*
+		 * If we observe any contention; queue.
+		 */
+		if (val & ~_Q_LOCKED_MASK)
+			goto queue;
+
+		new = _Q_LOCKED_VAL;
+		if (val == new)
+			new |= _Q_PENDING_VAL;
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == val)
+			break;
+
+		val = old;
+	}
+
+	/*
+	 * we won the trylock
+	 */
+	if (new == _Q_LOCKED_VAL)
+		return;

So what you'd have is basically:

	/*
	 * One more attempt if no one is already in queue.  Perhaps
	 * they have unlocked the spinlock already.
	 */
	if (val == _Q_LOCKED_VAL && atomic_read(&lock->val) == 0) {
		old = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
		if (old == 0) /* YEEY! */
			return;
		val = old;
	}

But I agree with Waiman that this is unlikely to trigger often enough.
It does have to be handled in the slowpath for correctness, but the most
likely path is (0,0,1)->(0,1,1).

Paolo