From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Waiman Long <waiman.long@hp.com>
Cc: raghavendra.kt@linux.vnet.ibm.com, mingo@kernel.org,
	riel@redhat.com, oleg@redhat.com, gleb@redhat.com,
	virtualization@lists.linux-foundation.org, tglx@linutronix.de,
	chegu_vinod@hp.com, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com, linux-kernel@vger.kernel.org,
	linux-arch@vger.kernel.org, paolo.bonzini@gmail.com,
	Peter Zijlstra <peterz@infradead.org>,
	scott.norton@hp.com, torvalds@linux-foundation.org,
	kvm@vger.kernel.org, paulmck@linux.vnet.ibm.com,
	xen-devel@lists.xenproject.org,
	Peter Zijlstra <a.p.zijlstra@chello.nl>
Subject: Re: [PATCH 03/11] qspinlock: Add pending bit
Date: Tue, 17 Jun 2014 19:23:44 -0400	[thread overview]
Message-ID: <201406172323.s5HNNveT018439@userz7022.oracle.com> (raw)

On Jun 17, 2014 6:25 PM, Waiman Long <waiman.long@hp.com> wrote:
>
> On 06/17/2014 05:10 PM, Konrad Rzeszutek Wilk wrote: 
> > On Tue, Jun 17, 2014 at 05:07:29PM -0400, Konrad Rzeszutek Wilk wrote: 
> >> On Tue, Jun 17, 2014 at 04:51:57PM -0400, Waiman Long wrote: 
> >>> On 06/17/2014 04:36 PM, Konrad Rzeszutek Wilk wrote: 
> >>>> On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote: 
> >>>>> Because the qspinlock needs to touch a second cacheline; add a pending 
> >>>>> bit and allow a single in-word spinner before we punt to the second 
> >>>>> cacheline. 
> >>>> Could you add this in the description please: 
> >>>> 
> >>>> And by second cacheline we mean the local 'node', that is, the 
> >>>> mcs_nodes[0] and mcs_nodes[idx] entries. 
> >>>> 
> >>>> Perhaps it might be better then to split this in the header file, 
> >>>> as this is trying not to be slowpath code - but rather a 
> >>>> pre-slow-path "let's try one more cmpxchg" in case 
> >>>> the unlocker has just released the lock. 
> >>>> 
> >>>> So something like: 
> >>>> 
> >>>> diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h 
> >>>> index e8a7ae8..29cc9c7 100644 
> >>>> --- a/include/asm-generic/qspinlock.h 
> >>>> +++ b/include/asm-generic/qspinlock.h 
> >>>> @@ -75,11 +75,21 @@ extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val); 
> >>>>    */ 
> >>>>   static __always_inline void queue_spin_lock(struct qspinlock *lock) 
> >>>>   { 
> >>>> -	u32 val; 
> >>>> +	u32 val, new, old; 
> >>>> 
> >>>> 	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL); 
> >>>> 	if (likely(val == 0)) 
> >>>> 		return; 
> >>>> + 
> >>>> +	/* One more attempt - but if we fail, mark it as pending. */ 
> >>>> +	if (val == _Q_LOCKED_VAL) { 
> >>>> +		new = _Q_LOCKED_VAL | _Q_PENDING_VAL; 
> >>>> + 
> >>>> +		old = atomic_cmpxchg(&lock->val, val, new); 
> >>>> +		if (old == _Q_LOCKED_VAL) /* YEEY! */ 
> >>>> +			return; 
> >>> No, it can't be left like that. The unlock path will not clear the pending bit. 
> >> Err, you are right. It needs to go back in the slowpath. 
> > What I should have written is: 
> > 
> > if (old == 0) /* YEEY */ 
> >    return; 
>
> Unfortunately, that still doesn't work. If old is 0, it just means the 
> cmpxchg failed. It still hasn't got the lock. 
> > As that would be the same thing as this patch does with the pending 
> > bit - that is, if on the second compare-and-exchange we can set the 
> > pending bit (and the lock) because the lock has just been released - 
> > we are good. 
>
> That is not true. When the lock is freed, the pending bit holder will 
> still have to clear the pending bit and set the lock bit as is done in 
> the slowpath. We cannot skip the step here. The problem of moving the 
> pending code here is that it includes a wait loop which we don't want to 
> put in the fastpath. 
> > 
> > And it is a quick path. 
> > 
> >>> We are trying to make the fastpath as simple as possible as it may be 
> >>> inlined. The complexity of the queue spinlock is in the slowpath. 
> >> Sure, but then it shouldn't be called slowpath anymore as it is not 
> >> slow. It is a combination of a fast path (the potential chance of 
> >> grabbing the lock and setting the pending bit) and the real slow 
> >> path (the queuing). Perhaps it should be called 'queue_spinlock_complex'? 
> >> 
> > I forgot to mention - that was the crux of my comments - just rename 
> > the slowpath to the 'complex' name at that point to better reflect 
> > what it does. 
>
> Actually, in my v11 patch, I subdivided the slowpath into a slowpath for 
> the pending code and a slowerpath for the actual queuing. Perhaps we could 
> use quickpath and slowpath instead. Anyway, it is a minor detail that we 
> can discuss after the core code gets merged.
>
> -Longman

Why not do it the right way the first time around?

That aside - these optimizations seem to make the code harder to read. And they do remind me of the scheduler code in 2.6.x, which was based on heuristics and was eventually ripped out.

So are these optimizations based on turning off certain hardware features, say hardware prefetching?

What I am getting at is: can the hardware do this at some point (or perhaps it already does on IvyBridge-EX?) - that is, prefetch the per-cpu areas so they are always hot - rendering this optimization unnecessary?

Thanks!
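
To make the exchange above concrete, here is a minimal user-space sketch of the pending-bit scheme the thread is debating. It is written against C11 atomics rather than the kernel's atomic_t API; the _Q_LOCKED_VAL/_Q_PENDING_VAL constants mirror the patch, but the function name and the final plain store are illustrative only, and the tail/queue half of the real qspinlock word is omitted entirely.

#include <stdatomic.h>

#define _Q_LOCKED_VAL  1u
#define _Q_PENDING_VAL 2u

/* Sketch only: the real kernel code uses atomic_t, cpu_relax(), and
 * also encodes a queue tail in the same word, all omitted here. */
static void sketch_spin_lock(atomic_uint *lock)
{
	unsigned int expected = 0;

	/* Fastpath: 0 -> locked.  Success means we own the lock. */
	if (atomic_compare_exchange_strong(lock, &expected, _Q_LOCKED_VAL))
		return;

	/* One more attempt: locked -> locked|pending.  As Waiman notes
	 * above, a *successful* cmpxchg here only makes us the pending
	 * waiter - it does not hand us the lock - while observing 0
	 * means the cmpxchg failed because the lock had just been freed. */
	if (expected == _Q_LOCKED_VAL) {
		unsigned int old = _Q_LOCKED_VAL;

		if (atomic_compare_exchange_strong(lock, &old,
				_Q_LOCKED_VAL | _Q_PENDING_VAL)) {
			/* The wait loop nobody wants inlined into the
			 * fastpath: spin until the holder drops the lock
			 * bit, then clear pending and set locked. */
			while (atomic_load(lock) & _Q_LOCKED_VAL)
				;
			atomic_store(lock, _Q_LOCKED_VAL);
			return;
		}
	}

	/* Everything else falls back to the real (queuing) slowpath. */
	/* queue_spin_lock_slowpath(lock, expected); */
}

The sketch shows why moving the pending logic into the inlined fastpath is unattractive: even a successful second cmpxchg commits the caller to an unbounded wait loop before it can clear the pending bit and set the lock bit.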

Thread overview: 37+ messages
2014-06-17 23:23 Konrad Rzeszutek Wilk [this message]
2014-06-24  8:46 ` Peter Zijlstra
  -- strict thread matches above, loose matches on Subject: below --
2014-06-15 12:46 [PATCH 00/11] qspinlock with paravirt support Peter Zijlstra
2014-06-15 12:47 ` [PATCH 03/11] qspinlock: Add pending bit Peter Zijlstra
2014-06-17 20:36   ` Konrad Rzeszutek Wilk
2014-06-17 20:51     ` Waiman Long
2014-06-17 21:07       ` Konrad Rzeszutek Wilk
2014-06-17 21:10         ` Konrad Rzeszutek Wilk
2014-06-17 22:25           ` Waiman Long
2014-06-24  8:24         ` Peter Zijlstra
2014-06-18 11:29     ` Paolo Bonzini
2014-06-18 13:36       ` Konrad Rzeszutek Wilk
2014-06-23 16:35     ` Peter Zijlstra
