From: Peter Zijlstra <peterz@infradead.org>
To: Waiman Long <Waiman.Long@hpe.com>
Cc: Ingo Molnar <mingo@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
"H. Peter Anvin" <hpa@zytor.com>,
x86@kernel.org, linux-kernel@vger.kernel.org,
Scott J Norton <scott.norton@hpe.com>,
Douglas Hatch <doug.hatch@hpe.com>,
Davidlohr Bueso <dave@stgolabs.net>
Subject: Re: [PATCH tip/locking/core v9 2/6] locking/qspinlock: prefetch next node cacheline
Date: Mon, 2 Nov 2015 17:36:26 +0100
Message-ID: <20151102163626.GU3604@twins.programming.kicks-ass.net>
In-Reply-To: <1446247597-61863-3-git-send-email-Waiman.Long@hpe.com>
On Fri, Oct 30, 2015 at 07:26:33PM -0400, Waiman Long wrote:
> A queue head CPU, after acquiring the lock, will have to notify
> the next CPU in the wait queue that it has become the new queue
> head. This involves loading a new cacheline from the MCS node of the
> next CPU. That operation can be expensive and adds to the latency of
> the locking operation.
>
> This patch adds code to optimistically prefetch the next MCS node
> cacheline if the next pointer is defined and the CPU has been spinning
> for the MCS lock for a while. This reduces the locking latency and
> improves the system throughput.
>
> Using a locking microbenchmark on a Haswell-EX system, this patch
> can improve throughput by about 5%.
How does it affect IVB-EX (which you were testing earlier IIRC)?
> Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
> ---
> kernel/locking/qspinlock.c | 21 +++++++++++++++++++++
> 1 files changed, 21 insertions(+), 0 deletions(-)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 7868418..c1c8a1a 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -396,6 +396,7 @@ queue:
> * p,*,* -> n,*,*
> */
> old = xchg_tail(lock, tail);
> + next = NULL;
>
> /*
> * if there was a previous node; link it and wait until reaching the
> @@ -407,6 +408,16 @@ queue:
>
> pv_wait_node(node);
> arch_mcs_spin_lock_contended(&node->locked);
> +
> + /*
> + * While waiting for the MCS lock, the next pointer may have
> + * been set by another lock waiter. We optimistically load
> + * the next pointer & prefetch the cacheline for writing
> + * to reduce latency in the upcoming MCS unlock operation.
> + */
> + next = READ_ONCE(node->next);
> + if (next)
> + prefetchw(next);
> }
OK so far, I suppose. Since we already read node->locked, which is in the
same cacheline, also reading node->next adds no extra pressure. And we can
then prefetch that cacheline.
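
For reference, prefetchw() on architectures without a dedicated
write-intent prefetch instruction is just the generic fallback below
(a sketch of include/linux/prefetch.h; x86 defines ARCH_HAS_PREFETCHW
and uses its own PREFETCHW instruction instead):

	#ifndef ARCH_HAS_PREFETCHW
	/* The second argument selects prefetch with intent to write. */
	#define prefetchw(x)	__builtin_prefetch(x, 1)
	#endif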
> /*
> @@ -426,6 +437,15 @@ queue:
> cpu_relax();
>
> /*
> + * If the next pointer is defined, we are not tail anymore.
> + * In this case, claim the spinlock & release the MCS lock.
> + */
> + if (next) {
> + set_locked(lock);
> + goto mcs_unlock;
> + }
> +
> + /*
> * claim the lock:
> *
> * n,0,0 -> 0,0,1 : lock, uncontended
> @@ -458,6 +478,7 @@ queue:
> while (!(next = READ_ONCE(node->next)))
> cpu_relax();
>
> +mcs_unlock:
> arch_mcs_spin_unlock_contended(&next->locked);
> pv_kick_node(lock, next);
>
This however appears to be an independent optimization. Is it worth it?
Would we not already have observed val != tail in this case? If so, we're
just adding extra code for no gain.

That is, if we observe @next, must we then not also observe val != tail?
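
For context, a paraphrase of the claim loop being patched (not a
verbatim copy of the source): the cmpxchg path is only taken while the
lock word still equals our tail, i.e. while no successor has queued
behind us.

	for (;;) {
		if (val != tail) {	/* a successor has queued */
			set_locked(lock);
			break;
		}
		/* still the last node: try n,0,0 -> 0,0,1 */
		old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
		if (old == val)
			goto release;	/* no contention */
		val = old;
	}

Since a successor publishes itself via xchg_tail() before writing our
->next pointer, observing @next suggests a subsequent read of the lock
word already sees val != tail, which is what the question above hinges
on.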