From: Waiman Long <waiman.long@hp.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
"H. Peter Anvin" <hpa@zytor.com>,
x86@kernel.org, linux-kernel@vger.kernel.org,
Scott J Norton <scott.norton@hp.com>,
Douglas Hatch <doug.hatch@hp.com>
Subject: Re: [PATCH 1/7] locking/pvqspinlock: Only kick CPU at unlock time
Date: Tue, 14 Jul 2015 21:31:22 -0400
Message-ID: <55A5B7EA.9020809@hp.com>
In-Reply-To: <20150713134828.GH3644@twins.programming.kicks-ass.net>

On 07/13/2015 09:48 AM, Peter Zijlstra wrote:
> On Sat, Jul 11, 2015 at 04:36:52PM -0400, Waiman Long wrote:
>> @@ -229,19 +244,42 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
>> {
>> struct pv_node *pn = (struct pv_node *)node;
>> struct __qspinlock *l = (void *)lock;
>> - struct qspinlock **lp = NULL;
>> + struct qspinlock **lp;
>> int loop;
>>
>> + /*
>> + * Initialize lp to a non-NULL value if it has already been in the
>> + * pv_hashed state so that pv_hash() won't be called again.
>> + */
>> + lp = (READ_ONCE(pn->state) == vcpu_hashed) ? (struct qspinlock **)1
>> + : NULL;
>> for (;;) {
>> + WRITE_ONCE(pn->state, vcpu_running);
>> for (loop = SPIN_THRESHOLD; loop; loop--) {
>> if (!READ_ONCE(l->locked))
>> return;
>> cpu_relax();
>> }
>>
>> - WRITE_ONCE(pn->state, vcpu_halted);
>> + /*
>> + * Recheck lock value after setting vcpu_hashed state
>> + *
>> + * [S] state = vcpu_hashed [S] l->locked = 0
>> + * MB MB
>> + * [L] l->locked [L] state == vcpu_hashed
>> + *
>> + * Matches smp_store_mb() in __pv_queued_spin_unlock()
>> + */
>> + smp_store_mb(pn->state, vcpu_hashed);
>> +
>> + if (!READ_ONCE(l->locked)) {
>> + WRITE_ONCE(pn->state, vcpu_running);
>> + return;
>> + }
>> +
>> if (!lp) { /* ONCE */
>> lp = pv_hash(lock, pn);
>> +
>> /*
>> * lp must be set before setting _Q_SLOW_VAL
>> *
>> @@ -305,13 +343,16 @@ __visible void __pv_queued_spin_unlock(struct qspinlock *lock)
>> * Now that we have a reference to the (likely) blocked pv_node,
>> * release the lock.
>> */
>> - smp_store_release(&l->locked, 0);
>> + smp_store_mb(l->locked, 0);
>>
>> /*
>> * At this point the memory pointed at by lock can be freed/reused,
>> * however we can still use the pv_node to kick the CPU.
>> + * The other vCPU may not really be halted, but kicking an active
>> + * vCPU is harmless other than the additional latency in completing
>> + * the unlock.
>> */
>> - if (READ_ONCE(node->state) == vcpu_halted)
>> + if (READ_ONCE(node->state) == vcpu_hashed)
>> pv_kick(node->cpu);
>> }
> I think most of that is not actually required; if we let pv_kick_node()
> set vcpu_hashed and avoid writing another value in pv_wait_head(), then
> __pv_queued_spin_unlock() has two cases:
>
> - pv_kick_node() set _SLOW_VAL, which is the same 'thread' and things
> observe program order and we're trivially guaranteed to see
> node->state and the hash state.
I just found out that letting pv_kick_node() wake up vCPUs at locking
time can yield slightly better performance in some cases, so I am going
to keep it, but defer the kicking to unlock time when we can do multiple
kicks. The advantage of doing it at unlock time is that the kicking can
be done outside of the critical section. So I am going to keep the
current name.
Cheers,
Longman
Thread overview: 27+ messages
2015-07-11 20:36 [PATCH 0/7] locking/qspinlock: Enhance pvqspinlock & introduce queued unfair lock Waiman Long
2015-07-11 20:36 ` [PATCH 1/7] locking/pvqspinlock: Only kick CPU at unlock time Waiman Long
2015-07-13 12:02 ` Peter Zijlstra
2015-07-13 12:31 ` Peter Zijlstra
2015-07-15 1:24 ` Waiman Long
2015-07-13 13:48 ` Peter Zijlstra
2015-07-14 9:31 ` Peter Zijlstra
2015-07-15 1:31 ` Waiman Long [this message]
2015-08-03 17:00 ` [tip:locking/core] " tip-bot for Waiman Long
2015-07-11 20:36 ` [PATCH 2/7] locking/pvqspinlock: Allow vCPUs kick-ahead Waiman Long
2015-07-13 13:52 ` Peter Zijlstra
2015-07-15 1:38 ` Waiman Long
2015-07-11 20:36 ` [PATCH 3/7] locking/pvqspinlock: Implement wait-early for overcommitted guest Waiman Long
2015-07-12 8:23 ` Peter Zijlstra
2015-07-13 19:50 ` Davidlohr Bueso
2015-07-15 1:39 ` Waiman Long
2015-07-11 20:36 ` [PATCH 4/7] locking/pvqspinlock: Collect slowpath lock statistics Waiman Long
2015-07-12 8:22 ` Peter Zijlstra
2015-07-14 18:48 ` Waiman Long
2015-07-11 20:36 ` [PATCH 5/7] locking/pvqspinlock: Add pending bit support Waiman Long
2015-07-12 8:21 ` Peter Zijlstra
2015-07-14 18:47 ` Waiman Long
2015-07-11 20:36 ` [PATCH 6/7] locking/qspinlock: A fairer queued unfair lock Waiman Long
2015-07-12 8:21 ` Peter Zijlstra
2015-07-14 18:47 ` Waiman Long
2015-07-14 20:45 ` Peter Zijlstra
2015-07-11 20:36 ` [PATCH 7/7] locking/qspinlock: Collect queued unfair lock slowpath statistics Waiman Long