From: Waiman Long <longman@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>, Will Deacon <will.deacon@arm.com>,
Thomas Gleixner <tglx@linutronix.de>,
Borislav Petkov <bp@alien8.de>, Arnd Bergmann <arnd@arndb.de>,
linux-kernel@vger.kernel.org, x86@kernel.org,
linux-arch@vger.kernel.org, Nicholas Piggin <npiggin@gmail.com>,
Davidlohr Bueso <dave@stgolabs.net>
Subject: Re: [PATCH 2/2] locking/pvqspinlock: Optionally store lock holder cpu into lock
Date: Sun, 12 Jul 2020 19:05:36 -0400 [thread overview]
Message-ID: <bed22603-e347-8bff-f586-072a18987946@redhat.com> (raw)
In-Reply-To: <20200712173452.GB10769@hirez.programming.kicks-ass.net>
On 7/12/20 1:34 PM, Peter Zijlstra wrote:
> On Sat, Jul 11, 2020 at 02:21:28PM -0400, Waiman Long wrote:
>> The previous patch enables native qspinlock to store lock holder cpu
>> number into the lock word when the lock is acquired via the slowpath.
>> Since PV qspinlock uses atomic unlock, allowing the fastpath and
>> slowpath to put different values into the lock word will further slow
>> down the performance. This is certainly undesirable.
>>
>> The only way we can do that without too much performance impact is to
>> make fastpath and slowpath put in the same value. Still there is a slight
>> performance overhead in the additional access to a percpu variable in the
>> fastpath as well as the less optimized x86-64 PV qspinlock unlock path.
>>
>> A new config option QUEUED_SPINLOCKS_CPUINFO is now added to enable
>> distros to decide if they want to enable lock holder cpu information in
>> the lock itself for both native and PV qspinlocks across both fastpath
>> and slowpath. If this option is not configured, only native qspinlocks
>> in the slowpath will put the lock holder cpu information in the lock
>> word.
> And this kills it,.. if it doesn't make unconditional sense, we're not
> going to do this. It's just too ugly.
>
You mean it has to be unconditional, with no config option, if we want to
do it. Right?
It can certainly be made unconditional after I figure out how to make
the optimized PV unlock code work.
Cheers,
Longman
Thread overview: 11+ messages
2020-07-11 18:21 [PATCH 0/2] locking/qspinlock: Allow lock to store lock holder cpu number Waiman Long
2020-07-11 18:21 ` [PATCH 1/2] locking/qspinlock: Store lock holder cpu in lock if feasible Waiman Long
2020-07-12 17:33 ` Peter Zijlstra
2020-07-11 18:21 ` [PATCH 2/2] locking/pvqspinlock: Optionally store lock holder cpu into lock Waiman Long
2020-07-12 17:34 ` Peter Zijlstra
2020-07-12 23:05 ` Waiman Long [this message]
2020-07-13 4:17 ` Nicholas Piggin
2020-07-14 2:48 ` Waiman Long
2020-07-14 9:01 ` Peter Zijlstra
2020-07-15 16:33 ` Waiman Long
2020-07-13 9:21 ` Peter Zijlstra