From: Leonardo Bras <leonardo@linux.ibm.com>
To: Christophe Leroy <christophe.leroy@c-s.fr>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>,
Benjamin Herrenschmidt <benh@kernel.crashing.org>,
Paul Mackerras <paulus@samba.org>,
Michael Ellerman <mpe@ellerman.id.au>,
Enrico Weigelt <info@metux.net>,
Allison Randal <allison@lohutok.net>,
Thomas Gleixner <tglx@linutronix.de>
Cc: linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/1] ppc/crash: Skip spinlocks during crash
Date: Fri, 27 Mar 2020 12:51:55 -0300 [thread overview]
Message-ID: <56965ad674071181548d5ed4fb7c8fa08061b591.camel@linux.ibm.com> (raw)
In-Reply-To: <af505ef0-e0df-e0aa-bb83-3ed99841f151@c-s.fr>
Hello Christophe, thanks for the feedback.
I noticed an error in this patch and sent a v2, which can be seen here:
http://patchwork.ozlabs.org/patch/1262468/
Comments inline:
On Fri, 2020-03-27 at 07:50 +0100, Christophe Leroy wrote:
> > @@ -142,6 +144,8 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
> > if (likely(__arch_spin_trylock(lock) == 0))
> > break;
> > do {
> > + if (unlikely(crash_skip_spinlock))
> > + return;
Complete function for reference:
static inline void arch_spin_lock(arch_spinlock_t *lock)
{
	while (1) {
		if (likely(__arch_spin_trylock(lock) == 0))
			break;
		do {
			if (unlikely(crash_skip_spinlock))
				return;
			HMT_low();
			if (is_shared_processor())
				splpar_spin_yield(lock);
		} while (unlikely(lock->slock != 0));
		HMT_medium();
	}
}
> You are adding a test that reads a global var in the middle of a so hot
> path ? That must kill performance.
I thought it would, in the worst case, add at most one extra spin cycle
to an arch_spin_lock() call. Here is my reasoning:
- If the lock is already free, nothing changes (the fast path never
  reads the flag),
- Otherwise, the caller spins waiting.
  - Each wait iteration gets slightly longer.
  - Worst case: one extra iteration, since lock->slock can become 0
    just after the flag check.
Could you please point out where I failed to see the performance penalty?
(I need to get better at this :) )
> Can we do different ?
Sure, a less intrusive way of doing it would be to free the currently
needed locks before proceeding. I just thought it would be harder to
maintain.
> Christophe
Best regards,
Leonardo
Thread overview: 8+ messages
2020-03-26 22:28 [PATCH 1/1] ppc/crash: Skip spinlocks during crash Leonardo Bras
2020-03-26 23:26 ` Leonardo Bras
2020-03-27 6:50 ` Christophe Leroy
2020-03-27 15:51 ` Leonardo Bras [this message]
2020-03-28 10:19 ` Christophe Leroy
2020-03-30 14:33 ` Leonardo Bras
2020-03-30 11:02 ` Peter Zijlstra
2020-03-30 14:12 ` Leonardo Bras