From: Arnd Bergmann <arnd@arndb.de>
To: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Cc: Peter Zijlstra <peterz@infradead.org>,
Russell King <linux@armlinux.org.uk>,
Ingo Molnar <mingo@redhat.com>, Waiman Long <longman@redhat.com>,
Will Deacon <will@kernel.org>,
Linux ARM <linux-arm-kernel@lists.infradead.org>
Subject: Re: [RFC PATCH 0/3] Queued spinlocks/RW-locks for ARM
Date: Tue, 8 Oct 2019 23:47:31 +0200 [thread overview]
Message-ID: <CAK8P3a182o64NfheNEYixDsi=mSZCNVSgg=_EDnwy+fZ1hrzLw@mail.gmail.com> (raw)
In-Reply-To: <20191008194724.evlk3bnomcz3kxwg@flow>
On Tue, Oct 8, 2019 at 9:47 PM Sebastian Andrzej Siewior
<sebastian@breakpoint.cc> wrote:
>
> On 2019-10-08 16:32:27 [+0200], Arnd Bergmann wrote:
> > On Tue, Oct 8, 2019 at 3:36 PM Waiman Long <longman@redhat.com> wrote:
> > > In x86, the lock instruction prefix is patched out when running on UP
> > > system. This downgrades the atomic cmpxchg to non-atomic one. We may do
> > > something similar in other architectures.
> >
> > Unfortunately, the atomic macros cannot trivially be made cheaper
> > on non-SMP systems based on load-locked/store-conditional
> > based architectures, as there may be an interrupt in-between,
> > and disabling interrupts would likely be more expensive.
> >
> > However, there might be a way to take a shortcut out of
> > queued_spin_lock() using asm-goto combined with the ARM
> > __ALT_SMP_ASM() macro.
>
> The smp_xchg16_relaxed() snippet above looked good. I would buy it :)
> Where are you heading with __ALT_SMP_ASM()?
I was thinking of something along the lines of this:
diff --git a/arch/arm/include/asm/spinlock.h b/arch/arm/include/asm/spinlock.h
index 2c595446cd73..1aa321b45f63 100644
--- a/arch/arm/include/asm/spinlock.h
+++ b/arch/arm/include/asm/spinlock.h
@@ -45,6 +45,19 @@ static inline void dsb_sev(void)
__asm__(SEV);
}
+static __always_inline bool smp_enabled(void)
+{
+ if (IS_ENABLED(CONFIG_SMP))
+ asm_volatile_goto(__ALT_SMP_ASM(WASM(b) " %l[smp_on_up]", WASM(nop))
+ :::: smp_on_up);
+
+ return false;
+
+smp_on_up:
+ return true;
+}
+#define smp_enabled smp_enabled
+
#include <asm/qrwlock.h>
#include <asm/qspinlock.h>
#define smp_mb__after_spinlock() smp_mb()
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index fde943d180e0..3c456ad1661b 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -12,6 +12,10 @@
#include <asm-generic/qspinlock_types.h>
+#ifndef smp_enabled
+#define smp_enabled() (true)
+#endif
+
/**
* queued_spin_is_locked - is the spinlock locked?
* @lock: Pointer to queued spinlock structure
@@ -75,6 +79,11 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
{
u32 val = 0;
+ if (!smp_enabled()) {
+ atomic_set(&lock->val, _Q_LOCKED_VAL);
+ return;
+ }
+
if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
return;
The above is likely incorrect, non-idiomatic or inefficient, but it is a
way to avoid both a runtime check and the cmpxchg() in each spinlock.
Arnd