From: guoren@kernel.org
To: palmer@rivosinc.com, arnd@arndb.de, mingo@redhat.com, will@kernel.org,
	longman@redhat.com, boqun.feng@gmail.com
Cc: linux-riscv@lists.infradead.org, linux-arch@vger.kernel.org,
	linux-kernel@vger.kernel.org, Guo Ren <guoren@linux.alibaba.com>,
	Guo Ren <guoren@kernel.org>, Peter Zijlstra <peterz@infradead.org>
Subject: [PATCH V7 4/5] asm-generic: spinlock: Add combo spinlock (ticket & queued)
Date: Tue, 28 Jun 2022 04:17:06 -0400
Message-ID: <20220628081707.1997728-5-guoren@kernel.org>
In-Reply-To: <20220628081707.1997728-1-guoren@kernel.org>

From: Guo Ren <guoren@linux.alibaba.com>

Some architectures have flexible requirements on the type of spinlock.
Some LL/SC ISAs do not force the micro-architecture to provide a strong
forward-progress guarantee, so micro-architectures with different
memory models can implement the same ISA. The ticket lock suits LL/SC
micro-architectures designed around an exclusive monitor, with a
limited number of cores and no NUMA. The queued spinlock copes with
NUMA/large-scale scenarios on micro-architectures whose LL/SC provides
a strong forward-progress guarantee. So make the generic spinlock a
combo of the two, selected by a static key that defaults to the queued
spinlock.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Palmer Dabbelt <palmer@rivosinc.com>
---
 include/asm-generic/spinlock.h | 43 ++++++++++++++++++++++++++++++++--
 kernel/locking/qspinlock.c     |  2 ++
 2 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index f41dc7c2b900..a9b43089bf99 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -28,34 +28,73 @@
 #define __ASM_GENERIC_SPINLOCK_H
 
 #include <asm-generic/ticket_spinlock.h>
+#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
+#include <linux/jump_label.h>
+#include <asm-generic/qspinlock.h>
+
+DECLARE_STATIC_KEY_TRUE(use_qspinlock_key);
+#endif
+
+#undef arch_spin_is_locked
+#undef arch_spin_is_contended
+#undef arch_spin_value_unlocked
+#undef arch_spin_lock
+#undef arch_spin_trylock
+#undef arch_spin_unlock
 
 static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
 {
-	ticket_spin_lock(lock);
+#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
+	if (static_branch_likely(&use_qspinlock_key))
+		queued_spin_lock(lock);
+	else
+#endif
+	ticket_spin_lock(lock);
 }
 
 static __always_inline bool arch_spin_trylock(arch_spinlock_t *lock)
 {
+#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
+	if (static_branch_likely(&use_qspinlock_key))
+		return queued_spin_trylock(lock);
+#endif
 	return ticket_spin_trylock(lock);
 }
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	ticket_spin_unlock(lock);
+#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
+	if (static_branch_likely(&use_qspinlock_key))
+		queued_spin_unlock(lock);
+	else
+#endif
+	ticket_spin_unlock(lock);
 }
 
 static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
+#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
+	if (static_branch_likely(&use_qspinlock_key))
+		return queued_spin_is_locked(lock);
+#endif
 	return ticket_spin_is_locked(lock);
 }
 
 static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
+#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
+	if (static_branch_likely(&use_qspinlock_key))
+		return queued_spin_is_contended(lock);
+#endif
 	return ticket_spin_is_contended(lock);
 }
 
 static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 {
+#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
+	if (static_branch_likely(&use_qspinlock_key))
+		return queued_spin_value_unlocked(lock);
+#endif
 	return ticket_spin_value_unlocked(lock);
 }
 
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 65a9a10caa6f..b7f7436f42f6 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -566,6 +566,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 }
 EXPORT_SYMBOL(queued_spin_lock_slowpath);
 
+DEFINE_STATIC_KEY_TRUE_RO(use_qspinlock_key);
+
 /*
  * Generate the paravirt code for queued_spin_unlock_slowpath().
  */
-- 
2.36.1
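
For illustration, a minimal sketch (not part of this patch) of how an
architecture could drive use_qspinlock_key at boot. The probe
cpu_has_llsc_forward_progress() is a hypothetical stand-in for whatever
CPU-feature or errata check an architecture would actually use:

#include <linux/init.h>
#include <linux/jump_label.h>

/*
 * Declared by this patch in <asm-generic/spinlock.h> and defined in
 * kernel/locking/qspinlock.c; defaults to true, i.e. the qspinlock path.
 */
DECLARE_STATIC_KEY_TRUE(use_qspinlock_key);

/*
 * Hypothetical probe: does this implementation's LL/SC provide a
 * strong forward-progress guarantee?
 */
extern bool cpu_has_llsc_forward_progress(void);

static void __init setup_spinlock_flavor(void)
{
	/*
	 * The key is DEFINE_STATIC_KEY_TRUE_RO, i.e. __ro_after_init,
	 * so it can only be flipped during early boot, before the
	 * ro_after_init section is write-protected. Disabling it makes
	 * every arch_spin_*() call take the ticket-lock path.
	 */
	if (!cpu_has_llsc_forward_progress())
		static_branch_disable(&use_qspinlock_key);
}

Because both lock flavors share the same arch_spinlock_t layout (patch
2/5 of this series), the choice can be made once at boot without any
translation of live lock words.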