From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752334AbbIMK4O (ORCPT ); Sun, 13 Sep 2015 06:56:14 -0400
Received: from terminus.zytor.com ([198.137.202.10]:43202 "EHLO terminus.zytor.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751428AbbIMK4M (ORCPT ); Sun, 13 Sep 2015 06:56:12 -0400
Date: Sun, 13 Sep 2015 03:55:22 -0700
From: tip-bot for Peter Zijlstra
Message-ID:
Cc: hpa@zytor.com, tglx@linutronix.de, torvalds@linux-foundation.org,
	mingo@kernel.org, david@fromorbit.com, Waiman.Long@hp.com,
	linux-kernel@vger.kernel.org, peterz@infradead.org
Reply-To: linux-kernel@vger.kernel.org, peterz@infradead.org,
	mingo@kernel.org, Waiman.Long@hp.com, david@fromorbit.com,
	hpa@zytor.com, tglx@linutronix.de, torvalds@linux-foundation.org
In-Reply-To: <20150904152523.GR18673@twins.programming.kicks-ass.net>
References: <20150904152523.GR18673@twins.programming.kicks-ass.net>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:locking/core] locking/qspinlock/x86: Fix performance regression under unaccelerated VMs
Git-Commit-ID: 43b3f02899f74ae9914a39547cc5492156f0027a
X-Mailer: tip-git-log-daemon
Robot-ID:
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  43b3f02899f74ae9914a39547cc5492156f0027a
Gitweb:     http://git.kernel.org/tip/43b3f02899f74ae9914a39547cc5492156f0027a
Author:     Peter Zijlstra
AuthorDate: Fri, 4 Sep 2015 17:25:23 +0200
Committer:  Ingo Molnar
CommitDate: Fri, 11 Sep 2015 07:49:42 +0200

locking/qspinlock/x86: Fix performance regression under unaccelerated VMs

Dave ran into horrible performance on a VM without PARAVIRT_SPINLOCKS
set and Linus noted that the test-and-set implementation was retarded.

One should spin on the variable with a load, not a RMW.

While there, remove 'queued' from the name, as the lock isn't queued
at all, but a simple test-and-set.

Suggested-by: Linus Torvalds
Reported-by: Dave Chinner
Tested-by: Dave Chinner
Signed-off-by: Peter Zijlstra (Intel)
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Waiman Long
Cc: stable@vger.kernel.org # v4.2+
Link: http://lkml.kernel.org/r/20150904152523.GR18673@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar
---
 arch/x86/include/asm/qspinlock.h | 16 ++++++++++++----
 include/asm-generic/qspinlock.h  |  4 ++--
 kernel/locking/qspinlock.c       |  2 +-
 3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 9d51fae..8dde3bd 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -39,15 +39,23 @@ static inline void queued_spin_unlock(struct qspinlock *lock)
 }
 #endif
 
-#define virt_queued_spin_lock virt_queued_spin_lock
+#define virt_spin_lock virt_spin_lock
 
-static inline bool virt_queued_spin_lock(struct qspinlock *lock)
+static inline bool virt_spin_lock(struct qspinlock *lock)
 {
 	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
 		return false;
 
-	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
-		cpu_relax();
+	/*
+	 * On hypervisors without PARAVIRT_SPINLOCKS support we fall
+	 * back to a Test-and-Set spinlock, because fair locks have
+	 * horrible lock 'holder' preemption issues.
+	 */
+
+	do {
+		while (atomic_read(&lock->val) != 0)
+			cpu_relax();
+	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);
 
 	return true;
 }
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index 83bfb87..e2aadbc 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -111,8 +111,8 @@ static inline void queued_spin_unlock_wait(struct qspinlock *lock)
 		cpu_relax();
 }
 
-#ifndef virt_queued_spin_lock
-static __always_inline bool virt_queued_spin_lock(struct qspinlock *lock)
+#ifndef virt_spin_lock
+static __always_inline bool virt_spin_lock(struct qspinlock *lock)
 {
 	return false;
 }
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 337c881..87e9ce6a 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -289,7 +289,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	if (pv_enabled())
 		goto queue;
 
-	if (virt_queued_spin_lock(lock))
+	if (virt_spin_lock(lock))
 		return;
 
 	/*
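
The change above is the classic test-and-test-and-set pattern: waiters spin
on a plain load and only attempt the atomic RMW (cmpxchg) once the lock word
is observed free, rather than hammering cmpxchg in a loop. For readers
outside the kernel tree, here is a minimal standalone sketch of the same
pattern using C11 atomics instead of the kernel's atomic_t API; the tts_*
names and the struct layout are invented for this illustration and are not
kernel code.

/*
 * Illustrative test-and-test-and-set spinlock (NOT kernel code).
 * Uses C11 <stdatomic.h> in place of atomic_read()/atomic_cmpxchg().
 */
#include <stdatomic.h>

struct tts_lock {
        atomic_int val;         /* 0 == unlocked, 1 == locked */
};

static void tts_lock_acquire(struct tts_lock *lock)
{
        int expected;

        do {
                /*
                 * Inner loop: spin with plain loads only, so the lock
                 * word's cacheline stays shared among waiters instead
                 * of bouncing on every iteration.
                 */
                while (atomic_load_explicit(&lock->val,
                                            memory_order_relaxed) != 0)
                        ;       /* a cpu_relax()-style pause would go here */

                /*
                 * Lock looked free: now try the atomic RMW.  On failure
                 * (another CPU won the race) go back to spinning on loads.
                 * 'expected' must be reset each time, since a failed CAS
                 * overwrites it with the current value.
                 */
                expected = 0;
        } while (!atomic_compare_exchange_strong_explicit(&lock->val,
                                                          &expected, 1,
                                                          memory_order_acquire,
                                                          memory_order_relaxed));
}

static void tts_lock_release(struct tts_lock *lock)
{
        atomic_store_explicit(&lock->val, 0, memory_order_release);
}

The design point Linus raised is visible in the structure: with the
load-only inner loop, waiters read a shared cacheline and the line is
written (and migrated between CPUs) roughly once per lock handover, whereas
a cmpxchg-only loop dirties the line on every spin iteration, which is what
caused the regression on unaccelerated VMs.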