From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Peter Zijlstra <peterz@infradead.org>
Cc: linux-arch@vger.kernel.org, x86@kernel.org,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	Paolo Bonzini <paolo.bonzini@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Rik van Riel <riel@redhat.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Oleg Nesterov <oleg@redhat.com>,
	Daniel J Blueman <daniel@numascale.com>,
	Scott J Norton <scott.norton@hp.com>,
	Douglas Hatch <doug.hatch@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>
Subject: [PATCH v16 06/14] qspinlock: Use a simple write to grab the lock
Date: Fri, 24 Apr 2015 14:56:35 -0400
Message-ID: <1429901803-29771-7-git-send-email-Waiman.Long@hp.com>
In-Reply-To: <1429901803-29771-1-git-send-email-Waiman.Long@hp.com>

Currently, atomic_cmpxchg() is used to get the lock. However, this is
not really necessary if there is more than one task in the queue and
the queue head doesn't need to reset the tail code. In that case, a
simple write to set the lock bit is enough, as the queue head is the
only one eligible to get the lock once it has seen that both the lock
and pending bits are clear. The current pending bit waiting code
ensures that the pending bit will not be set again once the tail code
has been written into the lock word.
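
For illustration, here is a stand-alone sketch of the queue head's new
acquisition step, using C11 atomics in place of the kernel's atomic API.
The helper name head_acquire is hypothetical; _Q_LOCKED_VAL, the tail
encoding and the byte store mirror the patch below, but this fragment is
a simplified model, not the kernel source. It assumes a little-endian
lock word and that the caller has already waited for both the locked and
pending bits to clear:

  #include <stdatomic.h>
  #include <stdint.h>

  #define _Q_LOCKED_VAL  1U

  static void head_acquire(_Atomic uint32_t *lockword, uint32_t tail)
  {
          uint32_t val = atomic_load_explicit(lockword,
                                              memory_order_acquire);

          for (;;) {
                  if (val != tail) {
                          /*
                           * Another CPU is queued behind us, so the tail
                           * code can no longer revert to ours: a plain
                           * byte store of the lock bit suffices
                           * (little-endian layout assumed).
                           */
                          *(volatile uint8_t *)lockword = _Q_LOCKED_VAL;
                          break;
                  }
                  /*
                   * We are the last queued CPU: clearing the tail code
                   * must be atomic against a concurrent enqueue, hence
                   * the cmpxchg is still needed for this case only.
                   */
                  if (atomic_compare_exchange_strong(lockword, &val,
                                                     _Q_LOCKED_VAL))
                          break;  /* tail cleared, lock taken uncontended */
                  /* cmpxchg failed: val was reloaded, retry the check */
          }
  }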

With that change, there is a slight improvement in the performance of
the queue spinlock in the 5M-loop micro-benchmark run on a 4-socket
Westmere-EX machine, as shown in the tables below.

		[Standalone/Embedded - same node]
  # of tasks	Before patch	After patch	%Change
  ----------	-----------	----------	-------
       3	 2324/2321	2248/2265	 -3%/-2%
       4	 2890/2896	2819/2831	 -2%/-2%
       5	 3611/3595	3522/3512	 -2%/-2%
       6	 4281/4276	4173/4160	 -3%/-3%
       7	 5018/5001	4875/4861	 -3%/-3%
       8	 5759/5750	5563/5568	 -3%/-3%

		[Standalone/Embedded - different nodes]
  # of tasks	Before patch	After patch	%Change
  ----------	-----------	----------	-------
       3	12242/12237	12087/12093	 -1%/-1%
       4	10688/10696	10507/10521	 -2%/-2%

It was also found that this change produced a much bigger performance
improvement on the newer IvyBridge-EX chip, essentially closing the
performance gap between the ticket spinlock and the queue spinlock.

The disk workload of the AIM7 benchmark was run on a 4-socket
Westmere-EX machine with both ext4 and xfs RAM disks at 3000 users
on a 3.14-based kernel. The results of the test runs were:

                AIM7 XFS Disk Test
  kernel                 JPM    Real Time   Sys Time    Usr Time
  -----                  ---    ---------   --------    --------
  ticketlock            5678233    3.17       96.61       5.81
  qspinlock             5750799    3.13       94.83       5.97

                AIM7 EXT4 Disk Test
  kernel                 JPM    Real Time   Sys Time    Usr Time
  -----                  ---    ---------   --------    --------
  ticketlock            1114551   16.15      509.72       7.11
  qspinlock             2184466    8.24      232.99       6.01

The ext4 filesystem run had a much higher spinlock contention than
the xfs filesystem run.

The "ebizzy -m" test was also run with the following results:

  kernel               records/s  Real Time   Sys Time    Usr Time
  -----                ---------  ---------   --------    --------
  ticketlock             2075       10.00      216.35       3.49
  qspinlock              3023       10.00      198.20       4.80

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/locking/qspinlock.c |   66 +++++++++++++++++++++++++++++++++----------
 1 files changed, 50 insertions(+), 16 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index bcc99e6..99503ef 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -105,24 +105,37 @@ static inline struct mcs_spinlock *decode_tail(u32 tail)
  * By using the whole 2nd least significant byte for the pending bit, we
  * can allow better optimization of the lock acquisition for the pending
  * bit holder.
+ *
+ * This internal structure is also used by the set_locked function which
+ * is not restricted to _Q_PENDING_BITS == 8.
  */
-#if _Q_PENDING_BITS == 8
-
 struct __qspinlock {
 	union {
 		atomic_t val;
-		struct {
 #ifdef __LITTLE_ENDIAN
+		struct {
+			u8	locked;
+			u8	pending;
+		};
+		struct {
 			u16	locked_pending;
 			u16	tail;
+		};
 #else
+		struct {
 			u16	tail;
 			u16	locked_pending;
-#endif
 		};
+		struct {
+			u8	reserved[2];
+			u8	pending;
+			u8	locked;
+		};
+#endif
 	};
 };
 
+#if _Q_PENDING_BITS == 8
 /**
  * clear_pending_set_locked - take ownership and clear the pending bit.
  * @lock: Pointer to queue spinlock structure
@@ -195,6 +208,19 @@ static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 #endif /* _Q_PENDING_BITS == 8 */
 
 /**
+ * set_locked - Set the lock bit and own the lock
+ * @lock: Pointer to queue spinlock structure
+ *
+ * *,*,0 -> *,0,1
+ */
+static __always_inline void set_locked(struct qspinlock *lock)
+{
+	struct __qspinlock *l = (void *)lock;
+
+	WRITE_ONCE(l->locked, _Q_LOCKED_VAL);
+}
+
+/**
  * queue_spin_lock_slowpath - acquire the queue spinlock
  * @lock: Pointer to queue spinlock structure
  * @val: Current value of the queue spinlock 32-bit word
@@ -329,8 +355,14 @@ queue:
 	 * go away.
 	 *
 	 * *,x,y -> *,0,0
+	 *
+	 * this wait loop must use a load-acquire such that we match the
+	 * store-release that clears the locked bit and create lock
+	 * sequentiality; this is because the set_locked() function below
+	 * does not imply a full barrier.
+	 *
 	 */
-	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK)
+	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_PENDING_MASK)
 		cpu_relax();
 
 	/*
@@ -338,15 +370,19 @@ queue:
 	 *
 	 * n,0,0 -> 0,0,1 : lock, uncontended
 	 * *,0,0 -> *,0,1 : lock, contended
+	 *
+	 * If the queue head is the only one in the queue (lock value == tail),
+	 * clear the tail code and grab the lock. Otherwise, we only need
+	 * to grab the lock.
 	 */
 	for (;;) {
-		new = _Q_LOCKED_VAL;
-		if (val != tail)
-			new |= val;
-
-		old = atomic_cmpxchg(&lock->val, val, new);
-		if (old == val)
+		if (val != tail) {
+			set_locked(lock);
 			break;
+		}
+		old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
+		if (old == val)
+			goto release;	/* No contention */
 
 		val = old;
 	}
@@ -354,12 +390,10 @@ queue:
 	/*
 	 * contended path; wait for next, release.
 	 */
-	if (new != _Q_LOCKED_VAL) {
-		while (!(next = READ_ONCE(node->next)))
-			cpu_relax();
+	while (!(next = READ_ONCE(node->next)))
+		cpu_relax();
 
-		arch_mcs_spin_unlock_contended(&next->locked);
-	}
+	arch_mcs_spin_unlock_contended(&next->locked);
 
 release:
 	/*
-- 
1.7.1
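
As an aside, the endian-dependent union layout that makes the
single-byte store in set_locked() safe can be checked with a small
user-space program. This is an illustrative sketch of the little-endian
variant only; the union and field names mirror the patch, but it is not
the kernel header:

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  /*
   * Stand-alone model of the __qspinlock union from the patch,
   * little-endian variant only; a sketch, not the kernel structure.
   */
  union qsl {
          uint32_t val;
          struct {
                  uint8_t  locked;        /* bits  0-7  */
                  uint8_t  pending;       /* bits  8-15 */
                  uint16_t tail;          /* bits 16-31 */
          };
  };

  int main(void)
  {
          union qsl l = { .val = 0 };

          l.tail   = 0xabcd;      /* tail code of some queued CPU */
          l.locked = 1;           /* the byte store done by set_locked() */

          /* Only bits 0-7 changed; pending and tail are untouched. */
          assert(l.val == (((uint32_t)0xabcd << 16) | 1));
          printf("lock word = 0x%08x\n", l.val);
          return 0;
  }

On a big-endian machine the asserted layout would not hold, which is
why the patch carries separate #ifdef __LITTLE_ENDIAN / #else variants
of the structure.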

