* [RFC PATCH] QRCU fastpath optimization
From: Paul E. McKenney @ 2007-02-12  4:48 UTC
  To: linux-kernel; +Cc: oleg, a.p.zijlstra, mingo, hch, akpm, stern

This patch optimizes the "quick" RCU update-side fastpath, so that in the
absence of readers, synchronize_qrcu() does four non-atomic comparisons
and three memory barriers, eliminating the need to acquire the global
lock in this case.  Lightly tested.  Algorithm has been validated for
the 3-reader-2-updater and 2-reader-3-updater cases -- the 3-reader-3-updater
case is still to be done (I expect to get access to a large-memory machine
in the next few weeks -- need >>20GB).
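
For reference: one element of qp->ctr[] always carries a bias of 1 for the
side that readers are currently using, so (except transiently, while a
counter switch is in flight) the two counters sum to one more than the
number of active readers -- a sum of "1" therefore means "no readers".
The read side (per Oleg's original patch, reproduced here only for
context) looks roughly like this:

	int qrcu_read_lock(struct qrcu_struct *qp)
	{
		for (;;) {
			int idx = qp->completed & 0x1;

			/* Fails only if we raced with a counter switch. */
			if (likely(atomic_inc_not_zero(qp->ctr + idx)))
				return idx;
		}
	}

	void qrcu_read_unlock(struct qrcu_struct *qp, int idx)
	{
		/* Last reader on this side wakes a waiting updater. */
		if (atomic_dec_and_test(qp->ctr + idx))
			wake_up(&qp->wq);
	}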

Not for inclusion.  This patch is against Oleg's original, and likely
needs to be rediffed against Jens's patchstack.  I will do this rediffing
later; first I want an easy-to-test and easy-to-inspect version.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---

 srcu.c |   42 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 37 insertions(+), 5 deletions(-)

diff -urpNa -X dontdiff linux-2.6.19-qrcu/kernel/srcu.c linux-2.6.19-qrcu-fp/kernel/srcu.c
--- linux-2.6.19-qrcu/kernel/srcu.c	2007-02-05 16:27:50.000000000 -0800
+++ linux-2.6.19-qrcu-fp/kernel/srcu.c	2007-02-11 18:10:35.000000000 -0800
@@ -287,23 +287,55 @@ void synchronize_qrcu(struct qrcu_struct
 {
 	int idx;
 
-	smp_mb();
+	smp_mb();  /* Force preceding change to happen before fastpath check. */
+
+	/*
+	 * Fastpath: If the two counters sum to "1" at a given point in
+	 * time, there are no readers.  However, it takes two separate
+	 * loads to sample both counters, which won't occur simultaneously.
+	 * We therefore might race with a counter switch: we might see
+	 * ctr[0]==0, then the counter might switch, then we might see
+	 * ctr[1]==1 (unbeknownst to us, because a reader is still on the
+	 * old side).  So we do a read memory barrier and recheck.  If the
+	 * same race happens again, there must have been a second counter
+	 * switch.  This second counter switch could not have happened
+	 * until all preceding readers finished, so if the condition
+	 * is true both times, we may safely proceed.
+	 *
+	 * This relies critically on the atomic increment and atomic
+	 * decrement being seen as executing in order.
+	 */
+
+	if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1) {
+		smp_rmb();  /* Keep two checks independent. */
+		if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1)
+			goto out;
+	}
+
 	mutex_lock(&qp->mutex);
 
 	idx = qp->completed & 0x1;
 	if (atomic_read(qp->ctr + idx) == 1)
-		goto out;
+		goto out_unlock;
 
 	atomic_inc(qp->ctr + (idx ^ 0x1));
-	/* Reduce the likelihood that qrcu_read_lock() will loop */
+
+	/*
+	 * Prevent subsequent decrement from being seen before previous
+	 * increment -- such an inversion could cause the fastpath
+	 * above to falsely conclude that there were no readers.  Also,
+	 * reduce the likelihood that qrcu_read_lock() will loop.
+	 */
+
 	smp_mb__after_atomic_inc();
 	qp->completed++;
 
 	atomic_dec(qp->ctr + idx);
 	__wait_event(qp->wq, !atomic_read(qp->ctr + idx));
-out:
+out_unlock:
 	mutex_unlock(&qp->mutex);
-	smp_mb();
+out:
+	smp_mb();  /* Force subsequent free after qrcu_read_unlock(). */
 }
 
 EXPORT_SYMBOL_GPL(init_qrcu_struct);
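
To make the race described in the fastpath comment concrete, here is the
interleaving that a single unordered sample would get wrong (illustrative
trace only; the CPU assignments are made up):

	CPU 0 (fastpath)        CPU 1 (reader)     CPU 2 (updater, slowpath)
	----------------        --------------     -------------------------
	                        inc ctr[1] -> 2
	load ctr[0] == 0
	                                           inc ctr[0] -> 1  (new bias)
	                                           completed++      (switch)
	                                           dec ctr[1] -> 1  (old bias)
	load ctr[1] == 1
	sum == 1, but CPU 1 is still in its read-side critical section

In this trace, the recheck after the smp_rmb() sees ctr[0]'s new bias
(the slowpath's smp_mb__after_atomic_inc() rules out the inc/dec
inversion), gets a sum of 2, and correctly falls through to the slowpath.
And if the recheck instead sees a sum of 1 again, a second counter switch
must have intervened, which (as the comment says) could not have happened
until all of the earlier readers had finished -- so taking the fastpath
is safe either way.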


* Re: [RFC PATCH] QRCU fastpath optimization
From: Jens Axboe @ 2007-02-12  6:22 UTC
  To: Paul E. McKenney
  Cc: linux-kernel, oleg, a.p.zijlstra, mingo, hch, akpm, stern

On Sun, Feb 11 2007, Paul E. McKenney wrote:
> This patch optimizes the "quick" RCU update-side fastpath, so that in the
> absence of readers, synchronize_qrcu() does four non-atomic comparisons
> and three memory barriers, eliminating the need to acquire the global
> lock in this case.  Lightly tested.  Algorithm has been validated for
> the 3-reader-2-updater and 2-reader-3-updater cases -- the 3-reader-3-updater
> case is still to be done (I expect to get access to a large-memory machine
> in the next few weeks -- need >>20GB).
> 
> Not for inclusion.  This patch is against Oleg's original, and likely
> needs to be rediffed against Jens's patchstack.  I will do this rediffing
> later; first I want an easy-to-test and easy-to-inspect version.

I'd suggest just merging this optimization into the original QRCU patch.
Once you are happy with the validation, I'll add it to the plug branch
as well.
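
The case it targets is an updater running while no readers are active --
something like the following (hypothetical caller; foo_qrcu, foo_ptr, and
friends are made up for illustration):

	/* Reader: */
	idx = qrcu_read_lock(&foo_qrcu);
	p = rcu_dereference(foo_ptr);
	/* ... use p ... */
	qrcu_read_unlock(&foo_qrcu, idx);

	/* Updater: */
	old = foo_ptr;
	rcu_assign_pointer(foo_ptr, new_p);
	synchronize_qrcu(&foo_qrcu);	/* lock-free fastpath if no readers */
	kfree(old);

With the fastpath, the synchronize_qrcu() above never touches
foo_qrcu.mutex unless a reader is (or very recently was) active.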

Version against the plug branch below.

diff --git a/kernel/srcu.c b/kernel/srcu.c
index 53c6989..bfe347a 100644
--- a/kernel/srcu.c
+++ b/kernel/srcu.c
@@ -324,28 +324,53 @@ void synchronize_qrcu(struct qrcu_struct *qp)
 {
 	int idx;
 
+	smp_mb();  /* Force preceding change to happen before fastpath check. */
+
 	/*
-	 * The following memory barrier is needed to ensure that
-	 * any prior data-structure manipulation is seen by other
-	 * CPUs to happen before picking up the value of
-	 * qp->completed.
+	 * Fastpath: If the two counters sum to "1" at a given point in
+	 * time, there are no readers.  However, it takes two separate
+	 * loads to sample both counters, which won't occur simultaneously.
+	 * We therefore might race with a counter switch: we might see
+	 * ctr[0]==0, then the counter might switch, then we might see
+	 * ctr[1]==1 (unbeknownst to us, because a reader is still on the
+	 * old side).  So we do a read memory barrier and recheck.  If the
+	 * same race happens again, there must have been a second counter
+	 * switch.  This second counter switch could not have happened
+	 * until all preceding readers finished, so if the condition
+	 * is true both times, we may safely proceed.
+	 *
+	 * This relies critically on the atomic increment and atomic
+	 * decrement being seen as executing in order.
 	 */
-	smp_mb();
+
+	if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1) {
+		smp_rmb();  /* Keep two checks independent. */
+		if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1)
+			goto out;
+	}
+
 	mutex_lock(&qp->mutex);
 
 	idx = qp->completed & 0x1;
 	if (atomic_read(qp->ctr + idx) == 1)
-		goto out;
+		goto out_unlock;
 
 	atomic_inc(qp->ctr + (idx ^ 0x1));
-	/* Reduce the likelihood that qrcu_read_lock() will loop */
+
+	/*
+	 * Prevent subsequent decrement from being seen before previous
+	 * increment -- such an inversion could cause the fastpath
+	 * above to falsely conclude that there were no readers.  Also,
+	 * reduce the likelihood that qrcu_read_lock() will loop.
+	 */
 	smp_mb__after_atomic_inc();
 	qp->completed++;
 
 	atomic_dec(qp->ctr + idx);
 	__wait_event(qp->wq, !atomic_read(qp->ctr + idx));
-out:
+out_unlock:
 	mutex_unlock(&qp->mutex);
+out:
 	smp_mb();
 	/*
 	 * The above smp_mb() is needed in the case that we

-- 
Jens Axboe



* Re: [RFC PATCH] QRCU fastpath optimization
From: Paul E. McKenney @ 2007-02-13  2:49 UTC
  To: Jens Axboe; +Cc: linux-kernel, oleg, a.p.zijlstra, mingo, hch, akpm, stern

On Mon, Feb 12, 2007 at 07:22:09AM +0100, Jens Axboe wrote:
> On Sun, Feb 11 2007, Paul E. McKenney wrote:
> > This patch optimizes the "quick" RCU update-side fastpath, so that in the
> > absence of readers, synchronize_qrcu() does four non-atomic comparisons
> > and three memory barriers, eliminating the need to acquire the global
> > lock in this case.  Lightly tested.  Algorithm has been validated for
> > the 3-reader-2-updater and 2-reader-3-updater cases -- the 3-reader-3-updater
> > case is still to be done (I expect to get access to a large-memory machine
> > in the next few weeks -- need >>20GB).
> > 
> > Not for inclusion.  This patch is against Oleg's original, and likely
> > needs to be rediffed against Jens's patchstack.  I will do this rediffing
> > later; first I want an easy-to-test and easy-to-inspect version.
> 
> I'd suggest just merging this optimization into the original QRCU patch.
> Once you are happy with the validation, I'll add it to the plug branch
> as well.
> 
> Version against the plug branch below.

Thank you very much!!!

One way or another, I will get you something in a form friendly to your
patch stack.

							Thanx, Paul

> diff --git a/kernel/srcu.c b/kernel/srcu.c
> index 53c6989..bfe347a 100644
> --- a/kernel/srcu.c
> +++ b/kernel/srcu.c
> @@ -324,28 +324,53 @@ void synchronize_qrcu(struct qrcu_struct *qp)
>  {
>  	int idx;
> 
> +	smp_mb();  /* Force preceding change to happen before fastpath check. */
> +
>  	/*
> -	 * The following memory barrier is needed to ensure that
> -	 * any prior data-structure manipulation is seen by other
> -	 * CPUs to happen before picking up the value of
> -	 * qp->completed.
> +	 * Fastpath: If the two counters sum to "1" at a given point in
> +	 * time, there are no readers.  However, it takes two separate
> +	 * loads to sample both counters, which won't occur simultaneously.
> +	 * We therefore might race with a counter switch: we might see
> +	 * ctr[0]==0, then the counter might switch, then we might see
> +	 * ctr[1]==1 (unbeknownst to us, because a reader is still on the
> +	 * old side).  So we do a read memory barrier and recheck.  If the
> +	 * same race happens again, there must have been a second counter
> +	 * switch.  This second counter switch could not have happened
> +	 * until all preceding readers finished, so if the condition
> +	 * is true both times, we may safely proceed.
> +	 *
> +	 * This relies critically on the atomic increment and atomic
> +	 * decrement being seen as executing in order.
>  	 */
> -	smp_mb();
> +
> +	if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1) {
> +		smp_rmb();  /* Keep two checks independent. */
> +		if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1)
> +			goto out;
> +	}
> +
>  	mutex_lock(&qp->mutex);
> 
>  	idx = qp->completed & 0x1;
>  	if (atomic_read(qp->ctr + idx) == 1)
> -		goto out;
> +		goto out_unlock;
> 
>  	atomic_inc(qp->ctr + (idx ^ 0x1));
> -	/* Reduce the likelihood that qrcu_read_lock() will loop */
> +
> +	/*
> +	 * Prevent subsequent decrement from being seen before previous
> +	 * increment -- such an inversion could cause the fastpath
> +	 * above to falsely conclude that there were no readers.  Also,
> +	 * reduce the likelihood that qrcu_read_lock() will loop.
> +	 */
>  	smp_mb__after_atomic_inc();
>  	qp->completed++;
> 
>  	atomic_dec(qp->ctr + idx);
>  	__wait_event(qp->wq, !atomic_read(qp->ctr + idx));
> -out:
> +out_unlock:
>  	mutex_unlock(&qp->mutex);
> +out:
>  	smp_mb();
>  	/*
>  	 * The above smp_mb() is needed in the case that we
> 
> -- 
> Jens Axboe
> 

