From: Lai Jiangshan <eag0628@gmail.com>
To: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Cc: tglx@linutronix.de, peterz@infradead.org, tj@kernel.org,
	oleg@redhat.com, paulmck@linux.vnet.ibm.com,
	rusty@rustcorp.com.au, mingo@kernel.org,
	akpm@linux-foundation.org, namhyung@kernel.org,
	rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
	xiaoguangrong@linux.vnet.ibm.com, rjw@sisk.pl, sbw@mit.edu,
	fweisbec@gmail.com, linux@arm.linux.org.uk,
	nikunj@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	walken@google.com, vincent.guittot@linaro.org
Subject: Re: [PATCH v6 04/46] percpu_rwlock: Implement the core design of Per-CPU Reader-Writer Locks
Date: Tue, 26 Feb 2013 22:17:03 +0800	[thread overview]
Message-ID: <CACvQF529c72xLuosD44CPd_y_0zJBc-tjgNQpBiD0UhbzS8XDg@mail.gmail.com> (raw)
In-Reply-To: <20130218123856.26245.46705.stgit@srivatsabhat.in.ibm.com>

On Mon, Feb 18, 2013 at 8:38 PM, Srivatsa S. Bhat
<srivatsa.bhat@linux.vnet.ibm.com> wrote:
> Using global rwlocks as the backend for per-CPU rwlocks helps us avoid many
> lock-ordering related problems (unlike per-cpu locks). However, global
> rwlocks lead to unnecessary cache-line bouncing even when there are no
> writers present, which can slow down the system needlessly.

Per-CPU rwlocks (both yours and mine) are exactly the same as rwlock_t
from the point of view of lock dependencies (except that a reader
critical section can nest inside a writer critical section), so they
can deadlock in this order:

CPU 0						CPU 1

spin_lock(some_lock);				percpu_write_lock_irqsave()
						case CPU_DYING:
percpu_read_lock_irqsafe();	<---deadlock--->	spin_lock(some_lock);

Lockdep can detect such dependencies, but we must try our best to find
them all before the patchset is merged to mainline. We can review all
the code in cpu_disable() and the CPU_DYING notifiers and fix these
kinds of lock dependencies, but that is not easy; it may be a long-term
project.
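
In code form, the two racing paths would look roughly like the
following sketch (illustrative only: everything except some_lock and
the percpu-rwlock API named above is a hypothetical name, and the
signatures are abbreviated):

	/* Path A: an atomic context (e.g. an interrupt handler) */
	static void path_a(void)
	{
		spin_lock(&some_lock);			/* step 1 */
		/* step 3: blocks -- the hotplug writer already holds
		 * the global rwlock for write, so the reader slowpath
		 * has to wait: */
		percpu_read_lock_irqsafe(&hotplug_lock);
		/* ... */
		percpu_read_unlock_irqsafe(&hotplug_lock);
		spin_unlock(&some_lock);
	}

	/* Path B: a CPU_DYING notifier, which runs while the hotplug
	 * code holds percpu_write_lock_irqsave(&hotplug_lock) (step 2) */
	static int path_b_dying(struct notifier_block *nb,
				unsigned long action, void *hcpu)
	{
		/* step 4: held by path A --> ABBA deadlock */
		spin_lock(&some_lock);
		/* ... */
		spin_unlock(&some_lock);
		return NOTIFY_OK;
	}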

======
Also, if there is any CPU_DYING code that takes no locks while doing
its work (because it knows it is called via stop_machine()), we would
need to add the locking code back. (I don't know whether such code
exists or not.)
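
For illustration, the kind of code meant here might look like this
(entirely hypothetical names; it is safe without a lock today only
because stop_machine() keeps every other CPU spinning with interrupts
disabled while the CPU_DYING notifiers run):

	/* Hypothetical CPU_DYING callback that updates shared state
	 * locklessly, relying on the stop_machine() guarantee: */
	static int foo_cpu_dying(struct notifier_block *nb,
				 unsigned long action, void *hcpu)
	{
		unsigned int cpu = (unsigned long)hcpu;

		if (action == CPU_DYING) {
			/* With stop_machine()-free hotplug, this would
			 * need e.g. spin_lock(&foo_list_lock) around it: */
			list_del(&per_cpu(foo_list_node, cpu));
		}
		return NOTIFY_OK;
	}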

>
> Per-cpu counters can help solve the cache-line bouncing problem. So we
> actually use the best of both: per-cpu counters (no-waiting) at the reader
> side in the fast-path, and global rwlocks in the slowpath.
>
> [ Fastpath = no writer is active; Slowpath = a writer is active ]
>
> IOW, the readers just increment/decrement their per-cpu refcounts (disabling
> interrupts during the updates, if necessary) when no writer is active.
> When a writer becomes active, he signals all readers to switch to global
> rwlocks for the duration of his activity. The readers switch over when it
> is safe for them (ie., when they are about to start a fresh, non-nested
> read-side critical section) and start using (holding) the global rwlock for
> read in their subsequent critical sections.
>
> The writer waits for every existing reader to switch, and then acquires the
> global rwlock for write and enters his critical section. Later, the writer
> signals all readers that he is done, and that they can go back to using their
> per-cpu refcounts again.
>
> Note that the lock-safety (despite the per-cpu scheme) comes from the fact
> that the readers can *choose* _when_ to switch to rwlocks upon the writer's
> signal. And the readers don't wait on anybody based on the per-cpu counters.
> The only true synchronization that involves waiting at the reader-side in this
> scheme, is the one arising from the global rwlock, which is safe from circular
> locking dependency issues.
>
> Reader-writer locks and per-cpu counters are recursive, so they can be
> used in a nested fashion in the reader-path, which makes per-CPU rwlocks also
> recursive. Also, this design of switching the synchronization scheme ensures
> that you can safely nest and use these locks in a very flexible manner.
>
> I'm indebted to Michael Wang and Xiao Guangrong for their numerous thoughtful
> suggestions and ideas, which inspired and influenced many of the decisions in
> this as well as previous designs. Thanks a lot Michael and Xiao!
>
> Cc: David Howells <dhowells@redhat.com>
> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
> ---
>
>  lib/percpu-rwlock.c |  139 ++++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 137 insertions(+), 2 deletions(-)
>
> diff --git a/lib/percpu-rwlock.c b/lib/percpu-rwlock.c
> index f938096..edefdea 100644
> --- a/lib/percpu-rwlock.c
> +++ b/lib/percpu-rwlock.c
> @@ -27,6 +27,24 @@
>  #include <linux/percpu-rwlock.h>
>  #include <linux/errno.h>
>
> +#include <asm/processor.h>
> +
> +
> +#define reader_yet_to_switch(pcpu_rwlock, cpu)                             \
> +       (ACCESS_ONCE(per_cpu_ptr((pcpu_rwlock)->rw_state, cpu)->reader_refcnt))
> +
> +#define reader_percpu_nesting_depth(pcpu_rwlock)                 \
> +       (__this_cpu_read((pcpu_rwlock)->rw_state->reader_refcnt))
> +
> +#define reader_uses_percpu_refcnt(pcpu_rwlock)                         \
> +                               reader_percpu_nesting_depth(pcpu_rwlock)
> +
> +#define reader_nested_percpu(pcpu_rwlock)                              \
> +                       (reader_percpu_nesting_depth(pcpu_rwlock) > 1)
> +
> +#define writer_active(pcpu_rwlock)                                     \
> +       (__this_cpu_read((pcpu_rwlock)->rw_state->writer_signal))
> +
>
>  int __percpu_init_rwlock(struct percpu_rwlock *pcpu_rwlock,
>                          const char *name, struct lock_class_key *rwlock_key)
> @@ -55,21 +73,138 @@ void percpu_free_rwlock(struct percpu_rwlock *pcpu_rwlock)
>
>  void percpu_read_lock(struct percpu_rwlock *pcpu_rwlock)
>  {
> -       read_lock(&pcpu_rwlock->global_rwlock);
> +       preempt_disable();
> +
> +       /*
> +        * Let the writer know that a reader is active, even before we choose
> +        * our reader-side synchronization scheme.
> +        */
> +       this_cpu_inc(pcpu_rwlock->rw_state->reader_refcnt);
> +
> +       /*
> +        * If we are already using per-cpu refcounts, it is not safe to switch
> +        * the synchronization scheme. So continue using the refcounts.
> +        */
> +       if (reader_nested_percpu(pcpu_rwlock))
> +               return;
> +
> +       /*
> +        * The write to 'reader_refcnt' must be visible before we read
> +        * 'writer_signal'.
> +        */
> +       smp_mb();
> +
> +       if (likely(!writer_active(pcpu_rwlock))) {
> +               goto out;
> +       } else {
> +               /* Writer is active, so switch to global rwlock. */
> +               read_lock(&pcpu_rwlock->global_rwlock);
> +
> +               /*
> +                * We might have raced with a writer going inactive before we
> +                * took the read-lock. So re-evaluate whether we still need to
> +                * hold the rwlock or if we can switch back to per-cpu
> +                * refcounts. (This also helps avoid heterogeneous nesting of
> +                * readers).
> +                */
> +               if (writer_active(pcpu_rwlock)) {
> +                       /*
> +                        * The above writer_active() check can get reordered
> +                        * with this_cpu_dec() below, but this is OK, because
> +                        * holding the rwlock is conservative.
> +                        */
> +                       this_cpu_dec(pcpu_rwlock->rw_state->reader_refcnt);
> +               } else {
> +                       read_unlock(&pcpu_rwlock->global_rwlock);
> +               }
> +       }
> +
> +out:
> +       /* Prevent reordering of any subsequent reads/writes */
> +       smp_mb();
>  }
>
>  void percpu_read_unlock(struct percpu_rwlock *pcpu_rwlock)
>  {
> -       read_unlock(&pcpu_rwlock->global_rwlock);
> +       /*
> +        * We never allow heterogeneous nesting of readers. So it is trivial
> +        * to find out the kind of reader we are, and undo the operation
> +        * done by our corresponding percpu_read_lock().
> +        */
> +
> +       /* Try to fast-path: a nested percpu reader is the simplest case */
> +       if (reader_nested_percpu(pcpu_rwlock)) {
> +               this_cpu_dec(pcpu_rwlock->rw_state->reader_refcnt);
> +               preempt_enable();
> +               return;
> +       }
> +
> +       /*
> +        * Now we are left with only 2 options: a non-nested percpu reader,
> +        * or a reader holding rwlock
> +        */
> +       if (reader_uses_percpu_refcnt(pcpu_rwlock)) {
> +               /*
> +                * Complete the critical section before decrementing the
> +                * refcnt. We can optimize this away if we are a nested
> +                * reader (the case above).
> +                */
> +               smp_mb();
> +               this_cpu_dec(pcpu_rwlock->rw_state->reader_refcnt);
> +       } else {
> +               read_unlock(&pcpu_rwlock->global_rwlock);
> +       }
> +
> +       preempt_enable();
>  }
>
>  void percpu_write_lock(struct percpu_rwlock *pcpu_rwlock)
>  {
> +       unsigned int cpu;
> +
> +       /*
> +        * Tell all readers that a writer is becoming active, so that they
> +        * start switching over to the global rwlock.
> +        */
> +       for_each_possible_cpu(cpu)
> +               per_cpu_ptr(pcpu_rwlock->rw_state, cpu)->writer_signal = true;
> +
> +       smp_mb();
> +
> +       /*
> +        * Wait for every reader to see the writer's signal and switch from
> +        * percpu refcounts to global rwlock.
> +        *
> +        * If a reader is still using percpu refcounts, wait for him to switch.
> +        * Else, we can safely go ahead, because either the reader has already
> +        * switched over, or the next reader that comes along on that CPU will
> +        * notice the writer's signal and will switch over to the rwlock.
> +        */
> +
> +       for_each_possible_cpu(cpu) {
> +               while (reader_yet_to_switch(pcpu_rwlock, cpu))
> +                       cpu_relax();
> +       }
> +
> +       smp_mb(); /* Complete the wait-for-readers, before taking the lock */
>         write_lock(&pcpu_rwlock->global_rwlock);
>  }
>
>  void percpu_write_unlock(struct percpu_rwlock *pcpu_rwlock)
>  {
> +       unsigned int cpu;
> +
> +       /* Complete the critical section before clearing ->writer_signal */
> +       smp_mb();
> +
> +       /*
> +        * Inform all readers that we are done, so that they can switch back
> +        * to their per-cpu refcounts. (We don't need to wait for them to
> +        * see it).
> +        */
> +       for_each_possible_cpu(cpu)
> +               per_cpu_ptr(pcpu_rwlock->rw_state, cpu)->writer_signal = false;
> +
>         write_unlock(&pcpu_rwlock->global_rwlock);
>  }
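
For reference, a minimal usage sketch of the API this patch implements
(the lock variable and its definition/initialization, which come from
the earlier patches in this series, are assumed here):

	/* Reader side: a cheap per-cpu refcount while no writer is
	 * active; may be nested, and transparently switches to the
	 * global rwlock while a writer is active. */
	percpu_read_lock(&my_pcpu_rwlock);
	/* ... read-side critical section ... */
	percpu_read_unlock(&my_pcpu_rwlock);

	/* Writer side: signals all readers, waits until every per-cpu
	 * refcount drops to zero, then takes the global rwlock for
	 * write. */
	percpu_write_lock(&my_pcpu_rwlock);
	/* ... write-side critical section ... */
	percpu_write_unlock(&my_pcpu_rwlock);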
