From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
To: Tejun Heo <tj@kernel.org>
Cc: tglx@linutronix.de, peterz@infradead.org, oleg@redhat.com,
	paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au,
	mingo@kernel.org, akpm@linux-foundation.org, namhyung@kernel.org,
	rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
	xiaoguangrong@linux.vnet.ibm.com, rjw@sisk.pl, sbw@mit.edu,
	fweisbec@gmail.com, linux@arm.linux.org.uk,
	nikunj@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	walken@google.com
Subject: Re: [PATCH v5 04/45] percpu_rwlock: Implement the core design of Per-CPU Reader-Writer Locks
Date: Thu, 24 Jan 2013 10:00:04 +0530	[thread overview]
Message-ID: <5100B8CC.4080406@linux.vnet.ibm.com> (raw)
In-Reply-To: <20130123195740.GI2373@mtj.dyndns.org>

On 01/24/2013 01:27 AM, Tejun Heo wrote:
> Hello, Srivatsa.
> 
> On Thu, Jan 24, 2013 at 01:03:52AM +0530, Srivatsa S. Bhat wrote:
>> Hmm... I split it up into steps to help explain the reasoning behind the
>> code sufficiently, rather than springing all of the intricacies on the
>> reader in one go (which would make it very hard to write the
>> changelog/comments as well). The split made it easier for me to document
>> it well in the changelog, because I could deal with reasonable chunks of
>> code/complexity at a time. IMHO that helps people reading it for the
>> first time to understand the logic easily.
> 
> I don't know.  It's a judgement call I guess.  I personally would much
> prefer having ample documentation as comments in the source itself or
> as a separate Documentation/ file as that's what most people are gonna
> be looking at to figure out what's going on.  Maybe just compact it a
> bit and add more in-line documentation instead?
> 

OK, I'll think about this.

>>> The only two options are either punishing writers or identifying and
>>> updating all such possible deadlocks.  percpu_rwsem does the former,
>>> right?  I don't know how feasible the latter would be.
>>
>> I don't think we can avoid looking into all the possible deadlocks,
>> as long as we use rwlocks inside get/put_online_cpus_atomic() (assuming
>> rwlocks are fair). Even with Oleg's idea of using synchronize_sched()
>> at the writer, we still need to take care of locking rules, because the
>> synchronize_sched() only helps avoid the memory barriers at the reader,
>> and doesn't help get rid of the rwlocks themselves.
> 
> Well, percpu_rwlock doesn't have to use rwlock for the slow path.  It
> can implement its own writer-starving locking scheme.  It's not like
> implementing slow-path global rwlock logic is difficult.
>

Great idea! So I could probably use atomic ops or something similar in the
slow path to implement the scheme we need...
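
Just to make that direction concrete, here is a rough sketch (purely
illustrative, not code from this series; the names are made up and memory
barriers/lockdep annotations are omitted) of a reader-preferring global slow
path built only out of atomic ops, in which a *waiting* writer never blocks
new or nested readers:

typedef struct {
	atomic_t readers;	/* readers inside the critical section */
	atomic_t writer;	/* 1 while a writer owns the lock */
} rp_lock_t;

static inline void rp_read_lock(rp_lock_t *l)
{
	for (;;) {
		atomic_inc(&l->readers);
		if (!atomic_read(&l->writer))
			return;		/* no writer *owns* the lock; go ahead */
		/* A writer owns it; back off and wait for it to finish. */
		atomic_dec(&l->readers);
		while (atomic_read(&l->writer))
			cpu_relax();
	}
}

static inline void rp_read_unlock(rp_lock_t *l)
{
	atomic_dec(&l->readers);
}

static inline void rp_write_lock(rp_lock_t *l)
{
	for (;;) {
		/* Readers are preferred: wait until they have all drained. */
		while (atomic_read(&l->readers))
			cpu_relax();
		/* Serialize against other writers. */
		if (atomic_cmpxchg(&l->writer, 0, 1) != 0)
			continue;
		/* A reader may have slipped in meanwhile; if so, back off. */
		if (!atomic_read(&l->readers))
			return;
		atomic_set(&l->writer, 0);
	}
}

static inline void rp_write_unlock(rp_lock_t *l)
{
	atomic_set(&l->writer, 0);
}

Since a writer publishes itself only after the reader count has drained, a
reader that already holds the lock can always re-enter; the price is that
the writer can starve, which is the trade-off we want for the hotplug
writer anyway.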

>> CPU 0                          CPU 1
>>
>> read_lock(&rwlock)
>>
>>                               write_lock(&rwlock) //spins, because CPU 0
>>                               //has acquired the lock for read
>>
>> read_lock(&rwlock)
>>    ^^^^^
>> What happens here? Does CPU 0 start spinning (and hence deadlock), or
>> will it continue, realizing that it already holds the rwlock for read?
> 
> I don't think rwlock allows nesting write lock inside read lock.
> read_lock(); write_lock() will always deadlock.
> 

Sure, I understand that :-) My question was: what happens when *two* CPUs
are involved, i.e., the read_lock() is invoked only on CPU 0, whereas the
write_lock() is invoked on CPU 1?

For example, the same scenario shown above, but with slightly different
timing, will NOT result in a deadlock:

Scenario 2:
  CPU 0                                CPU 1

read_lock(&rwlock)

read_lock(&rwlock) //doesn't spin

                                    write_lock(&rwlock) //spins, because CPU 0
                                    //has acquired the lock for read

So I was wondering whether the "fairness" logic of rwlocks would cause the
second read_lock() in the first scenario shown above to spin (because a
writer is already waiting, and hence new readers are made to wait), and
thus cause a deadlock.
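
In code terms, the "fair" reader path I have in mind would look roughly
like this (a hypothetical sketch, not any real architecture's rwlock
implementation; fair_rwlock_t, writer_pending and readers are made-up
names):

typedef struct {
	atomic_t readers;		/* readers holding the lock */
	atomic_t writer_pending;	/* a writer is queued or active */
} fair_rwlock_t;

static inline void fair_read_lock(fair_rwlock_t *l)
{
	for (;;) {
		/* Fairness: new readers wait as soon as a writer is queued. */
		while (atomic_read(&l->writer_pending))
			cpu_relax();
		atomic_inc(&l->readers);
		if (!atomic_read(&l->writer_pending))
			return;
		/* A writer got queued meanwhile; undo and wait again. */
		atomic_dec(&l->readers);
	}
}

If rwlocks behave like that, then in the first scenario the nested
read_lock() on CPU 0 spins in the while() above waiting for the writer to
go away, while the writer on CPU 1 spins waiting for CPU 0's first
read_lock() to be released: a deadlock. In the second scenario the nested
read_lock() completes before the writer arrives, so nothing goes wrong.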

Regards,
Srivatsa S. Bhat

