From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
To: Tejun Heo <tj@kernel.org>
Cc: tglx@linutronix.de, peterz@infradead.org, oleg@redhat.com,
	paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au,
	mingo@kernel.org, akpm@linux-foundation.org, namhyung@kernel.org,
	rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
	xiaoguangrong@linux.vnet.ibm.com, rjw@sisk.pl, sbw@mit.edu,
	fweisbec@gmail.com, linux@arm.linux.org.uk,
	nikunj@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 04/45] percpu_rwlock: Implement the core design of Per-CPU Reader-Writer Locks
Date: Thu, 24 Jan 2013 01:03:52 +0530	[thread overview]
Message-ID: <51003B20.2060506@linux.vnet.ibm.com> (raw)
In-Reply-To: <20130123185522.GG2373@mtj.dyndns.org>

On 01/24/2013 12:25 AM, Tejun Heo wrote:
> Hello, Srivatsa.
> 
> First of all, I'm not sure whether we need to be this step-by-step
> when introducing something new.  It's not like we're transforming an
> existing implementation and it doesn't seem to help understanding the
> series that much either.
> 

Hmm.. I split it up into steps to explain the reasoning behind the
code sufficiently, rather than springing all of the intricacies on
readers in one go (which would also have made it very hard to write
the changelogs/comments). The split made it easier for me to document
the design well in the changelogs, because I could deal with reasonable
chunks of code/complexity at a time. IMHO that helps people reading it
for the first time to understand the logic easily.

> On Tue, Jan 22, 2013 at 01:03:53PM +0530, Srivatsa S. Bhat wrote:
>> Using global rwlocks as the backend for per-CPU rwlocks helps us avoid many
>> lock-ordering related problems (unlike per-cpu locks). However, global
> 
> So, unfortunately, this already seems broken, right?  The problem here
> seems to be that previously, say, read_lock() implied
> preempt_disable() but as this series aims to move away from it, it
> introduces the problem of locking order between such locks and the new
> construct.
>

Not sure I got your point correctly. Are you referring to Steve's comment
that rwlocks are probably fair now (and hence not really safe when used
like this)? If yes, I haven't actually verified that yet, but yes, that
will make this hard to use, since we need to take care of locking rules.

But if rwlocks are unfair (as I had assumed them to be), then we
have absolutely no problems and no lock-ordering to worry about.
 
> The only two options are either punishing writers or identifying and
> updating all such possible deadlocks.  percpu_rwsem does the former,
> right?  I don't know how feasible the latter would be.

I don't think we can avoid looking into all the possible deadlocks,
as long as we use rwlocks inside get/put_online_cpus_atomic() (assuming
rwlocks are fair). Even with Oleg's idea of using synchronize_sched()
at the writer, we still need to take care of locking rules, because the
synchronize_sched() only helps avoid the memory barriers at the reader,
and doesn't help get rid of the rwlocks themselves.
So in short, I don't see how we can punish the writers and thereby somehow
avoid looking into possible deadlocks (if rwlocks are fair).

>  Srivatsa,
> you've been looking at all the places which would require conversion,
> how difficult would doing the latter be?
> 

The problem is that some APIs like smp_call_function() will need to use
get/put_online_cpus_atomic(). That is when the locking becomes tricky
in the subsystems which invoke these APIs with other (subsystem-specific,
internal) locks held. So we could potentially use a convention such as
"Make get/put_online_cpus_atomic() your outermost calls, within which
you nest the other locks" to rule out all ABBA deadlock possibilities...
But we might still hit some hard-to-convert places...

BTW, Steve, fair rwlocks don't mean the following scenario will result
in a deadlock, right?

CPU 0                          CPU 1

read_lock(&rwlock)

                              write_lock(&rwlock) //spins, because CPU 0
                              //has acquired the lock for read

read_lock(&rwlock)
   ^^^^^
What happens here? Does CPU 0 start spinning (and hence deadlock), or
will it proceed, realizing that it already holds the rwlock for read?

If the above ends in a deadlock, then it's next to impossible to convert
all the places safely (because the above-mentioned convention will simply
fall apart).

>> +#define reader_uses_percpu_refcnt(pcpu_rwlock, cpu)			\
>> +		(ACCESS_ONCE(per_cpu(*((pcpu_rwlock)->reader_refcnt), cpu)))
>> +
>> +#define reader_nested_percpu(pcpu_rwlock)				\
>> +			(__this_cpu_read(*((pcpu_rwlock)->reader_refcnt)) > 1)
>> +
>> +#define writer_active(pcpu_rwlock)					\
>> +			(__this_cpu_read(*((pcpu_rwlock)->writer_signal)))
> 
> Why are these in the public header file?  Are they gonna be used to
> inline something?
>

No, I can put them in the .c file itself. Will do.
 
>> +static inline void raise_writer_signal(struct percpu_rwlock *pcpu_rwlock,
>> +				       unsigned int cpu)
>> +{
>> +	per_cpu(*pcpu_rwlock->writer_signal, cpu) = true;
>> +}
>> +
>> +static inline void drop_writer_signal(struct percpu_rwlock *pcpu_rwlock,
>> +				      unsigned int cpu)
>> +{
>> +	per_cpu(*pcpu_rwlock->writer_signal, cpu) = false;
>> +}
>> +
>> +static void announce_writer_active(struct percpu_rwlock *pcpu_rwlock)
>> +{
>> +	unsigned int cpu;
>> +
>> +	for_each_online_cpu(cpu)
>> +		raise_writer_signal(pcpu_rwlock, cpu);
>> +
>> +	smp_mb(); /* Paired with smp_rmb() in percpu_read_[un]lock() */
>> +}
>> +
>> +static void announce_writer_inactive(struct percpu_rwlock *pcpu_rwlock)
>> +{
>> +	unsigned int cpu;
>> +
>> +	drop_writer_signal(pcpu_rwlock, smp_processor_id());
>> +
>> +	for_each_online_cpu(cpu)
>> +		drop_writer_signal(pcpu_rwlock, cpu);
>> +
>> +	smp_mb(); /* Paired with smp_rmb() in percpu_read_[un]lock() */
>> +}
> 
> It could be just personal preference but I find the above one line
> wrappers more obfuscating than anything else.  What's the point of
> wrapping writer_signal = true/false into a separate function?  These
> simple wrappers just add layers that people have to dig through to
> figure out what's going on without adding anything of value.  I'd much
> prefer collapsing these into the percpu_write_[un]lock().
>

Sure, I see your point. I'll change that.
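For concreteness, here is a sketch of what that collapse could look
like (my own illustration built from the quoted patch code, not the
actual follow-up patch): raise_writer_signal() and announce_writer_active()
folded directly into the write-side entry point.

```c
static void percpu_write_lock(struct percpu_rwlock *pcpu_rwlock)
{
	unsigned int cpu;

	/* Announce to all readers that a writer is becoming active */
	for_each_online_cpu(cpu)
		per_cpu(*pcpu_rwlock->writer_signal, cpu) = true;

	smp_mb(); /* Paired with smp_rmb() in percpu_read_[un]lock() */

	/* ... remainder of the write-side locking ... */
}
```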

Thanks a lot for your feedback Tejun!

Regards,
Srivatsa S. Bhat


2013-02-08 16:44           ` Srivatsa S. Bhat
2013-02-08 18:09           ` Srivatsa S. Bhat
2013-02-08 18:09             ` Srivatsa S. Bhat
2013-02-08 18:09             ` Srivatsa S. Bhat
2013-02-11 11:58             ` Vincent Guittot
2013-02-11 11:58               ` Vincent Guittot
2013-02-11 11:58               ` Vincent Guittot
2013-02-11 12:23               ` Srivatsa S. Bhat
2013-02-11 12:23                 ` Srivatsa S. Bhat
2013-02-11 12:23                 ` Srivatsa S. Bhat
2013-02-11 19:08                 ` Paul E. McKenney
2013-02-11 19:08                   ` Paul E. McKenney
2013-02-11 19:08                   ` Paul E. McKenney
2013-02-12  3:58                   ` Srivatsa S. Bhat
2013-02-12  3:58                     ` Srivatsa S. Bhat
2013-02-12  3:58                     ` Srivatsa S. Bhat
2013-02-15 13:28                     ` Vincent Guittot
2013-02-15 13:28                       ` Vincent Guittot
2013-02-15 19:40                       ` Srivatsa S. Bhat
2013-02-15 19:40                         ` Srivatsa S. Bhat
2013-02-15 19:40                         ` Srivatsa S. Bhat
2013-02-18 10:24                         ` Vincent Guittot
2013-02-18 10:24                           ` Vincent Guittot
2013-02-18 10:24                           ` Vincent Guittot
2013-02-18 10:34                           ` Srivatsa S. Bhat
2013-02-18 10:34                             ` Srivatsa S. Bhat
2013-02-18 10:34                             ` Srivatsa S. Bhat
2013-02-18 10:51                             ` Srivatsa S. Bhat
2013-02-18 10:51                               ` Srivatsa S. Bhat
2013-02-18 10:51                               ` Srivatsa S. Bhat
2013-02-18 10:58                               ` Vincent Guittot
2013-02-18 10:58                                 ` Vincent Guittot
2013-02-18 10:58                                 ` Vincent Guittot
2013-02-18 15:30                                 ` Steven Rostedt
2013-02-18 15:30                                   ` Steven Rostedt
2013-02-18 15:30                                   ` Steven Rostedt
2013-02-18 16:50                                   ` Vincent Guittot
2013-02-18 16:50                                     ` Vincent Guittot
2013-02-18 16:50                                     ` Vincent Guittot
2013-02-18 19:53                                     ` Steven Rostedt
2013-02-18 19:53                                       ` Steven Rostedt
2013-02-18 19:53                                       ` Steven Rostedt
2013-02-18 19:53                                     ` Steven Rostedt
2013-02-18 19:53                                       ` Steven Rostedt
2013-02-18 19:53                                       ` Steven Rostedt
2013-02-19 10:33                                       ` Vincent Guittot
2013-02-19 10:33                                         ` Vincent Guittot
2013-02-19 10:33                                         ` Vincent Guittot
2013-02-18 10:54                             ` Thomas Gleixner
2013-02-18 10:54                               ` Thomas Gleixner
2013-02-18 10:54                               ` Thomas Gleixner
2013-02-18 10:57                               ` Srivatsa S. Bhat
2013-02-18 10:57                                 ` Srivatsa S. Bhat
2013-02-18 10:57                                 ` Srivatsa S. Bhat
2013-02-11 12:41 ` [PATCH v5 01/45] percpu_rwlock: Introduce the global reader-writer lock backend David Howells
2013-02-11 12:41   ` David Howells
2013-02-11 12:41   ` David Howells
2013-02-11 12:56   ` Srivatsa S. Bhat
2013-02-11 12:56     ` Srivatsa S. Bhat
2013-02-11 12:56     ` Srivatsa S. Bhat

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=51003B20.2060506@linux.vnet.ibm.com \
    --to=srivatsa.bhat@linux.vnet.ibm.com \
    --cc=akpm@linux-foundation.org \
    --cc=fweisbec@gmail.com \
    --cc=linux-arch@vger.kernel.org \
    --cc=linux-arm-kernel@lists.infradead.org \
    --cc=linux-doc@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-pm@vger.kernel.org \
    --cc=linux@arm.linux.org.uk \
    --cc=linuxppc-dev@lists.ozlabs.org \
    --cc=mingo@kernel.org \
    --cc=namhyung@kernel.org \
    --cc=netdev@vger.kernel.org \
    --cc=nikunj@linux.vnet.ibm.com \
    --cc=oleg@redhat.com \
    --cc=paulmck@linux.vnet.ibm.com \
    --cc=peterz@infradead.org \
    --cc=rjw@sisk.pl \
    --cc=rostedt@goodmis.org \
    --cc=rusty@rustcorp.com.au \
    --cc=sbw@mit.edu \
    --cc=tglx@linutronix.de \
    --cc=tj@kernel.org \
    --cc=wangyun@linux.vnet.ibm.com \
    --cc=xiaoguangrong@linux.vnet.ibm.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

  Be sure your reply has a Subject: header at the top and a blank line
  before the message body.