From: Lai Jiangshan <eag0628@gmail.com>
To: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Michel Lespinasse <walken@google.com>,
	linux-doc@vger.kernel.org, peterz@infradead.org,
	fweisbec@gmail.com, linux-kernel@vger.kernel.org,
	namhyung@kernel.org, mingo@kernel.org,
	linux-arch@vger.kernel.org, linux@arm.linux.org.uk,
	xiaoguangrong@linux.vnet.ibm.com, wangyun@linux.vnet.ibm.com,
	paulmck@linux.vnet.ibm.com, nikunj@linux.vnet.ibm.com,
	linux-pm@vger.kernel.org, rusty@rustcorp.com.au,
	rostedt@goodmis.org, rjw@sisk.pl, vincent.guittot@linaro.org,
	tglx@linutronix.de, linux-arm-kernel@lists.infradead.org,
	netdev@vger.kernel.org, oleg@redhat.com, sbw@mit.edu,
	tj@kernel.org, akpm@linux-foundation.org,
	linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH v6 04/46] percpu_rwlock: Implement the core design of Per-CPU Reader-Writer Locks
Date: Wed, 27 Feb 2013 00:25:33 +0800	[thread overview]
Message-ID: <CACvQF50UgiqPpDk3HCmV86zKSf6tPjqr0bZowfPEOJs0g0r0MQ@mail.gmail.com> (raw)
In-Reply-To: <512CC509.1050000@linux.vnet.ibm.com>

On Tue, Feb 26, 2013 at 10:22 PM, Srivatsa S. Bhat
<srivatsa.bhat@linux.vnet.ibm.com> wrote:
>
> Hi Lai,
>
> I'm really not convinced that piggy-backing on lglocks would help
> us in any way. But still, let me try to address some of the points
> you raised...
>
> On 02/26/2013 06:29 PM, Lai Jiangshan wrote:
>> On Tue, Feb 26, 2013 at 5:02 PM, Srivatsa S. Bhat
>> <srivatsa.bhat@linux.vnet.ibm.com> wrote:
>>> On 02/26/2013 05:47 AM, Lai Jiangshan wrote:
>>>> On Tue, Feb 26, 2013 at 3:26 AM, Srivatsa S. Bhat
>>>> <srivatsa.bhat@linux.vnet.ibm.com> wrote:
>>>>> Hi Lai,
>>>>>
>>>>> On 02/25/2013 09:23 PM, Lai Jiangshan wrote:
>>>>>> Hi, Srivatsa,
>>>>>>
>>>>>> The target of the whole patchset is nice for me.
>>>>>
>>>>> Cool! Thanks :-)
>>>>>
>>> [...]
>>>
>>> Unfortunately, I see quite a few issues with the code above. IIUC, the
>>> writer and the reader both increment the same counters. So how will the
>>> unlock() code in the reader path know when to unlock which of the locks?
>>
>> The same as your code, the reader(which nested in write C.S.) just dec
>> the counters.
>
> And that works fine in my case because the writer and the reader update
> _two_ _different_ counters.

I can't find any magic in your code; they are the same counter:

        /*
         * It is desirable to allow the writer to acquire the percpu-rwlock
         * for read (if necessary), without deadlocking or getting complaints
         * from lockdep. To achieve that, just increment the reader_refcnt of
         * this CPU - that way, any attempt by the writer to acquire the
         * percpu-rwlock for read, will get treated as a case of nested percpu
         * reader, which is safe, from a locking perspective.
         */
        this_cpu_inc(pcpu_rwlock->rw_state->reader_refcnt);
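
For comparison, as far as I can see the regular reader fast path in the
same patchset bumps the very same field (paraphrased from your code, not
a verbatim quote):

        /* regular reader fast path, paraphrased: */
        preempt_disable();
        this_cpu_inc(pcpu_rwlock->rw_state->reader_refcnt);

So the writer-acting-as-reader and an ordinary reader increment one and
the same reader_refcnt.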


> If both of them update the same counter, there
> will be a semantic clash - an increment of the counter can either mean that
> a new writer became active, or it can also indicate a nested reader. A decrement
> can also similarly have 2 meanings. And thus it will be difficult to decide
> the right action to take, based on the value of the counter.
>
>>
>>> (The counter-dropping-to-zero logic is not safe, since it can be updated
>>> due to different reasons). And now that I look at it again, in the absence
>>> of the writer, the reader is allowed to be recursive at the heavy cost of
>>> taking the global rwlock for read, every 2nd time you nest (because the
>>> spinlock is non-recursive).
>>
>> (I did not understand your comments of this part)
>> nested reader is considered seldom.
>
> No, nested readers can be _quite_ frequent. Because, potentially all users
> of preempt_disable() are readers - and its well-known how frequently we
> nest preempt_disable(). As a simple example, any atomic reader who calls
> smp_call_function() will become a nested reader, because smp_call_function()
> itself is a reader. So reader nesting is expected to be quite frequent.
>
>> But if N(>=2) nested readers happen,
>> the overhead is:
>>     1 spin_try_lock() + 1 read_lock() + (N-1) __this_cpu_inc()
>>
>
> In my patch, its just this_cpu_inc(). Note that these are _very_ hot paths.
> So every bit of optimization that you can add is worthwhile.
>
> And your read_lock() is a _global_ lock - thus, it can lead to a lot of
> cache-line bouncing. That's *exactly* why I have used per-cpu refcounts in
> my synchronization scheme, to avoid taking the global rwlock as much as possible.
>
> Another important point to note is that, the overhead we are talking about
> here, exists even when _not_ performing hotplug. And its the replacement to
> the super-fast preempt_disable(). So its extremely important to consciously
> minimize this overhead - else we'll end up slowing down the system significantly.
>

All I considered was that "nested readers are seldom", so I always
fall back to the rwlock when nesting occurs.
If you like, I can add 6 lines of code, and the overhead becomes:
    1 spin_try_lock() (fast path) + N __this_cpu_inc()

The overhead of your code is:
    2 smp_mb() + N __this_cpu_inc()

I don't see much difference.
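
Concretely, the read path I have in mind is something like this (a
rough, untested sketch of those ~6 lines; the struct layout and the
names are mine):

        /* marks "outermost reader went through the fallback rwlock" */
        #define FALLBACK_BASE   (1U << 30)

        struct lgrwlock {
                arch_spinlock_t __percpu *lock;       /* per-CPU spinlock */
                unsigned int __percpu *reader_refcnt; /* per-CPU nesting count */
                rwlock_t fallback_rwlock;             /* global fallback */
        };

        void lg_rwlock_local_read_lock(struct lgrwlock *lgrw)
        {
                preempt_disable();
                /* only the outermost reader touches the per-CPU spinlock */
                if (likely(!__this_cpu_read(*lgrw->reader_refcnt))) {
                        if (!arch_spin_trylock(this_cpu_ptr(lgrw->lock))) {
                                /* a writer holds the per-CPU lock: take the
                                 * global rwlock instead and record that in
                                 * the refcnt */
                                read_lock(&lgrw->fallback_rwlock);
                                __this_cpu_write(*lgrw->reader_refcnt,
                                                 FALLBACK_BASE);
                                return;
                        }
                }
                __this_cpu_inc(*lgrw->reader_refcnt);
        }

With this, every nested reader pays only one __this_cpu_inc(), whichever
path the outermost reader took.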

>>> Also, this lg_rwlock implementation uses 3
>>> different data-structures - a per-cpu spinlock, a global rwlock and
>>> a per-cpu refcnt, and its not immediately apparent why you need those many
>>> or even those many varieties.
>>
>> data-structures is the same as yours.
>> fallback_reader_refcnt <--> reader_refcnt
>
> This has semantic problems, as noted above.
>
>> per-cpu spinlock <--> write_signal
>
> Acquire/release of (spin) lock is costlier than inc/dec of a counter, IIUC.
>
>> fallback_rwlock  <---> global_rwlock
>>
>>> Also I see that this doesn't handle the
>>> case of interrupt-handlers also being readers.
>>
>> handled. nested reader will see the ref or take the fallback_rwlock
>>

Sorry, I meant: a _reentrant_ read_lock() will see the ref or take the fallback_rwlock.

>
> I'm not referring to simple nested readers here, but interrupt handlers who
> can act as readers. For starters, the arch_spin_trylock() is not safe when
> interrupt handlers can also run the same code, right? You'll need to save
> and restore interrupts at critical points in the code. Also, the __foo()
> variants used to read/update the counters are not interrupt-safe.

I must have missed something.

Could you elaborate on why arch_spin_trylock() is not safe when
interrupt handlers can also run the same code?

Could you also elaborate on why the __this_cpu_op() variants are not
interrupt-safe, given that they are always called in pairs?
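
By "paired" I mean an interleaving like this (a sketch, assuming an
interrupt arrives between a reader's lock and unlock):

        __this_cpu_inc(refcnt);           /* task context:  0 -> 1 */
                /* <interrupt> */
                __this_cpu_inc(refcnt);   /* irq reader:    1 -> 2 */
                __this_cpu_dec(refcnt);   /* irq reader:    2 -> 1 */
                /* </interrupt> */
        __this_cpu_dec(refcnt);           /* task context:  1 -> 0 */

Because the irq handler's inc and dec always balance out before it
returns, even a non-irq-safe read-modify-write in the interrupted task
still ends up with the correct value.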


> And,
> the unlock() code in the reader path is again going to be confused about
> what to do when interrupt handlers interrupt regular readers, due to the
> messed up refcount.

I still can't understand this.

>
>>>
>>> IMHO, the per-cpu rwlock scheme that I have implemented in this patchset
>>> has a clean, understandable design and just enough data-structures/locks
>>> to achieve its goal and has several optimizations (like reducing the
>>> interrupts-disabled time etc) included - all in a very straight-forward
>>> manner. Since this is non-trivial, IMHO, starting from a clean slate is
>>> actually better than trying to retrofit the logic into some locking scheme
>>> which we actively want to avoid (and hence effectively we aren't even
>>> borrowing anything from!).
>>>
>>> To summarize, if you are just pointing out that we can implement the same
>>> logic by altering lglocks, then sure, I acknowledge the possibility.
>>> However, I don't think doing that actually makes it better; it either
>>> convolutes the logic unnecessarily, or ends up looking _very_ similar to
>>> the implementation in this patchset, from what I can see.
>>>
>
