From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Linux MM <linux-mm@kvack.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	linux-renesas-soc@vger.kernel.org,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: Boot failure on emev2/kzm9d (was: Re: [PATCH v2 11/11] mm/slab: lockless decision to grow cache)
Date: Tue, 28 Jun 2016 17:33:57 +0900	[thread overview]
Message-ID: <20160628083357.GE19731@js1304-P5Q-DELUXE> (raw)
In-Reply-To: <20160628001243.GA20638@linux.vnet.ibm.com>

On Mon, Jun 27, 2016 at 05:12:43PM -0700, Paul E. McKenney wrote:
> On Wed, Jun 22, 2016 at 07:53:29PM -0700, Paul E. McKenney wrote:
> > On Wed, Jun 22, 2016 at 07:47:42PM -0700, Paul E. McKenney wrote:
> > > On Thu, Jun 23, 2016 at 11:37:56AM +0900, Joonsoo Kim wrote:
> > > > On Wed, Jun 22, 2016 at 05:49:35PM -0700, Paul E. McKenney wrote:
> > > > > On Wed, Jun 22, 2016 at 12:08:59PM -0700, Paul E. McKenney wrote:
> > > > > > On Wed, Jun 22, 2016 at 05:01:35PM +0200, Geert Uytterhoeven wrote:
> > > > > > > On Wed, Jun 22, 2016 at 2:52 AM, Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:
> > > > > > > > Could you try below patch to check who causes the hang?
> > > > > > > >
> > > > > > > > And, if sysrq-t works during the hang, could you get the sysrq-t
> > > > > > > > output? I haven't used it before, but Paul could find some culprit in it. :)
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > >
> > > > > > > > ----->8-----
> > > > > > > > diff --git a/mm/slab.c b/mm/slab.c
> > > > > > > > index 763096a..9652d38 100644
> > > > > > > > --- a/mm/slab.c
> > > > > > > > +++ b/mm/slab.c
> > > > > > > > @@ -964,8 +964,13 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
> > > > > > > >          * guaranteed to be valid until irq is re-enabled, because it will be
> > > > > > > >          * freed after synchronize_sched().
> > > > > > > >          */
> > > > > > > > -       if (force_change)
> > > > > > > > +       if (force_change) {
> > > > > > > > +               if (num_online_cpus() > 1)
> > > > > > > > +                       dump_stack();
> > > > > > > >                 synchronize_sched();
> > > > > > > > +               if (num_online_cpus() > 1)
> > > > > > > > +                       dump_stack();
> > > > > > > > +       }
> > > > > > > 
> > > > > > > I've only added the first one, as I would never see the second one. All of
> > > > > > > this happens before the serial console is activated, earlycon is not supported,
> > > > > > > and I only have remote access.
> > > > > > > 
> > > > > > > Brought up 2 CPUs
> > > > > > > SMP: Total of 2 processors activated (2132.00 BogoMIPS).
> > > > > > > CPU: All CPU(s) started in SVC mode.
> > > > > > > CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc4-kzm9d-00404-g4a235e6dde4404dd-dirty #89
> > > > > > > Hardware name: Generic Emma Mobile EV2 (Flattened Device Tree)
> > > > > > > [<c010de68>] (unwind_backtrace) from [<c010a658>] (show_stack+0x10/0x14)
> > > > > > > [<c010a658>] (show_stack) from [<c02b5cf8>] (dump_stack+0x7c/0x9c)
> > > > > > > [<c02b5cf8>] (dump_stack) from [<c01cfa4c>] (setup_kmem_cache_node+0x140/0x170)
> > > > > > > [<c01cfa4c>] (setup_kmem_cache_node) from [<c01cfe3c>] (__do_tune_cpucache+0xf4/0x114)
> > > > > > > [<c01cfe3c>] (__do_tune_cpucache) from [<c01cff54>] (enable_cpucache+0xf8/0x148)
> > > > > > > [<c01cff54>] (enable_cpucache) from [<c01d0190>] (__kmem_cache_create+0x1a8/0x1d0)
> > > > > > > [<c01d0190>] (__kmem_cache_create) from [<c01b32d0>] (kmem_cache_create+0xbc/0x190)
> > > > > > > [<c01b32d0>] (kmem_cache_create) from [<c070d968>] (shmem_init+0x34/0xb0)
> > > > > > > [<c070d968>] (shmem_init) from [<c0700cc8>] (kernel_init_freeable+0x98/0x1ec)
> > > > > > > [<c0700cc8>] (kernel_init_freeable) from [<c049fdbc>] (kernel_init+0x8/0x110)
> > > > > > > [<c049fdbc>] (kernel_init) from [<c0106cb8>] (ret_from_fork+0x14/0x3c)
> > > > > > > devtmpfs: initialized
> > > > > > 
> > > > > > I don't see anything here that would prevent grace periods from completing.
> > > > > > 
> > > > > > The CPUs are using the normal hotplug sequence to come online, correct?
> > > > > 
> > > > > And either way, could you please apply the patch below and then
> > > > > invoke rcu_dump_rcu_sched_tree() just before the offending call to
> > > > > synchronize_sched()?  That will tell me what CPUs RCU believes exist,
> > > > > and perhaps also which CPU is holding it up.
> > > > 
> > > > I can't find rcu_dump_rcu_sched_tree(). Do you mean
> > > > rcu_dump_rcu_node_tree()? Anyway, there is no patch below, so I
> > > > attach one that does what Paul wants, I think.
> > > 
> > > One of those days, I guess!  :-/
> > > 
> > > Your patch is exactly what I intended to send, thank you!
> > 
> > Ah, but your telepathy was not sufficient to intuit the additional
> > information I need.  Please see the patch at the end.  Your hunk
> > in mm/slab.c is needed on top of my patch.
> > 
> > So I am clearly having difficulties reading as well as including patches
> > today...
> 
> Just following up, any news using my diagnostic patch?

Hello, Paul.

Unfortunately, I have no hardware to reproduce it, so we need to wait for
Geert's feedback.

Thanks.

Thread overview: 101+ messages

2016-06-13 19:43 Boot failure on emev2/kzm9d (was: Re: [PATCH v2 11/11] mm/slab: lockless decision to grow cache) Geert Uytterhoeven
2016-06-14  6:24 ` Joonsoo Kim
2016-06-14  7:31   ` Geert Uytterhoeven
2016-06-14  8:11     ` Joonsoo Kim
2016-06-14 10:45       ` Geert Uytterhoeven
2016-06-15  2:23         ` Joonsoo Kim
2016-06-15  8:39           ` Geert Uytterhoeven
2016-06-20  6:39             ` Joonsoo Kim
2016-06-20 13:12               ` Paul E. McKenney
2016-06-21  6:43                 ` Joonsoo Kim
2016-06-21 12:54                   ` Paul E. McKenney
2016-06-22  0:52                     ` Joonsoo Kim
2016-06-22  3:15                       ` Paul E. McKenney
2016-06-22 15:01                       ` Geert Uytterhoeven
2016-06-22 19:08                         ` Paul E. McKenney
2016-06-23  0:49                           ` Paul E. McKenney
2016-06-23  2:37                             ` Joonsoo Kim
2016-06-23  2:47                               ` Paul E. McKenney
2016-06-23  2:53                                 ` Paul E. McKenney
2016-06-28  0:12                                   ` Paul E. McKenney
2016-06-28  8:33                                     ` Joonsoo Kim [this message]
2016-06-29 14:54                                   ` Geert Uytterhoeven
2016-06-29 16:44                                     ` Paul E. McKenney
2016-06-29 17:52                                       ` Geert Uytterhoeven
2016-06-29 18:12                                         ` Paul E. McKenney
2016-06-30  7:47                                           ` Joonsoo Kim
2016-06-30  7:58                                             ` Geert Uytterhoeven
2016-06-30 13:24                                               ` Paul E. McKenney
2016-06-30 13:31                                                 ` Geert Uytterhoeven
2016-06-30 15:18                                                   ` Paul E. McKenney
2016-06-30 15:53                                                     ` Geert Uytterhoeven
2016-06-30 16:52                                                       ` Paul E. McKenney
2016-06-30 17:54                                                         ` Geert Uytterhoeven
2016-06-14 13:10 ` Geert Uytterhoeven