From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Linux MM <linux-mm@kvack.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	linux-renesas-soc@vger.kernel.org,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: Boot failure on emev2/kzm9d (was: Re: [PATCH v2 11/11] mm/slab: lockless decision to grow cache)
Date: Tue, 14 Jun 2016 17:11:25 +0900	[thread overview]
Message-ID: <20160614081125.GA17700@js1304-P5Q-DELUXE> (raw)
In-Reply-To: <CAMuHMdWipquaVFKYLd=2KhTx6djwH7NXpzL-RjtikCE=G8KTbA@mail.gmail.com>

On Tue, Jun 14, 2016 at 09:31:23AM +0200, Geert Uytterhoeven wrote:
> Hi Joonsoo,
> 
> On Tue, Jun 14, 2016 at 8:24 AM, Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:
> > On Mon, Jun 13, 2016 at 09:43:13PM +0200, Geert Uytterhoeven wrote:
> >> On Tue, Apr 12, 2016 at 6:51 AM,  <js1304@gmail.com> wrote:
> >> > From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> >> > To check precisely whether free objects exist, we need to grab a
> >> > lock.  But accuracy isn't that important here, because the race window
> >> > is small and, if there are too many free objects, the cache reaper will
> >> > reap them.  So, this patch makes the check for free object existence
> >> > lockless.  This will reduce lock contention in allocation-heavy cases.
> >> >
> >> > Note that, until now, n->shared could be freed while it was being used
> >> > if someone wrote to slabinfo, but, with a trick in this patch, we can
> >> > safely access it within the interrupt-disabled period.
> >> >
> >> > Below are the results for concurrent allocation/free in the slab
> >> > allocation benchmark Christoph wrote a long time ago.  I simplified the
> >> > output.  The numbers show the cycle count during alloc/free,
> >> > respectively, so lower is better.
> >> >
> >> > * Before
> >> > Kmalloc N*alloc N*free(32): Average=248/966
> >> > Kmalloc N*alloc N*free(64): Average=261/949
> >> > Kmalloc N*alloc N*free(128): Average=314/1016
> >> > Kmalloc N*alloc N*free(256): Average=741/1061
> >> > Kmalloc N*alloc N*free(512): Average=1246/1152
> >> > Kmalloc N*alloc N*free(1024): Average=2437/1259
> >> > Kmalloc N*alloc N*free(2048): Average=4980/1800
> >> > Kmalloc N*alloc N*free(4096): Average=9000/2078
> >> >
> >> > * After
> >> > Kmalloc N*alloc N*free(32): Average=344/792
> >> > Kmalloc N*alloc N*free(64): Average=347/882
> >> > Kmalloc N*alloc N*free(128): Average=390/959
> >> > Kmalloc N*alloc N*free(256): Average=393/1067
> >> > Kmalloc N*alloc N*free(512): Average=683/1229
> >> > Kmalloc N*alloc N*free(1024): Average=1295/1325
> >> > Kmalloc N*alloc N*free(2048): Average=2513/1664
> >> > Kmalloc N*alloc N*free(4096): Average=4742/2172
> >> >
> >> > It shows that allocation performance decreases for object sizes up to
> >> > 128, which may be due to the extra checks in cache_alloc_refill().  But,
> >> > taking the improvement in free performance into account, the net result
> >> > looks about the same.  The results for the other size classes look very
> >> > promising: roughly a 50% performance improvement.
> >> >
> >> > v2: replace kick_all_cpus_sync() with synchronize_sched().
> >> >
> >> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> >>
> >> I've bisected a boot failure (no output at all) in v4.7-rc2 on emev2/kzm9d
> >> (Renesas dual Cortex A9) to this patch, which is upstream commit
> >> 801faf0db8947e01877920e848a4d338dd7a99e7.
> >>
> >> I've attached my .config. I don't know if it also happens with
> >> shmobile_defconfig, as something went wrong with my remote access to the board,
> >> preventing further testing. I also couldn't verify if the issue persists in
> >> v4.7-rc3.
> 
> In the mean time, I've verified it also happens with shmobile_defconfig.
> 
> >>
> >> Do you have a clue?
> >
> > I don't have one yet.  Could you help me narrow down the problem?
> > The following diff is a half-revert, to check whether synchronize_sched()
> > is the problem.
> 
> Thanks!
> 
> Unfortunately the half revert is not sufficient. The full revert is.

Thanks for the quick testing!

Could I ask you one more time to help check whether synchronize_sched() is
the root cause of the problem? Testing the following two diffs would be
helpful to me.

Thanks.
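
For context, the lifetime rule that the two diffs below probe is roughly the
following sched-RCU-style pattern: readers look at n->shared only with
interrupts disabled, and the updater waits until every CPU has left such a
region before freeing the old array.  The sketch below is illustrative only;
the helper names shared_has_free_objs() and swap_shared() are made up and are
not the real mm/slab.c functions:

/*
 * Sketch only (hypothetical helpers, not the real mm/slab.c code): the
 * reader looks at n->shared with irqs disabled, so the pointer it loaded
 * cannot be freed until it re-enables irqs.
 */
static bool shared_has_free_objs(struct kmem_cache_node *n)
{
        struct array_cache *shared;
        bool ret;

        local_irq_disable();
        shared = READ_ONCE(n->shared);  /* may be replaced, but not freed yet */
        ret = shared && shared->avail > 0;
        local_irq_enable();

        return ret;
}

/*
 * Sketch of the updater: publish the new array, then wait until every CPU
 * has passed through a point with irqs enabled before freeing the old one.
 * synchronize_sched() waits for a sched grace period; kick_all_cpus_sync()
 * instead sends an IPI to every online CPU.
 */
static void swap_shared(struct kmem_cache_node *n,
                        struct array_cache *new_shared)
{
        struct array_cache *old_shared;

        spin_lock_irq(&n->list_lock);
        old_shared = n->shared;
        n->shared = new_shared;
        spin_unlock_irq(&n->list_lock);

        synchronize_sched();    /* the call the two diffs replace or remove */
        kfree(old_shared);
}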

------->8--------
diff --git a/mm/slab.c b/mm/slab.c
index 763096a..d892364 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -965,7 +965,7 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
         * freed after synchronize_sched().
         */
        if (force_change)
-               synchronize_sched();
+               kick_all_cpus_sync();
 
 fail:
        kfree(old_shared);

------->8------
diff --git a/mm/slab.c b/mm/slab.c
index 763096a..38d99c2 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -964,8 +964,6 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
         * guaranteed to be valid until irq is re-enabled, because it will be
         * freed after synchronize_sched().
         */
-       if (force_change)
-               synchronize_sched();
 
 fail:
        kfree(old_shared);

Thread overview: 101+ messages
2016-06-13 19:43 Boot failure on emev2/kzm9d (was: Re: [PATCH v2 11/11] mm/slab: lockless decision to grow cache) Geert Uytterhoeven
2016-06-14  6:24 ` Joonsoo Kim
2016-06-14  7:31   ` Geert Uytterhoeven
2016-06-14  8:11     ` Joonsoo Kim [this message]
2016-06-14 10:45       ` Geert Uytterhoeven
2016-06-15  2:23         ` Joonsoo Kim
2016-06-15  8:39           ` Geert Uytterhoeven
2016-06-20  6:39             ` Joonsoo Kim
2016-06-20 13:12               ` Paul E. McKenney
2016-06-21  6:43                 ` Joonsoo Kim
2016-06-21 12:54                   ` Paul E. McKenney
2016-06-22  0:52                     ` Joonsoo Kim
2016-06-22  3:15                       ` Paul E. McKenney
2016-06-22 15:01                       ` Geert Uytterhoeven
2016-06-22 19:08                         ` Paul E. McKenney
2016-06-23  0:49                           ` Paul E. McKenney
2016-06-23  2:37                             ` Joonsoo Kim
2016-06-23  2:47                               ` Paul E. McKenney
2016-06-23  2:53                                 ` Paul E. McKenney
2016-06-28  0:12                                   ` Paul E. McKenney
2016-06-28  8:33                                     ` Joonsoo Kim
2016-06-29 14:54                                   ` Geert Uytterhoeven
2016-06-29 16:44                                     ` Paul E. McKenney
2016-06-29 17:52                                       ` Geert Uytterhoeven
2016-06-29 18:12                                         ` Paul E. McKenney
2016-06-30  7:47                                           ` Joonsoo Kim
2016-06-30  7:58                                             ` Geert Uytterhoeven
2016-06-30 13:24                                               ` Paul E. McKenney
2016-06-30 13:31                                                 ` Geert Uytterhoeven
2016-06-30 15:18                                                   ` Paul E. McKenney
2016-06-30 15:53                                                     ` Geert Uytterhoeven
2016-06-30 16:52                                                       ` Paul E. McKenney
2016-06-30 17:54                                                         ` Geert Uytterhoeven
2016-06-14 13:10 ` Geert Uytterhoeven
