* [PATCH] mm/slab: fix kmemcg cache creation delayed issue
From: js1304 @ 2016-10-06  6:20 UTC
  To: Andrew Morton
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Johannes Weiner,
	Vladimir Davydov, Doug Smythies, linux-mm, linux-kernel,
	Joonsoo Kim, stable

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

There is a bug report that SLAB causes an extreme load average due to
over 2000 kworker threads.

https://bugzilla.kernel.org/show_bug.cgi?id=172981

This issue is caused by the kmemcg feature, which tries to create a new
set of kmem_caches for each memcg. Recently, kmem_cache creation has been
slowed by synchronize_sched(), and further kmem_cache creation is also
delayed since kmem_cache creation is serialized by the global slab_mutex
lock. As a result, the number of kworkers trying to create kmem_caches
grows quickly. synchronize_sched() is needed for lockless access to the
node's shared array, but it is not needed when a new kmem_cache is being
created, because in that case there is no old shared array for lockless
readers to still reference. So, this patch rules out that case.

Fixes: 801faf0db894 ("mm/slab: lockless decision to grow cache")
Cc: stable@vger.kernel.org
Reported-by: Doug Smythies <dsmythies@telus.net>
Tested-by: Doug Smythies <dsmythies@telus.net>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/slab.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slab.c b/mm/slab.c
index 6508b4d..3c83c29 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -961,7 +961,7 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
 	 * guaranteed to be valid until irq is re-enabled, because it will be
 	 * freed after synchronize_sched().
 	 */
-	if (force_change)
+	if (old_shared && force_change)
 		synchronize_sched();
 
 fail:
-- 
1.9.1
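
A note on the pattern behind the one-line change: the grace-period wait only
matters when an old shared array is being replaced, because only then can a
lockless reader still hold a pointer that the subsequent free would
invalidate. A freshly created kmem_cache has no old shared array, so the
wait can be skipped. The minimal user-space sketch below models that
pattern; struct array_cache is reduced to a stub, and
fake_synchronize_sched() / install_shared() are illustrative stand-ins for
the kernel's synchronize_sched() and setup_kmem_cache_node(), not actual
kernel API.

#include <stdio.h>
#include <stdlib.h>

struct array_cache {
        int avail;
};

/*
 * Stand-in for synchronize_sched(): in the kernel this blocks until every
 * CPU has passed through a quiescent state, so any lockless reader that
 * picked up the old shared array pointer is guaranteed to be finished
 * with it before the array is freed.
 */
static void fake_synchronize_sched(void)
{
        puts("waiting for an RCU-sched grace period (expensive)");
}

/*
 * Swap a new shared array into place. A grace period is only required
 * when an old array existed, because only then can a lockless reader
 * still hold a reference that would be invalidated by the free below.
 * A freshly created cache has old_shared == NULL, so the wait is
 * skipped -- which is what the one-line fix achieves.
 */
static void install_shared(struct array_cache **slot,
                           struct array_cache *new_shared, int force_change)
{
        struct array_cache *old_shared = *slot;

        *slot = new_shared;

        if (old_shared && force_change) /* was: if (force_change) */
                fake_synchronize_sched();

        free(old_shared);
}

int main(void)
{
        struct array_cache *shared = NULL;

        /* Creating a new cache: no old array, no grace period needed. */
        install_shared(&shared, calloc(1, sizeof(*shared)), 1);

        /* Re-tuning an existing cache: old array exists, must wait. */
        install_shared(&shared, calloc(1, sizeof(*shared)), 1);

        free(shared);
        return 0;
}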

* Re: [PATCH] mm/slab: fix kmemcg cache creation delayed issue
From: Joonsoo Kim @ 2016-10-13  7:42 UTC
  To: Michal Hocko
  Cc: Doug Smythies, 'Andrew Morton',
	'Christoph Lameter', 'Pekka Enberg',
	'David Rientjes', 'Johannes Weiner',
	'Vladimir Davydov',
	linux-mm, linux-kernel, stable

On Fri, Oct 07, 2016 at 10:20:55AM +0200, Michal Hocko wrote:
> On Fri 07-10-16 14:14:01, Joonsoo Kim wrote:
> > On Thu, Oct 06, 2016 at 09:02:00AM -0700, Doug Smythies wrote:
> > > It was my (limited) understanding that the subsequent 2-patch set
> > > superseded this patch. Indeed, the 2-patch set seems to solve
> > > both the SLAB and SLUB bug reports.
> > 
> > It would mean that patch 1 solves both the SLAB and SLUB bug reports
> > since patch 2 is only effective for SLUB.
> > 
> > The reason I sent this patch is that, although patch 1 fixes the
> > issue of too many kworkers being created, kmem_cache creation/destruction
> > is still slowed by synchronize_sched(), which delays kmemcg usage
> > accounting. I'm not sure how bad that is, but it is generally better
> > to start accounting as soon as possible. With patch 2 for SLUB and
> > this patch for SLAB, the performance of kmem_cache creation/destruction
> > would recover.
> 
> OK, so do we really want/need it for stable as well? I am not opposing
> that, but the effect doesn't seem to be clear cut.

I think it is meaningful to fix the problematic commit itself because it
could cause other problems in the future. Also, this patch is simple
enough to apply to the stable tree.

Thanks.

* Re: [PATCH] mm/slab: fix kmemcg cache creation delayed issue
From: Michal Hocko @ 2016-10-07  8:20 UTC
  To: Joonsoo Kim
  Cc: Doug Smythies, 'Andrew Morton',
	'Christoph Lameter', 'Pekka Enberg',
	'David Rientjes', 'Johannes Weiner',
	'Vladimir Davydov',
	linux-mm, linux-kernel, stable

On Fri 07-10-16 14:14:01, Joonsoo Kim wrote:
> On Thu, Oct 06, 2016 at 09:02:00AM -0700, Doug Smythies wrote:
> > > It was my (limited) understanding that the subsequent 2-patch set
> > > superseded this patch. Indeed, the 2-patch set seems to solve
> > > both the SLAB and SLUB bug reports.
> 
> It would mean that patch 1 solves both the SLAB and SLUB bug reports
> since patch 2 is only effective for SLUB.
> 
> > The reason I sent this patch is that, although patch 1 fixes the
> > issue of too many kworkers being created, kmem_cache creation/destruction
> > is still slowed by synchronize_sched(), which delays kmemcg usage
> > accounting. I'm not sure how bad that is, but it is generally better
> > to start accounting as soon as possible. With patch 2 for SLUB and
> > this patch for SLAB, the performance of kmem_cache creation/destruction
> > would recover.

OK, so do we really want/need it for stable as well? I am not opposing
that, but the effect doesn't seem to be clear cut.
-- 
Michal Hocko
SUSE Labs

* Re: [PATCH] mm/slab: fix kmemcg cache creation delayed issue
From: Joonsoo Kim @ 2016-10-07  5:14 UTC
  To: Doug Smythies
  Cc: 'Andrew Morton', 'Christoph Lameter',
	'Pekka Enberg', 'David Rientjes',
	'Johannes Weiner', 'Vladimir Davydov',
	linux-mm, linux-kernel, stable

On Thu, Oct 06, 2016 at 09:02:00AM -0700, Doug Smythies wrote:
> It was my (limited) understanding that the subsequent 2-patch set
> superseded this patch. Indeed, the 2-patch set seems to solve
> both the SLAB and SLUB bug reports.

It would mean that patch 1 solves both the SLAB and SLUB bug reports
since patch 2 is only effective for SLUB.

The reason I sent this patch is that, although patch 1 fixes the issue
of too many kworkers being created, kmem_cache creation/destruction is
still slowed by synchronize_sched(), which delays kmemcg usage
accounting. I'm not sure how bad that is, but it is generally better to
start accounting as soon as possible. With patch 2 for SLUB and this
patch for SLAB, the performance of kmem_cache creation/destruction would
recover.

Thanks.

> 
> References:
> 
> https://bugzilla.kernel.org/show_bug.cgi?id=172981
> https://bugzilla.kernel.org/show_bug.cgi?id=172991
> https://patchwork.kernel.org/patch/9361853
> https://patchwork.kernel.org/patch/9359271

* RE: [PATCH] mm/slab: fix kmemcg cache creation delayed issue
From: Doug Smythies @ 2016-10-06 16:02 UTC
  To: js1304, 'Andrew Morton'
  Cc: 'Christoph Lameter', 'Pekka Enberg',
	'David Rientjes', 'Johannes Weiner',
	'Vladimir Davydov',
	linux-mm, linux-kernel, 'Joonsoo Kim',
	stable

It was my (limited) understanding that the subsequent 2-patch set
superseded this patch. Indeed, the 2-patch set seems to solve
both the SLAB and SLUB bug reports.

References:

https://bugzilla.kernel.org/show_bug.cgi?id=172981
https://bugzilla.kernel.org/show_bug.cgi?id=172991
https://patchwork.kernel.org/patch/9361853
https://patchwork.kernel.org/patch/9359271
 
On 2016.10.05 23:21 Joonsoo Kim wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> There is a bug report that SLAB causes an extreme load average due to
> over 2000 kworker threads.
> 
> https://bugzilla.kernel.org/show_bug.cgi?id=172981
>
> This issue is caused by the kmemcg feature, which tries to create a new
> set of kmem_caches for each memcg. Recently, kmem_cache creation has been
> slowed by synchronize_sched(), and further kmem_cache creation is also
> delayed since kmem_cache creation is serialized by the global slab_mutex
> lock. As a result, the number of kworkers trying to create kmem_caches
> grows quickly. synchronize_sched() is needed for lockless access to the
> node's shared array, but it is not needed when a new kmem_cache is being
> created, because in that case there is no old shared array for lockless
> readers to still reference. So, this patch rules out that case.
>
> Fixes: 801faf0db894 ("mm/slab: lockless decision to grow cache")
> Cc: stable@vger.kernel.org
> Reported-by: Doug Smythies <dsmythies@telus.net>
> Tested-by: Doug Smythies <dsmythies@telus.net>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
> mm/slab.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index 6508b4d..3c83c29 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -961,7 +961,7 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
> 	 * guaranteed to be valid until irq is re-enabled, because it will be
> 	 * freed after synchronize_sched().
> 	 */
> -	if (force_change)
> +	if (old_shared && force_change)
> 		synchronize_sched();
> 
> fail:
> -- 
> 1.9.1
