From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751672AbcFWCfX (ORCPT );
	Wed, 22 Jun 2016 22:35:23 -0400
Received: from LGEAMRELO12.lge.com ([156.147.23.52]:46342 "EHLO
	lgeamrelo12.lge.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751000AbcFWCfV (ORCPT );
	Wed, 22 Jun 2016 22:35:21 -0400
X-Original-SENDERIP: 156.147.1.121
X-Original-MAILFROM: iamjoonsoo.kim@lge.com
X-Original-SENDERIP: 10.177.222.138
X-Original-MAILFROM: iamjoonsoo.kim@lge.com
Date: Thu, 23 Jun 2016 11:37:56 +0900
From: Joonsoo Kim
To: "Paul E. McKenney"
Cc: Geert Uytterhoeven , Andrew Morton , Christoph Lameter ,
	Pekka Enberg , David Rientjes , Jesper Dangaard Brouer ,
	Linux MM , "linux-kernel@vger.kernel.org" ,
	linux-renesas-soc@vger.kernel.org,
	"linux-arm-kernel@lists.infradead.org"
Subject: Re: Boot failure on emev2/kzm9d (was: Re: [PATCH v2 11/11] mm/slab:
 lockless decision to grow cache)
Message-ID: <20160623023756.GA30438@js1304-P5Q-DELUXE>
References: <20160615022325.GA19863@js1304-P5Q-DELUXE>
 <20160620063942.GA13747@js1304-P5Q-DELUXE>
 <20160620131254.GO3923@linux.vnet.ibm.com>
 <20160621064302.GA20635@js1304-P5Q-DELUXE>
 <20160621125406.GF3923@linux.vnet.ibm.com>
 <20160622005208.GB25106@js1304-P5Q-DELUXE>
 <20160622190859.GA1473@linux.vnet.ibm.com>
 <20160623004935.GA20752@linux.vnet.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20160623004935.GA20752@linux.vnet.ibm.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jun 22, 2016 at 05:49:35PM -0700, Paul E. McKenney wrote:
> On Wed, Jun 22, 2016 at 12:08:59PM -0700, Paul E. McKenney wrote:
> > On Wed, Jun 22, 2016 at 05:01:35PM +0200, Geert Uytterhoeven wrote:
> > > On Wed, Jun 22, 2016 at 2:52 AM, Joonsoo Kim wrote:
> > > > Could you try the patch below to check who causes the hang?
> > > >
> > > > And, if sysrq-t works during the hang, could you get the sysrq-t
> > > > output? I haven't used it before but Paul could find some culprit
> > > > in it. :)
> > > >
> > > > Thanks.
> > > >
> > > >
> > > > ----->8-----
> > > > diff --git a/mm/slab.c b/mm/slab.c
> > > > index 763096a..9652d38 100644
> > > > --- a/mm/slab.c
> > > > +++ b/mm/slab.c
> > > > @@ -964,8 +964,13 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
> > > >  	 * guaranteed to be valid until irq is re-enabled, because it will be
> > > >  	 * freed after synchronize_sched().
> > > >  	 */
> > > > -	if (force_change)
> > > > +	if (force_change) {
> > > > +		if (num_online_cpus() > 1)
> > > > +			dump_stack();
> > > >  		synchronize_sched();
> > > > +		if (num_online_cpus() > 1)
> > > > +			dump_stack();
> > > > +	}
> > >
> > > I've only added the first one, as I would never see the second one. All of
> > > this happens before the serial console is activated, earlycon is not
> > > supported, and I only have remote access.
> > >
> > > Brought up 2 CPUs
> > > SMP: Total of 2 processors activated (2132.00 BogoMIPS).
> > > CPU: All CPU(s) started in SVC mode.
> > > CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc4-kzm9d-00404-g4a235e6dde4404dd-dirty #89
> > > Hardware name: Generic Emma Mobile EV2 (Flattened Device Tree)
> > > [] (unwind_backtrace) from [] (show_stack+0x10/0x14)
> > > [] (show_stack) from [] (dump_stack+0x7c/0x9c)
> > > [] (dump_stack) from [] (setup_kmem_cache_node+0x140/0x170)
> > > [] (setup_kmem_cache_node) from [] (__do_tune_cpucache+0xf4/0x114)
> > > [] (__do_tune_cpucache) from [] (enable_cpucache+0xf8/0x148)
> > > [] (enable_cpucache) from [] (__kmem_cache_create+0x1a8/0x1d0)
> > > [] (__kmem_cache_create) from [] (kmem_cache_create+0xbc/0x190)
> > > [] (kmem_cache_create) from [] (shmem_init+0x34/0xb0)
> > > [] (shmem_init) from [] (kernel_init_freeable+0x98/0x1ec)
> > > [] (kernel_init_freeable) from [] (kernel_init+0x8/0x110)
> > > [] (kernel_init) from [] (ret_from_fork+0x14/0x3c)
> > > devtmpfs: initialized
> >
> > I don't see anything here that would prevent grace periods from completing.
> >
> > The CPUs are using the normal hotplug sequence to come online, correct?
>
> And either way, could you please apply the patch below and then
> invoke rcu_dump_rcu_sched_tree() just before the offending call to
> synchronize_sched()?  That will tell me what CPUs RCU believes exist,
> and perhaps also which CPU is holding it up.

I can't find rcu_dump_rcu_sched_tree(). Do you mean rcu_dump_rcu_node_tree()?
Anyway, there is no patch below, so I attach one which does what Paul wants,
maybe.

Thanks.

------->8---------
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 88d3f95..6b650f0 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4171,7 +4171,7 @@ static void __init rcu_init_geometry(void)
  * Dump out the structure of the rcu_node combining tree associated
  * with the rcu_state structure referenced by rsp.
  */
-static void __init rcu_dump_rcu_node_tree(struct rcu_state *rsp)
+static void rcu_dump_rcu_node_tree(struct rcu_state *rsp)
 {
 	int level = 0;
 	struct rcu_node *rnp;
@@ -4189,6 +4189,11 @@ static void __init rcu_dump_rcu_node_tree(struct rcu_state *rsp)
 	pr_cont("\n");
 }

+void rcu_dump_rcu_sched_tree(void)
+{
+	rcu_dump_rcu_node_tree(&rcu_sched_state);
+}
+
 void __init rcu_init(void)
 {
 	int cpu;
diff --git a/mm/slab.c b/mm/slab.c
index 763096a..d88976c 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -909,6 +909,8 @@ static int init_cache_node_node(int node)
 	return 0;
 }

+extern void rcu_dump_rcu_sched_tree(void);
+
 static int setup_kmem_cache_node(struct kmem_cache *cachep,
 				int node, gfp_t gfp, bool force_change)
 {
@@ -964,8 +966,10 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
 	 * guaranteed to be valid until irq is re-enabled, because it will be
 	 * freed after synchronize_sched().
 	 */
-	if (force_change)
+	if (force_change) {
+		rcu_dump_rcu_sched_tree();
 		synchronize_sched();
+	}

 fail:
 	kfree(old_shared);