Date: Wed, 21 Jun 2017 19:31:34 +0300
From: Vladimir Davydov
To: Sahitya Tummala
Cc: Alexander Polakov, Andrew Morton, Jan Kara, viro@zeniv.linux.org.uk,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v2] fs/dcache.c: fix spin lockup issue on nlru->lock
Message-ID: <20170621163134.GA3273@esperanza>
References: <6ab790fe-de97-9495-0d3b-804bae5d7fbb@codeaurora.org>
	<1498027155-4456-1-git-send-email-stummala@codeaurora.org>
In-Reply-To: <1498027155-4456-1-git-send-email-stummala@codeaurora.org>

On Wed, Jun 21, 2017 at 12:09:15PM +0530, Sahitya Tummala wrote:
> __list_lru_walk_one() acquires the nlru spin lock (nlru->lock) for a
> long duration if there are many items in the LRU list. As per the
> current code, it can hold the spin lock for up to a maximum of
> UINT_MAX entries at a time. So if there are many items in the LRU
> list, then "BUG: spinlock lockup suspected" is observed in the below
> path -
>
> [] spin_bug+0x90
> [] do_raw_spin_lock+0xfc
> [] _raw_spin_lock+0x28
> [] list_lru_add+0x28
> [] dput+0x1c8
> [] path_put+0x20
> [] terminate_walk+0x3c
> [] path_lookupat+0x100
> [] filename_lookup+0x6c
> [] user_path_at_empty+0x54
> [] SyS_faccessat+0xd0
> [] el0_svc_naked+0x24
>
> This nlru->lock is acquired by another CPU in this path -
>
> [] d_lru_shrink_move+0x34
> [] dentry_lru_isolate_shrink+0x48
> [] __list_lru_walk_one.isra.10+0x94
> [] list_lru_walk_node+0x40
> [] shrink_dcache_sb+0x60
> [] do_remount_sb+0xbc
> [] do_emergency_remount+0xb0
> [] process_one_work+0x228
> [] worker_thread+0x2e0
> [] kthread+0xf4
> [] ret_from_fork+0x10
>
> Fix this lockup by reducing the number of entries to be shrunk from
> the LRU list to 1024 at once. Also, add cond_resched() before
> processing the LRU list again.
>
> Link: http://marc.info/?t=149722864900001&r=1&w=2
> Fix-suggested-by: Jan Kara
> Fix-suggested-by: Vladimir Davydov
> Signed-off-by: Sahitya Tummala
> ---
> v2: patch shrink_dcache_sb() instead of list_lru_walk()
> ---
>  fs/dcache.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/fs/dcache.c b/fs/dcache.c
> index cddf397..c8ca150 100644
> --- a/fs/dcache.c
> +++ b/fs/dcache.c
> @@ -1133,10 +1133,11 @@ void shrink_dcache_sb(struct super_block *sb)
>  		LIST_HEAD(dispose);
>
>  		freed = list_lru_walk(&sb->s_dentry_lru,
> -			dentry_lru_isolate_shrink, &dispose, UINT_MAX);
> +			dentry_lru_isolate_shrink, &dispose, 1024);
>
>  		this_cpu_sub(nr_dentry_unused, freed);
>  		shrink_dentry_list(&dispose);
> +		cond_resched();
>  	} while (freed > 0);

In an extreme case, a single invocation of list_lru_walk() can skip
all 1024 dentries, in which case 'freed' will be 0, forcing us to
break the loop prematurely. I think we should loop until there's at
least one dentry left on the LRU, i.e.
	while (list_lru_count(&sb->s_dentry_lru) > 0)

However, even that wouldn't be quite correct, because list_lru_count()
iterates over all memory cgroups to sum list_lru_one->nr_items, which
can race with memcg offlining code migrating dentries off a dead
cgroup (see memcg_drain_all_list_lrus()). So it looks like, to make
this check race-free, we need to account the number of entries on the
LRU not only per memcg, but also per node, i.e. add
list_lru_node->nr_items. Fortunately, list_lru entries can't be
migrated between NUMA nodes.

>  }
>  EXPORT_SYMBOL(shrink_dcache_sb);
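
For concreteness, a rough sketch of the direction suggested above
(untested; the nr_items field on struct list_lru_node is hypothetical
here - it would have to be kept in sync under nlru->lock on every
add/del/isolate):

void shrink_dcache_sb(struct super_block *sb)
{
	long freed;

	do {
		LIST_HEAD(dispose);

		freed = list_lru_walk(&sb->s_dentry_lru,
			dentry_lru_isolate_shrink, &dispose, 1024);

		this_cpu_sub(nr_dentry_unused, freed);
		shrink_dentry_list(&dispose);
		cond_resched();
		/* Loop until the LRU is empty, not until one walk
		 * happens to free nothing - a walk that merely skips
		 * all 1024 entries must not end the shrink early. */
	} while (list_lru_count(&sb->s_dentry_lru) > 0);
}

/*
 * And to make the loop condition race-free wrt memcg offlining,
 * list_lru_count_node() could return a per-node counter instead of
 * summing the per-memcg ones:
 */
unsigned long list_lru_count_node(struct list_lru *lru, int nid)
{
	struct list_lru_node *nlru = &lru->node[nid];

	return nlru->nr_items;	/* hypothetical per-node counter */
}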