From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751458Ab3FZIPQ (ORCPT ); Wed, 26 Jun 2013 04:15:16 -0400
Received: from cantor2.suse.de ([195.135.220.15]:32909 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751271Ab3FZIPL (ORCPT ); Wed, 26 Jun 2013 04:15:11 -0400
Date: Wed, 26 Jun 2013 10:15:09 +0200
From: Michal Hocko
To: Dave Chinner
Cc: Glauber Costa, Andrew Morton, linux-mm@kvack.org, LKML
Subject: Re: linux-next: slab shrinkers: BUG at mm/list_lru.c:92
Message-ID: <20130626081509.GF28748@dhcp22.suse.cz>
References: <20130617141822.GF5018@dhcp22.suse.cz>
 <20130617151403.GA25172@localhost.localdomain>
 <20130617143508.7417f1ac9ecd15d8b2877f76@linux-foundation.org>
 <20130617223004.GB2538@localhost.localdomain>
 <20130618024623.GP29338@dastard>
 <20130618063104.GB20528@localhost.localdomain>
 <20130618082414.GC13677@dhcp22.suse.cz>
 <20130618104443.GH13677@dhcp22.suse.cz>
 <20130618135025.GK13677@dhcp22.suse.cz>
 <20130625022754.GP29376@dastard>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20130625022754.GP29376@dastard>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue 25-06-13 12:27:54, Dave Chinner wrote:
> On Tue, Jun 18, 2013 at 03:50:25PM +0200, Michal Hocko wrote:
> > And again, another hang. It looks like the inode deletion never
> > finishes. The good thing is that I do not see any LRU related BUG_ONs
> > anymore. I am going to test with the other patch in the thread.
> >
> > 2476 [] __wait_on_freeing_inode+0x9e/0xc0   <<< waiting for an inode to go away
> > [] find_inode_fast+0xa1/0xc0
> > [] iget_locked+0x4f/0x180
> > [] ext4_iget+0x33/0x9f0
> > [] ext4_lookup+0xbc/0x160
> > [] lookup_real+0x20/0x60
> > [] lookup_open+0x175/0x1d0
> > [] do_last+0x2de/0x780                      <<< holds i_mutex
> > [] path_openat+0xda/0x400
> > [] do_filp_open+0x43/0xa0
> > [] do_sys_open+0x160/0x1e0
> > [] sys_open+0x1c/0x20
> > [] system_call_fastpath+0x16/0x1b
> > [] 0xffffffffffffffff
>
> I don't think this has anything to do with LRUs.

I am not claiming that. It might be a timing issue which never mattered
before, but it is strange that I can reproduce this so easily and
repeatedly with the shrinkers patchset applied.

As I said earlier, this might be breakage in my -mm tree as well (a
missing patch which didn't go via Andrew, or a misapplied patch). The
situation is worsened by the state of linux-next, which has some
unrelated issues.

I really do not want to delay the whole patchset just because of some
problem on my side. Do you have any tree that I should try to test?

> __wait_on_freeing_inode() only blocks once the inode is being freed
> (i.e. I_FREEING is set), and that happens when a lookup is done when
> the inode is still in the inode hash.
>
> I_FREEING is set on the inode at the same time it is removed from
> the LRU, and from that point onwards the LRUs play no part in the
> inode being freed and anyone waiting on the inode being freed
> getting woken.
>
> The only way I can see this happening is if there is a dispose list
> that is not getting processed properly. e.g., we move a bunch of
> inodes to the dispose list setting I_FREEING, then for some reason
> it gets dropped on the ground and so the wakeup call doesn't happen
> when the inode has been removed from the hash.
>
> I can't see anywhere in the code that this happens, though, but it
> might be some pre-existing race in the inode hash that you are now
> triggering because freeing will be happening in parallel on multiple
> nodes rather than serialising on a global lock...
>
> I won't have seen this on XFS stress testing, because it doesn't use
> the VFS inode hashes for inode lookups. Given that XFS is not
> triggering either problem you are seeing, that makes me think

I haven't tested with XFS.

> that it might be a pre-existing inode hash lookup/reclaim race
> condition, not a LRU problem.
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com

--
Michal Hocko
SUSE Labs
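To make the handshake Dave describes concrete, below is a minimal userspace sketch in plain C with pthreads. It is not kernel code: the names (model_inode, lookup_or_wait, evict_and_unhash) and the condition variable are stand-ins invented for this illustration, and in the kernel the waiter actually sleeps on the inode's __I_NEW bit waitqueue rather than a condvar. The point it models is the ordering constraint: a lookup that finds a hashed inode marked I_FREEING must sleep, and the only wakeup comes from the evict step that removes the inode from the hash.

	#include <pthread.h>

	#define I_FREEING 0x01u

	struct model_inode {
		unsigned long ino;
		unsigned int state;        /* I_FREEING, etc. */
		struct model_inode *next;  /* single hash chain for the model */
	};

	static pthread_mutex_t hash_lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t freeing_wq = PTHREAD_COND_INITIALIZER;
	static struct model_inode *hash_head;

	/*
	 * Lookup side: roughly the role of find_inode_fast() plus
	 * __wait_on_freeing_inode().  If the inode is still hashed but marked
	 * I_FREEING, the caller sleeps until eviction unhashes it and wakes us.
	 */
	static struct model_inode *lookup_or_wait(unsigned long ino)
	{
		struct model_inode *i;

		pthread_mutex_lock(&hash_lock);
		for (;;) {
			for (i = hash_head; i; i = i->next)
				if (i->ino == ino)
					break;
			if (!i || !(i->state & I_FREEING))
				break;  /* not hashed at all, or a live inode */
			/* Inode is being torn down: wait for evict to drop it. */
			pthread_cond_wait(&freeing_wq, &hash_lock);
		}
		pthread_mutex_unlock(&hash_lock);
		return i;
	}

	/*
	 * Eviction side: roughly the role of dispose_list()/evict().  The wakeup
	 * happens only here, after the inode leaves the hash.
	 */
	static void evict_and_unhash(struct model_inode *inode)
	{
		struct model_inode **p;

		pthread_mutex_lock(&hash_lock);
		inode->state |= I_FREEING;  /* in the kernel this is set earlier,
		                               when the inode is pulled off the LRU */
		/* ... writeback / page cache truncation would go here ... */
		for (p = &hash_head; *p && *p != inode; p = &(*p)->next)
			;
		if (*p)
			*p = inode->next;   /* remove from the hash */
		pthread_cond_broadcast(&freeing_wq);  /* wake the lookup waiters */
		pthread_mutex_unlock(&hash_lock);
	}

If a dispose list of I_FREEING inodes is built but never walked, evict_and_unhash() never runs for those inodes and every thread sitting in lookup_or_wait() blocks forever, which is the shape of the hang in the ext4 backtrace above.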