On Sat 29-06-13 12:55:09, Dave Chinner wrote:
> On Thu, Jun 27, 2013 at 04:54:11PM +0200, Michal Hocko wrote:
> > On Thu 27-06-13 09:24:26, Dave Chinner wrote:
> > > On Wed, Jun 26, 2013 at 10:15:09AM +0200, Michal Hocko wrote:
> > > > On Tue 25-06-13 12:27:54, Dave Chinner wrote:
> > > > > On Tue, Jun 18, 2013 at 03:50:25PM +0200, Michal Hocko wrote:
> > > > > > And again, another hang. It looks like the inode deletion never
> > > > > > finishes. The good thing is that I do not see any LRU-related
> > > > > > BUG_ONs anymore. I am going to test with the other patch in the
> > > > > > thread.
> > > > > >
> > > > > > 2476 [] __wait_on_freeing_inode+0x9e/0xc0 <<< waiting for an inode to go away
> > > > > > [] find_inode_fast+0xa1/0xc0
> > > > > > [] iget_locked+0x4f/0x180
> > > > > > [] ext4_iget+0x33/0x9f0
> > > > > > [] ext4_lookup+0xbc/0x160
> > > > > > [] lookup_real+0x20/0x60
> > > > > > [] lookup_open+0x175/0x1d0
> > > > > > [] do_last+0x2de/0x780 <<< holds i_mutex
> > > > > > [] path_openat+0xda/0x400
> > > > > > [] do_filp_open+0x43/0xa0
> > > > > > [] do_sys_open+0x160/0x1e0
> > > > > > [] sys_open+0x1c/0x20
> > > > > > [] system_call_fastpath+0x16/0x1b
> > > > > > [] 0xffffffffffffffff
> > > > >
> > > > > I don't think this has anything to do with LRUs.
> > > >
> > > > I am not claiming that. It might be a timing issue which never
> > > > mattered before, but it is strange that I can reproduce this so easily
> > > > and repeatedly with the shrinkers patchset applied.
> > > > As I said earlier, this might be breakage in my -mm tree as well
> > > > (missing some patch which didn't go via Andrew, or a misapplied
> > > > patch). The situation is worsened by the state of linux-next, which
> > > > has some unrelated issues.
> > > >
> > > > I really do not want to delay the whole patchset just because of some
> > > > problem on my side. Do you have any tree that I should try to test?
> > >
> > > No, I've just been testing Glauber's tree and sending patches for
> > > problems back to him based on it.
> > >
> > > > > I won't have seen this on XFS stress testing, because it doesn't use
> > > > > the VFS inode hashes for inode lookups. Given that XFS is not
> > > > > triggering either problem you are seeing, that makes me think
> > > >
> > > > I haven't tested with xfs.
> > >
> > > That might be worthwhile if you can easily do that - another data
> > > point indicating a hang or absence of a hang will help point us in
> > > the right direction here...
> >
> > OK, still hanging (with inode_lru_isolate-fix.patch). It is not the same
> > thing, though, as xfs seems to do the lookup slightly differently.
> >
> > 12467 [] xfs_iget+0xbe/0x190 [xfs]
> > [] xfs_lookup+0xe8/0x110 [xfs]
> > [] xfs_vn_lookup+0x49/0x90 [xfs]
> > [] lookup_real+0x20/0x60
> > [] lookup_open+0x175/0x1d0
> > [] do_last+0x2de/0x780
> > [] path_openat+0xda/0x400
> > [] do_filp_open+0x43/0xa0
> > [] do_sys_open+0x160/0x1e0
> > [] sys_open+0x1c/0x20
> > [] system_call_fastpath+0x16/0x1b
> > [] 0xffffffffffffffff
>
> What are the full traces?

Do you mean sysrq+t? It is attached.

Btw. I was able to reproduce this again. The stuck processes were sitting
in the same traces for more than 28 hours without any change, so I do not
think this is a transient condition.
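As an aside, the stacks of the stuck tasks can also be collected from procfs
without sysrq. A minimal sketch (not what I actually ran; it assumes
/proc/<pid>/stack is available, i.e. CONFIG_STACKTRACE=y, and it usually
needs root):

```shell
# Sketch: dump kernel stacks of all tasks in uninterruptible sleep
# (D state), similar to what sysrq+w reports.
for status in /proc/[0-9]*/status; do
    pid=${status%/status}
    state=$(awk '/^State:/ { print $2 }' "$status" 2>/dev/null)
    if [ "$state" = "D" ]; then
        # pid and comm header, then one stack frame per line
        printf '%s %s\n' "${pid#/proc/}" "$(cat "$pid/comm" 2>/dev/null)"
        cat "$pid/stack" 2>/dev/null
        echo
    fi
done
```

This only samples the current moment, so for confirming a hang it has to be
re-run and compared over time, unlike sysrq+t which dumps everything at once.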
Traces of all processes in the D state:

7561 [] xfs_iget+0xbe/0x190 [xfs]
[] xfs_lookup+0xe8/0x110 [xfs]
[] xfs_vn_lookup+0x49/0x90 [xfs]
[] lookup_real+0x20/0x60
[] lookup_open+0x175/0x1d0
[] do_last+0x2de/0x780
[] path_openat+0xda/0x400
[] do_filp_open+0x43/0xa0
[] do_sys_open+0x160/0x1e0
[] sys_open+0x1c/0x20
[] system_call_fastpath+0x16/0x1b
[] 0xffffffffffffffff

8156 [] do_last+0x2c4/0x780
[] path_openat+0xda/0x400
[] do_filp_open+0x43/0xa0
[] do_sys_open+0x160/0x1e0
[] sys_open+0x1c/0x20
[] system_call_fastpath+0x16/0x1b
[] 0xffffffffffffffff

8913 [] do_last+0x2c4/0x780
[] path_openat+0xda/0x400
[] do_filp_open+0x43/0xa0
[] do_sys_open+0x160/0x1e0
[] sys_open+0x1c/0x20
[] system_call_fastpath+0x16/0x1b
[] 0xffffffffffffffff

9100 [] xfs_iget+0xbe/0x190 [xfs]
[] xfs_lookup+0xe8/0x110 [xfs]
[] xfs_vn_lookup+0x49/0x90 [xfs]
[] lookup_real+0x20/0x60
[] lookup_open+0x175/0x1d0
[] do_last+0x2de/0x780
[] path_openat+0xda/0x400
[] do_filp_open+0x43/0xa0
[] do_sys_open+0x160/0x1e0
[] sys_open+0x1c/0x20
[] system_call_fastpath+0x16/0x1b
[] 0xffffffffffffffff

9158 [] do_last+0x2c4/0x780
[] path_openat+0xda/0x400
[] do_filp_open+0x43/0xa0
[] do_sys_open+0x160/0x1e0
[] sys_open+0x1c/0x20
[] system_call_fastpath+0x16/0x1b
[] 0xffffffffffffffff

11247 [] do_last+0x2c4/0x780
[] path_openat+0xda/0x400
[] do_filp_open+0x43/0xa0
[] do_sys_open+0x160/0x1e0
[] sys_open+0x1c/0x20
[] system_call_fastpath+0x16/0x1b
[] 0xffffffffffffffff

12161 [] path_lookupat+0x792/0x830
[] filename_lookup+0x33/0xd0
[] user_path_at_empty+0x7b/0xb0
[] user_path_at+0xc/0x10
[] vfs_fstatat+0x51/0xb0
[] vfs_stat+0x16/0x20
[] sys_newstat+0x1f/0x50
[] system_call_fastpath+0x16/0x1b
[] 0xffffffffffffffff

-- 
Michal Hocko
SUSE Labs