Date: Thu, 25 Jan 2018 09:35:16 +0100
From: Michal Hocko
Subject: Re: Possible deadlock in v4.14.15 contention on shrinker_rwsem in shrink_slab()
Message-ID: <20180125083516.GA22396@dhcp22.suse.cz>
To: Eric Wheeler
Cc: linux-mm@kvack.org, "Kirill A. Shutemov", Minchan Kim

[CC Kirill, Minchan]

On Wed 24-01-18 23:57:42, Eric Wheeler wrote:
> Hello all,
>
> We are getting processes stuck with /proc/pid/stack listing the following:
>
> [] io_schedule+0x12/0x40
> [] __lock_page+0x105/0x150
> [] pagecache_get_page+0x161/0x210
> [] shmem_unused_huge_shrink+0x334/0x3f0
> [] super_cache_scan+0x176/0x180
> [] shrink_slab+0x275/0x460
> [] shrink_node+0x10e/0x320
> [] node_reclaim+0x19d/0x250
> [] get_page_from_freelist+0x16a/0xac0
> [] __alloc_pages_nodemask+0x107/0x290
> [] pte_alloc_one+0x13/0x40
> [] __pte_alloc+0x19/0x100
> [] alloc_set_pte+0x468/0x4c0
> [] finish_fault+0x3a/0x70
> [] __handle_mm_fault+0x94a/0x1190
> [] handle_mm_fault+0xc4/0x1d0
> [] __do_page_fault+0x253/0x4d0
> [] do_page_fault+0x33/0x120
> [] page_fault+0x4c/0x60
>
> For some reason io_schedule is not coming back,

Is this a permanent state, or does the holder eventually release the
lock? It smells like somebody hasn't unlocked the shmem page. Tracking
those down is a major PITA...
:/ Do you remember the last good kernel?

> so shrinker_rwsem never
> gets an up_read. When this happens, other processes like libvirt get stuck
> trying to start VMs, with the /proc/pid/stack of libvirtd looking like so
> while register_shrinker waits for shrinker_rwsem to be released:
>
> [] call_rwsem_down_write_failed+0x13/0x20
> [] register_shrinker+0x45/0xa0
> [] sget_userns+0x468/0x4a0
> [] mount_nodev+0x2a/0xa0
> [] mount_fs+0x34/0x150
> [] vfs_kern_mount+0x62/0x120
> [] do_mount+0x1ee/0xc50
> [] SyS_mount+0x7e/0xd0
> [] do_syscall_64+0x61/0x1a0
> [] entry_SYSCALL64_slow_path+0x25/0x25
> [] 0xffffffffffffffff
>
> I seem to be able to reproduce this somewhat reliably; it will likely be
> stuck by tomorrow morning. Since it does seem to take a day to hang, I was
> hoping to avoid a bisect and see if anyone has seen this behavior or knows
> it to be fixed in 4.15-rc.
>
> Note that we are using zram as our only swap device, but at the time when
> shrink_slab() failed to return, there was plenty of memory available
> and no swap was in use.

Maybe something related to f80207727aac ("mm/memory.c: release locked
page in do_swap_page()")?

> The machine is generally responsive, but `sync` will hang forever and our
> only way out is `echo b > /proc/sysrq-trigger`.
>
> Please suggest any additional information you might need for testing, and
> I am happy to try patches.
>
> Thank you for your help!
>
> --
> Eric Wheeler

--
Michal Hocko
SUSE Labs