From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933349AbdDSIRL (ORCPT );
	Wed, 19 Apr 2017 04:17:11 -0400
Received: from mx2.suse.de ([195.135.220.15]:43455 "EHLO mx1.suse.de"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1760780AbdDSIRF (ORCPT );
	Wed, 19 Apr 2017 04:17:05 -0400
Date: Wed, 19 Apr 2017 10:17:01 +0200
From: Michal Hocko
To: Linus Torvalds
Cc: Geert Uytterhoeven, Tetsuo Handa, Andrew Morton,
	Linux Kernel Mailing List, Linux MM, Linux PM list,
	Vlastimil Babka, Mel Gorman, Linux-Renesas, Tejun Heo
Subject: Re: Re: Re: "mm: move pcp and lru-pcp draining into single wq" broke resume from s2ram
Message-ID: <20170419081701.GC29789@dhcp22.suse.cz>
References: <201704190541.v3J5fUE3054131@www262.sakura.ne.jp>
	<20170419071039.GB28263@dhcp22.suse.cz>
	<201704190726.v3J7QAiC076509@www262.sakura.ne.jp>
	<20170419075712.GB29789@dhcp22.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed 19-04-17 10:09:12, Geert Uytterhoeven wrote:
> Hi Michal, Tetsuo,
> 
> On Wed, Apr 19, 2017 at 9:57 AM, Michal Hocko wrote:
> > From f3c6e287042259d6ae9916f1ff66392c46ce2a3c Mon Sep 17 00:00:00 2001
> > From: Michal Hocko
> > Date: Wed, 19 Apr 2017 09:52:46 +0200
> > Subject: [PATCH] mm: make mm_percpu_wq non freezable
> >
> > Geert has reported a freeze during PM resume and some additional
> > debugging has shown that the device_resume worker cannot make forward
> > progress because it waits for an event which is stuck waiting in
> > drain_all_pages:
> > [  243.691979] INFO: task kworker/u4:0:5 blocked for more than 120 seconds.
> > [  243.698684]       Not tainted 4.11.0-rc7-koelsch-00029-g005882e53d62f25d-dirty #3476
> > [  243.706439] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > [  243.714276] kworker/u4:0    D    0     5      2 0x00000000
> > [  243.719780] Workqueue: events_unbound async_run_entry_fn
> > [  243.725118] [] (__schedule) from [] (schedule+0xb0/0xcc)
> > [  243.732181] [] (schedule) from [] (schedule_timeout+0x18/0x1f4)
> > [  243.739840] [] (schedule_timeout) from [] (wait_for_common+0x100/0x19c)
> > [  243.748207] [] (wait_for_common) from [] (dpm_wait_for_superior+0x14/0x5c)
> > [  243.756836] [] (dpm_wait_for_superior) from [] (device_resume+0x40/0x1a0)
> > [  243.765380] [] (device_resume) from [] (async_resume+0x18/0x44)
> > [  243.773055] [] (async_resume) from [] (async_run_entry_fn+0x44/0x114)
> > [  243.781245] [] (async_run_entry_fn) from [] (process_one_work+0x1cc/0x31c)
> > [  243.789876] [] (process_one_work) from [] (worker_thread+0x2b8/0x3f0)
> > [  243.798080] [] (worker_thread) from [] (kthread+0x120/0x140)
> > [  243.805500] [] (kthread) from [] (ret_from_fork+0x14/0x2c)
> > [...]
> > [  249.441198] bash            D    0  1703   1694 0x00000000
> > [  249.446702] [] (__schedule) from [] (schedule+0xb0/0xcc)
> > [  249.453764] [] (schedule) from [] (schedule_timeout+0x18/0x1f4)
> > [  249.461427] [] (schedule_timeout) from [] (wait_for_common+0x100/0x19c)
> > [  249.469797] [] (wait_for_common) from [] (flush_work+0x128/0x158)
> > [  249.477650] [] (flush_work) from [] (drain_all_pages+0x198/0x1f0)
> > [  249.485503] [] (drain_all_pages) from [] (start_isolate_page_range+0xd8/0x1ac)
> > [  249.494484] [] (start_isolate_page_range) from [] (alloc_contig_range+0xc4/0x304)
> > [  249.503724] [] (alloc_contig_range) from [] (cma_alloc+0x134/0x1bc)
> > [  249.511739] [] (cma_alloc) from [] (__alloc_from_contiguous+0x30/0xa0)
> > [  249.520023] [] (__alloc_from_contiguous) from [] (cma_allocator_alloc+0x40/0x48)
> > [  249.529173] [] (cma_allocator_alloc) from [] (__dma_alloc+0x1d4/0x2e8)
> > [  249.537455] [] (__dma_alloc) from [] (arm_dma_alloc+0x40/0x4c)
> > [  249.545047] [] (arm_dma_alloc) from [] (sh_eth_ring_init+0xec/0x1b8)
> > [  249.553160] [] (sh_eth_ring_init) from [] (sh_eth_open+0x88/0x1e0)
> > [  249.561086] [] (sh_eth_open) from [] (sh_eth_resume+0x7c/0xc0)
> > [  249.568678] [] (sh_eth_resume) from [] (dpm_run_callback+0x48/0xc8)
> > [  249.576702] [] (dpm_run_callback) from [] (device_resume+0x15c/0x1a0)
> > [  249.584898] [] (device_resume) from [] (dpm_resume+0xe4/0x244)
> > [  249.592485] [] (dpm_resume) from [] (dpm_resume_end+0xc/0x18)
> > [  249.599977] [] (dpm_resume_end) from [] (suspend_devices_and_enter+0x3c8/0x490)
> > [  249.609042] [] (suspend_devices_and_enter) from [] (pm_suspend+0x228/0x280)
> > [  249.617759] [] (pm_suspend) from [] (state_store+0xac/0xcc)
> > [  249.625089] [] (state_store) from [] (kernfs_fop_write+0x164/0x1a0)
> > [  249.633116] [] (kernfs_fop_write) from [] (__vfs_write+0x20/0x108)
> > [  249.641043] [] (__vfs_write) from [] (vfs_write+0xb8/0x144)
> > [  249.648373] [] (vfs_write) from [] (SyS_write+0x40/0x80)
> > [  249.655437] [] (SyS_write) from [] (ret_fast_syscall+0x0/0x34)
> > [...]
> > [  254.753928] Showing busy workqueues and worker pools:
> > [...]
> > [  254.854225] workqueue mm_percpu_wq: flags=0xc
> > [  254.858583]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=0/0
> > [  254.864428]     delayed: drain_local_pages_wq, vmstat_update
> > [  254.870111]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=0/0
> > [  254.875957]     delayed: drain_local_pages_wq BAR(1703), vmstat_update
> >
> > Tetsuo has correctly noted that mm_percpu_wq is created as WQ_FREEZABLE,
> > so it is frozen this early during resume and we are effectively
> > deadlocked. Fix this by dropping WQ_FREEZABLE when creating
> > mm_percpu_wq. We really want to have it operational all the time.
> >
> > Fixes: ce612879ddc7 ("mm: move pcp and lru-pcp draining into single wq")
> > Reported-by: Geert Uytterhoeven
> > Debugged-by: Tetsuo Handa
> > Signed-off-by: Michal Hocko
> 
> Tested-by: Geert Uytterhoeven

Thanks for the testing. Linus, will you take the patch from this thread,
or do you prefer a resend?

-- 
Michal Hocko
SUSE Labs
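[The diff itself is trimmed from the quoted patch above. As a sketch
only -- assuming the mm_percpu_wq allocation still sits in
init_mm_internals() in mm/vmstat.c, where ce612879ddc7 put it --
dropping WQ_FREEZABLE would look roughly like the hunk below. This is a
reconstruction for illustration, not the verbatim patch from the
thread:

--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ void __init init_mm_internals(void)
-	mm_percpu_wq = alloc_workqueue("mm_percpu_wq",
-				       WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
+	/*
+	 * Must not be freezable: drain_all_pages() is flushed from the
+	 * resume path while workqueues are still frozen, which is the
+	 * deadlock in the traces above. WQ_MEM_RECLAIM is kept so the
+	 * drain work can make progress under memory pressure.
+	 */
+	mm_percpu_wq = alloc_workqueue("mm_percpu_wq", WQ_MEM_RECLAIM, 0);

A freezable workqueue stops executing new work items once the PM core
calls freeze_workqueues_begin() and only resumes them after thaw; that
is why the pending drain_local_pages_wq items show up as "delayed" in
the workqueue dump while the resume path blocks in flush_work().]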