Date: Wed, 19 Apr 2017 09:45:30 +0200
From: Michal Hocko
To: Geert Uytterhoeven
Cc: Tetsuo Handa, Andrew Morton, Linus Torvalds,
	Linux Kernel Mailing List, Linux MM, Linux PM list,
	Vlastimil Babka, Mel Gorman, Linux-Renesas, Tejun Heo
Subject: Re: "mm: move pcp and lru-pcp draining into single wq" broke resume from s2ram
Message-ID: <20170419074530.GA29789@dhcp22.suse.cz>
References: <20170418201907.GC20671@dhcp22.suse.cz>
	<201704190541.v3J5fUE3054131@www262.sakura.ne.jp>

On Wed 19-04-17 09:16:42, Geert Uytterhoeven wrote:
> Hi Tetsuo,
>
> On Wed, Apr 19, 2017 at 7:41 AM, Tetsuo Handa wrote:
[...]
> > Somebody is waiting forever with cpu_hotplug.lock held?
> > I think a full dmesg with SysRq-t output would be appreciated.
>
> As SysRq doesn't work with my serial console, I added calls to show_state()
> and show_workqueue_state() to check_hung_task().
>
> Result with current linus/master attached.

[ 47.165412] Enabling non-boot CPUs ...
[ 47.205615] CPU1 is up
[ 47.382002] PM: noirq resume of devices complete after 174.017 msecs
[ 47.390181] PM: early resume of devices complete after 1.468 msecs

OK, so this is still the early resume path AFAIU, which means that
userspace is still in the fridge... Is it possible that new workers
cannot be spawned?

[ 243.691979] INFO: task kworker/u4:0:5 blocked for more than 120 seconds.
[ 243.698684]       Not tainted 4.11.0-rc7-koelsch-00029-g005882e53d62f25d-dirty #3476
[ 243.706439] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 243.714276] kworker/u4:0    D    0     5      2 0x00000000
[ 243.719780] Workqueue: events_unbound async_run_entry_fn
[ 243.725118] [] (__schedule) from [] (schedule+0xb0/0xcc)
[ 243.732181] [] (schedule) from [] (schedule_timeout+0x18/0x1f4)
[ 243.739840] [] (schedule_timeout) from [] (wait_for_common+0x100/0x19c)
[ 243.748207] [] (wait_for_common) from [] (dpm_wait_for_superior+0x14/0x5c)
[ 243.756836] [] (dpm_wait_for_superior) from [] (device_resume+0x40/0x1a0)
[ 243.765380] [] (device_resume) from [] (async_resume+0x18/0x44)
[ 243.773055] [] (async_resume) from [] (async_run_entry_fn+0x44/0x114)
[ 243.781245] [] (async_run_entry_fn) from [] (process_one_work+0x1cc/0x31c)
[ 243.789876] [] (process_one_work) from [] (worker_thread+0x2b8/0x3f0)
[ 243.798080] [] (worker_thread) from [] (kthread+0x120/0x140)
[ 243.805500] [] (kthread) from [] (ret_from_fork+0x14/0x2c)
[...]
[ 249.441198] bash            D    0  1703   1694 0x00000000
[ 249.446702] [] (__schedule) from [] (schedule+0xb0/0xcc)
[ 249.453764] [] (schedule) from [] (schedule_timeout+0x18/0x1f4)
[ 249.461427] [] (schedule_timeout) from [] (wait_for_common+0x100/0x19c)
[ 249.469797] [] (wait_for_common) from [] (flush_work+0x128/0x158)
[ 249.477650] [] (flush_work) from [] (drain_all_pages+0x198/0x1f0)
[ 249.485503] [] (drain_all_pages) from [] (start_isolate_page_range+0xd8/0x1ac)
[ 249.494484] [] (start_isolate_page_range) from [] (alloc_contig_range+0xc4/0x304)
[ 249.503724] [] (alloc_contig_range) from [] (cma_alloc+0x134/0x1bc)
[ 249.511739] [] (cma_alloc) from [] (__alloc_from_contiguous+0x30/0xa0)
[ 249.520023] [] (__alloc_from_contiguous) from [] (cma_allocator_alloc+0x40/0x48)
[ 249.529173] [] (cma_allocator_alloc) from [] (__dma_alloc+0x1d4/0x2e8)
[ 249.537455] [] (__dma_alloc) from [] (arm_dma_alloc+0x40/0x4c)
[ 249.545047] [] (arm_dma_alloc) from [] (sh_eth_ring_init+0xec/0x1b8)
[ 249.553160] [] (sh_eth_ring_init) from [] (sh_eth_open+0x88/0x1e0)
[ 249.561086] [] (sh_eth_open) from [] (sh_eth_resume+0x7c/0xc0)
[ 249.568678] [] (sh_eth_resume) from [] (dpm_run_callback+0x48/0xc8)
[ 249.576702] [] (dpm_run_callback) from [] (device_resume+0x15c/0x1a0)
[ 249.584898] [] (device_resume) from [] (dpm_resume+0xe4/0x244)
[ 249.592485] [] (dpm_resume) from [] (dpm_resume_end+0xc/0x18)
[ 249.599977] [] (dpm_resume_end) from [] (suspend_devices_and_enter+0x3c8/0x490)
[ 249.609042] [] (suspend_devices_and_enter) from [] (pm_suspend+0x228/0x280)
[ 249.617759] [] (pm_suspend) from [] (state_store+0xac/0xcc)
[ 249.625089] [] (state_store) from [] (kernfs_fop_write+0x164/0x1a0)
[ 249.633116] [] (kernfs_fop_write) from [] (__vfs_write+0x20/0x108)
[ 249.641043] [] (__vfs_write) from [] (vfs_write+0xb8/0x144)
[ 249.648373] [] (vfs_write) from [] (SyS_write+0x40/0x80)
[ 249.655437] [] (SyS_write) from [] (ret_fast_syscall+0x0/0x34)
[...]
[ 254.753928] Showing busy workqueues and worker pools:
[...]
[ 254.854225] workqueue mm_percpu_wq: flags=0xc
[ 254.858583]   pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=0/0
[ 254.864428]     delayed: drain_local_pages_wq, vmstat_update
[ 254.870111]   pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=0/0
[ 254.875957]     delayed: drain_local_pages_wq BAR(1703), vmstat_update

I got lost in the indirection here. But is it possible that the
allocating context will wake up the workqueue context?

Anyway, the patch you have bisected to doesn't change much in this
scenario, as I've said before. If anything it is the switch to using a
WQ for the draining rather than smp_call_function that would change the
behavior. Does the below help by any chance?
---
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5fca73c7881a..a9a1ab7ea4c9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2417,6 +2417,14 @@ void drain_all_pages(struct zone *zone)
 	static cpumask_t cpus_with_pcps;
 
 	/*
+	 * This is an ugly hack but let's back off in the early PM suspend/resume
+	 * paths because the whole infrastructure might not be available yet for
+	 * us - namely kworkers might still be frozen
+	 */
+	if (pm_suspended_storage())
+		return;
+
+	/*
 	 * Make sure nobody triggers this path before mm_percpu_wq is fully
 	 * initialized.
 	 */
-- 
Michal Hocko
SUSE Labs
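[Editor's note: the hang discussed in this thread is easier to see stripped of the kernel machinery. The following is a hypothetical userspace C sketch, not kernel code: `struct pool`, `run_workers()`, and the string return values are illustrative stand-ins for the pwq state in the dump above, modeling why `flush_work()` from the resume path never returns while kworkers are still frozen, and how an early `pm_suspended_storage()`-style bail-out avoids queueing work that nothing can run.]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Stand-in for the pwq state in the dump: a work item is queued but
 * stays "delayed" because no worker is available to execute it. */
struct pool {
	bool frozen;    /* kworkers still in the fridge during early resume */
	bool work_done; /* set once the drain work item has actually run */
};

static void run_workers(struct pool *p)
{
	if (!p->frozen)          /* only a thawed worker picks the item up */
		p->work_done = true;
}

/* Analogue of drain_all_pages(); returns what the caller observes. */
static const char *drain_all_pages_sketch(struct pool *p, bool pm_suspended)
{
	if (pm_suspended)        /* the proposed early bail-out */
		return "skipped";
	p->work_done = false;    /* queue_work_on(): item sits on the pwq */
	run_workers(p);
	if (!p->work_done)       /* flush_work() would block here forever */
		return "hung";
	return "drained";
}
```

With a frozen pool the flush can never complete, which matches the `delayed: drain_local_pages_wq BAR(1703)` entry: task 1703 (bash) is the flusher parked behind a work item no worker will run. The bail-out trades a skipped (best-effort) drain for not deadlocking the resume path.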