From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Mika Westerberg, AceLan Kao, "Rafael J. Wysocki", Jens Axboe,
    Sasha Levin, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 5.3 30/33] bdi: Do not use freezable workqueue
Date: Fri, 25 Oct 2019 09:55:02 -0400
Message-Id: <20191025135505.24762-30-sashal@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191025135505.24762-1-sashal@kernel.org>
References: <20191025135505.24762-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Mika Westerberg

[ Upstream commit a2b90f11217790ec0964ba9c93a4abb369758c26 ]

A removable block device, such as an NVMe drive or an SSD connected over
Thunderbolt, can be hot-removed at any time, including while the system
is suspended. When the device is hot-removed during suspend and the
system is then resumed, the kernel first resumes devices and only
afterwards thaws userspace, including freezable workqueues. What happens
in that case is that the NVMe driver notices that the device is
unplugged and removes it from the system. This ends up calling
bdi_unregister() for the gendisk, which then schedules wb_workfn() to be
run one more time.
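To illustrate the interaction (a minimal sketch, not part of this patch;
the example_* names are hypothetical stand-ins for the bdi machinery):

#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;
static struct delayed_work example_dwork;

/* Stands in for wb_workfn(): one last writeback pass for the device. */
static void example_workfn(struct work_struct *work)
{
}

static int __init example_init(void)
{
	/*
	 * WQ_FREEZABLE: the workqueue is frozen early during suspend and
	 * thawed only late during resume, after devices have resumed.
	 */
	example_wq = alloc_workqueue("example", WQ_MEM_RECLAIM |
				     WQ_FREEZABLE | WQ_UNBOUND, 0);
	if (!example_wq)
		return -ENOMEM;
	INIT_DELAYED_WORK(&example_dwork, example_workfn);
	return 0;
}

/* Hot-removal path; may run while resume is still in progress. */
static void example_remove(void)
{
	queue_delayed_work(example_wq, &example_dwork, 0);
	/*
	 * While example_wq is still frozen the queued work cannot run,
	 * so this flush blocks forever and resume hangs.
	 */
	flush_delayed_work(&example_dwork);
	destroy_workqueue(example_wq);
}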
However, since bdi_wq is still frozen, the flush_delayed_work() call in
wb_shutdown() blocks forever, halting the system resume process. The
user sees this as a hang, since nothing happens anymore. Triggering
sysrq-w reveals the following:

Workqueue: nvme-wq nvme_remove_dead_ctrl_work [nvme]
Call Trace:
 ? __schedule+0x2c5/0x630
 ? wait_for_completion+0xa4/0x120
 schedule+0x3e/0xc0
 schedule_timeout+0x1c9/0x320
 ? resched_curr+0x1f/0xd0
 ? wait_for_completion+0xa4/0x120
 wait_for_completion+0xc3/0x120
 ? wake_up_q+0x60/0x60
 __flush_work+0x131/0x1e0
 ? flush_workqueue_prep_pwqs+0x130/0x130
 bdi_unregister+0xb9/0x130
 del_gendisk+0x2d2/0x2e0
 nvme_ns_remove+0xed/0x110 [nvme_core]
 nvme_remove_namespaces+0x96/0xd0 [nvme_core]
 nvme_remove+0x5b/0x160 [nvme]
 pci_device_remove+0x36/0x90
 device_release_driver_internal+0xdf/0x1c0
 nvme_remove_dead_ctrl_work+0x14/0x30 [nvme]
 process_one_work+0x1c2/0x3f0
 worker_thread+0x48/0x3e0
 kthread+0x100/0x140
 ? current_work+0x30/0x30
 ? kthread_park+0x80/0x80
 ret_from_fork+0x35/0x40

This is not limited to NVMe; the exact same issue can be reproduced by
hot-removing an SSD (over Thunderbolt) while the system is suspended.

Prevent this from happening by removing WQ_FREEZABLE from bdi_wq.

Reported-by: AceLan Kao
Link: https://marc.info/?l=linux-kernel&m=138695698516487
Link: https://bugzilla.kernel.org/show_bug.cgi?id=204385
Link: https://lore.kernel.org/lkml/20191002122136.GD2819@lahna.fi.intel.com/#t
Acked-by: Rafael J. Wysocki
Signed-off-by: Mika Westerberg
Signed-off-by: Jens Axboe
Signed-off-by: Sasha Levin
---
 mm/backing-dev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index e8e89158adec6..553372b39178c 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -236,8 +236,8 @@ static int __init default_bdi_init(void)
 {
 	int err;
 
-	bdi_wq = alloc_workqueue("writeback", WQ_MEM_RECLAIM | WQ_FREEZABLE |
-				 WQ_UNBOUND | WQ_SYSFS, 0);
+	bdi_wq = alloc_workqueue("writeback", WQ_MEM_RECLAIM | WQ_UNBOUND |
+				 WQ_SYSFS, 0);
 	if (!bdi_wq)
 		return -ENOMEM;
 
-- 
2.20.1
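For reference, the path that hangs is the flush in wb_shutdown(). A
simplified sketch (locking and WB_registered bookkeeping elided from
the real function in mm/backing-dev.c):

static void wb_shutdown(struct bdi_writeback *wb)
{
	/* ... clear WB_registered so nobody queues further work ... */

	/*
	 * Queue the final wb_workfn() pass on bdi_wq and wait for it to
	 * finish. If bdi_wq were still frozen here during resume, the
	 * work could not run and flush_delayed_work() would never
	 * return; dropping WQ_FREEZABLE above removes that possibility.
	 */
	mod_delayed_work(bdi_wq, &wb->dwork, 0);
	flush_delayed_work(&wb->dwork);
}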