From: Matthew Wilcox
Date: Thu, 8 Aug 2019 06:53:29 -0700
To: Mikulas Patocka
Cc: Alexander Viro, "Darrick J. Wong", Mike Snitzer, junxiao.bi@oracle.com, dm-devel@redhat.com, Alasdair Kergon, honglei.wang@oracle.com, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org
Subject: Re: [PATCH] direct-io: use GFP_NOIO to avoid deadlock
Message-ID: <20190808135329.GG5482@bombadil.infradead.org>

On Thu, Aug 08, 2019 at 05:50:10AM -0400, Mikulas Patocka wrote:
> A deadlock with this stacktrace was observed.
>
> The obvious problem here is that in the call chain
> xfs_vm_direct_IO->__blockdev_direct_IO->do_blockdev_direct_IO->
> kmem_cache_alloc we do a GFP_KERNEL allocation while we are in a
> filesystem driver and in a block device driver.

But that's not the problem.  The problem is that the loop driver calls
into the filesystem without calling memalloc_noio_save() /
memalloc_noio_restore().  There are dozens of places in XFS that use
GFP_KERNEL allocations, and any of them can trigger this same deadlock
when called from the loop driver.  (A rough sketch of the kind of fix
I mean follows the quoted trace below.)
> #14 [ffff88272f5af880] kmem_cache_alloc at ffffffff811f484b
> #15 [ffff88272f5af8d0] do_blockdev_direct_IO at ffffffff812535b3
> #16 [ffff88272f5afb00] __blockdev_direct_IO at ffffffff81255dc3
> #17 [ffff88272f5afb30] xfs_vm_direct_IO at ffffffffa01fe3fc [xfs]
> #18 [ffff88272f5afb90] generic_file_read_iter at ffffffff81198994
> #19 [ffff88272f5afc50] __dta_xfs_file_read_iter_2398 at ffffffffa020c970 [xfs]
> #20 [ffff88272f5afcc0] lo_rw_aio at ffffffffa0377042 [loop]
> #21 [ffff88272f5afd70] loop_queue_work at ffffffffa0377c3b [loop]
> #22 [ffff88272f5afe60] kthread_worker_fn at ffffffff810a8a0c
> #23 [ffff88272f5afec0] kthread at ffffffff810a8428
> #24 [ffff88272f5aff50] ret_from_fork at ffffffff81745242
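
To be concrete, what I have in mind is something like the sketch below.
Untested, and loop_rw_noio() / do_loop_rw() are made-up names standing
in for the real I/O path in drivers/block/loop.c (frame #20, lo_rw_aio,
in the trace above), not actual functions:

#include <linux/sched/mm.h>	/* memalloc_noio_save(), memalloc_noio_restore() */

/*
 * Untested sketch.  Setting PF_MEMALLOC_NOIO around the call into the
 * filesystem makes current_gfp_context() strip __GFP_IO and __GFP_FS
 * from every allocation made in this context, so e.g. the GFP_KERNEL
 * kmem_cache_alloc() in do_blockdev_direct_IO() effectively becomes
 * GFP_NOIO, and direct reclaim can't recurse back into the loop device.
 */
static int loop_rw_noio(struct loop_device *lo, struct loop_cmd *cmd,
			loff_t pos, bool rw)
{
	unsigned int noio_flags;
	int ret;

	noio_flags = memalloc_noio_save();
	ret = do_loop_rw(lo, cmd, pos, rw);	/* stand-in for lo_rw_aio()'s body */
	memalloc_noio_restore(noio_flags);

	return ret;
}

That way every GFP_KERNEL allocation anywhere under the call -- in the
direct-io code, in XFS, wherever -- is covered, instead of having to
patch allocation sites one by one.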