From: Jan Kara <jack@suse.cz>
To: linux-fsdevel@vger.kernel.org
Cc: Christoph Hellwig, Matthew Wilcox, Jan Kara
Subject: [PATCH 0/2 RFC v2] fs: Hole punch vs page cache filling races
Date: Mon, 8 Feb 2021 17:39:16 +0100
Message-Id: <20210208163918.7871-1-jack@suse.cz>

Hello,

Amir has reported [1] that ext4 has a potential issue where reads can race
with hole punching, possibly exposing stale data from freed blocks or even
corrupting the filesystem when stale mapping data gets used for writeout.
The problem is that during hole punching, new page cache pages can get
instantiated and the block mapping looked up in the punched range after
truncate_inode_pages() has run but before the filesystem removes blocks
from the file. In principle, any filesystem implementing hole punching thus
needs a mechanism to block instantiation of page cache pages during hole
punching to avoid this race. This is further complicated by the fact that
there are multiple places that can instantiate pages in the page cache.
Besides regular read(2) or a page fault, fadvise(2) or madvise(2) can also
result in reading in page cache pages through force_page_cache_readahead().

There are a couple of ways to fix this. The first way (currently
implemented by XFS) is to protect read(2) and *advise(2) calls with i_rwsem
so that they are serialized with hole punching. This is easy to do but, as
a result, all reads would be serialized with writes, so mixed read-write
workloads suffer heavily on ext4. Thus this series introduces
inode->i_mapping_sem and takes it when creating new pages in the page cache
and looking up their corresponding block mapping. We also replace
EXT4_I(inode)->i_mmap_sem with this new rwsem, which provides the necessary
serialization with hole punching for ext4.

If this approach looks viable, I'll also convert other equivalent fs locks
to use this new VFS semaphore instead - in particular XFS' XFS_MMAPLOCK,
f2fs's i_mmap_sem, fuse's i_mmap_sem and maybe others as well.

								Honza

[1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/
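
For illustration, here is a minimal sketch of how the proposed
inode->i_mapping_sem could serialize hole punching against page cache
instantiation. This is not code from the patches; my_punch_hole() and
my_fault() are hypothetical stand-ins for a filesystem's hole punch and
->fault handlers, and the i_mapping_sem field is the rwsem this series
proposes to add to struct inode:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/rwsem.h>

/* Hole punch: take the rwsem exclusively so no new pages or block
 * mappings can be instantiated while pages are truncated and blocks
 * are removed from the file. */
static int my_punch_hole(struct inode *inode, loff_t start, loff_t len)
{
	int ret = 0;

	down_write(&inode->i_mapping_sem);	/* rwsem proposed in this series */
	truncate_inode_pages_range(inode->i_mapping, start,
				   start + len - 1);
	/* ... filesystem removes blocks in the punched range here ... */
	up_write(&inode->i_mapping_sem);
	return ret;
}

/* Page fault: hold the rwsem shared while page cache pages and their
 * block mapping are created, so faults cannot race with hole punch. */
static vm_fault_t my_fault(struct vm_fault *vmf)
{
	struct inode *inode = file_inode(vmf->vma->vm_file);
	vm_fault_t ret;

	down_read(&inode->i_mapping_sem);
	ret = filemap_fault(vmf);
	up_read(&inode->i_mapping_sem);
	return ret;
}

The same shared acquisition would sit in the read(2), fadvise(2) and
madvise(2) readahead paths, which is what distinguishes this scheme from
serializing everything on i_rwsem.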