From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Miklos Szeredi, Sasha Levin
Subject: [PATCH 5.4 165/314] fuse: copy_file_range should truncate cache
Date: Tue, 23 Jun 2020 21:56:00 +0200
Message-Id: <20200623195346.754920742@linuxfoundation.org>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200623195338.770401005@linuxfoundation.org>
References: <20200623195338.770401005@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Miklos Szeredi

[ Upstream commit 9b46418c40fe910e6537618f9932a8be78a3dd6c ]

After the copy operation completes the cache is not up-to-date.
Truncate all pages in the interval that has successfully been copied.

Truncating completely copied dirty pages is okay, since the data has
been overwritten anyway.  Truncating partially copied dirty pages is
not okay; add a comment for now.
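Not part of the upstream commit message, but as a rough illustration of the
problem being fixed: the user-space sketch below (the FUSE mount point, file
names and lengths are made up) caches part of the destination file with
pread(), then does a server-side copy_file_range(); without the truncation
added by this patch, the follow-up read on a FUSE mount may still be served
from the now-stale page cache.

/*
 * Hypothetical reproducer sketch (paths and sizes are invented): read the
 * destination first so its pages are cached, then do a server-side copy.
 * Without invalidating the copied range, the second pread() can be served
 * from the stale page cache on a FUSE mount.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char before[64], after[64];
	int in = open("/mnt/fuse/src", O_RDONLY);
	int out = open("/mnt/fuse/dst", O_RDWR);

	if (in < 0 || out < 0)
		return 1;

	/* Populate the page cache for the start of the destination file. */
	if (pread(out, before, sizeof(before), 0) < 0)
		return 1;

	/* Server-side copy; the client's page cache is not written through. */
	ssize_t n = copy_file_range(in, NULL, out, NULL, 65536, 0);
	if (n < 0)
		return 1;

	/*
	 * With the fix, this read faults fresh data back in; without it,
	 * it may simply return the bytes cached above.
	 */
	if (pread(out, after, sizeof(after), 0) < 0)
		return 1;

	printf("copied %zd bytes, cache %s\n", n,
	       memcmp(before, after, sizeof(after)) ? "refreshed" : "possibly stale");
	return 0;
}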
Fixes: 88bc7d5097a1 ("fuse: add support for copy_file_range()")
Signed-off-by: Miklos Szeredi
Signed-off-by: Sasha Levin
---
 fs/fuse/file.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index e730e3d8ad996..66214707a9456 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -3292,6 +3292,24 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
 	if (err)
 		goto out;
 
+	/*
+	 * Write out dirty pages in the destination file before sending the COPY
+	 * request to userspace.  After the request is completed, truncate off
+	 * pages (including partial ones) from the cache that have been copied,
+	 * since these contain stale data at that point.
+	 *
+	 * This should be mostly correct, but if the COPY writes to partial
+	 * pages (at the start or end) and the parts not covered by the COPY are
+	 * written through a memory map after calling fuse_writeback_range(),
+	 * then these partial page modifications will be lost on truncation.
+	 *
+	 * It is unlikely that someone would rely on such mixed style
+	 * modifications.  Yet this does give less guarantees than if the
+	 * copying was performed with write(2).
+	 *
+	 * To fix this a i_mmap_sem style lock could be used to prevent new
+	 * faults while the copy is ongoing.
+	 */
 	err = fuse_writeback_range(inode_out, pos_out, pos_out + len - 1);
 	if (err)
 		goto out;
@@ -3315,6 +3333,10 @@ static ssize_t __fuse_copy_file_range(struct file *file_in, loff_t pos_in,
 	if (err)
 		goto out;
 
+	truncate_inode_pages_range(inode_out->i_mapping,
+				   ALIGN_DOWN(pos_out, PAGE_SIZE),
+				   ALIGN(pos_out + outarg.size, PAGE_SIZE) - 1);
+
 	if (fc->writeback_cache) {
 		fuse_write_update_size(inode_out, pos_out + outarg.size);
 		file_update_time(file_out);
-- 
2.25.1
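An aside on the truncation bounds in the second hunk: ALIGN_DOWN()/ALIGN()
widen the copied byte range to whole pages, which is why partial pages at
either edge are dropped along with fully copied ones.  A stand-alone sketch
of that arithmetic (the macros below mirror the kernel ones only for
power-of-two sizes, and the offsets are made-up example values):

/*
 * Illustrative arithmetic only (user-space sketch, not kernel code):
 * how the truncation range is widened to page boundaries.
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long pos_out = 5000, copied = 3000;	/* example values */

	unsigned long start = ALIGN_DOWN(pos_out, PAGE_SIZE);		/* 4096 */
	unsigned long end   = ALIGN(pos_out + copied, PAGE_SIZE) - 1;	/* 8191 */

	/*
	 * The page covering bytes 4096..8191 is dropped even though the
	 * copy only touched bytes 5000..7999 -- which is why concurrent
	 * mmap stores into the uncopied parts of such edge pages can be
	 * lost, as the comment in the patch notes.
	 */
	printf("truncate bytes %lu..%lu for a copy of bytes %lu..%lu\n",
	       start, end, pos_out, pos_out + copied - 1);
	return 0;
}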