From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Qu Wenruo, David Sterba
Subject: [PATCH 5.3 314/344] btrfs: qgroup: Fix reserved data space leak if we have multiple reserve calls
Date: Thu, 3 Oct 2019 17:54:39 +0200
Message-Id: <20191003154610.104056625@linuxfoundation.org>
In-Reply-To: <20191003154540.062170222@linuxfoundation.org>
References: <20191003154540.062170222@linuxfoundation.org>

From: Qu Wenruo

commit d4e204948fe3e0dc8e1fbf3f8f3290c9c2823be3 upstream.
[BUG]
The following script can cause a btrfs qgroup data space leak:

  mkfs.btrfs -f $dev
  mount $dev -o nospace_cache $mnt
  btrfs subv create $mnt/subv
  btrfs quota en $mnt
  btrfs quota rescan -w $mnt
  btrfs qgroup limit 128m $mnt/subv

  for (( i = 0; i < 3; i++)); do
          # Create 3 64M holes for the later fallocate to fail
          truncate -s 192m $mnt/subv/file
          xfs_io -c "pwrite 64m 4k" $mnt/subv/file > /dev/null
          xfs_io -c "pwrite 128m 4k" $mnt/subv/file > /dev/null
          sync

          # It's supposed to fail, and each failure will leak at least 64M
          # of data space
          xfs_io -f -c "falloc 0 192m" $mnt/subv/file &> /dev/null
          rm $mnt/subv/file
          sync
  done

  # Shouldn't fail after we removed the file
  xfs_io -f -c "falloc 0 64m" $mnt/subv/file

[CAUSE]
The btrfs qgroup data reserve code allows multiple reservations to happen
on a single extent_changeset, e.g.:

  btrfs_qgroup_reserve_data(inode, &data_reserved, 0, SZ_1M);
  btrfs_qgroup_reserve_data(inode, &data_reserved, SZ_1M, SZ_2M);
  btrfs_qgroup_reserve_data(inode, &data_reserved, 0, SZ_4M);

The btrfs qgroup code has internal tracking to make sure we don't
double-reserve in the above example.  The only pattern utilizing this
feature is the main while loop of the btrfs_fallocate() function.

However, btrfs_qgroup_reserve_data()'s error handling has a bug: on error
it clears all ranges in the io_tree with the EXTENT_QGROUP_RESERVED flag
but doesn't free the previously reserved bytes.

This bug has a twofold effect:

- Clearing EXTENT_QGROUP_RESERVED ranges
  This is the correct behavior, but it prevents
  btrfs_qgroup_check_reserved_leak() from catching the leakage, as that
  detector is purely EXTENT_QGROUP_RESERVED flag based.

- Leaking the previously reserved data bytes
  The bug manifests when N calls to btrfs_qgroup_reserve_data() are made
  and the last one fails, leaking the space reserved by the previous ones.

[FIX]
Also free the previously reserved data bytes when
btrfs_qgroup_reserve_data() fails.

Fixes: 524725537023 ("btrfs: qgroup: Introduce btrfs_qgroup_reserve_data function")
CC: stable@vger.kernel.org # 4.4+
Signed-off-by: Qu Wenruo
Signed-off-by: David Sterba
Signed-off-by: Greg Kroah-Hartman
---
 fs/btrfs/qgroup.c |    3 +++
 1 file changed, 3 insertions(+)

--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -3425,6 +3425,9 @@ cleanup:
 	while ((unode = ulist_next(&reserved->range_changed, &uiter)))
 		clear_extent_bit(&BTRFS_I(inode)->io_tree, unode->val,
 				 unode->aux, EXTENT_QGROUP_RESERVED, 0, 0, NULL);
+	/* Also free data bytes of already reserved one */
+	btrfs_qgroup_free_refroot(root->fs_info, root->root_key.objectid,
+				orig_reserved, BTRFS_QGROUP_RSV_DATA);
 	extent_changeset_release(reserved);
 	return ret;
 }
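
[Editorial note, not part of the patch] For readers unfamiliar with the
reserve flow, below is a minimal user-space sketch of the accounting
described in [CAUSE] and [FIX].  All names in it (reserve_data,
qgroup_reserved, struct changeset) are invented for the sketch; the real
kernel code tracks reserved ranges in the io_tree and returns the bytes
through btrfs_qgroup_free_refroot(), as the diff above shows.

  /*
   * Toy model, not kernel code: several reserve calls share one
   * changeset; a failing call must give back everything the earlier
   * calls reserved, not only the ranges it marked itself.
   */
  #include <stdio.h>

  struct changeset {
          unsigned long long bytes_changed;   /* bytes reserved via this changeset */
  };

  static unsigned long long qgroup_reserved;  /* models the qgroup data rsv counter */

  /* Models btrfs_qgroup_reserve_data(); 'fail' forces the error path. */
  static int reserve_data(struct changeset *cs, unsigned long long len, int fail)
  {
          unsigned long long orig_reserved = cs->bytes_changed;

          if (fail) {
                  /*
                   * Error path.  The pre-fix code only cleared the newly
                   * marked ranges; the fix also returns the bytes that the
                   * earlier calls sharing this changeset had reserved.
                   */
                  qgroup_reserved -= orig_reserved;   /* models the added free_refroot call */
                  cs->bytes_changed = 0;
                  return -1;
          }

          cs->bytes_changed += len;
          qgroup_reserved += len;
          return 0;
  }

  int main(void)
  {
          struct changeset cs = { 0 };

          reserve_data(&cs, 1ULL << 20, 0);   /* 1M, succeeds */
          reserve_data(&cs, 2ULL << 20, 0);   /* 2M, succeeds */
          reserve_data(&cs, 4ULL << 20, 1);   /* 4M, fails    */

          /*
           * With the fix this prints 0; without the line marked above it
           * would print 3145728, i.e. the leaked 3M.
           */
          printf("leaked bytes: %llu\n", qgroup_reserved);
          return 0;
  }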