From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from brockman.in8.de ([85.214.220.56]:48265 "EHLO mail.in8.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751566Ab3HEMec (ORCPT );
	Mon, 5 Aug 2013 08:34:32 -0400
Message-ID: <51FF9BD6.7000003@jan-o-sch.net>
Date: Mon, 05 Aug 2013 14:34:30 +0200
From: Jan Schmidt
MIME-Version: 1.0
To: Liu Bo
CC: linux-btrfs@vger.kernel.org
Subject: Re: [RFC PATCH v5 4/5] Btrfs: disable qgroups accounting when quota is off
References: <1375285066-14173-1-git-send-email-bo.li.liu@oracle.com>
 <1375285066-14173-5-git-send-email-bo.li.liu@oracle.com>
In-Reply-To: <1375285066-14173-5-git-send-email-bo.li.liu@oracle.com>
Content-Type: text/plain; charset=ISO-8859-1
Sender: linux-btrfs-owner@vger.kernel.org
List-ID:

Nice try hiding this one in a dedup patch set, but I finally found it :-)

On Wed, July 31, 2013 at 17:37 (+0200), Liu Bo wrote:
> So we don't need to do qgroups accounting trick without enabling quota.
> This reduces my tester's costing time from ~28s to ~23s.
> 
> Signed-off-by: Liu Bo
> ---
>  fs/btrfs/extent-tree.c |    6 ++++++
>  fs/btrfs/qgroup.c      |    6 ++++++
>  2 files changed, 12 insertions(+), 0 deletions(-)
> 
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index 10a5c72..c6612f5 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -2524,6 +2524,12 @@ int btrfs_delayed_refs_qgroup_accounting(struct btrfs_trans_handle *trans,
>  	struct qgroup_update *qgroup_update;
>  	int ret = 0;
> 
> +	if (!trans->root->fs_info->quota_enabled) {
> +		if (trans->delayed_ref_elem.seq)
> +			btrfs_put_tree_mod_seq(fs_info, &trans->delayed_ref_elem);
> +		return 0;
> +	}
> +
>  	if (list_empty(&trans->qgroup_ref_list) !=
>  	    !trans->delayed_ref_elem.seq) {
>  		/* list without seq or seq without list */
> diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
> index 1280eff..f3e82aa 100644
> --- a/fs/btrfs/qgroup.c
> +++ b/fs/btrfs/qgroup.c
> @@ -1200,6 +1200,9 @@ int btrfs_qgroup_record_ref(struct btrfs_trans_handle *trans,
>  {
>  	struct qgroup_update *u;
> 
> +	if (!trans->root->fs_info->quota_enabled)
> +		return 0;
> +
>  	BUG_ON(!trans->delayed_ref_elem.seq);
>  	u = kmalloc(sizeof(*u), GFP_NOFS);
>  	if (!u)
> @@ -1850,6 +1853,9 @@ out:
> 
>  void assert_qgroups_uptodate(struct btrfs_trans_handle *trans)
>  {
> +	if (!trans->root->fs_info->quota_enabled)
> +		return;
> +
>  	if (list_empty(&trans->qgroup_ref_list) && !trans->delayed_ref_elem.seq)
>  		return;
>  	pr_err("btrfs: qgroups not uptodate in trans handle %p: list is%s empty, seq is %#x.%x\n",

The second hunk looks sensible at first sight. However, hunks 1 and 3 don't:
they assert the consistency of qgroup state in well-defined places. The fact
that you need to disable those checks shows that skipping the addition to the
list in the second hunk cannot be right, or at least not sufficient.

We've got the list of qgroup operations, trans->qgroup_ref_list, and we've
got the qgroup's delayed ref blocker, trans->delayed_ref_elem.
If you stop adding to the list (hunk 2), which seems reasonable when quota is
disabled, then you must also ensure you're not acquiring the delayed ref
blocker element, which should give another performance boost. need_ref_seq
may be the right place for this change; it just feels a bit too obvious. The
critical cases obviously are quota enable and quota disable.

I just don't recall why it wasn't done that way from the very beginning of
qgroups; I might be missing something fundamental here.

Thanks,
-Jan