From: David Sterba <dsterba@suse.cz>
To: Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: dsterba@suse.cz, Qu Wenruo <wqu@suse.com>, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH v2 0/6] btrfs: qgroup: Delay subtree scan to reduce overhead
Date: Sat, 8 Dec 2018 01:47:37 +0100 [thread overview]
Message-ID: <20181208004737.GH23615@twin.jikos.cz> (raw)
In-Reply-To: <d52fa1bb-601a-25ee-d256-dbf05213ec9d@gmx.com>
On Fri, Dec 07, 2018 at 06:51:21AM +0800, Qu Wenruo wrote:
>
>
> On 2018/12/7 上午3:35, David Sterba wrote:
> > On Mon, Nov 12, 2018 at 10:33:33PM +0100, David Sterba wrote:
> >> On Thu, Nov 08, 2018 at 01:49:12PM +0800, Qu Wenruo wrote:
> >>> This patchset can be fetched from github:
> >>> https://github.com/adam900710/linux/tree/qgroup_delayed_subtree_rebased
> >>>
> >>> Which is based on v4.20-rc1.
> >>
> >> Thanks, I'll add it to for-next soon.
> >
> > The branch was there for some time but not for at least a week (my
> > mistake I did not notice in time). I've rebased it on top of recent
> > misc-next, but without the delayed refs patchset from Josef.
> >
> > At the moment I'm considering it for merge to 4.21, there's still some
> > time to pull it out in case it shows up to be too problematic. I'm
> > mostly worried about the unknown interactions with the enospc updates or
>
> For that part, I don't think it would cause any obvious problems for
> the enospc updates.
>
> The only user-noticeable effect is the delayed deletion of reloc
> trees.
>
> Apart from that, it's mostly transparent to extent allocation.
>
> > generally because of lack of qgroup and reloc code reviews.
>
> That's the biggest problem.
>
> However, most of the current qgroup + balance optimization is done
> inside the qgroup code (to skip certain qgroup records), so if we're
> going to hit a problem, this patchset is the most likely place to hit
> it.
>
> Later patches will mostly keep tweaking qgroup without affecting any
> other parts.
>
> So I'm fine if you decide to pull it out for now.
I've adapted a stress test that unpacks a large tarball, snapshots
every 20 seconds, deletes a random snapshot every 50 seconds and
deletes files from the original subvolume, now enhanced with qgroups
just for the new snapshots inheriting the toplevel subvolume. Lockup.
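Not the exact script used, but the steps above can be sketched roughly
like this (MNT, TARBALL, the iteration counts and the which-snapshot-
to-delete choice are all assumptions; DRY_RUN=1, the default, only
prints the commands instead of running them, so the sleeps are skipped):

```shell
#!/bin/sh
# Rough reproducer outline for the qgroup + snapshot stress test.
MNT=${MNT:-/mnt/btrfs}          # assumed: a mounted btrfs filesystem
TARBALL=${TARBALL:-/tmp/big.tar} # assumed: a large tarball to unpack
DRY_RUN=${DRY_RUN:-1}            # set DRY_RUN=0 to actually run it

run() {
	# In dry-run mode just print the command that would be executed.
	if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

stress_loop() {
	run btrfs quota enable "$MNT"
	run btrfs qgroup create 1/1 "$MNT"
	# Unpack into the original subvolume (concurrently in the real test).
	run tar xf "$TARBALL" -C "$MNT/vol"
	i=0
	while [ "$i" -lt 10 ]; do
		# Every 20 seconds: snapshot, inheriting qgroup 1/1.
		run btrfs subvolume snapshot -i 1/1 "$MNT/vol" "$MNT/snap$i"
		# Every ~50 seconds: delete an earlier snapshot (the real
		# test picks a random one; this sketch is deterministic).
		if [ "$i" -gt 0 ] && [ $((i % 2)) -eq 1 ]; then
			run btrfs subvolume delete "$MNT/snap$((i / 2))"
		fi
		[ "$DRY_RUN" = 1 ] || sleep 20
		i=$((i + 1))
	done
}

stress_loop
```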
It gets stuck in a snapshot call with the following stacktrace:
[<0>] btrfs_tree_read_lock+0xf3/0x150 [btrfs]
[<0>] btrfs_qgroup_trace_subtree+0x280/0x7b0 [btrfs]
[<0>] do_walk_down+0x681/0xb20 [btrfs]
[<0>] walk_down_tree+0xf5/0x1c0 [btrfs]
[<0>] btrfs_drop_snapshot+0x43b/0xb60 [btrfs]
[<0>] btrfs_clean_one_deleted_snapshot+0xc1/0x120 [btrfs]
[<0>] cleaner_kthread+0xf8/0x170 [btrfs]
[<0>] kthread+0x121/0x140
[<0>] ret_from_fork+0x27/0x50
and that's like the 10th snapshot and ~3rd deletion. This is the
qgroup show output:
qgroupid       rfer       excl  parent
--------       ----       ----  ------
0/5       865.27MiB    1.66MiB  ---
0/257         0.00B      0.00B  ---
0/259         0.00B      0.00B  ---
0/260     806.58MiB  637.25MiB  ---
0/262         0.00B      0.00B  ---
0/263         0.00B      0.00B  ---
0/264         0.00B      0.00B  ---
0/265         0.00B      0.00B  ---
0/266         0.00B      0.00B  ---
0/267         0.00B      0.00B  ---
0/268         0.00B      0.00B  ---
0/269         0.00B      0.00B  ---
0/270     989.04MiB    1.22MiB  ---
0/271         0.00B      0.00B  ---
0/272     922.25MiB  416.00KiB  ---
0/273     931.02MiB    1.50MiB  ---
0/274     910.94MiB    1.52MiB  ---
1/1         1.64GiB    1.64GiB  0/5,0/257,0/259,0/260,0/262,0/263,0/264,0/265,0/266,0/267,0/268,0/269,0/270,0/271,0/272,0/273,0/274
No IO or CPU activity at this point; the stacktrace and the show
output remain the same.
So, considering this, I'm not going to add the patchset to 4.21 but
will keep it in for-next for testing; any fixups or updates will be
applied.
Thread overview: 22+ messages
2018-11-08 5:49 [PATCH v2 0/6] btrfs: qgroup: Delay subtree scan to reduce overhead Qu Wenruo
2018-11-08 5:49 ` [PATCH v2 1/6] btrfs: qgroup: Allow btrfs_qgroup_extent_record::old_roots unpopulated at insert time Qu Wenruo
2018-11-08 5:49 ` [PATCH v2 2/6] btrfs: relocation: Delay reloc tree deletion after merge_reloc_roots() Qu Wenruo
2018-11-08 5:49 ` [PATCH v2 3/6] btrfs: qgroup: Refactor btrfs_qgroup_trace_subtree_swap() Qu Wenruo
2018-11-08 5:49 ` [PATCH v2 4/6] btrfs: qgroup: Introduce per-root swapped blocks infrastructure Qu Wenruo
2018-11-08 5:49 ` [PATCH v2 5/6] btrfs: qgroup: Use delayed subtree rescan for balance Qu Wenruo
2018-11-08 5:49 ` [PATCH v2 6/6] btrfs: qgroup: Cleanup old subtree swap code Qu Wenruo
2018-11-12 21:33 ` [PATCH v2 0/6] btrfs: qgroup: Delay subtree scan to reduce overhead David Sterba
2018-11-13 17:07 ` David Sterba
2018-11-13 17:58 ` Filipe Manana
2018-11-13 23:56 ` Qu Wenruo
2018-11-14 19:05 ` David Sterba
2018-11-15 5:23 ` Qu Wenruo
2018-11-15 10:28 ` David Sterba
2018-12-06 19:35 ` David Sterba
2018-12-06 22:51 ` Qu Wenruo
2018-12-08 0:47 ` David Sterba [this message]
2018-12-08 0:50 ` Qu Wenruo
2018-12-08 16:17 ` David Sterba
2018-12-10 10:45 ` Filipe Manana
2018-12-10 11:23 ` Qu Wenruo
2018-12-10 5:51 ` Qu Wenruo