From: Filipe Manana <fdmanana@gmail.com>
To: Qu Wenruo <wqu@suse.de>
Cc: dsterba@suse.cz, Qu Wenruo <quwenruo.btrfs@gmx.com>,
	Qu Wenruo <wqu@suse.com>,
	linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: [PATCH v2 0/6] btrfs: qgroup: Delay subtree scan to reduce overhead
Date: Mon, 10 Dec 2018 10:45:23 +0000
Message-ID: <CAL3q7H61QEEHAK4PSfmx8PzO_cWRDPJeHEKizJjS=tdUzT2KSQ@mail.gmail.com>
In-Reply-To: <4cea2079-cef5-6833-884d-a3c8edc4c14a@suse.de>

On Sat, Dec 8, 2018 at 12:51 AM Qu Wenruo <wqu@suse.de> wrote:
>
>
>
> On 2018/12/8 8:47 AM, David Sterba wrote:
> > On Fri, Dec 07, 2018 at 06:51:21AM +0800, Qu Wenruo wrote:
> >>
> >>
> >> On 2018/12/7 3:35 AM, David Sterba wrote:
> >>> On Mon, Nov 12, 2018 at 10:33:33PM +0100, David Sterba wrote:
> >>>> On Thu, Nov 08, 2018 at 01:49:12PM +0800, Qu Wenruo wrote:
> >>>>> This patchset can be fetched from github:
> >>>>> https://github.com/adam900710/linux/tree/qgroup_delayed_subtree_rebased
> >>>>>
> >>>>> Which is based on v4.20-rc1.
> >>>>
> >>>> Thanks, I'll add it to for-next soon.
> >>>
> >>> The branch was there for some time, but it has not been there for at
> >>> least a week (my mistake, I did not notice it in time). I've rebased it
> >>> on top of recent misc-next, but without the delayed refs patchset from
> >>> Josef.
> >>>
> >>> At the moment I'm considering it for merging into 4.21; there's still
> >>> some time to pull it out in case it turns out to be too problematic. I'm
> >>> mostly worried about the unknown interactions with the enospc updates, or
> >>
> >> For that part, I don't think it would cause any obvious problem for the
> >> enospc updates.
> >>
> >> The user-noticeable effect is just the delayed deletion of reloc trees.
> >>
> >> Apart from that, it's mostly transparent to extent allocation.
> >>
> >>> generally because of the lack of qgroup and reloc code reviews.
> >>
> >> That's the biggest problem.
> >>
> >> However, most of the current qgroup + balance optimization is done inside
> >> the qgroup code (to skip certain qgroup records), so if we're going to hit
> >> a problem, this patchset is the most likely place to hit it.
> >>
> >> Later patches will mostly just keep tweaking the qgroup code without
> >> affecting other parts.
> >>
> >> So I'm fine if you decide to pull it out for now.
> >
> > I've adapted a stress test that unpacks a large tarball, snapshots it
> > every 20 seconds, deletes a random snapshot every 50 seconds, and deletes
> > files from the original subvolume, now enhanced with qgroups just for the
> > new snapshots inheriting the toplevel subvolume. Lockup.
> >
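(For reference, a reproduction along those lines might look roughly like the
sketch below. It is only an illustration, not the actual test: the mount
point, tarball path, runtime, and inheriting qgroup 1/1 via "-i" are all
assumptions rather than details from the report.)

/*
 * Rough sketch of the described stress test. Paths, timings and the qgroup
 * setup are assumptions; adjust to the actual environment before use.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define MNT	"/mnt/scratch"		/* assumed btrfs mount point */
#define SUBVOL	MNT "/subvol"		/* assumed source subvolume */

static void run(const char *cmd)
{
	printf("+ %s\n", cmd);
	if (system(cmd))
		fprintf(stderr, "command failed: %s\n", cmd);
}

int main(void)
{
	char cmd[512];
	int snap_id = 0;
	time_t last_snap = 0, last_del = 0, start = time(NULL);

	srand(getpid());
	run("btrfs quota enable " MNT);
	/* Unpack a large tarball into the source subvolume in the background. */
	run("tar -xf /tmp/large.tar -C " SUBVOL " &");

	while (time(NULL) - start < 600) {	/* run for ~10 minutes */
		time_t now = time(NULL);

		if (now - last_snap >= 20) {	/* snapshot every ~20 seconds */
			snprintf(cmd, sizeof(cmd),
				 "btrfs subvolume snapshot -i 1/1 %s %s/snap%d",
				 SUBVOL, MNT, snap_id++);
			run(cmd);
			last_snap = now;
		}
		if (snap_id > 0 && now - last_del >= 50) {
			/* Delete a random snapshot every ~50 seconds. */
			snprintf(cmd, sizeof(cmd),
				 "btrfs subvolume delete %s/snap%d",
				 MNT, rand() % snap_id);
			run(cmd);
			last_del = now;
		}
		/* Also delete some files from the original subvolume. */
		run("find " SUBVOL " -type f | shuf -n 50 | xargs -r rm -f");
		sleep(1);
	}
	return 0;
}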
> > It gets stuck in a snapshot call with the following stacktrace:
> >
> > [<0>] btrfs_tree_read_lock+0xf3/0x150 [btrfs]
> > [<0>] btrfs_qgroup_trace_subtree+0x280/0x7b0 [btrfs]
>
> This looks like something is wrong in the original subtree tracing.
>
> Thanks for the report, I'll investigate it.

Btw, there's another deadlock with qgroups. I don't recall if I ever
reported it, but I've been hitting it with fstests (it happens rarely) for
at least 1 year, iirc:

[29845.732448] INFO: task kworker/u8:8:3898 blocked for more than 120 seconds.
[29845.732852]       Not tainted 4.20.0-rc5-btrfs-next-40 #1
[29845.733248] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[29845.733558] kworker/u8:8    D    0  3898      2 0x80000000
[29845.733878] Workqueue: btrfs-endio-write btrfs_endio_write_helper [btrfs]
[29845.734183] Call Trace:
[29845.734499]  ? __schedule+0x3d4/0xbc0
[29845.734818]  schedule+0x39/0x90
[29845.735131]  btrfs_tree_read_lock+0xe7/0x140 [btrfs]
[29845.735430]  ? remove_wait_queue+0x60/0x60
[29845.735731]  find_parent_nodes+0x25e/0xe30 [btrfs]
[29845.736037]  btrfs_find_all_roots_safe+0xc6/0x140 [btrfs]
[29845.736342]  btrfs_find_all_roots+0x52/0x70 [btrfs]
[29845.736710]  btrfs_qgroup_trace_extent_post+0x37/0x80 [btrfs]
[29845.737046]  btrfs_add_delayed_data_ref+0x240/0x3d0 [btrfs]
[29845.737362]  btrfs_inc_extent_ref+0xb7/0x140 [btrfs]
[29845.737678]  __btrfs_mod_ref+0x174/0x250 [btrfs]
[29845.737999]  ? add_pinned_bytes+0x60/0x60 [btrfs]
[29845.738298]  update_ref_for_cow+0x26b/0x340 [btrfs]
[29845.738592]  __btrfs_cow_block+0x221/0x5b0 [btrfs]
[29845.738899]  btrfs_cow_block+0xf4/0x210 [btrfs]
[29845.739200]  btrfs_search_slot+0x583/0xa40 [btrfs]
[29845.739527]  ? init_object+0x6b/0x80
[29845.739823]  btrfs_lookup_file_extent+0x4a/0x70 [btrfs]
[29845.740119]  __btrfs_drop_extents+0x157/0xd70 [btrfs]
[29845.740524]  insert_reserved_file_extent.constprop.66+0x97/0x2f0 [btrfs]
[29845.740853]  ? start_transaction+0xa2/0x490 [btrfs]
[29845.741166]  btrfs_finish_ordered_io+0x344/0x810 [btrfs]
[29845.741489]  normal_work_helper+0xea/0x530 [btrfs]
[29845.741880]  process_one_work+0x22f/0x5d0
[29845.742174]  worker_thread+0x4f/0x3b0
[29845.742462]  ? rescuer_thread+0x360/0x360
[29845.742759]  kthread+0x103/0x140
[29845.743044]  ? kthread_create_worker_on_cpu+0x70/0x70
[29845.743336]  ret_from_fork+0x3a/0x50

It happened again last Friday on 4.20-rcX. It's caused by a change from
2017 (commit fb235dc06fac9eaa4408ade9c8b20d45d63c89b7, "btrfs: qgroup: Move
half of the qgroup accounting time out of commit trans"). The task is
deadlocking with itself.
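
To make the pattern concrete, below is a small standalone userspace model of
that kind of self-deadlock. It is not btrfs code: the toy lock only mimics a
reader having to wait for a writer, with no exception for the writer being
the current task, and cow_block()/trace_extent_post() are mere stand-ins for
__btrfs_cow_block() and btrfs_qgroup_trace_extent_post() from the trace
above. The point it illustrates is the one visible in the trace: while the
task still holds a tree block locked for COW, the qgroup trace hook does a
backref walk that ends up in btrfs_tree_read_lock() on a block the same task
has locked, so it waits on itself:

/*
 * Standalone model of the self-deadlock. The toy rwlock mimics the
 * relevant behaviour: a reader waits while any writer holds the lock,
 * with no exception for the writer being the current thread.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct tree_lock {
	pthread_mutex_t mu;
	pthread_cond_t cond;
	int writers;
	int readers;
};

static struct tree_lock eb_lock = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0
};

static void tree_write_lock(struct tree_lock *l)
{
	pthread_mutex_lock(&l->mu);
	while (l->writers || l->readers)
		pthread_cond_wait(&l->cond, &l->mu);
	l->writers = 1;
	pthread_mutex_unlock(&l->mu);
}

static void tree_read_lock(struct tree_lock *l)
{
	pthread_mutex_lock(&l->mu);
	while (l->writers)	/* no "unless the writer is me" exception */
		pthread_cond_wait(&l->cond, &l->mu);
	l->readers++;
	pthread_mutex_unlock(&l->mu);
}

/* Stand-in for the backref walk triggered by the qgroup trace hook. */
static void trace_extent_post(void)
{
	printf("backref walk: read-locking a block this task already write-locked\n");
	tree_read_lock(&eb_lock);	/* waits for our own write lock */
	printf("never reached\n");
}

/* Stand-in for COW'ing a block while adding the delayed ref. */
static void cow_block(void)
{
	tree_write_lock(&eb_lock);
	trace_extent_post();
}

static void *watchdog(void *arg)
{
	(void)arg;
	sleep(3);
	printf("still blocked after 3 seconds: the task deadlocked with itself\n");
	exit(0);
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, watchdog, NULL);
	cow_block();
	return 0;
}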

thanks


> Qu
>
> > [<0>] do_walk_down+0x681/0xb20 [btrfs]
> > [<0>] walk_down_tree+0xf5/0x1c0 [btrfs]
> > [<0>] btrfs_drop_snapshot+0x43b/0xb60 [btrfs]
> > [<0>] btrfs_clean_one_deleted_snapshot+0xc1/0x120 [btrfs]
> > [<0>] cleaner_kthread+0xf8/0x170 [btrfs]
> > [<0>] kthread+0x121/0x140
> > [<0>] ret_from_fork+0x27/0x50
> >
> > and that's around the 10th snapshot and ~3rd deletion. This is the
> > qgroup show output:
> >
> > qgroupid         rfer         excl parent
> > --------         ----         ---- ------
> > 0/5         865.27MiB      1.66MiB ---
> > 0/257           0.00B        0.00B ---
> > 0/259           0.00B        0.00B ---
> > 0/260       806.58MiB    637.25MiB ---
> > 0/262           0.00B        0.00B ---
> > 0/263           0.00B        0.00B ---
> > 0/264           0.00B        0.00B ---
> > 0/265           0.00B        0.00B ---
> > 0/266           0.00B        0.00B ---
> > 0/267           0.00B        0.00B ---
> > 0/268           0.00B        0.00B ---
> > 0/269           0.00B        0.00B ---
> > 0/270       989.04MiB      1.22MiB ---
> > 0/271           0.00B        0.00B ---
> > 0/272       922.25MiB    416.00KiB ---
> > 0/273       931.02MiB      1.50MiB ---
> > 0/274       910.94MiB      1.52MiB ---
> > 1/1           1.64GiB      1.64GiB
> > 0/5,0/257,0/259,0/260,0/262,0/263,0/264,0/265,0/266,0/267,0/268,0/269,0/270,0/271,0/272,0/273,0/274
> >
> > No IO or cpu activity at this point, the stacktrace and show output
> > remains the same.
> >
> > So, considering this, I'm not going to add the patchset to 4.21, but will
> > keep it in for-next for testing; any fixups or updates will be applied.
> >
>


-- 
Filipe David Manana,

“Whether you think you can, or you think you can't — you're right.”
