From: Brian Foster <bfoster@redhat.com>
To: bugzilla-daemon@bugzilla.kernel.org
Cc: linux-xfs@vger.kernel.org
Subject: Re: [Bug 207053] fsfreeze deadlock on XFS (the FIFREEZE ioctl and subsequent FITHAW hang indefinitely)
Date: Tue, 7 Apr 2020 09:18:12 -0400
Message-ID: <20200407131812.GB27866@bfoster>
In-Reply-To: <bug-207053-201763-xyUAU29Yyq@https.bugzilla.kernel.org/>

On Tue, Apr 07, 2020 at 06:41:31AM +0000, bugzilla-daemon@bugzilla.kernel.org wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=207053
> 
> --- Comment #2 from Paul Furtado (paulfurtado91@gmail.com) ---
> Hi Dave,
> 
> Just had another case of this crop up and I was able to get the blocked tasks
> output before automation killed the server. Because the log was too large to
> attach, I've pasted the output into a github gist here:
> https://gist.githubusercontent.com/PaulFurtado/c9bade038b8a5c7ddb53a6e10def058f/raw/ee43926c96c0d6a9ec81a648754c1af599ef0bdd/sysrq_w.log
> 

Hm, so it looks like this is stuck between freeze:

[377279.630957] fsfreeze        D    0 46819  46337 0x00004084
[377279.634910] Call Trace:
[377279.637594]  ? __schedule+0x292/0x6f0
[377279.640833]  ? xfs_xattr_get+0x51/0x80 [xfs]
[377279.644287]  schedule+0x2f/0xa0
[377279.647286]  schedule_timeout+0x1dd/0x300
[377279.650661]  wait_for_completion+0x126/0x190
[377279.654154]  ? wake_up_q+0x80/0x80
[377279.657277]  ? work_busy+0x80/0x80
[377279.660375]  __flush_work+0x177/0x1b0
[377279.663604]  ? worker_attach_to_pool+0x90/0x90
[377279.667121]  __cancel_work_timer+0x12b/0x1b0
[377279.670571]  ? rcu_sync_enter+0x8b/0xd0
[377279.673864]  xfs_stop_block_reaping+0x15/0x30 [xfs]
[377279.677585]  xfs_fs_freeze+0x15/0x40 [xfs]
[377279.680950]  freeze_super+0xc8/0x190
[377279.684086]  do_vfs_ioctl+0x510/0x630
...

... and the eofblocks scanner:

[377279.422496] Workqueue: xfs-eofblocks/nvme13n1 xfs_eofblocks_worker [xfs]
[377279.426971] Call Trace:
[377279.429662]  ? __schedule+0x292/0x6f0
[377279.432839]  schedule+0x2f/0xa0
[377279.435794]  rwsem_down_read_slowpath+0x196/0x530
[377279.439435]  ? kmem_cache_alloc+0x152/0x1f0
[377279.442834]  ? __percpu_down_read+0x49/0x60
[377279.446242]  __percpu_down_read+0x49/0x60
[377279.449586]  __sb_start_write+0x5b/0x60
[377279.452869]  xfs_trans_alloc+0x152/0x160 [xfs]
[377279.456372]  xfs_free_eofblocks+0x12d/0x1f0 [xfs]
[377279.460014]  xfs_inode_free_eofblocks+0x128/0x1a0 [xfs]
[377279.463903]  ? xfs_inode_ag_walk_grab+0x5f/0x90 [xfs]
[377279.467680]  xfs_inode_ag_walk.isra.17+0x1a7/0x410 [xfs]
[377279.471567]  ? __xfs_inode_clear_blocks_tag+0x120/0x120 [xfs]
[377279.475620]  ? kvm_sched_clock_read+0xd/0x20
[377279.479059]  ? sched_clock+0x5/0x10
[377279.482184]  ? __xfs_inode_clear_blocks_tag+0x120/0x120 [xfs]
[377279.486234]  ? radix_tree_gang_lookup_tag+0xa8/0x100
[377279.489974]  ? __xfs_inode_clear_blocks_tag+0x120/0x120 [xfs]
[377279.494041]  xfs_inode_ag_iterator_tag+0x73/0xb0 [xfs]
[377279.497859]  xfs_eofblocks_worker+0x29/0x40 [xfs]
[377279.501484]  process_one_work+0x195/0x380
...

So freeze_super() has already drained the sb_writers counters by the
time xfs_fs_freeze() runs, and is now waiting in
cancel_delayed_work_sync() for the eofblocks worker, while the worker
itself is blocked taking an sb_writers reference in xfs_trans_alloc():
a deadlock.

The immediate issue is likely that the eofblocks transaction is not
allocated with XFS_TRANS_NO_WRITECOUNT (same for the cowblocks scanner,
BTW), but the problem with doing that is that these helpers are also
called from contexts outside of the background scanners (sketch below).
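
For illustration only, a minimal sketch of that change in
xfs_free_eofblocks() (the call site is approximated from the current
code, so treat the exact arguments as assumptions):

	/*
	 * HYPOTHETICAL sketch, not a proposed patch: allocating the
	 * truncate transaction with XFS_TRANS_NO_WRITECOUNT skips the
	 * sb_writers reference, so this call could no longer block
	 * behind (or deadlock with) a freeze.
	 */
	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_itruncate, 0, 0,
			XFS_TRANS_NO_WRITECOUNT, &tp);
	if (error)
		return error;

The catch is that xfs_free_eofblocks() is also reached from regular
inode teardown paths (e.g. xfs_release()), which do need to block
behind a freeze, so the flag can't simply be set here.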

Perhaps what we need to do here is let these background scanners
acquire a superblock write reference, similar to what Darrick recently
added to scrub...? We'd have to take that reference from the scanner
workqueue task itself, so it would probably need to be a trylock so we
don't end up in the same situation as above. I.e., we'd either get the
reference and cause freeze to wait until it's dropped, or bail out if
freeze has already stopped the transaction subsystem (rough sketch
below). Thoughts?
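
Purely as a sketch of the idea (the worker body is approximated from
the current code, sb_start_write_trylock() is the existing VFS helper,
and requeueing after thaw is left out):

	void
	xfs_eofblocks_worker(
		struct work_struct	*work)
	{
		struct xfs_mount	*mp = container_of(to_delayed_work(work),
					struct xfs_mount, m_eofblocks_work);

		/*
		 * Take a superblock write reference from the workqueue
		 * task before scanning. This has to be a trylock; a
		 * blocking sb_start_write() here would reproduce the
		 * deadlock above once freeze_super() has drained the
		 * sb_writers counters.
		 */
		if (!sb_start_write_trylock(mp->m_super))
			return;	/* freeze in progress, back off */

		xfs_icache_free_eofblocks(mp, NULL);
		sb_end_write(mp->m_super);

		xfs_queue_eofblocks(mp);
	}

That way freeze either waits for an in-flight scan to finish and drop
its reference, or the scan backs off immediately and the flush_work()
in xfs_stop_block_reaping() has nothing to wait on.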

Brian

> 
> Thanks,
> Paul
> 


Thread overview: 20+ messages
2020-04-01 19:02 [Bug 207053] New: fsfreeze deadlock on XFS (the FIFREEZE ioctl and subsequent FITHAW hang indefinitely) bugzilla-daemon
2020-04-02  0:15 ` Dave Chinner
2020-04-02  0:15 ` [Bug 207053] " bugzilla-daemon
2020-04-07  6:41 ` bugzilla-daemon
2020-04-07 13:18   ` Brian Foster [this message]
2020-04-07 15:17     ` Darrick J. Wong
2020-04-07 16:37       ` Brian Foster
2020-04-07 16:49         ` Darrick J. Wong
2020-04-07 17:02           ` Brian Foster
2020-04-07 13:18 ` bugzilla-daemon
2020-04-07 15:17 ` bugzilla-daemon
2020-04-07 16:37 ` bugzilla-daemon
2020-04-07 16:49 ` bugzilla-daemon
2020-04-07 17:02 ` bugzilla-daemon
2020-05-28  6:00 ` bugzilla-daemon
2020-05-28 10:47   ` Brian Foster
2020-05-28 10:47 ` bugzilla-daemon
2020-05-28 16:16 ` bugzilla-daemon
2023-03-16 18:30 ` bugzilla-daemon
2023-03-16 21:02 ` bugzilla-daemon
