From: Dave Chinner <david@fromorbit.com>
To: Bart Brashers <bart.brashers@gmail.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: mount before xfs_repair hangs
Date: Thu, 12 Mar 2020 10:25:10 +1100
Message-ID: <20200311232510.GG10776@dread.disaster.area>
In-Reply-To: <CAHgh4_KpizhD+V59+nV_Tr1W5i4N+yeSKQL9Sq6E5BwyWyr_aA@mail.gmail.com>

On Wed, Mar 11, 2020 at 04:11:27PM -0700, Bart Brashers wrote:
> After working fine for 2 days, it happened again. Drives went offline
> for no apparent reason, and a logical device (as arcconf calls them)
> failed. arcconf listed the hard drives as all online by the time I had
> logged on.
> 
> The server connected to the JBOD had rebooted by the time I noticed the problem.
> 
> There are two xfs filesystems on this server. I was able to mount one
> of them and ran xfs_repair on it.
> 
> I first tried mounting the other read-only with no log recovery
> (-o ro,norecovery). That worked.
> Trying to mount normally hangs. I see in ps aux | grep mount that it's
> not using CPU. Here's the mount command I gave:
> 
> mount -t xfs -o inode64,logdev=/dev/md/nvme2 /dev/volgrp4TB/lvol4TB
> /export/lvol4TB/
> 
> I did an echo w > /proc/sysrq-trigger while I was watching the
> console; it said "SysRq : Show Blocked State". Here's what the output
> of dmesg looks like, starting with that line. It then prints a stack
> trace for each blocked task, some of which mention "xfs".
> 
> [  228.927915] SysRq : Show Blocked State
> [  228.928525]   task                        PC stack   pid father
> [  228.928605] mount           D ffff96f79a553150     0 11341  11254 0x00000080
> [  228.928609] Call Trace:
> [  228.928617]  [<ffffffffb0b7f1c9>] schedule+0x29/0x70
> [  228.928624]  [<ffffffffb0b7cb51>] schedule_timeout+0x221/0x2d0
> [  228.928626]  [<ffffffffb0b7f57d>] wait_for_completion+0xfd/0x140
> [  228.928633]  [<ffffffffb04da0b0>] ? wake_up_state+0x20/0x20
> [  228.928667]  [<ffffffffc04c599e>] ? xfs_buf_delwri_submit+0x5e/0xf0 [xfs]
> [  228.928682]  [<ffffffffc04c3217>] xfs_buf_iowait+0x27/0xb0 [xfs]
> [  228.928696]  [<ffffffffc04c599e>] xfs_buf_delwri_submit+0x5e/0xf0 [xfs]
> [  228.928712]  [<ffffffffc04f2a9e>] xlog_do_recovery_pass+0x3ae/0x6e0 [xfs]
> [  228.928727]  [<ffffffffc04f2e59>] xlog_do_log_recovery+0x89/0xd0 [xfs]
> [  228.928742]  [<ffffffffc04f2ed1>] xlog_do_recover+0x31/0x180 [xfs]
> [  228.928758]  [<ffffffffc04f3fef>] xlog_recover+0xbf/0x190 [xfs]
> [  228.928772]  [<ffffffffc04e658f>] xfs_log_mount+0xff/0x310 [xfs]
> [  228.928801]  [<ffffffffc04dd1b0>] xfs_mountfs+0x520/0x8e0 [xfs]
> [  228.928814]  [<ffffffffc04e02a0>] xfs_fs_fill_super+0x410/0x550 [xfs]
> [  228.928818]  [<ffffffffb064c893>] mount_bdev+0x1b3/0x1f0
> [  228.928831]  [<ffffffffc04dfe90>] ? xfs_test_remount_options.isra.12+0x70/0x70 [xfs]
> [  228.928842]  [<ffffffffc04deaa5>] xfs_fs_mount+0x15/0x20 [xfs]
> [  228.928845]  [<ffffffffb064d1fe>] mount_fs+0x3e/0x1b0
> [  228.928850]  [<ffffffffb066b377>] vfs_kern_mount+0x67/0x110
> [  228.928852]  [<ffffffffb066dacf>] do_mount+0x1ef/0xce0
> [  228.928855]  [<ffffffffb064521a>] ? __check_object_size+0x1ca/0x250
> [  228.928858]  [<ffffffffb062368c>] ? kmem_cache_alloc_trace+0x3c/0x200
> [  228.928860]  [<ffffffffb066e903>] SyS_mount+0x83/0xd0
> [  228.928863]  [<ffffffffb0b8bede>] system_call_fastpath+0x25/0x2a

It's waiting for the metadata writes for recovered changes to
complete. This implies the underlying device is either hung or
extremely slow. I'd suggest "extremely slow": the controller is doing
its own internal rebuild and may well be blocking new writes until it
has recovered the regions the new writes are being directed at...
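
You can tell which from userspace while the mount is hung: near-zero
write completions on the backing devices point at a hung device, while
slow-but-moving numbers point at the rebuild throttling it. A minimal
sketch (iostat is from the sysstat package; adjust to whatever
dm-*/md* nodes your LV and log device actually map to):

  # map the LV and the external log to their kernel device names
  lsblk /dev/volgrp4TB/lvol4TB /dev/md/nvme2

  # extended per-device stats every 5 seconds; watch w/s and write
  # latency on those devices while the mount sits in D state
  iostat -xm 5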

This all looks like HW raid controller problems, nothing to do with
Linux or the filesystem.
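
If the controller is mid-rebuild, its own tools should say so. As a
sketch, assuming your Adaptec controller is number 1 (I don't have
one here to verify the exact output format):

  # show running background tasks (rebuild/verify/etc.) and their
  # progress on controller 1
  arcconf getstatus 1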

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 8+ messages
2020-03-07 20:36 mount before xfs_repair hangs Bart Brashers
2020-03-08 19:43 ` Bart Brashers
2020-03-08 22:26   ` Dave Chinner
2020-03-09  1:32     ` Bart Brashers
2020-03-11 23:11       ` Bart Brashers
2020-03-11 23:25         ` Dave Chinner [this message]
2020-03-11 23:27           ` Bart Brashers
2020-03-12  5:45             ` Dave Chinner
