* xfs_repair keeps reporting errors
From: Emmanuel Florac @ 2019-10-31 14:40 UTC
  To: linux-xfs


Hi,

I just had a problem with a RAID array. Now that the rebuild process
is complete, xfs_repair (v5.0) keeps reporting problems no matter how
many times I run it (below is the output of the 3rd consecutive run):

bad CRC for inode 861144976062
bad magic number 0x0 on inode 861144976062
bad magic number 0x0 on inode 217316006460
bad version number 0x0 on inode 217316006460
inode identifier 0 mismatch on inode 217316006460
bad version number 0x0 on inode 861144976062
bad CRC for inode 217316006461
bad magic number 0x0 on inode 217316006461
inode identifier 0 mismatch on inode 861144976062
bad version number 0x0 on inode 217316006461
inode identifier 0 mismatch on inode 217316006461
bad CRC for inode 217316006462
bad magic number 0x0 on inode 217316006462
bad CRC for inode 861144976063
bad magic number 0x0 on inode 861144976063
bad version number 0x0 on inode 861144976063
bad version number 0x0 on inode 217316006462
inode identifier 0 mismatch on inode 217316006462
bad CRC for inode 217316006463
bad magic number 0x0 on inode 217316006463
inode identifier 0 mismatch on inode 861144976063
bad version number 0x0 on inode 217316006463
inode identifier 0 mismatch on inode 217316006463

Is there anything else to do?
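
A sketch of what each pass looks like, with /dev/sdX1 standing in for
the real device (xfs_repair must run on an unmounted filesystem):

  umount /mnt/array            # placeholder mount point
  xfs_repair -n /dev/sdX1      # -n: no-modify mode, only report problems
  xfs_repair /dev/sdX1         # actual repair pass; rerun with -n to verify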

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: xfs_repair keeps reporting errors
From: Brian Foster @ 2019-11-01 10:45 UTC
  To: Emmanuel Florac; +Cc: linux-xfs

On Thu, Oct 31, 2019 at 03:40:49PM +0100, Emmanuel Florac wrote:
> Hi,
> 
> I just had a problem with a RAID array. Now that the rebuild process
> is complete, xfs_repair (v5.0) keeps reporting problems no matter how
> many times I run it (below is the output of the 3rd consecutive run):
> 

Problems with the same inodes or the same general issue with different
inodes after repeated repair runs?

What kind of RAID event was involved? Was the filesystem known healthy
prior to the event?

> 
> bad CRC for inode 861144976062
> bad magic number 0x0 on inode 861144976062
> bad magic number 0x0 on inode 217316006460
> bad version number 0x0 on inode 217316006460
> inode identifier 0 mismatch on inode 217316006460
> bad version number 0x0 on inode 861144976062
> bad CRC for inode 217316006461
> bad magic number 0x0 on inode 217316006461
> inode identifier 0 mismatch on inode 861144976062
> bad version number 0x0 on inode 217316006461
> inode identifier 0 mismatch on inode 217316006461
> bad CRC for inode 217316006462
> bad magic number 0x0 on inode 217316006462
> bad CRC for inode 861144976063
> bad magic number 0x0 on inode 861144976063
> bad version number 0x0 on inode 861144976063
> bad version number 0x0 on inode 217316006462
> inode identifier 0 mismatch on inode 217316006462
> bad CRC for inode 217316006463
> bad magic number 0x0 on inode 217316006463
> inode identifier 0 mismatch on inode 861144976063
> bad version number 0x0 on inode 217316006463
> inode identifier 0 mismatch on inode 217316006463
> 
> Is there anything else to do?
> 

I think it's hard to say what might be going on here without some view
into the state of the fs. Perhaps some large chunk of the fs has been
zeroed given all of the zeroed out on-disk inode fields? We'd probably
want to see a metadump of the fs to get a closer look.
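
A sketch with placeholder paths (xfs_metadump captures metadata only
and obfuscates file names by default; xfs_mdrestore turns the dump
back into an image that xfs_db can inspect):

  xfs_metadump -g /dev/sdX1 /tmp/fs.metadump    # -g: show progress
  xfs_mdrestore /tmp/fs.metadump /tmp/fs.img    # recreate a sparse image
  xfs_db -f -r -c 'inode 217316006460' -c 'print' /tmp/fs.img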

Brian

> -- 
> ------------------------------------------------------------------------
> Emmanuel Florac     |   Direction technique
>                     |   Intellique
>                     |	<eflorac@intellique.com>
>                     |   +33 1 78 94 84 02
> ------------------------------------------------------------------------




* Re: xfs_repair keeps reporting errors
From: Emmanuel Florac @ 2019-11-04 13:31 UTC
  To: Brian Foster; +Cc: linux-xfs

On Fri, 1 Nov 2019 06:45:51 -0400,
Brian Foster <bfoster@redhat.com> wrote:

> I think it's hard to say what might be going on here without some view
> into the state of the fs. Perhaps some large chunk of the fs has been
> zeroed given all of the zeroed out on-disk inode fields? We'd probably
> want to see a metadump of the fs to get a closer look.
> 

Alas, the metadump is 28 GB (this is a 500 TB filesystem). So far the
filesystem seems to behave better now that no RAID controller
operation is running in the background.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: xfs_repair keeps reporting errors
From: Eric Sandeen @ 2019-11-04 17:21 UTC
  To: Emmanuel Florac, Brian Foster; +Cc: linux-xfs


On 11/4/19 7:31 AM, Emmanuel Florac wrote:
> On Fri, 1 Nov 2019 06:45:51 -0400,
> Brian Foster <bfoster@redhat.com> wrote:
> 
>> I think it's hard to say what might be going on here without some view
>> into the state of the fs. Perhaps some large chunk of the fs has been
>> zeroed given all of the zeroed out on-disk inode fields? We'd probably
>> want to see a metadump of the fs to get a closer look.
>>
> 
> Alas, the metadump is 28 GB (this is a 500 TB filesystem). So far the
> filesystem seems to behave better now that no RAID controller
> operation is running in the background.

They sometimes compress quite well FWIW.
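
For instance, something along these lines (filename is a placeholder):

  xz -T0 -9 fs.metadump    # -T0: use all cores; metadumps often shrink
                           # to a small fraction of their raw size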

-Eric


