All of lore.kernel.org
* Re: xfs_repair breaks; xfs_metadump hangs
@ 2009-11-05 11:22 mill / in-medias-res
  0 siblings, 0 replies; 10+ messages in thread
From: mill / in-medias-res @ 2009-11-05 11:22 UTC (permalink / raw)
  To: xfs

On Wednesday 04 November 2009 mill / in-medias-res wrote:
> a RAID-5 Array which "was broken"
>
> Sounds like you messed up the filesystem very badly, I hope the devs can
> help you. But there is information missing:
> - how big is sdc2?
The partition is 2000 GB, with about 1400 GB used.

> - any chance you put it on an FTP server for download? Because without a
> metadump, no one can tell where metadump hangs, so they'd need your
> data to analyze.
Too big to upload and the data is too sensitive :(

FYI: I'm now subscribed to the list.


> Good luck.
Thanks,
Max

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: xfs_repair breaks; xfs_metadump hangs
  2009-11-09  9:51     ` mill / in-medias-res
@ 2009-11-10  1:25       ` Russell Cattelan
  0 siblings, 0 replies; 10+ messages in thread
From: Russell Cattelan @ 2009-11-10  1:25 UTC (permalink / raw)
  To: mill / in-medias-res; +Cc: xfs

mill / in-medias-res wrote:
> * mill / in-medias-res <mill@in-medias-res.com> [091109 10:28]:
>>> Hmm, interesting.
>>> Can you go into xfs_db, print out the bad inode, and send it to us?
>>> I'm guessing the extents are corrupted somehow.
>> Did you mean "xfs_db -x -c 'blockget inode 3256930831' /dev/sdc2" ?
>> xfs_db consumes 99% CPU, with VIRT 2510m and RES 194m of RAM.
>>
>> How long should I wait?
I was thinking of just the inode:
xfs_db -x -c 'inode 3256930831' -c 'p' /dev/sdc2
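
A slightly fuller dump along these lines might also help; a sketch only, and
the raw 'type text' dump is my assumption of what's easiest to paste here:

# print the decoded inode core, then a raw hex dump of the same inode
xfs_db -x -c 'inode 3256930831' -c 'print' /dev/sdc2
xfs_db -x -c 'inode 3256930831' -c 'type text' -c 'print' /dev/sdc2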


>>
> Done now:
> xfs_db -x -c 'blockget inode 3256930831' /dev/sdc2 > xfs_db.log            :(
> exit code 3
> 338,12s user 12,32s system 49% cpu 11:48,39 total
> The first lines of output:
> bad number of extents 1 for inode 3256930831
> bad nblocks 1 for inode 3256930831, counted 0
> block 9/2317591 type unknown not expected
> link count mismatch for inode 1038934 (name ?), nlink 1, counted 2
> link count mismatch for inode 128 (name ?), nlink 4672, counted 6
> link count mismatch for inode 129 (name ?), nlink 36525, counted 1
> link count mismatch for inode 130 (name ?), nlink 0, counted 1
> link count mismatch for inode 131 (name ?), nlink 0, counted 1
> link count mismatch for inode 132 (name ?), nlink 2, counted 1305238
> link count mismatch for inode 133 (name ?), nlink 0, counted 2
> link count mismatch for inode 134 (name ?), nlink 7144, counted 1
> link count mismatch for inode 135 (name ?), nlink 42666, counted 1
> link count mismatch for inode 136 (name ?), nlink 40424, counted 2
> link count mismatch for inode 137 (name ?), nlink 37040, counted 2
> link count mismatch for inode 138 (name ?), nlink 16, counted 2
> link count mismatch for inode 139 (name ?), nlink 0, counted 2
> link count mismatch for inode 140 (name ?), nlink 20, counted 2
> link count mismatch for inode 141 (name ?), nlink 0, counted 2
> link count mismatch for inode 142 (name ?), nlink 12, counted 2
> link count mismatch for inode 143 (name ?), nlink 62336, counted 2
> link count mismatch for inode 144 (name ?), nlink 3203, counted 2
> link count mismatch for inode 146 (name ?), nlink 27224, counted 2
> link count mismatch for inode 147 (name ?), nlink 41204, counted 2
> link count mismatch for inode 148 (name ?), nlink 21, counted 2
> link count mismatch for inode 149 (name ?), nlink 0, counted 2
> link count mismatch for inode 150 (name ?), nlink 0, counted 2
> link count mismatch for inode 151 (name ?), nlink 0, counted 2
> link count mismatch for inode 152 (name ?), nlink 0, counted 2
> link count mismatch for inode 153 (name ?), nlink 32768, counted 2
> link count mismatch for inode 154 (name ?), nlink 58352, counted 2
> link count mismatch for inode 155 (name ?), nlink 40290, counted 2
> The output is 53 MB. Tons of "link count mismatch for inode ..." lines.
Hmm, that is not a good sign.
That would suggest a big chunk of inodes got corrupted.
It might be worth looking at a few of the inodes to
see if any pattern shows up.
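
A minimal sketch of that kind of sampling (the inode numbers are just the
first few from your log, purely illustrative):

# dump the core of a handful of the mismatched inodes and compare the fields
for ino in 128 129 130 131 132; do
    echo "=== inode $ino ==="
    xfs_db -x -c "inode $ino" -c 'print' /dev/sdc2
done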

> 
> 
>>> One option is to flag the inode as deleted, which will cause repair to
>>> toss it and hopefully clean up the mess.
>>>
>>> Here is a write-up on how to do that.
>>> http://jijo.free.net.ph/19
>> If I can't get at that block, I will try deleting it. Thanks!
>>
>> Best regards,
>> Maximilian Mill
>>>> time: 67,27s user 10,09s system 10% cpu 12:05,31 total
>>>>
>>>> I tried to run xfs_metadump several times and it hangs every time at this position:
>>>> xfs_metadump  -g /dev/sdc2 metadump-sdc2-2
>>>> Copied 1411840 of 4835520 inodes (0 of 3 AGs)
>>>>
>>>> It has been running for 2 days on the same inode and xfs_db consumes 99% CPU.
>>>> Should I wait here?
>>>>
>>>> Versions:
>>>> dpkg -l |grep xfs
>>>> ii  xfsdump   3.0.2~bpo50+1       Administrative utilities for the XFS filesys
>>>> ii  xfsprogs  3.0.4~bpo50+1       Utilities for managing the XFS filesystem
>>>> Distribution: Debian Lenny with xfsprogs/xfsdump backports from unstable.
>>>>
>>>> xfs_repair from the stock Debian Lenny version also crashes at inode 3256930831.
>>>>
>>>> Best Regards,
>>>> Maximilian Mill
>>>>


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: xfs_repair breaks; xfs_metadump hangs
  2009-11-09  9:19   ` mill / in-medias-res
@ 2009-11-09  9:51     ` mill / in-medias-res
  2009-11-10  1:25       ` Russell Cattelan
  0 siblings, 1 reply; 10+ messages in thread
From: mill / in-medias-res @ 2009-11-09  9:51 UTC (permalink / raw)
  To: Russell Cattelan, xfs

* mill / in-medias-res <mill@in-medias-res.com> [091109 10:28]:
> 
> > Hmm, interesting.
> > Can you go into xfs_db, print out the bad inode, and send it to us?
> > I'm guessing the extents are corrupted somehow.
> 
> Did you mean "xfs_db -x -c 'blockget inode 3256930831' /dev/sdc2" ?
> xfs_db consumes 99% CPU, with VIRT 2510m and RES 194m of RAM.
> 
> How long should I wait?
> 
Done now:
xfs_db -x -c 'blockget inode 3256930831' /dev/sdc2 > xfs_db.log            :(
exit code 3
338,12s user 12,32s system 49% cpu 11:48,39 total
The first lines of output:
bad number of extents 1 for inode 3256930831
bad nblocks 1 for inode 3256930831, counted 0
block 9/2317591 type unknown not expected
link count mismatch for inode 1038934 (name ?), nlink 1, counted 2
link count mismatch for inode 128 (name ?), nlink 4672, counted 6
link count mismatch for inode 129 (name ?), nlink 36525, counted 1
link count mismatch for inode 130 (name ?), nlink 0, counted 1
link count mismatch for inode 131 (name ?), nlink 0, counted 1
link count mismatch for inode 132 (name ?), nlink 2, counted 1305238
link count mismatch for inode 133 (name ?), nlink 0, counted 2
link count mismatch for inode 134 (name ?), nlink 7144, counted 1
link count mismatch for inode 135 (name ?), nlink 42666, counted 1
link count mismatch for inode 136 (name ?), nlink 40424, counted 2
link count mismatch for inode 137 (name ?), nlink 37040, counted 2
link count mismatch for inode 138 (name ?), nlink 16, counted 2
link count mismatch for inode 139 (name ?), nlink 0, counted 2
link count mismatch for inode 140 (name ?), nlink 20, counted 2
link count mismatch for inode 141 (name ?), nlink 0, counted 2
link count mismatch for inode 142 (name ?), nlink 12, counted 2
link count mismatch for inode 143 (name ?), nlink 62336, counted 2
link count mismatch for inode 144 (name ?), nlink 3203, counted 2
link count mismatch for inode 146 (name ?), nlink 27224, counted 2
link count mismatch for inode 147 (name ?), nlink 41204, counted 2
link count mismatch for inode 148 (name ?), nlink 21, counted 2
link count mismatch for inode 149 (name ?), nlink 0, counted 2
link count mismatch for inode 150 (name ?), nlink 0, counted 2
link count mismatch for inode 151 (name ?), nlink 0, counted 2
link count mismatch for inode 152 (name ?), nlink 0, counted 2
link count mismatch for inode 153 (name ?), nlink 32768, counted 2
link count mismatch for inode 154 (name ?), nlink 58352, counted 2
link count mismatch for inode 155 (name ?), nlink 40290, counted 2
The output is 53 MB. Tons of "link count mismatch for inode ..." lines.


> 
> > One option is to flag the inode as deleted, which will cause repair to
> > toss it and hopefully clean up the mess.
> >
> > Here is a write-up on how to do that.
> > http://jijo.free.net.ph/19
> If I can't get at that block, I will try deleting it. Thanks!
> 
> Best regards,
> Maximilian Mill
> >> time: 67,27s user 10,09s system 10% cpu 12:05,31 total
> >>
> >> I tried to run xfs_metadump several times and it hangs every time at this position:
> >> xfs_metadump  -g /dev/sdc2 metadump-sdc2-2
> >> Copied 1411840 of 4835520 inodes (0 of 3 AGs)
> >>
> >> It has been running for 2 days on the same inode and xfs_db consumes 99% CPU.
> >> Should I wait here?
> >>
> >> Versions:
> >> dpkg -l |grep xfs
> >> ii  xfsdump   3.0.2~bpo50+1       Administrative utilities for the XFS filesys
> >> ii  xfsprogs  3.0.4~bpo50+1       Utilities for managing the XFS filesystem
> >> Distribution: Debian Lenny with xfsprogs/xfsdump backports from unstable.
> >>
> >> xfs_repair from the stock Debian Lenny version also crashes at inode 3256930831.
> >>
> >> Best Regards,
> >> Maximilian Mill
> >>

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: xfs_repair breaks; xfs_metadump hangs
  2009-11-06 22:42 ` Russell Cattelan
@ 2009-11-09  9:19   ` mill / in-medias-res
  2009-11-09  9:51     ` mill / in-medias-res
  0 siblings, 1 reply; 10+ messages in thread
From: mill / in-medias-res @ 2009-11-09  9:19 UTC (permalink / raw)
  To: Russell Cattelan; +Cc: xfs


> Hmm, interesting.
> Can you go into xfs_db, print out the bad inode, and send it to us?
> I'm guessing the extents are corrupted somehow.

Did you mean "xfs_db -x -c 'blockget inode 3256930831' /dev/sdc2" ?
xfs_db consumes 99% CPU, with VIRT 2510m and RES 194m of RAM.

How long should I wait?



> One option is to flag the inode as deleted, which will cause repair to
> toss it and hopefully clean up the mess.
>
> Here is a write-up on how to do that.
> http://jijo.free.net.ph/19
If I can't get at that block, I will try deleting it. Thanks!

Best regards,
Maximilian Mill
>> time: 67,27s user 10,09s system 10% cpu 12:05,31 total
>>
>> I tried to run xfs_metadump several times and it hangs every time at this position:
>> xfs_metadump  -g /dev/sdc2 metadump-sdc2-2
>> Copied 1411840 of 4835520 inodes (0 of 3 AGs)
>>
>> It has been running for 2 days on the same inode and xfs_db consumes 99% CPU.
>> Should I wait here?
>>
>> Versions:
>> dpkg -l |grep xfs
>> ii  xfsdump   3.0.2~bpo50+1       Administrative utilities for the XFS filesys
>> ii  xfsprogs  3.0.4~bpo50+1       Utilities for managing the XFS filesystem
>> Distribution: Debian Lenny with xfsprogs/xfsdump backports from unstable.
>>
>> xfs_repair from the stock Debian Lenny version also crashes at inode 3256930831.
>>
>> Best Regards,
>> Maximilian Mill
>>

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: xfs_repair breaks; xfs_metadump hangs
  2009-11-04 15:20 mill / in-medias-res
                   ` (2 preceding siblings ...)
  2009-11-06  9:09 ` mill / in-medias-res
@ 2009-11-06 22:42 ` Russell Cattelan
  2009-11-09  9:19   ` mill / in-medias-res
  3 siblings, 1 reply; 10+ messages in thread
From: Russell Cattelan @ 2009-11-06 22:42 UTC (permalink / raw)
  To: mill / in-medias-res; +Cc: kirschbaum, xfs

mill / in-medias-res wrote:
> Hello XFS-Community,
>
> I have some real trouble restoring/repairing my two XFS partitions. These
> partitions are on a RAID-5 array which "was broken". The first xfs_repair run
> on /dev/sdc1 restored 80 GB out of ca. 300-400 GB. The problem was that 99.9%
> of the million files ended up in lost+found.
>
> Because I was more interested in restoring /dev/sdc2, I forgot about sdc1
> and ran xfs_repair on the other partition:
>
> cmd: xfs_repair -t 1 -P /dev/sdc2
> [...]
> corrupt inode 3256930831 ((a)extents = 1).  This is a bug.
> Please capture the filesystem metadata with xfs_metadump and
> report it to xfs@oss.sgi.com.
> cache_node_purge: refcount was 1, not zero (node=0x377d0008)
> fatal error -- couldn't map inode 3256930831, err = 117
>   
Hmm, interesting.
Can you go into xfs_db, print out the bad inode, and send it to us?
I'm guessing the extents are corrupted somehow.

One option is to flag the inode as deleted, which will cause repair to
toss it and hopefully clean up the mess.

Here is a write-up on how to do that.
http://jijo.free.net.ph/19
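
In rough outline it comes down to zeroing the inode core with xfs_db's write
command. The following is only a sketch under my assumptions (the write-up
has the real steps), and it is destructive, so work on a copy if at all
possible:

# mark the inode core as free so a later xfs_repair run reclaims it
xfs_db -x -c 'inode 3256930831' \
       -c 'write core.mode 0' \
       -c 'write core.nextents 0' \
       -c 'write core.nblocks 0' /dev/sdc2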

> time: 67,27s user 10,09s system 10% cpu 12:05,31 total
>
> I tried to run xfs_metadump several times and it hangs every time at this position:
> xfs_metadump  -g /dev/sdc2 metadump-sdc2-2
> Copied 1411840 of 4835520 inodes (0 of 3 AGs)
>
> It has been running for 2 days on the same inode and xfs_db consumes 99% CPU.
> Should I wait here?
>
> Versions:
> dpkg -l |grep xfs
> ii  xfsdump   3.0.2~bpo50+1       Administrative utilities for the XFS filesys
> ii  xfsprogs  3.0.4~bpo50+1       Utilities for managing the XFS filesystem
> Distribution: Debian Lenny with xfsprogs/xfsdump backports from unstable.
>
> xfs_repair from the stock Debian Lenny version also crashes at inode 3256930831.
>
> Best Regards,
> Maximilian Mill
>

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: xfs_repair breaks; xfs_metadump hangs
  2009-11-04 15:20 mill / in-medias-res
  2009-11-05  0:59 ` Michael Monnerie
  2009-11-06  2:27 ` Robert Brockway
@ 2009-11-06  9:09 ` mill / in-medias-res
  2009-11-06 22:42 ` Russell Cattelan
  3 siblings, 0 replies; 10+ messages in thread
From: mill / in-medias-res @ 2009-11-06  9:09 UTC (permalink / raw)
  To: xfs

I forgot to mention that I actually have a metadump, but it was only written up to the point where xfs_db hangs.
Its file size is 399M; is that enough to work with?
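
If it is, I assume whoever looks at it would rebuild an image from it roughly
like this (untested on my side, and it presumes the truncated dump is still
structurally readable):

# restore the metadata-only dump to a sparse image, then survey it read-only
xfs_mdrestore metadump-sdc2-2 /tmp/sdc2-meta.img
xfs_repair -n /tmp/sdc2-meta.img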

Best Regards,
Max

* mill / in-medias-res <mill@in-medias-res.com> [091104 16:20]:
> Hello XFS-Community,
> 
> I have some real trouble restoring/repairing my two XFS partitions. These
> partitions are on a RAID-5 array which "was broken". The first xfs_repair run
> on /dev/sdc1 restored 80 GB out of ca. 300-400 GB. The problem was that 99.9%
> of the million files ended up in lost+found.
> 
> Because I was more interested in restoring /dev/sdc2, I forgot about sdc1
> and ran xfs_repair on the other partition:
> 
> cmd: xfs_repair -t 1 -P /dev/sdc2
> [...]
> corrupt inode 3256930831 ((a)extents = 1).  This is a bug.
> Please capture the filesystem metadata with xfs_metadump and
> report it to xfs@oss.sgi.com.
> cache_node_purge: refcount was 1, not zero (node=0x377d0008)
> fatal error -- couldn't map inode 3256930831, err = 117
> 
> time: 67,27s user 10,09s system 10% cpu 12:05,31 total
> 
> I tried to run xfs_metadump several times and it hangs every time at this position:
> xfs_metadump  -g /dev/sdc2 metadump-sdc2-2
> Copied 1411840 of 4835520 inodes (0 of 3 AGs)
> 
> It has been running for 2 days on the same inode and xfs_db consumes 99% CPU.
> Should I wait here?
> 
> Versions:
> dpkg -l |grep xfs
> ii  xfsdump   3.0.2~bpo50+1       Administrative utilities for the XFS filesys
> ii  xfsprogs  3.0.4~bpo50+1       Utilities for managing the XFS filesystem
> Distribution: Debian Lenny with xfsprogs/xfsdump backports from unstable.
> 
> xfs_repair from the stock Debian Lenny version also crashes at inode 3256930831.
> 
> Best Regards,
> Maximilian Mill

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: xfs_repair breaks; xfs_metadump hangs
  2009-11-06  2:27 ` Robert Brockway
@ 2009-11-06  8:57   ` mill / in-medias-res
  0 siblings, 0 replies; 10+ messages in thread
From: mill / in-medias-res @ 2009-11-06  8:57 UTC (permalink / raw)
  To: xfs

* Robert Brockway <robert@timetraveller.org> [091106 03:28]:
> On Wed, 4 Nov 2009, mill / in-medias-res wrote:
>
>> Hello XFS-Community,
>>
>> I have some real trouble restoring/repairing my two XFS partitions. These
>> partitions are on a RAID-5 array which "was broken". The first xfs_repair run
>> on /dev/sdc1 restored 80 GB out of ca. 300-400 GB. The problem was that 99.9%
>> of the million files ended up in lost+found.
>
> Time to invoke the disaster recovery plan and restore from backups?
Yeah, I had a full backup of /dev/sdc1, one day old. Restoring was no problem.
The problem is that I don't have any backup of /dev/sdc2, only 10-20 GB on DVDs.

I have a slower server which serves the clients, so I can try to get the rest
of the data.
Are there any other working repair tools for XFS?
> At some point the effort required to recover a badly corrupt FS exceeds  
> the loss from simply restoring from a known good backup.
>
> You can still review the corrupt filesystem offline in order to pick up  
> lost data, if it is worth doing so.
>
> Rob

Max
> -- 
> I tried to change the world but they had a no-return policy
> http://www.practicalsysadmin.com
>

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: xfs_repair breaks; xfs_metadump hangs
  2009-11-04 15:20 mill / in-medias-res
  2009-11-05  0:59 ` Michael Monnerie
@ 2009-11-06  2:27 ` Robert Brockway
  2009-11-06  8:57   ` mill / in-medias-res
  2009-11-06  9:09 ` mill / in-medias-res
  2009-11-06 22:42 ` Russell Cattelan
  3 siblings, 1 reply; 10+ messages in thread
From: Robert Brockway @ 2009-11-06  2:27 UTC (permalink / raw)
  To: xfs

On Wed, 4 Nov 2009, mill / in-medias-res wrote:

> Hello XFS-Community,
>
> I have some real trouble restoring/repairing my two XFS partitions. These
> partitions are on a RAID-5 array which "was broken". The first xfs_repair run
> on /dev/sdc1 restored 80 GB out of ca. 300-400 GB. The problem was that 99.9%
> of the million files ended up in lost+found.

Time to invoke the disaster recovery plan and restore from backups?

At some point the effort required to recover a badly corrupt FS exceeds 
the loss from simply restoring from a known good backup.

You can still review the corrupt filesystem offline in order to pick up 
lost data, if it is worth doing so.

Rob

-- 
I tried to change the world but they had a no-return policy
http://www.practicalsysadmin.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: xfs_repair breaks; xfs_metadump hangs
  2009-11-04 15:20 mill / in-medias-res
@ 2009-11-05  0:59 ` Michael Monnerie
  2009-11-06  2:27 ` Robert Brockway
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Michael Monnerie @ 2009-11-05  0:59 UTC (permalink / raw)
  To: xfs

On Wednesday 04 November 2009 mill / in-medias-res wrote:
> a RAID-5 Array which "was broken"

Sounds like you messed up the filesystem very badly, I hope the devs can
help you. But there is information missing:
- how big is sdc2?
- any chance you put it on an FTP server for download? Because without a
metadump, no one can tell where metadump hangs, so they'd need your
data to analyze.

Good luck.

Regards, zmi
-- 
// Michael Monnerie, Ing.BSc    -----      http://it-management.at
// Tel: 0660 / 415 65 31                      .network.your.ideas.
// PGP Key:         "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38  500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net                  Key-ID: 1C1209B4

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 10+ messages in thread

* xfs_repair breaks; xfs_metadump hangs
@ 2009-11-04 15:20 mill / in-medias-res
  2009-11-05  0:59 ` Michael Monnerie
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: mill / in-medias-res @ 2009-11-04 15:20 UTC (permalink / raw)
  To: xfs; +Cc: kirschbaum

Hello XFS-Community,

I have some real trouble restoring/repairing my two XFS partitions. These
partitions are on a RAID-5 array which "was broken". The first xfs_repair run
on /dev/sdc1 restored 80 GB out of ca. 300-400 GB. The problem was that 99.9%
of the million files ended up in lost+found.

Because I was more interested in restoring /dev/sdc2, I forgot about sdc1
and ran xfs_repair on the other partition:

cmd: xfs_repair -t 1 -P /dev/sdc2
[...]
corrupt inode 3256930831 ((a)extents = 1).  This is a bug.
Please capture the filesystem metadata with xfs_metadump and
report it to xfs@oss.sgi.com.
cache_node_purge: refcount was 1, not zero (node=0x377d0008)
fatal error -- couldn't map inode 3256930831, err = 117

time: 67,27s user 10,09s system 10% cpu 12:05,31 total
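
As an aside, my understanding is that xfs_repair's no-modify mode is the
safer first pass on a filesystem this damaged, since it only reports what it
would change; a general sketch, not something I have run here:

# survey the damage without writing anything to the device
xfs_repair -n /dev/sdc2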

I tried to run xfs_metadump several times and it hangs every time at this position:
xfs_metadump  -g /dev/sdc2 metadump-sdc2-2
Copied 1411840 of 4835520 inodes (0 of 3 AGs)

It has been running for 2 days on the same inode and xfs_db consumes 99% CPU.
Should I wait here?

Versions:
dpkg -l |grep xfs
ii  xfsdump   3.0.2~bpo50+1       Administrative utilities for the XFS filesys
ii  xfsprogs  3.0.4~bpo50+1       Utilities for managing the XFS filesystem
Distribution: Debian Lenny with xfsprogs/xfsdump backports from unstable.

xfs_repair from the stock Debian Lenny version also crashes at inode 3256930831.

Best Regards,
Maximilian Mill

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2009-11-10  1:25 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-11-05 11:22 xfs_repair breaks; xfs_metadump hangs mill / in-medias-res
  -- strict thread matches above, loose matches on Subject: below --
2009-11-04 15:20 mill / in-medias-res
2009-11-05  0:59 ` Michael Monnerie
2009-11-06  2:27 ` Robert Brockway
2009-11-06  8:57   ` mill / in-medias-res
2009-11-06  9:09 ` mill / in-medias-res
2009-11-06 22:42 ` Russell Cattelan
2009-11-09  9:19   ` mill / in-medias-res
2009-11-09  9:51     ` mill / in-medias-res
2009-11-10  1:25       ` Russell Cattelan
