* I just trashed my RAID5 array - recovery possible?
@ 2003-03-20  0:03 Wolfram Schlich
  2003-03-20  0:20 ` Neil Brown
  0 siblings, 1 reply; 11+ messages in thread
From: Wolfram Schlich @ 2003-03-20  0:03 UTC (permalink / raw)
  To: Linux-RAID mailinglist

Hi,

I just trashed my RAID5 array. The Promise IDE driver messed up
sharing an IRQ with a network interface card while a file was being
written to the array over that NIC.
Is any recovery possible? I wouldn't care about losing a few
megabytes... really. Thanks in advance!

Here's my /etc/raidtab (hde + hdg = controller 1, hdi + hdk = controller 2):
--8<--
raiddev /dev/md1
        raid-level 5
        nr-raid-disks 4
        nr-spare-disks 0
        persistent-superblock 1
        parity-algorithm left-symmetric
        chunk-size 64
        device /dev/hde1
        raid-disk 0
        device /dev/hdg1
        raid-disk 1
        device /dev/hdi1
        raid-disk 2
        device /dev/hdk1
        raid-disk 3
--8<--
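
For reference, here is the same array described as an mdadm.conf entry
(the UUID is the one mdadm --examine reports below; the kernel
autostarts the array from the persistent superblocks at boot, so this
is only the equivalent description, not something I actually use):
--8<--
DEVICE /dev/hd[egik]1
ARRAY /dev/md1 level=raid5 num-devices=4 UUID=6c1d7352:fc248a72:f4ef9da6:fa5478c3
--8<--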

Here are all the messages I was able to collect:

Boot:
--8<--
 [events: 00000038]
 [events: 00000039]
 [events: 0000003a]
 [events: 0000003a]
md: autorun ...
md: considering hdk1 ...
md:  adding hdk1 ...
md:  adding hdi1 ...
md:  adding hdg1 ...
md:  adding hde1 ...
md: created md1
md: bind<hde1,1>
md: bind<hdg1,2>
md: bind<hdi1,3>
md: bind<hdk1,4>
md: running: <hdk1><hdi1><hdg1><hde1>
md: hdk1's event counter: 0000003a
md: hdi1's event counter: 0000003a
md: hdg1's event counter: 00000039
md: hde1's event counter: 00000038
md: superblock update time inconsistency -- using the most recent one
md: freshest: hdk1
md: kicking non-fresh hde1 from array!
md: unbind<hde1,3>
md: export_rdev(hde1)
md1: kicking faulty hdg1!
md: unbind<hdg1,2>
md: export_rdev(hdg1)
md1: removing former faulty hde1!
md: md1: raid array is not clean -- starting background reconstruction
md1: max total readahead window set to 768k
md1: 3 data-disks, max readahead per data-disk: 256k
raid5: device hdk1 operational as raid disk 3
raid5: device hdi1 operational as raid disk 2
raid5: not enough operational devices for md1 (2/4 failed)
RAID5 conf printout:
 --- rd:4 wd:2 fd:2
 disk 0, s:0, o:0, n:0 rd:0 us:1 dev:[dev 00:00]
 disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev 00:00]
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi1
 disk 3, s:0, o:1, n:3 rd:3 us:1 dev:hdk1
raid5: failed to run raid set md1
md: pers->run() failed ...
md :do_md_run() returned -22
md: md1 stopped.
md: unbind<hdk1,1>
md: export_rdev(hdk1)
md: unbind<hdi1,0>
md: export_rdev(hdi1)
md: ... autorun DONE.
--8<--

mdadm --examine /dev/hd[egik]1
--8<--
/dev/hde1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 6c1d7352:fc248a72:f4ef9da6:fa5478c3
  Creation Time : Tue Mar 11 01:16:24 2003
     Raid Level : raid5
    Device Size : 117218176 (111.79 GiB 120.03 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1

    Update Time : Thu Mar 20 01:38:14 2003
          State : dirty, no-errors
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 84953914 - correct
         Events : 0.56

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1      33        1        1      active sync   /dev/hde1
   0     0      34        1        0      active sync   /dev/hdg1
   1     1      33        1        1      active sync   /dev/hde1
   2     2      56        1        2      active sync   /dev/hdi1
   3     3      57        1        3      active sync   /dev/hdk1
/dev/hdg1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 6c1d7352:fc248a72:f4ef9da6:fa5478c3
  Creation Time : Tue Mar 11 01:16:24 2003
     Raid Level : raid5
    Device Size : 117218176 (111.79 GiB 120.03 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1

    Update Time : Thu Mar 20 00:43:49 2003
          State : dirty, no-errors
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 84952c4d - correct
         Events : 0.57

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0      34        1        0      active sync   /dev/hdg1
   0     0      34        1        0      active sync   /dev/hdg1
   1     1      33        1        1      faulty   /dev/hde1
   2     2      56        1        2      active sync   /dev/hdi1
   3     3      57        1        3      active sync   /dev/hdk1
/dev/hdi1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 6c1d7352:fc248a72:f4ef9da6:fa5478c3
  Creation Time : Tue Mar 11 01:16:24 2003
     Raid Level : raid5
    Device Size : 117218176 (111.79 GiB 120.03 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1

    Update Time : Thu Mar 20 00:43:49 2003
          State : dirty, no-errors
 Active Devices : 2
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 0
       Checksum : 84952c62 - correct
         Events : 0.58

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2      56        1        2      active sync   /dev/hdi1
   0     0      34        1        0      faulty   /dev/hdg1
   1     1      33        1        1      faulty   /dev/hde1
   2     2      56        1        2      active sync   /dev/hdi1
   3     3      57        1        3      active sync   /dev/hdk1
/dev/hdk1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 6c1d7352:fc248a72:f4ef9da6:fa5478c3
  Creation Time : Tue Mar 11 01:16:24 2003
     Raid Level : raid5
    Device Size : 117218176 (111.79 GiB 120.03 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1

    Update Time : Thu Mar 20 00:43:49 2003
          State : dirty, no-errors
 Active Devices : 2
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 0
       Checksum : 84952c65 - correct
         Events : 0.58

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3      57        1        3      active sync   /dev/hdk1
   0     0      34        1        0      faulty   /dev/hdg1
   1     1      33        1        1      faulty   /dev/hde1
   2     2      56        1        2      active sync   /dev/hdi1
   3     3      57        1        3      active sync   /dev/hdk1
--8<--

I do really *hope* there's help... *sigh*. Gotta go to bed now.
Thanks again, I'm completely lost at this point.
-- 
Wolfram Schlich; Friedhofstr. 8, D-88069 Tettnang; +49-(0)178-SCHLICH


* Re: I just trashed my RAID5 array - recovery possible?
  2003-03-20  0:03 I just trashed my RAID5 array - recovery possible? Wolfram Schlich
@ 2003-03-20  0:20 ` Neil Brown
  2003-03-20  7:10   ` Wolfram Schlich
  2003-03-20  7:51   ` Wolfram Schlich
  0 siblings, 2 replies; 11+ messages in thread
From: Neil Brown @ 2003-03-20  0:20 UTC (permalink / raw)
  To: Wolfram Schlich; +Cc: Linux-RAID mailinglist

On Thursday March 20, lists@schlich.org wrote:
> Hi,
> 
> I just trashed my RAID5 array. The Promise IDE driver messed up
> sharing an IRQ with a network interface card while a file was being
> written to the array over that NIC.
> Is any recovery possible? I wouldn't care about losing a few
> megabytes... really. Thanks in advance!

I recommend:
	mdadm -A /dev/md1 --force /dev/sd[egik]1
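
If it assembles, the standard status checks will show whether you got 3
of the 4 devices active:
	cat /proc/mdstat
	mdadm --detail /dev/md1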

NeilBrown


* Re: I just trashed my RAID5 array - recovery possible?
  2003-03-20  0:20 ` Neil Brown
@ 2003-03-20  7:10   ` Wolfram Schlich
  2003-03-20  7:51   ` Wolfram Schlich
  1 sibling, 0 replies; 11+ messages in thread
From: Wolfram Schlich @ 2003-03-20  7:10 UTC (permalink / raw)
  To: Linux-RAID mailinglist

* Neil Brown <neilb@cse.unsw.edu.au> [2003-03-20 01:26]:
> On Thursday March 20, lists@schlich.org wrote:
> > Hi,
> > 
> > I just trashed my RAID5 array. The Promise IDE driver messed up
> > sharing an IRQ with a network interface card while a file was being
> > written to the array over that NIC.
> > Is any recovery possible? I wouldn't care about losing a few
> > megabytes... really. Thanks in advance!
> 
> I recommend:
> 	mdadm -A /dev/md1 --force /dev/sd[egik]1

Thanks for the hint! (Small typo: it's hdX, not sdX.)
What are the chances of success, failure, or losing all the data?
Well, I guess I just don't have any choice, do I? :-)
-- 
Wolfram Schlich; Friedhofstr. 8, D-88069 Tettnang; +49-(0)178-SCHLICH


* Re: I just trashed my RAID5 array - recovery possible?
  2003-03-20  0:20 ` Neil Brown
  2003-03-20  7:10   ` Wolfram Schlich
@ 2003-03-20  7:51   ` Wolfram Schlich
  2003-03-20  9:55     ` Neil Brown
  1 sibling, 1 reply; 11+ messages in thread
From: Wolfram Schlich @ 2003-03-20  7:51 UTC (permalink / raw)
  To: Linux-RAID mailinglist

* Neil Brown <neilb@cse.unsw.edu.au> [2003-03-20 01:26]:
> On Thursday March 20, lists@schlich.org wrote:
> > Hi,
> > 
> > I just trashed my RAID5 array. The Promise IDE driver messed up
> > sharing an IRQ with a network interface card while a file was being
> > written to the array over that NIC.
> > Is any recovery possible? I wouldn't care about losing a few
> > megabytes... really. Thanks in advance!
> 
> I recommend:
> 	mdadm -A /dev/md1 --force /dev/sd[egik]1

I've just tried that. Looks better than before ;-) Here's the result:
--8<--
 [events: 00000038]
md: bind<hde1,1>
 [events: 0000003a]
md: bind<hdi1,2>
 [events: 0000003a]
md: bind<hdk1,3>
 [events: 0000003a]
md: bind<hdg1,4>
md: hdg1's event counter: 0000003a
md: hdk1's event counter: 0000003a
md: hdi1's event counter: 0000003a
md: hde1's event counter: 00000038
md: superblock update time inconsistency -- using the most recent one
md: freshest: hdg1
md: kicking non-fresh hde1 from array!
md: unbind<hde1,3>
md: export_rdev(hde1)
md1: removing former faulty hde1!
md1: max total readahead window set to 768k
md1: 3 data-disks, max readahead per data-disk: 256k
raid5: device hdg1 operational as raid disk 0
raid5: device hdk1 operational as raid disk 3
raid5: device hdi1 operational as raid disk 2
raid5: md1, not all disks are operational -- trying to recover array
raid5: allocated 4340kB for md1
raid5: raid level 5 set md1 active with 3 out of 4 devices, algorithm 2
RAID5 conf printout:
 --- rd:4 wd:3 fd:1
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hdg1
 disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev 00:00]
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi1
 disk 3, s:0, o:1, n:3 rd:3 us:1 dev:hdk1
RAID5 conf printout:
 --- rd:4 wd:3 fd:1
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hdg1
 disk 1, s:0, o:0, n:1 rd:1 us:1 dev:[dev 00:00]
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdi1
 disk 3, s:0, o:1, n:3 rd:3 us:1 dev:hdk1
md: updating md1 RAID superblock on device
md: hdg1 [events: 0000003b]<6>(write) hdg1's sb offset: 117218176
md: recovery thread got woken up ...
md1: no spare disk to reconstruct array! -- continuing in degraded mode
md: recovery thread finished ...
md: hdk1 [events: 0000003b]<6>(write) hdk1's sb offset: 117218176
md: hdi1 [events: 0000003b]<6>(write) hdi1's sb offset: 117218176
raid5: switching cache buffer size, 4096 --> 1024
raid5: switching cache buffer size, 1024 --> 4096
--8<--

And when I try to mount the array:
--8<--
EXT3-fs error (device md(9,1)): ext3_check_descriptors: Block bitmap for group 509 not in group (block 4294967295)!
EXT3-fs: group descriptors corrupted !
--8<--

What should I do now? Raidhotadd the 4th device? Run e2fsck prior to
that or afterwards?
Thanks in advance!
-- 
Wolfram Schlich; Friedhofstr. 8, D-88069 Tettnang; +49-(0)178-SCHLICH


* Re: I just trashed my RAID5 array - recovery possible?
  2003-03-20  7:51   ` Wolfram Schlich
@ 2003-03-20  9:55     ` Neil Brown
  2003-03-20 12:22       ` Wolfram Schlich
  2003-03-20 17:56       ` Wolfram Schlich
  0 siblings, 2 replies; 11+ messages in thread
From: Neil Brown @ 2003-03-20  9:55 UTC (permalink / raw)
  To: Wolfram Schlich; +Cc: Linux-RAID mailinglist

On Thursday March 20, lists@schlich.org wrote:
> 
> And when I try to mount the array:
> --8<--
> EXT3-fs error (device md(9,1)): ext3_check_descriptors: Block bitmap for group 509 not in group (block 4294967295)!
> EXT3-fs: group descriptors corrupted !
> --8<--
> 
> What should I do now? Raidhotadd the 4th device? Run e2fsck prior to
> that or afterwards?

I would 
   fsck -n /dev/md1
to non-destructively see how much damage there is.

If that shows an uncomfortably large amount of damage, you could try
assembling the array from a different triple of devices.
It is currently assembled from
   /dev/hd[gki]1

If you want to try g, k and e for example, use
  mdadm -Af /dev/md1 /dev/hd[gke]1

and then try "fsck -n" on that.

I wouldn't hot-add the fourth drive until you have decided you will
live with whatever you have.
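
If you end up wanting to compare every triple, something along these
lines would do it (only a sketch; the log file names are placeholders,
and note that each forced assembly rewrites the superblocks of the
members it uses):
   mdadm -S /dev/md1                    # stop whatever is assembled
   mdadm -Af /dev/md1 /dev/hd[gik]1     # the triple without hde1
   fsck -n /dev/md1 > /root/fsck.no-hde.log 2>&1
and then the same again with /dev/hd[eik]1, /dev/hd[egk]1 and
/dev/hd[egi]1 for the other three triples.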

NeilBrown


* Re: I just trashed my RAID5 array - recovery possible?
  2003-03-20  9:55     ` Neil Brown
@ 2003-03-20 12:22       ` Wolfram Schlich
  2003-03-20 13:34         ` Ross Vandegrift
  2003-03-20 17:56       ` Wolfram Schlich
  1 sibling, 1 reply; 11+ messages in thread
From: Wolfram Schlich @ 2003-03-20 12:22 UTC (permalink / raw)
  To: Linux-RAID mailinglist

* Neil Brown <neilb@cse.unsw.edu.au> [2003-03-20 10:58]:
> On Thursday March 20, lists@schlich.org wrote:
> > 
> > And when I try to mount the array:
> > --8<--
> > EXT3-fs error (device md(9,1)): ext3_check_descriptors: Block bitmap for group 509 not in group (block 4294967295)!
> > EXT3-fs: group descriptors corrupted !
> > --8<--
> > 
> > What should I do now? Raidhotadd the 4th device? Run e2fsck prior to
> > that or afterwards?
> 
> I would 
>    fsck -n /dev/md1
> to non-destructively see how much damage there is.
> 
> If that shows an uncomfortably large amount of damage, you could try
> assembling the array from a different triple of devices.
> It is currently assembled from
>    /dev/hd[gki]1

I've put up the logfile of fsck at
http://wolfram.schlich.org/tmp/fsck.log.1
I have no idea whether this is an "uncomfortably large amount of
damage", so would you mind having a look at it? :-) TIA!

> If you want to try g, k and e for example, use
>   mdadm -Af /dev/md1 /dev/hd[gke]1
> 
> and then try "fsck -n" on that.
> 
> I wouldn't hot-add the fourth drive until you have decided you will
> live with whatever you have.

Ok. Well, hde was the first drive that was kicked out, hdg came
afterwards, so hd[gik] should probably be the best combination?!
-- 
With kind regards / Yours sincerely
Wolfram Schlich; Friedhofstr. 8, D-88069 Tettnang; +49-(0)178-SCHLICH


* Re: I just trashed my RAID5 array - recovery possible?
  2003-03-20 12:22       ` Wolfram Schlich
@ 2003-03-20 13:34         ` Ross Vandegrift
  2003-03-20 14:19           ` Wolfram Schlich
  0 siblings, 1 reply; 11+ messages in thread
From: Ross Vandegrift @ 2003-03-20 13:34 UTC (permalink / raw)
  To: Linux-RAID mailinglist

On Thu, Mar 20, 2003 at 01:22:27PM +0100, Wolfram Schlich wrote:
> I've put up the logfile of fsck at
> http://wolfram.schlich.org/tmp/fsck.log.1
> I have no idea whether this is an "uncomfortably large amount of
> damage", so would you mind having a look at it? :-) TIA!

That's a pretty huge amount of damage.  If fsck is able to fix it
without trashing data, then it doesn't matter.  I missed how big your
array was, but if you have the resources to make an image of the raw
devices, I'd definitely recommend you do so:

# cat /dev/hdx > /some/path/with/lots/of/space/hdx.img

This will let you restore your disks and try again.  At any rate, good
luck with the data.
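
If you do scrape the space together somewhere, dd with a big block size
plus gzip on the fly can cut the images down to something more
manageable (just a sketch, substitute your own paths):

# dd if=/dev/hdx bs=1024k | gzip -1 > /some/path/with/lots/of/space/hdx.img.gz

To restore, gunzip -c the image back onto the device.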

-- 
Ross Vandegrift
ross@willow.seitz.com

A Pope has a Water Cannon.                               It is a Water Cannon.
He fires Holy-Water from it.                        It is a Holy-Water Cannon.
He Blesses it.                                 It is a Holy Holy-Water Cannon.
He Blesses the Hell out of it.          It is a Wholly Holy Holy-Water Cannon.
He has it pierced.                It is a Holey Wholly Holy Holy-Water Cannon.
He makes it official.       It is a Canon Holey Wholly Holy Holy-Water Cannon.
Batman and Robin arrive.                                       He shoots them.


* Re: I just trashed my RAID5 array - recovery possible?
  2003-03-20 13:34         ` Ross Vandegrift
@ 2003-03-20 14:19           ` Wolfram Schlich
  2003-03-20 15:32             ` Paul Clements
  0 siblings, 1 reply; 11+ messages in thread
From: Wolfram Schlich @ 2003-03-20 14:19 UTC (permalink / raw)
  To: Linux-RAID mailinglist

* Ross Vandegrift <ross@willow.seitz.com> [2003-03-20 14:35]:
> On Thu, Mar 20, 2003 at 01:22:27PM +0100, Wolfram Schlich wrote:
> > I've put up the logfile of fsck at
> > http://wolfram.schlich.org/tmp/fsck.log.1
> > I have no idea whether this is an "uncomfortably large amount of
> > damage", so would you mind having a look at it? :-) TIA!
> 
> That's a pretty huge amount of damage.  If fsck is able to fix it
> without trashing data, then it doesn't matter.  I missed how big your
> array was, but if you have the resources to make an image of the raw
> devices, I'd definitely recommend you do so:

Unfortunately the array is 4x120G. I just don't have that much space
anywhere else :-(( Anyway, do the *kinds* of errors show whether they
are 'critical' ones?

> # cat /dev/hdx > /some/path/with/lots/of/space/hdx.img
> 
> This will let you restore your disks and try again.  At any rate, good
> luck with the data.

Thanks.
-- 
Wolfram Schlich; Friedhofstr. 8, D-88069 Tettnang; +49-(0)178-SCHLICH


* Re: I just trashed my RAID5 array - recovery possible?
  2003-03-20 14:19           ` Wolfram Schlich
@ 2003-03-20 15:32             ` Paul Clements
  2003-03-20 15:46               ` Wolfram Schlich
  0 siblings, 1 reply; 11+ messages in thread
From: Paul Clements @ 2003-03-20 15:32 UTC (permalink / raw)
  To: Wolfram Schlich; +Cc: Linux-RAID mailinglist

Wolfram Schlich wrote:
> 
> * Ross Vandegrift <ross@willow.seitz.com> [2003-03-20 14:35]:
> > On Thu, Mar 20, 2003 at 01:22:27PM +0100, Wolfram Schlich wrote:
> > > I've put up the logfile of fsck at
> > > http://wolfram.schlich.org/tmp/fsck.log.1
> > > I have no idea whether this is an "uncomfortably large amount of
> > > damage", so would you mind having a look at it? :-) TIA!
> >
> > That's a pretty huge amount of damage.  If fsck is able to fix it
> > without trashing data, then it doesn't matter.  I missed how big your
> > array was, but if you have the resources to make an image of the raw
> > devices, I'd definitely recommend you do so:
> 
> Unfortunately the array is 4x120G. I just don't have that space
> anywhere else :-(( Anyway, do the *kinds* of errors show these are
> 'critical' ones?

Well, mostly looks like your free block and inode counts are wrong,
which I believe fsck can correct fairly accurately. However, as Ross
said, you have extensive corruption of your filesystem metadata, so the
data very well could be corrupted too, but fsck isn't going to tell you
about that...you'll just have to find out... :). If you can't make a
backup of the disks, I guess you just have to give it a shot, anyway,
right?
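
One more read-only thing you could try first: if e2fsck keeps tripping
over the primary group descriptors, you can point it at a backup
superblock. Just a sketch; the backup offsets depend on the block size,
and mke2fs -n only prints what it would do without writing anything
(assuming it picks the same parameters the filesystem was created with):
   mke2fs -n /dev/md1              # lists the backup superblock offsets
   e2fsck -n -b 32768 /dev/md1     # 32768 for 4k blocks; 8193 for 1k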

--
Paul


* Re: I just trashed my RAID5 array - recovery possible?
  2003-03-20 15:32             ` Paul Clements
@ 2003-03-20 15:46               ` Wolfram Schlich
  0 siblings, 0 replies; 11+ messages in thread
From: Wolfram Schlich @ 2003-03-20 15:46 UTC (permalink / raw)
  To: Linux-RAID mailinglist

* Paul Clements <Paul.Clements@SteelEye.com> [2003-03-20 16:40]:
> Wolfram Schlich wrote:
> > 
> > * Ross Vandegrift <ross@willow.seitz.com> [2003-03-20 14:35]:
> > > On Thu, Mar 20, 2003 at 01:22:27PM +0100, Wolfram Schlich wrote:
> > > > I've put up the logfile of fsck at
> > > > http://wolfram.schlich.org/tmp/fsck.log.1
> > > > I have no idea whether this is an "uncomfortably large amount of
> > > > damage", so would you mind having a look at it? :-) TIA!
> > >
> > > That's a pretty huge amount of damage.  If fsck is able to fix it
> > > without trashing data, then it doesn't matter.  I missed how big your
> > > array was, but if you have the resources to make an image of the raw
> > > devices, I'd definitely recommend you do so:
> > 
> > Unfortunately the array is 4x120G. I just don't have that space
> > anywhere else :-(( Anyway, do the *kinds* of errors show these are
> > 'critical' ones?
> 
> Well, mostly looks like your free block and inode counts are wrong,
> which I believe fsck can correct fairly accurately. However, as Ross
> said, you have extensive corruption of your filesystem metadata, so the
> data very well could be corrupted too, but fsck isn't going to tell you
> about that...you'll just have to find out... :). If you can't make a
> backup of the disks, I guess you just have to give it a shot, anyway,
> right?

Right :-( It's scary that such small 'differences' result in such big
problems. I think I will post to the ext3-users list as well.
Thanks for your reply!
-- 
Wolfram Schlich; Friedhofstr. 8, D-88069 Tettnang; +49-(0)178-SCHLICH


* Re: I just trashed my RAID5 array - recovery possible?
  2003-03-20  9:55     ` Neil Brown
  2003-03-20 12:22       ` Wolfram Schlich
@ 2003-03-20 17:56       ` Wolfram Schlich
  1 sibling, 0 replies; 11+ messages in thread
From: Wolfram Schlich @ 2003-03-20 17:56 UTC (permalink / raw)
  To: Linux-RAID mailinglist

* Neil Brown <neilb@cse.unsw.edu.au> [2003-03-20 10:58]:
> On Thursday March 20, lists@schlich.org wrote:
> > 
> > And when I try to mount the array:
> > --8<--
> > EXT3-fs error (device md(9,1)): ext3_check_descriptors: Block bitmap for group 509 not in group (block 4294967295)!
> > EXT3-fs: group descriptors corrupted !
> > --8<--
> > 
> > What should I do now? Raidhotadd the 4th device? Run e2fsck prior to
> > that or afterwards?
> 
> I would 
>    fsck -n /dev/md1
> to non-destructively see how much damage there is.
> 
> If that shows an uncomfortably large amount of damage, you could try
> assembling the array from a different triple of devices.
> It is currently assembled from
>    /dev/hd[gki]1
> 
> If you want to try g, k and e for example, use
>   mdadm -Af /dev/md1 /dev/hd[gke]1
> 
> and then try "fsck -n" on that.

I've run fsck on all 4 different combos now:

http://wolfram.schlich.org/tmp/fsck.log.1
http://wolfram.schlich.org/tmp/fsck.log.1.README

http://wolfram.schlich.org/tmp/fsck.log.2
http://wolfram.schlich.org/tmp/fsck.log.2.README

http://wolfram.schlich.org/tmp/fsck.log.3
http://wolfram.schlich.org/tmp/fsck.log.3.README

http://wolfram.schlich.org/tmp/fsck.log.4
http://wolfram.schlich.org/tmp/fsck.log.4.README

Which one should I actually give a *real* try?
-- 
Wolfram Schlich; Friedhofstr. 8, D-88069 Tettnang; +49-(0)178-SCHLICH


