* RAID5 degraded after mdadm -S, mdadm --assemble (everytime)
@ 2006-06-24 10:47 Ronald Lembcke
  2006-06-24 11:10 ` Ronald Lembcke
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Ronald Lembcke @ 2006-06-24 10:47 UTC (permalink / raw)
  To: linux-raid


Hi!

I set up a RAID5 array of 4 disks. I initially created a degraded array
and added the fourth disk (sda1) later.

The array is "clean", but when I do  
  mdadm -S /dev/md0 
  mdadm --assemble /dev/md0 /dev/sd[abcd]1
it won't start. It always says sda1 is "failed".

When I remove sda1 and add it again everything seems to be fine until I
stop the array. 

Below is the output of /proc/mdstat, mdadm -D -Q, mdadm -E and a piece of the
kernel log.
The output of mdadm -E looks strange for /dev/sd[bcd]1, saying "1 failed".

What can I do about this?
How could this happen? I mixed up the syntax when adding the fourth disk and
tried these two commands (at least one didn't yield an error message):
mdadm --manage -a /dev/md0 /dev/sda1
mdadm --manage -a /dev/sda1 /dev/md0
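
For reference, the create-degraded-then-add sequence described above looks
roughly like this (a sketch using the device names from this report, not
necessarily the exact commands that were run):

  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdc1 /dev/sdd1 /dev/sdb1 missing    # create degraded, slot 3 left empty
  mdadm /dev/md0 --add /dev/sda1                 # add the fourth disk later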


Thanks in advance ...
                      Roni



ganges:~# cat /proc/mdstat 
Personalities : [raid5] [raid4] 
md0 : active raid5 sda1[4] sdc1[0] sdb1[2] sdd1[1]
      691404864 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>


ganges:~# mdadm -Q -D /dev/md0
/dev/md0:
        Version : 01.00.03
  Creation Time : Wed Jun 21 13:00:41 2006
     Raid Level : raid5
     Array Size : 691404864 (659.38 GiB 708.00 GB)
    Device Size : 460936576 (219.79 GiB 236.00 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 15:54:23 2006
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 0
           UUID : f937e8c2:15b41d19:fe79ccca:2614b165
         Events : 32429

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       2       8       17        2      active sync   /dev/sdb1
       4       8        1        3      active sync   /dev/sda1



ganges:~# mdadm -E /dev/sd[abcd]1
/dev/sda1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x0
     Array UUID : f937e8c2:15b41d19:fe79ccca:2614b165
           Name : 0
  Creation Time : Wed Jun 21 13:00:41 2006
     Raid Level : raid5
   Raid Devices : 4

    Device Size : 460936832 (219.79 GiB 236.00 GB)
     Array Size : 1382809728 (659.38 GiB 708.00 GB)
      Used Size : 460936576 (219.79 GiB 236.00 GB)
   Super Offset : 460936960 sectors
          State : active
    Device UUID : f41dfb24:72cc87b7:4003ad32:bc19c70c

    Update Time : Fri Jun 23 15:54:23 2006
       Checksum : ad466c73 - correct
         Events : 32429

         Layout : left-symmetric
     Chunk Size : 64K

   Array State : uuuu 1 failed
/dev/sdb1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x0
     Array UUID : f937e8c2:15b41d19:fe79ccca:2614b165
           Name : 0
  Creation Time : Wed Jun 21 13:00:41 2006
     Raid Level : raid5
   Raid Devices : 4

    Device Size : 460936832 (219.79 GiB 236.00 GB)
     Array Size : 1382809728 (659.38 GiB 708.00 GB)
      Used Size : 460936576 (219.79 GiB 236.00 GB)
   Super Offset : 460936960 sectors
          State : active
    Device UUID : 6283effa:df4cb959:d449e09e:4eb0a65b

    Update Time : Fri Jun 23 15:54:23 2006
       Checksum : e07f2f74 - correct
         Events : 32429

         Layout : left-symmetric
     Chunk Size : 64K

   Array State : uuUu 1 failed
/dev/sdc1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x0
     Array UUID : f937e8c2:15b41d19:fe79ccca:2614b165
           Name : 0
  Creation Time : Wed Jun 21 13:00:41 2006
     Raid Level : raid5
   Raid Devices : 4

    Device Size : 460936768 (219.79 GiB 236.00 GB)
     Array Size : 1382809728 (659.38 GiB 708.00 GB)
      Used Size : 460936576 (219.79 GiB 236.00 GB)
   Super Offset : 460936896 sectors
          State : active
    Device UUID : 4f581aed:e24b4ac2:3d2ca149:191c89c1

    Update Time : Fri Jun 23 15:54:23 2006
       Checksum : 4bde5117 - correct
         Events : 32429

         Layout : left-symmetric
     Chunk Size : 64K

   Array State : Uuuu 1 failed
/dev/sdd1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x0
     Array UUID : f937e8c2:15b41d19:fe79ccca:2614b165
           Name : 0
  Creation Time : Wed Jun 21 13:00:41 2006
     Raid Level : raid5
   Raid Devices : 4

    Device Size : 460936832 (219.79 GiB 236.00 GB)
     Array Size : 1382809728 (659.38 GiB 708.00 GB)
      Used Size : 460936576 (219.79 GiB 236.00 GB)
   Super Offset : 460936960 sectors
          State : active
    Device UUID : b5fc3eba:07da8be3:81646894:e3c313dc

    Update Time : Fri Jun 23 15:54:23 2006
       Checksum : 9f966431 - correct
         Events : 32429

         Layout : left-symmetric
     Chunk Size : 64K

   Array State : uUuu 1 failed

[  174.318555] md: md0 stopped.
[  174.400617] md: bind<sdd1>
[  174.401850] md: bind<sdb1>
[  174.403068] md: bind<sda1>
[  174.404321] md: bind<sdc1>
[  174.442943] raid5: measuring checksumming speed
[  174.463185]    8regs     :   543.000 MB/sec
[  174.483171]    8regs_prefetch:   431.000 MB/sec
[  174.503162]    32regs    :   335.000 MB/sec
[  174.523152]    32regs_prefetch:   293.000 MB/sec
[  174.543144]    pII_mmx   :   938.000 MB/sec
[  174.563138]    p5_mmx    :   901.000 MB/sec
[  174.563466] raid5: using function: pII_mmx (938.000 MB/sec)
[  174.578432] md: raid5 personality registered for level 5
[  174.578808] md: raid4 personality registered for level 4
[  174.580416] raid5: device sdc1 operational as raid disk 0
[  174.580773] raid5: device sdb1 operational as raid disk 2
[  174.581118] raid5: device sdd1 operational as raid disk 1
[  174.584893] raid5: allocated 4196kB for md0
[  174.585242] raid5: raid level 5 set md0 active with 3 out of 4 devices, algorithm 2
[  174.585805] RAID5 conf printout:
[  174.586108]  --- rd:4 wd:3 fd:1
[  174.586411]  disk 0, o:1, dev:sdc1
[  174.586718]  disk 1, o:1, dev:sdd1
[  174.587019]  disk 2, o:1, dev:sdb1
[  219.660549] md: unbind<sda1>
[  219.660921] md: export_rdev(sda1)
[  227.126828] md: bind<sda1>
[  227.127242] RAID5 conf printout:
[  227.127538]  --- rd:4 wd:3 fd:1
[  227.127829]  disk 0, o:1, dev:sdc1
[  227.128132]  disk 1, o:1, dev:sdd1
[  227.128428]  disk 2, o:1, dev:sdb1
[  227.128721]  disk 3, o:1, dev:sda1
[  227.129163] md: syncing RAID array md0
[  227.129478] md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
[  227.129892] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction.
[  227.130499] md: using 128k window, over a total of 230468288 blocks.
[16359.493868] md: md0: sync done.
[16359.499961] RAID5 conf printout:
[16359.500213]  --- rd:4 wd:4 fd:0
[16359.500453]  disk 0, o:1, dev:sdc1
[16359.500714]  disk 1, o:1, dev:sdd1
[16359.500958]  disk 2, o:1, dev:sdb1
[16359.501202]  disk 3, o:1, dev:sda1





* Re: RAID5 degraded after mdadm -S, mdadm --assemble (everytime)
  2006-06-24 10:47 RAID5 degraded after mdadm -S, mdadm --assemble (everytime) Ronald Lembcke
@ 2006-06-24 11:10 ` Ronald Lembcke
  2006-06-25 13:59 ` Bug in 2.6.17 / mdadm 2.5.1 Ronald Lembcke
  2006-06-26 14:20 ` RAID5 degraded after mdadm -S, mdadm --assemble (everytime) Bill Davidsen
  2 siblings, 0 replies; 8+ messages in thread
From: Ronald Lembcke @ 2006-06-24 11:10 UTC (permalink / raw)
  Cc: linux-raid


Sorry, forgot to mention: I use Linux kernel 2.6.17 and mdadm 2.5.1



* Bug in 2.6.17 / mdadm 2.5.1
  2006-06-24 10:47 RAID5 degraded after mdadm -S, mdadm --assemble (everytime) Ronald Lembcke
  2006-06-24 11:10 ` Ronald Lembcke
@ 2006-06-25 13:59 ` Ronald Lembcke
  2006-06-26  1:06   ` Neil Brown
  2006-06-26 14:20 ` RAID5 degraded after mdadm -S, mdadm --assemble (everytime) Bill Davidsen
  2 siblings, 1 reply; 8+ messages in thread
From: Ronald Lembcke @ 2006-06-25 13:59 UTC (permalink / raw)
  To: linux-kernel, linux-raid; +Cc: es186


Hi!

There's a bug in Kernel 2.6.17 and / or mdadm which prevents (re)adding
a disk to a degraded RAID5-Array.

The mail I'm replying to was sent to linux-raid only. A summary of my
problem is in the quoted part, and everything you need to reproduce it
is below.
There's more information (kernel log, output of mdadm -E, ...) in the
original mail:
  Subject: RAID5 degraded after mdadm -S, mdadm --assemble (everytime)
  Message-ID: <20060624104745.GA6352@defiant.crash>
It can be found here, for example:
  http://www.spinics.net/lists/raid/msg12859.html

More about this problem follows below the quoted part.

On Sat Jun 24 12:47:45 2006, I wrote:
> I set up a RAID5 array of 4 disks. I initially created a degraded array
> and added the fourth disk (sda1) later.
> 
> The array is "clean", but when I do  
>   mdadm -S /dev/md0 
>   mdadm --assemble /dev/md0 /dev/sd[abcd]1
> it won't start. It always says sda1 is "failed".
> 
> When I remove sda1 and add it again everything seems to be fine until I
> stop the array. 

CPU: AMD-K6(tm) 3D processor
Kernel: Linux version 2.6.17 (root@ganges) (gcc version 4.0.3 (Debian
4.0.3-1)) #2 Tue Jun 20 17:48:32 CEST 2006

The problem is: The superblocks get inconsistent, but I couldn't find
where this actually happens.

Here are some simple steps to reproduce it (don't forget to adjust the
device names if you're already using /dev/md1 or /dev/loop[0-3]):
The behaviour changes when you execute --zero-superblock in the example 
below (it looks even more broken).
It also changes when you fail some other disk instead of loop2.
When loop3 is failed (without executing --zero-superblock) it can
successfully be re-added.


############################################
cd /tmp; mkdir raidtest; cd raidtest
dd bs=1M count=1 if=/dev/zero of=disk0
dd bs=1M count=1 if=/dev/zero of=disk1
dd bs=1M count=1 if=/dev/zero of=disk2
dd bs=1M count=1 if=/dev/zero of=disk3
losetup /dev/loop0 disk0
losetup /dev/loop1 disk1
losetup /dev/loop2 disk2
losetup /dev/loop3 disk3
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
mdadm /dev/md1 --fail /dev/loop2
mdadm /dev/md1 --remove /dev/loop2

#mdadm --zero-superblock /dev/loop2
# here something goes wrong
mdadm /dev/md1 --add /dev/loop2

mdadm --stop /dev/md1
# can't reassemble
mdadm --assemble /dev/md1 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
############################################


To clean up everything :)
############################################
mdadm --stop /dev/md1
losetup -d /dev/loop0
losetup -d /dev/loop1
losetup -d /dev/loop2
losetup -d /dev/loop3
rm disk0 disk1 disk2 disk3
###########################################


After mdadm --create the superblocks are OK, but look a little strange
(note the fffe "failed" entry):

dev_roles[i]: 0000 0001 0002 fffe 0003
The disks have dev_num 0, 1, 2, 4.

But after --fail, --remove, --add:
dev_roles[i]: 0000 0001 fffe fffe 0003 0002
The disks still have dev_num 0, 1, 2, 4.
For this to be consistent, either loop2 would have to have dev_num 5, or
dev_roles would need to go back to 0000 0001 0002 fffe 0003 (the state
right after --create).
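
A note on observing this: mdadm 2.5.1's -E doesn't print dev_roles directly
(the values above presumably come from reading the raw superblocks), but the
per-device "Array State" line should reflect the same mismatch, so the
divergence can be watched while replaying the script, e.g. after the --add
and again after the failed --assemble:

  mdadm -E /dev/loop[0-3] | grep -E '^/dev/|Array State|Events'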


Greetings,
           Roni



* Re: Bug in 2.6.17 / mdadm 2.5.1
  2006-06-25 13:59 ` Bug in 2.6.17 / mdadm 2.5.1 Ronald Lembcke
@ 2006-06-26  1:06   ` Neil Brown
  2006-06-26  1:53     ` Neil Brown
  2006-06-26 21:24     ` Andre Tomt
  0 siblings, 2 replies; 8+ messages in thread
From: Neil Brown @ 2006-06-26  1:06 UTC (permalink / raw)
  To: Ronald Lembcke; +Cc: linux-kernel, linux-raid

On Sunday June 25, es186@fen-net.de wrote:
> Hi!
> 
> There's a bug in Kernel 2.6.17 and / or mdadm which prevents (re)adding
> a disk to a degraded RAID5-Array.

Thank you for the detailed report.
The bug is in the md driver in the kernel (not in mdadm), and only
affects version-1 superblocks.  Debian recently changed the default
(in /etc/mdadm.conf) to use version-1 superblocks, which I thought
would be OK (I've done some testing), but obviously I missed something. :-(

If you remove the "metadata=1" (or whatever it is) from
/etc/mdadm/mdadm.conf and then create the array, it will be created
with a version-0.90 superblock, which has had more testing.
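
The same effect is also available without touching mdadm.conf by naming the
metadata format explicitly at create time; a sketch, using the loop devices
from the reproduction:

  mdadm --create /dev/md1 --metadata=0.90 --level=5 --raid-devices=4 \
        /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3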

Alternately you can apply the following patch to the kernel and
version-1 superblocks should work better.

NeilBrown

-------------------------------------------------
Set desc_nr correctly for version-1 superblocks.

This has to be done in ->load_super, not ->validate_super

### Diffstat output
 ./drivers/md/md.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c	2006-06-26 11:02:43.000000000 +1000
+++ ./drivers/md/md.c	2006-06-26 11:02:46.000000000 +1000
@@ -1057,6 +1057,11 @@ static int super_1_load(mdk_rdev_t *rdev
 	if (rdev->sb_size & bmask)
 		rdev-> sb_size = (rdev->sb_size | bmask)+1;
 
+	if (sb->level == cpu_to_le32(LEVEL_MULTIPATH))
+		rdev->desc_nr = -1;
+	else
+		rdev->desc_nr = le32_to_cpu(sb->dev_number);
+
 	if (refdev == 0)
 		ret = 1;
 	else {
@@ -1165,7 +1170,6 @@ static int super_1_validate(mddev_t *mdd
 
 	if (mddev->level != LEVEL_MULTIPATH) {
 		int role;
-		rdev->desc_nr = le32_to_cpu(sb->dev_number);
 		role = le16_to_cpu(sb->dev_roles[rdev->desc_nr]);
 		switch(role) {
 		case 0xffff: /* spare */


* Re: Bug in 2.6.17 / mdadm 2.5.1
  2006-06-26  1:06   ` Neil Brown
@ 2006-06-26  1:53     ` Neil Brown
  2006-06-26 21:24     ` Andre Tomt
  1 sibling, 0 replies; 8+ messages in thread
From: Neil Brown @ 2006-06-26  1:53 UTC (permalink / raw)
  To: Ronald Lembcke; +Cc: linux-kernel, linux-raid

On Monday June 26, neilb@suse.de wrote:
> On Sunday June 25, es186@fen-net.de wrote:
> > Hi!
> > 
> > There's a bug in Kernel 2.6.17 and / or mdadm which prevents (re)adding
> > a disk to a degraded RAID5-Array.
> 
> Thank you for the detailed report.
> The bug is in the md driver in the kernel (not in mdadm), and only
> affects version-1 superblocks.  Debian recently changed the default
> (in /etc/mdadm.conf) to use version-1 superblocks, which I thought
> would be OK (I've done some testing), but obviously I missed something. :-(
> 
> If you remove the "metadata=1" (or whatever it is) from
> /etc/mdadm/mdadm.conf and then create the array, it will be created
> with a version-0.90 superblock, which has had more testing.
> 
> Alternately you can apply the following patch to the kernel and
> version-1 superblocks should work better.

And as a third alternative, you can apply this patch to mdadm-2.5.1.
It will work around the kernel bug.

NeilBrown

diff .prev/Manage.c ./Manage.c
--- .prev/Manage.c	2006-06-20 10:01:17.000000000 +1000
+++ ./Manage.c	2006-06-26 11:46:56.000000000 +1000
@@ -271,8 +271,14 @@ int Manage_subdevs(char *devname, int fd
 				 * If so, we can simply re-add it.
 				 */
 				st->ss->uuid_from_super(duuid, dsuper);
-			
-				if (osuper) {
+
+				/* re-add doesn't work for version-1 superblocks
+				 * before 2.6.18 :-(
+				 */
+				if (array.major_version == 1 &&
+				    get_linux_version() <= 2006018)
+					;
+				else if (osuper) {
 					st->ss->uuid_from_super(ouuid, osuper);
 					if (memcmp(duuid, ouuid, sizeof(ouuid))==0) {
 						/* look close enough for now.  Kernel
@@ -295,7 +301,10 @@ int Manage_subdevs(char *devname, int fd
 					}
 				}
 			}
-			for (j=0; j< st->max_devs; j++) {
+			/* due to a bug in 2.6.17 and earlier, we start
+			 * looking from raid_disks, not 0
+			 */
+			for (j = array.raid_disks ; j< st->max_devs; j++) {
 				disc.number = j;
 				if (ioctl(fd, GET_DISK_INFO, &disc))
 					break;

diff .prev/super1.c ./super1.c
--- .prev/super1.c	2006-06-20 10:01:46.000000000 +1000
+++ ./super1.c	2006-06-26 11:47:12.000000000 +1000
@@ -277,6 +277,18 @@ static void examine_super1(void *sbv, ch
 	default: break;
 	}
 	printf("\n");
+	printf("    Array Slot : %d (", __le32_to_cpu(sb->dev_number));
+	for (i= __le32_to_cpu(sb->max_dev); i> 0 ; i--)
+		if (__le16_to_cpu(sb->dev_roles[i-1]) != 0xffff)
+			break;
+	for (d=0; d < i; d++) {
+		int role = __le16_to_cpu(sb->dev_roles[d]);
+		if (d) printf(", ");
+		if (role == 0xffff) printf("empty");
+		else if(role == 0xfffe) printf("failed");
+		else printf("%d", role);
+	}
+	printf(")\n");
 	printf("   Array State : ");
 	for (d=0; d<__le32_to_cpu(sb->raid_disks); d++) {
 		int cnt = 0;
@@ -767,7 +779,8 @@ static int write_init_super1(struct supe
 		if (memcmp(sb->set_uuid, refsb->set_uuid, 16)==0) {
 			/* same array, so preserve events and dev_number */
 			sb->events = refsb->events;
-			sb->dev_number = refsb->dev_number;
+			if (get_linux_version() >= 2006018)
+				sb->dev_number = refsb->dev_number;
 		}
 		free(refsb);
 	}

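With the super1.c hunk applied, the slot/role table behind the confusion
becomes visible directly in the examine output; for the loop-device
reproduction, something like this (a usage sketch, assuming the freshly
built mdadm is run from its source tree):

  ./mdadm -E /dev/loop[0-3] | grep -E '^/dev/|Array Slot|Array State'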

* Re: RAID5 degraded after mdadm -S, mdadm --assemble (everytime)
  2006-06-24 10:47 RAID5 degraded after mdadm -S, mdadm --assemble (everytime) Ronald Lembcke
  2006-06-24 11:10 ` Ronald Lembcke
  2006-06-25 13:59 ` Bug in 2.6.17 / mdadm 2.5.1 Ronald Lembcke
@ 2006-06-26 14:20 ` Bill Davidsen
  2 siblings, 0 replies; 8+ messages in thread
From: Bill Davidsen @ 2006-06-26 14:20 UTC (permalink / raw)
  To: Ronald Lembcke; +Cc: linux-raid

Ronald Lembcke wrote:

>Hi!
>
>I set up a RAID5 array of 4 disks. I initially created a degraded array
>and added the fourth disk (sda1) later.
>
>The array is "clean", but when I do  
>  mdadm -S /dev/md0 
>  mdadm --assemble /dev/md0 /dev/sd[abcd]1
>it won't start. It always says sda1 is "failed".
>
>When I remove sda1 and add it again everything seems to be fine until I
>stop the array. 
>
>Below is the output of /proc/mdstat, mdadm -D -Q, mdadm -E and a piece of the
>kernel log.
>The output of mdadm -E looks strange for /dev/sd[bcd]1, saying "1 failed".
>
>What can I do about this?
>How could this happen? I mixed up the syntax when adding the fourth disk and
>tried these two commands (at least one didn't yield an error message):
>mdadm --manage -a /dev/md0 /dev/sda1
>mdadm --manage -a /dev/sda1 /dev/md0
>
>
>Thanks in advance ...
>                      Roni
>
>
>
>ganges:~# cat /proc/mdstat 
>Personalities : [raid5] [raid4] 
>md0 : active raid5 sda1[4] sdc1[0] sdb1[2] sdd1[1]
>      691404864 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
>      
>unused devices: <none>
>
I will just comment that the 0 1 2   4 numbering on the devices is
unusual. When you created this, did you do something which made md think
there was another device, failed or missing, which was device[3]? I just
looked at a bunch of my arrays and found no similar examples.
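
For what it's worth, the same gap shows up in the loop-device test elsewhere
in this thread right after a plain --create, apparently because mdadm builds
RAID5 arrays degraded, with the last member added as a spare that rebuilds
into slot 3. A quick sketch (loop devices, not the poster's disks):

  mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/loop[0-3]
  mdadm -D /dev/md1 | grep -E 'Number|active|spare'
  # with a version-1 superblock the Number column comes out 0 1 2 4 here as well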

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979



* Re: Bug in 2.6.17 / mdadm 2.5.1
  2006-06-26  1:06   ` Neil Brown
  2006-06-26  1:53     ` Neil Brown
@ 2006-06-26 21:24     ` Andre Tomt
  2006-06-27  1:00       ` Neil Brown
  1 sibling, 1 reply; 8+ messages in thread
From: Andre Tomt @ 2006-06-26 21:24 UTC (permalink / raw)
  To: Neil Brown; +Cc: Ronald Lembcke, linux-kernel, linux-raid

Neil Brown wrote:
<snip>
> Alternately you can apply the following patch to the kernel and
> version-1 superblocks should work better.

-stable material?


* Re: Bug in 2.6.17 / mdadm 2.5.1
  2006-06-26 21:24     ` Andre Tomt
@ 2006-06-27  1:00       ` Neil Brown
  0 siblings, 0 replies; 8+ messages in thread
From: Neil Brown @ 2006-06-27  1:00 UTC (permalink / raw)
  To: Andre Tomt; +Cc: Ronald Lembcke, linux-kernel, linux-raid

On Monday June 26, andre@tomt.net wrote:
> Neil Brown wrote:
> <snip>
> > Alternately you can apply the following patch to the kernel and
> > version-1 superblocks should work better.
> 
> -stable material?

Maybe.  I'm not sure it exactly qualifies, but I might try sending it
to them and see what they think.

NeilBrown


