* Possible Bug with MD multipath and raid1 on top
From: Oktay Akbal @ 2002-09-14 18:33 UTC (permalink / raw)
  To: linux-kernel

Hello,

I found a very strange effect when using a raid1 on top of multipathing
with kernel 2.4.18 (SuSE's version of it) and a 2-port QLogic HBA
connecting two arrays.

The raidtab used to set this up is:

raiddev                 /dev/md0
raid-level              multipath
nr-raid-disks           1
nr-spare-disks          1
chunk-size              32

device                  /dev/sda1
raid-disk               0

device                  /dev/sdc1
spare-disk              1

raiddev                 /dev/md1
raid-level              multipath
nr-raid-disks           1
nr-spare-disks          1
chunk-size              32

device                  /dev/sdb1
raid-disk               0

device                  /dev/sdd1
spare-disk              1

raiddev                 /dev/md2
raid-level              1
nr-raid-disks           2
nr-spare-disks          0
chunk-size              32

device                  /dev/md0
raid-disk               0

device                  /dev/md1
raid-disk               1
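
For reference, the arrays have to be brought up bottom-up, i.e. the two
multipath devices first and then the raid1 on top of them. With raidtools
that is roughly the following (just a sketch; exact invocations may differ):

mkraid /dev/md0
mkraid /dev/md1
mkraid /dev/md2

# all three arrays should then show up here
cat /proc/mdstat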


As you can see, one port of the HBA "sees" sda and sdb, the second port
sdc and sdd.
When I pull out one of the cables, two disks go missing and the
multipath driver correctly switches to the second path to the disks and
continues to work. After unplugging the second cable as well, all drives
are marked as failed (in /proc/mdstat), but the raid1 (md2) is still
reported as functional, with only one device (md0) missing.
(Sorry, I do not have the output at hand, but md2 was reported as [_U],
while sda-sdd were marked [F].)

All processes using the raid1 device get stuck, and the situation does
not recover. Even a simple process that only tests disk access got stuck
(I think ps showed state D, i.e. uninterruptible sleep).

Even though I'm quite sure that this is a bug, how should I test disk
access without ending up in "uninterruptible sleep"?

Thanks

Oktay Akbal



* Re: Possible Bug with MD multipath and raid1 on top
From: Lars Marowsky-Bree @ 2002-09-14 23:07 UTC (permalink / raw)
  To: Oktay Akbal, linux-kernel

On 2002-09-14T20:33:07,
   Oktay Akbal <oktay.akbal@s-tec.de> said:

> I found a very strange effect when using a raid1 on top of multipathing
> with kernel 2.4.18 (SuSE's version of it) and a 2-port QLogic HBA
> connecting two arrays.

Is this with or without the patch I recently posted to linux-kernel?

If so, please use the patch at http://lars.marowsky-bree.de/dl/md-mp instead,
which is slightly newer and fixes one important issue (affecting raid0 use)
and two minor ones. Please be aware that you are beta-testing code for the
time being ;-) (Which is highly appreciated!)

> When I pull out one of the cables, two disks go missing and the
> multipath driver correctly switches to the second path to the disks and
> continues to work. After unplugging the second cable as well, all drives
> are marked as failed (in /proc/mdstat), but the raid1 (md2) is still
> reported as functional, with only one device (md0) missing.

So far this sounds OK. (Even though the updated md-mp patch will _never_ fail
the last path but instead return the error to the layer upwards; this protects
against certain scenarios in 2.4 where a device error can't be distinguished
from a failed path and we don't want that to lead to an inaccessible device)

> All processes using the raid1 device get stuck, and the situation does
> not recover. Even a simple process that only tests disk access got stuck
> (I think ps showed state D, i.e. uninterruptible sleep).

That's not OK, obviously ;-)

I will try to reproduce this on Monday. Since I don't have the hardware
and instead use a loop device (which I can make fail on demand), if I
can't reproduce it, it might in fact be the FC driver that gets stuck
somehow.

> Even though I'm quite sure that this is a bug, how should I test disk
> access without ending up in "uninterruptible sleep"?

Uhm, essentially, you should never get stuck in uninterruptible sleep. All
errors should "eventually" time out.
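
If you just want a probe that does not take your whole test script down
with it, you can run the read in the background and only look at its state
afterwards. A rough sketch (the backgrounded dd can of course still end up
in state D itself, but the script stays responsive):

dd if=/dev/md2 of=/dev/null bs=64k count=16 &
pid=$!
sleep 30
if [ -d /proc/$pid ]; then
    # field 3 of /proc/<pid>/stat is the process state (R, S, D, ...)
    echo "probe still running after 30s, state: $(awk '{print $3}' /proc/$pid/stat)"
else
    echo "probe completed"
fi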

Please compile the kernel with magic-sysrq enabled and check where the
processes are stuck using magic-sysrq t. It might help if you piped the
results through ksymoops.
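
On the console that is Alt-SysRq-T; if your kernel also has
/proc/sysrq-trigger, something along these lines works as well (the
System.map path is just an example):

echo 1 > /proc/sys/kernel/sysrq     # make sure sysrq is enabled
echo t > /proc/sysrq-trigger        # dump all task states to the kernel log
dmesg | ksymoops -m /boot/System.map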


Sincerely,
    Lars Marowsky-Brée <lmb@suse.de>

-- 
Immortality is an adequate definition of high availability for me.
	--- Gregory F. Pfister



* Re: Possible Bug with MD multipath and raid1 on top
From: Oktay Akbal @ 2002-09-15  5:29 UTC (permalink / raw)
  To: Lars Marowsky-Bree; +Cc: linux-kernel

On Sun, 15 Sep 2002, Lars Marowsky-Bree wrote:

> On 2002-09-14T20:33:07,
>    Oktay Akbal <oktay.akbal@s-tec.de> said:
>
> > I found a very strange effect when using a raid1 on top of multipathing
> > with kernel 2.4.18 (SuSE's version of it) and a 2-port QLogic HBA
> > connecting two arrays.
>
> Is this with or without the patch I recently posted to linux-kernel?


Since it is the latest official SuSE 2.4.18 from SLES, I assume this patch
is not included.

> > continues to work. After unplugging the second cable as well, all drives
> > are marked as failed (in /proc/mdstat), but the raid1 (md2) is still
> > reported as functional, with only one device (md0) missing.
>
> So far this sounds OK.

All disks are dead. The md0 device is missing. The same should be true for
md1, since there is no difference in the setup. Why does the raid1 not
report both mirrors as dead?

> (Even though the updated md-mp patch will _never_ fail
> the last path but instead return the error to the layer upwards; this protects
> against certain scenarios in 2.4 where a device error can't be distinguished
> from a failed path and we don't want that to lead to an inaccessible device)

How would the failure of all paths then be noticed?

> I will try to reproduce this on Monday. Since I don't have the hardware
> and instead use a loop device (which I can make fail on demand), if I
> can't reproduce it, it might in fact be the FC driver that gets stuck
> somehow.

This might well be, since I haven't found the QLogic driver very impressive
so far. To use md multipath, the multipathing (failover) functionality of
the driver was disabled.

Oktay Akbal



* Nachtrag: Possible Bug with MD multipath and raid1 on top
From: Oktay Akbal @ 2002-09-15  7:31 UTC (permalink / raw)
  To: Lars Marowsky-Bree; +Cc: linux-kernel

> If so, please use the patch at http://lars.marowsky-bree.de/dl/md-mp instead,
> which is slightly newer and fixes one important issue (affecting raid0 use)
> and two minor ones. Please be aware that you are beta-testing code for the
> time being ;-) (Which is highly appreciated!)

Unfortunately I am on vacation, so testing this on the hardware will be
difficult. Which kernel exactly is the patch against?

I am currently trying it with loop devices. Unfortunately, the first thing
that happened was that the machine crashed completely. How does one "fail"
a loop device?

Oktay Akbal



* Re: Nachtrag: Possible Bug with MD multipath and raid1 on top
From: Oktay Akbal @ 2002-09-15  7:39 UTC (permalink / raw)
  To: linux-kernel

Please excuse the German part. It should have gone only to Lars.
Sorry.

Oktay Akbal



* Re: Possible Bug with MD multipath and raid1 on top
From: Lars Marowsky-Bree @ 2002-09-15 21:12 UTC (permalink / raw)
  To: Oktay Akbal; +Cc: linux-kernel

On 2002-09-15T07:29:30,
   Oktay Akbal <oktay.akbal@s-tec.de> said:

> > Is this with or without the patch I recently posted to linux-kernel?
> 
> Since it is the latest official SuSE 2.4.18 from SLES, I assume this patch
> is not included.

Oh, OK. Multipathing is known not to work quite right in the mainstream
kernel. In this case, you might want to try the patch.

> > So far this sounds OK.
> All disks are dead. The md0 device is missing. The same should be true for
> md1, since there is no difference in the setup. Why does the raid1 not
> report both mirrors as dead?

Oh, right. I misread your mail and just saw that the md1 was also on the same
devices. Strange indeed.

> > (Even though the updated md-mp patch will _never_ fail the last path but
> > instead return the error to the layer upwards; this protects against
> > certain scenarios in 2.4 where a device error can't be distinguished from
> > a failed path and we don't want that to lead to an inaccessible device)
> How would the failure of all paths then be noticed?

Well, I/O errors would occur, be reported to the caller, and those would
presumably be noticed.

However, the 2.4 error reporting can't distinguish between a path error and a
device error. So a failed read (a destroyed block, for example) will fail a
path. As the read request is retried on all paths if necessary, it would be
highly undesirable to fail _all_ paths because of this. The last path will
remain "accessible", but the application will see an error in this case.

> This might well be, since I haven't found the QLogic driver very impressive
> so far. To use md multipath, the multipathing (failover) functionality of
> the driver was disabled.

OK. Well, I never tested the QLogic proprietary failover because I consider it
to be the wrong approach ;-) The md layer, though, should work by now.


Sincerely,
    Lars Marowsky-Brée <lmb@suse.de>

-- 
Immortality is an adequate definition of high availability for me.
	--- Gregory F. Pfister

