* mdadm --fail on RAID0 array
From: Martin Wilck @ 2013-08-06 21:27 UTC
  To: linux-raid

Hi Neil, everyone,

I'd like to discuss the following "feature" I just discovered.
It is impossible to set devices faulty in a container if they
are only members of a RAID0:

mdadm -CR /dev/md/ddf -e ddf -l container -n 2 /dev/loop10 /dev/loop11
mdadm -CR /dev/md/vol1 -l raid0 -n 2 /dev/md/ddf
mdadm --fail /dev/md/vol1 /dev/loop11
mdadm: set device faulty failed for /dev/loop11:
   Device or resource busy
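
Meanwhile the RAID0 keeps running as if nothing had happened. A quick way
to confirm that (a sketch; the mdXXX kernel name depends on how the arrays
were assembled, so check /proc/mdstat first):

cat /proc/mdstat                       # vol1 still listed as active raid0
mdadm --detail /dev/md/vol1            # no device is marked faulty
cat /sys/block/md126/md/array_state    # md126 assumed to be vol1 here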

This is independent of DDF / external metadata; it happens with native
MD metadata, too. I don't quite understand why this is so; certainly
RAID0 has no way to recover from a disk failure, but simply refusing to
acknowledge the fact that a disk is broken doesn't seem right. IMO the
array should switch to read-only ASAP and mark itself failed in the
metadata. But I may be missing something important for the native MD case.
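
What I have in mind can at least be approximated by hand today; a rough
sketch (using the device names from the example above, and assuming
mdadm's --readonly and the sysfs array_state interface behave the same
way for a RAID0 inside a container):

mdadm --readonly /dev/md/vol1        # stop further writes to the broken array
mdadm --detail /dev/md/vol1          # verify the read-only state
# equivalently via sysfs, if md126 is the kernel name behind /dev/md/vol1:
# echo readonly > /sys/block/md126/md/array_state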

However, in a container it must be possible to mark a disk as failed, and
that's currently not the case if the disk is only a member of a RAID0. In
the DDF case, we'd expect to mark the array failed in the metadata and
update the disk state to "Failed". "mdadm --fail" on container devices
doesn't work either, because the kernel refuses to do that without a RAID
personality (actually, this is what I'd like to change in the first
place, but I can't foresee the potential problems).
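
Concretely, the container-level command I mean is something like this
(a sketch only; as described above, the kernel currently rejects it
because the container has no RAID personality):

mdadm --manage /dev/md/ddf --fail /dev/loop11   # fail the disk container-wide
mdadm --examine /dev/loop11                     # what the DDF metadata records for it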

This actually has the potential to cause severe breakage. Consider a DDF
container with 3 disks d0, d1, d2. A RAID0 array uses 50% of the space on
d0 and d1, and a RAID1 uses another 50% on d1 and d2. Now d0 goes bad;
mdmon wouldn't notice. When d1 or d2 goes bad, too, mdmon would try to
use the free space on d0 for rebuilding.
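
A rough sketch of how that layout could be reproduced for testing (loop
device names and sizes are invented, it assumes --size is honoured for
subarrays in a container, and mdadm chooses which container members back
each subarray, so the result needs checking with --detail):

mdadm -CR /dev/md/ddf3 -e ddf -l container -n 3 /dev/loop20 /dev/loop21 /dev/loop22
mdadm -CR /dev/md/r0 -l raid0 -n 2 -z 500M /dev/md/ddf3   # intended: d0+d1
mdadm -CR /dev/md/r1 -l raid1 -n 2 -z 500M /dev/md/ddf3   # intended: d1+d2
mdadm --detail /dev/md/r0
mdadm --detail /dev/md/r1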

To fix this scenario, it wouldn't be sufficient for the kernel to accept
mdadm --fail on RAID0. We'd also need to monitor the RAID0 (or, actually,
all physical devices) with mdmon. In other words, this would require
running mdmon on every container, not only on those with redundant
subarrays.
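
Running mdmon against a container by hand would look like this (a sketch;
today it is only started automatically when a redundant subarray exists):

mdmon /dev/md/ddf        # or use the kernel name, e.g. "mdmon md127"
# at boot, an mdmon started from the initramfs is later replaced with:
# mdmon --takeover --all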

Thoughts?

Martin


* Re: mdadm --fail on RAID0 array
From: NeilBrown @ 2013-08-07  6:21 UTC
  To: Martin Wilck; +Cc: linux-raid

On Tue, 06 Aug 2013 23:27:18 +0200 Martin Wilck <mwilck@arcor.de> wrote:

> Hi Neil, everyone,
> 
> I'd like to discuss the following "feature" I just discovered.
> It is impossible to set devices faulty in a container if they
> are only members of a RAID0:
> 
> mdadm -CR /dev/md/ddf -e ddf -l container -n 2 /dev/loop10 /dev/loop11
> mdadm -CR /dev/md/vol1 -l raid0 -n 2 /dev/md/ddf
> mdadm --fail /dev/md/vol1 /dev/loop11
> mdadm: set device faulty failed for /dev/loop11:
>    Device or resource busy
> 
> This is independent of DDF / external metadata; it happens with native
> MD metadata, too. I don't quite understand why this is so; certainly
> RAID0 has no way to recover from a disk failure, but simply refusing to
> acknowledge the fact that a disk is broken doesn't seem right. IMO the
> array should switch to read-only ASAP and mark itself failed in the
> metadata. But I may be missing something important for the native MD case.
> 
> However, in a container it must be possible to mark a disk as failed, and
> that's currently not the case if the disk is only a member of a RAID0. In
> the DDF case, we'd expect to mark the array failed in the metadata and
> update the disk state to "Failed". "mdadm --fail" on container devices
> doesn't work either, because the kernel refuses to do that without a RAID
> personality (actually, this is what I'd like to change in the first
> place, but I can't foresee the potential problems).
> 
> This actually has the potential to cause severe breakage. Consider a DDF
> container with 3 disks d0, d1, d2. A RAID0 array uses 50% of the space on
> d0 and d1, and a RAID1 uses another 50% on d1 and d2. Now d0 goes bad;
> mdmon wouldn't notice. When d1 or d2 goes bad, too, mdmon would try to
> use the free space on d0 for rebuilding.
> 
> To fix this scenario, it wouldn't be sufficient for the kernel to accept
> mdadm --fail on RAID0. We'd also need to monitor the RAID0 (or, actually,
> all physical devices) with mdmon. In other words, this would require
> running mdmon on every container, not only on those with redundant
> subarrays.
> 
> Thoughts?

Hi Martin,
 
I don't believe there is any value in recording that one device out of a
RAID0 is failed, any more than there is value in the block layer recording
that one block out of a disk drive has failed.
Any IO attempt to the failed device will fail.  IO attempts elsewhere in the array (or plain
disk) will not fail.  That is as it should be.  A filesystem must be able to
cope.  Maybe the filesystem should switch to read-only but that isn't my
problem.
If the filesystem wants to keep writing to those parts of the array that
still work, md should not get in its way.
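
That behaviour is easy to see with a deliberately broken member; a rough
sketch using device-mapper (all names and sizes are invented, and the
sector count must match the real device):

# build a RAID0 over one good device and one dm device that we can break later
dmsetup create flaky --table "0 2097152 linear /dev/loop31 0"
mdadm -CR /dev/md/r0test -l raid0 -n 2 /dev/loop30 /dev/mapper/flaky
# now swap the mapping for one that fails every request, like a dead disk
dmsetup suspend flaky
dmsetup reload flaky --table "0 2097152 error"
dmsetup resume flaky
# IO that lands on the dead member returns EIO; IO served entirely from
# /dev/loop30 still succeeds
dd if=/dev/md/r0test of=/dev/null bs=1M count=32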

In the scenario you describe with a RAID0 and a RAID1, after mdmon assigns
some of d0 to the RAID1, md will start recovery.  This will presumably
generate a write error (if it is really bad, we'll get an error even earlier
when trying to update the metadata).
That error should (and I believe will) result in the device being marked
faulty.  We must be ready for an error at that point, and I can see no
particular value in getting the error indication earlier.
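
(A sketch of how one would watch for that, assuming the RAID1 subarray is
/dev/md/r1 and d0 is /dev/loop20 -- both names invented:)

cat /proc/mdstat             # recovery progress; a faulty member shows up as (F)
mdadm --detail /dev/md/r1    # per-device state should change to "faulty"
mdadm --examine /dev/loop20  # the DDF record for the disk, once mdmon has updated it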

So I really don't think there is a problem here (except that "RAID0" should
never have been called "RAID", as it has no "Redundancy" and causes people to
think it is in some way similar to other RAID levels.  It isn't.)


Thanks,
NeilBrown
