* "md/raid:mdX: cannot start dirty degraded array."
From: Andreas Trottmann @ 2021-10-08 19:57 UTC
  To: linux-raid

Hello linux-raid

I am running a server that runs a number of virtual machines and manages 
their virtual disks as logical volumes using lvmraid (so: individual SSDs 
are used as PVs for LVM; the LVs are using RAID to create redundancy and 
are created with commands such as "lvcreate --type raid5 --stripes 4 
--stripesize 128 ...")

The server is running Debian 10 "buster" with latest updates and its 
stock kernel: Linux (hostname) 4.19.0-17-amd64 #1 SMP Debian 4.19.194-3 
(2021-07-18) x86_64 GNU/Linux


Recently, one of the SSDs serving as a PV in this server failed.

After a restart, all of the logical volumes came back, except one.

As far as I remember, there were NO raid operations (resync, reshape or 
the like) going on when the SSD failed.


The volume in question consists of four stripes and uses raid5.


When I try to activate it, I get:

# lvchange -a y /dev/vg_ssds_0/host-home
   Couldn't find device with uuid 8iz0p5-vh1c-kaxK-cTRC-1ryd-eQd1-wX1Yq9.
   device-mapper: reload ioctl on  (253:245) failed: Input/output error


dmesg shows:

device-mapper: raid: Failed to read superblock of device at position 1
md/raid:mdX: not clean -- starting background reconstruction
md/raid:mdX: device dm-50 operational as raid disk 0
md/raid:mdX: device dm-168 operational as raid disk 2
md/raid:mdX: device dm-230 operational as raid disk 3
md/raid:mdX: cannot start dirty degraded array.
md/raid:mdX: failed to run raid set.
md: pers->run() failed ...
device-mapper: table: 253:245: raid: Failed to run raid array
device-mapper: ioctl: error adding target to table


I can successfully activate and access three of the four _rmeta_X and 
_rimage_X LVs: _0, _2 and _3.

_rmeta_1 and _rimage_1 were on the failed SSD.

This makes me think that the data should be recoverable; three out of 
four RAID5 stripes should be enough.

I copied the entire contents of all of the _rimage and _rmeta volumes to 
a safe location.
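
(A minimal sketch of how such a copy can be taken, assuming the 
surviving sub-LVs can be activated individually; the target path is 
only an example:)

# lvchange -a y /dev/vg_ssds_0/host-home_rmeta_0
# dd if=/dev/vg_ssds_0/host-home_rmeta_0 of=/mnt/space/rmeta_0   # example target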

The _rmeta ones look like this:

# od -t xC /dev/vg_ssds_0/host-home_rmeta_0
0000000 44 6d 52 64 01 00 00 00 04 00 00 00 00 00 00 00
0000020 ce 0b 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0000040 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
0000060 05 00 00 00 02 00 00 00 00 01 00 00 00 00 00 00
0000100 ff ff ff ff ff ff ff ff 05 00 00 00 02 00 00 00
0000120 00 01 00 00 00 00 00 00 00 00 00 cb 01 00 00 00
0000140 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0000160 00 00 00 80 00 00 00 00 00 00 00 00 00 00 00 00
0000200 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0010000 62 69 74 6d 04 00 00 00 00 00 00 00 00 00 00 00
0010020 00 00 00 00 00 00 00 00 ce 0b 00 00 00 00 00 00
0010040 ce 0b 00 00 00 00 00 00 00 00 00 99 00 00 00 00
0010060 00 00 00 00 00 00 20 00 05 00 00 00 00 00 00 00
0010100 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
20000000

The only difference between _rmeta_2 / _rmeta_3 and _rmeta_0 is a "2" and 
a "3", respectively, at offset 12; this should be "array_position", and 
it makes sense to me that _rmeta_0 contains 0, _rmeta_2 contains 2 and 
_rmeta_3 contains 3.
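
(A quick way to confirm that, run against copies of the surviving 
_rmeta volumes; the file names here are placeholders:)

# cmp -l rmeta_0.img rmeta_2.img   # placeholder file names

This should report exactly one differing byte, byte 13 (1-based, i.e. 
offset 12), with octal values 0 and 2.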

I googled for the error message "md/raid:mdX: not clean -- starting 
background", and found 
https://forums.opensuse.org/showthread.php/497294-LVM-RAID5-broken-after-sata-link-error

In the case described there, the "failed_devices" field was not zero, 
and zeroing it out using a hex editor made "vgchange -a y" do the right 
thing again. However, in my _rmetas, it looks like the "failed_devices" 
fields are already all zero:

44 6D 52 64               magic
01 00 00 00               compat_features FEATURE_FLAG_SUPPORTS_V190
04 00 00 00               num_devices
00 00 00 00               array_position
CE 0B 00 00  00 00 00 00  events
00 00 00 00  00 00 00 00  failed_devices (none)
FF FF FF FF  FF FF FF FF  disk_recovery_offset
FF FF FF FF  FF FF FF FF  array_resync_offset
05 00 00 00               level
02 00 00 00               layout
00 01 00 00               stripe_sectors
00 00 00 00               flags
FF FF FF FF  FF FF FF FF  reshape_position
05 00 00 00               new_level
02 00 00 00               new_layout
00 01 00 00               new_stripe_sectors
00 00 00 00               delta_disks
00 00 00 CB  01 00 00 00  array_sectors (0x01CB000000)
00 00 00 00  00 00 00 00  data_offset
00 00 00 00  00 00 00 00  new_data_offset
00 00 00 80  00 00 00 00  sectors
00 00 00 00  00 00 00 00  extended_failed_devices (none)
(...)                     (more zero bytes skipped)
00 00 00 00               incompat_features

This looks fine to me; the "array_sectors" value matches the actual size 
of the array.
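
(A sketch of how the two "failed devices" fields can be read directly 
from a copy, instead of annotating a full dump by hand; the file name 
is a placeholder, and the offsets follow struct dm_raid_superblock: 
failed_devices is the __le64 at offset 24, extended_failed_devices the 
24 bytes at offset 120:)

# od -A d -t x8 -j 24 -N 8 rmeta_0.img     # failed_devices
# od -A d -t x8 -j 120 -N 24 rmeta_0.img   # extended_failed_devices

All-zero output from both matches the "none" readings above.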


I was not able to find the meaning of the block starting at od offset 
0010000 (octal, i.e. 4096 bytes), which begins with 62 69 74 6d ("bitm").


I now have two questions:

* is there anything I can do to those _rmeta blocks in order to make 
"vgchange -a y" work again?

* if not: I successfully copied the "_rimage_" into files. Is there 
anything magical that I can do with losetup and mdadm to create a 
new /dev/md/... device that I can access to copy data from?



Thank you very much in advance and kind regards

-- 
Andreas Trottmann



* Re: "md/raid:mdX: cannot start dirty degraded array."
  2021-10-08 19:57 "md/raid:mdX: cannot start dirty degraded array." Andreas Trottmann
@ 2021-10-08 21:04 ` Wol
  2021-10-11 13:55   ` Andreas Trottmann
  2021-10-27 20:42 ` Andreas U. Trottmann
  1 sibling, 1 reply; 4+ messages in thread
From: Wol @ 2021-10-08 21:04 UTC (permalink / raw)
  To: Andreas Trottmann, linux-raid

On 08/10/2021 20:57, Andreas Trottmann wrote:
> Hello linux-raid
> 
> I am running a server that runs a number of virtual machines and manages 
> their virtual disks as logical volumes using lvmraid (so: individual SSDs 
> are used as PVs for LVM; the LVs are using RAID to create redundancy and 
> are created with commands such as "lvcreate --type raid5 --stripes 4 
> --stripesize 128 ...")
> 
> The server is running Debian 10 "buster" with latest updates and its 
> stock kernel: Linux (hostname) 4.19.0-17-amd64 #1 SMP Debian 4.19.194-3 
> (2021-07-18) x86_64 GNU/Linux

Ummm is there an lvm mailing list? I've not seen a question like this 
before - this list is really for md-raid. There may be people who can 
help but I've got a feeling you're in the wrong place, sorry.

In md terms, volumes have an "event count", and that error sounds like 
one drive has been lost, and the others do not have a matching event 
count. Hopefully that's given you a clue. With mdadm you'd do a forced 
assembly, but it carries the risk of data loss.
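
(For a plain md array, such a forced assembly would look roughly like 
this; it does not apply directly to an LVM/dm-raid volume like the one 
above:)

# the devices below are placeholders:
# mdadm --assemble --force --run /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1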

Cheers,
Wol


* Re: "md/raid:mdX: cannot start dirty degraded array."
From: Andreas Trottmann @ 2021-10-11 13:55 UTC
  To: Wol, linux-raid

On 08.10.21 at 23:04, Wol wrote:

>> I am running a server that runs a number of virtual machines and 
>> manages their virtual disks as logical volumes using lvmraid (so: 
>> individual SSDs are used as PVs for LVM; the LVs are using RAID to 
>> create redundancy and are created with commands such as "lvcreate 
>> --type raid5 --stripes 4 --stripesize 128 ...")

> Ummm is there an lvm mailing list? I've not seen a question like this 
> before - this list is really for md-raid. There may be people who can 
> help but I've got a feeling you're in the wrong place, sorry.

Thank you very much - I'll try linux-lvm@redhat.com.

> In md terms, volumes have an "event count", and that error sounds like 
> one drive has been lost, and the others do not have a matching event 
> count. Hopefully that's given you a clue.

Yes, but it appears the event count is the same in all three (of 
originally four) surviving "metadata" blocks. If I understand the source 
code correctly, it's an __le64 at offset 0x10, which contains 0x0BCE in 
all of my _rmeta blocks.
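
(For the record, a quick way to read that counter from copies of the 
surviving _rmeta volumes; the file names are placeholders:)

# for i in 0 2 3; do od -A n -t x8 -j 16 -N 8 rmeta_$i.img; done

Each of the three lines should show a value of 0xbce 
(i.e. 0000000000000bce), confirming that the event counts agree.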




> With mdadm you'd do a forced 
> assembly, but it carries the risk of data loss.

I'll try to do this; I can work on copies of the data volumes, so any 
data loss that might occur from me using the wrong mdadm command can be 
undone by restoring the original data.

If I have any success, I'll reply to this e-mail so at least this part 
of the solution will be archived somewhere.


Kind regards

-- 
Andreas Trottmann
CTO
Werft22 AG

T +41 56 210 91 32
F +41 56 210 91 34
M +41 79 229 88 55
andreas.trottmann@werft22.com

Landstrasse 1
CH‑5415 Rieden bei Baden / Schweiz
www.werft22.com


* Re: "md/raid:mdX: cannot start dirty degraded array."
  2021-10-08 19:57 "md/raid:mdX: cannot start dirty degraded array." Andreas Trottmann
  2021-10-08 21:04 ` Wol
@ 2021-10-27 20:42 ` Andreas U. Trottmann
  1 sibling, 0 replies; 4+ messages in thread
From: Andreas U. Trottmann @ 2021-10-27 20:42 UTC (permalink / raw)
  To: linux-raid

On 08.10.21 at 21:57, Andreas Trottmann wrote:

> I am running a server that runs a number of virtual machines and manages 
> their virtual disks as logical volumes using lvmraid (...)

> After a restart, all of the logical volumes came back, except one.

> When I try to activate it, I get:
> 
> # lvchange -a y /dev/vg_ssds_0/host-home
>    Couldn't find device with uuid 8iz0p5-vh1c-kaxK-cTRC-1ryd-eQd1-wX1Yq9.
>    device-mapper: reload ioctl on  (253:245) failed: Input/output error


I am replying to my own e-mail here in order to document how I got the 
data back, in case someone in a similar situation finds this mail when 
searching for the symptoms.

First: I did *not* succeed in activating the lvmraid volume. No matter 
how I tried to modify the _rmeta volumes, I always got "reload ioctl 
(...) failed: Input/output error" from "lvchange", and "cannot start 
dirty degraded array" in dmesg.

So, I used "lvchange -a y /dev/vg_ssds_0/host-home_rimage_0" (and 
_rimage_2 and _rimage_3, as those were the ones that were *not* on the 
failed PV) to get access to the individual RAID SubLVs. I then used "dd 
if=/dev/vg_ssds_0/host-home_rimage_0 of=/mnt/space/rimage_0" to copy the 
data to a file on a filesystem with enough space. I repeated this with 2 
and 3 as well. I then used losetup to access /mnt/space/rimage_0 as 
/dev/loop0, rimage_2 as loop2, and rimage_3 as loop3.
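
(Condensed into one loop, that preparation was roughly:)

# for i in 0 2 3; do lvchange -a y /dev/vg_ssds_0/host-home_rimage_$i; 
dd if=/dev/vg_ssds_0/host-home_rimage_$i of=/mnt/space/rimage_$i; 
losetup /dev/loop$i /mnt/space/rimage_$i; done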

Now I wanted to use mdadm to "build" the RAID in the "array that doesn't 
have per-device metadata (superblocks)" case:

# mdadm --build /dev/md0 -n 4 -c 128 -l 5 --assume-clean --readonly 
/dev/loop0 missing /dev/loop2 /dev/loop3

However, this failed with "mdadm: Raid level 5 not permitted with --build".

("-c 128" was the chunk size used when creating the lvmraid, "-n 4" and 
"-l 5" refer to the number of devices and the raid level)

I then read the man page about the "superblocks", and found out that the 
"1.0" style of RAID metadata (selected with an mdadm "-e 1.0" option) 
places a superblock at the end of the device. Some experimenting on 
unused devices showed that the size used for actual data was the size of 
the block device minus 144 KiB (possibly 144 KiB = 128 KiB (chunksize) + 
8 KiB (size of superblock) + 8 KiB (size of bitmap)). So I added 147456 
zero bytes at the end of each file:

# for i in 0 2 3; do head -c 147456 /dev/zero >> /mnt/space/rimage_$i; done
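
(An equivalent way to do the padding, assuming GNU coreutils is 
available: "truncate" grows each file in place with zeroes, and 147456 
is simply 128 KiB + 8 KiB + 8 KiB:)

# for i in 0 2 3; do truncate -s +147456 /mnt/space/rimage_$i; done   # assumes GNU truncate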

After detaching and re-attaching the loop devices, I ran

# mdadm --create /dev/md0 -n 4 -c 128 -l 5 -e 1.0 --assume-clean 
/dev/loop0 missing /dev/loop2 /dev/loop3

(substituting "missing" in the place where the missing RAID SubLV would 
have been)

And, voilà: /dev/md0 was perfectly readable, fsck showed no errors, and 
it could be mounted correctly, with all data intact.
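
(The check and mount at that point would look something like this, kept 
read-only; the mount point is an example, and the mail does not say 
which filesystem was on the volume:)

# fsck -n /dev/md0                       # -n: check only, change nothing
# mount -o ro /dev/md0 /mnt/recovered    # example mount point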



Kind regards

-- 
Andreas Trottmann
Werft22 AG
Tel    +41 (0)56 210 91 32
Fax    +41 (0)56 210 91 34
Mobile +41 (0)79 229 88 55
