From mboxrd@z Thu Jan  1 00:00:00 1970
From: Song Liu
Subject: Re: Reassembling Raid5 in degraded state
Date: Mon, 13 Jan 2020 09:31:44 -0800
Message-ID:
References: <54384251-9978-eb99-e7ec-2da35f41566c@inview.de>
 <5E1C86F4.4070506@youngman.org.uk>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Return-path:
In-Reply-To: <5E1C86F4.4070506@youngman.org.uk>
Sender: linux-raid-owner@vger.kernel.org
To: Wols Lists
Cc: Christian Deufel, linux-raid, NeilBrown
List-Id: linux-raid.ids

On Mon, Jan 13, 2020 at 7:04 AM Wols Lists wrote:
>
> On 13/01/20 09:41, Christian Deufel wrote:
> > My plan now would be to run mdadm --assemble --force /dev/md3 with 3
> > disks, to get the Raid going in a degraded state.
>
> Yup, this would almost certainly work. I would recommend overlays and
> running a fsck just to check it's all okay before actually doing it on
> the actual disks. The event counts say to me that you'll probably lose
> little to nothing.
>
> > Does anyone have any experience in doing so and can recommend which 3
> > disks I should use? I would use sdc1, sdd1 and sdf1, since sdd1 and sdf1
> > are displayed as active sync in every examine, and sdc1 is also
> > displayed as active sync.
>
> Those three disks would be perfect.
>
> > Do you think that by doing it this way I have a chance to get my data
> > back, or do you have any other suggestion as to how to get the data back
> > and the Raid running again?
>
> You shouldn't have any trouble, I hope. Take a look at
>
> https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn
>
> and take note of the comment about using the latest mdadm - what version
> is yours (mdadm --version)? That might assemble the array no problem at all.
>
> Song, Neil,
>
> Just a guess as to what went wrong, but the array event count does not
> match the disk counts. IIRC this is one of the events that causes an

Which mismatch do you mean?

> assemble to stop. Is it possible that a crash at the wrong moment could
> interrupt an update and trigger this problem?

It looks like sdc1 failed first. Then sdd1 and sdf1 recorded events marking
sdc1 as failed. Based on the superblocks on sdd1 and sdf1, we already have
two failed drives, so the assemble stopped. Does this answer the question?

Thanks,
Song
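
A minimal sketch of the overlay-plus-fsck check recommended above, assuming
the three members are /dev/sdc1, /dev/sdd1 and /dev/sdf1 and the array is
/dev/md3 (as in the thread); the overlay size, the /tmp paths and the
overlay-* names are illustrative only, and everything must run as root:

  # Sparse files to hold copy-on-write data; 4G is an assumption, the file
  # only needs to hold whatever blocks fsck/mdadm would write.
  truncate -s 4G /tmp/overlay-sdc1 /tmp/overlay-sdd1 /tmp/overlay-sdf1

  for d in sdc1 sdd1 sdf1; do
      loop=$(losetup -f --show /tmp/overlay-$d)
      size=$(blockdev --getsz /dev/$d)
      # device-mapper snapshot: reads come from /dev/$d, writes go to the
      # loop-backed overlay file, so the real partition stays untouched
      dmsetup create overlay-$d --table "0 $size snapshot /dev/$d $loop P 8"
  done

  # Force-assemble from the overlays, not the real partitions
  mdadm --assemble --force /dev/md3 /dev/mapper/overlay-sdc1 \
        /dev/mapper/overlay-sdd1 /dev/mapper/overlay-sdf1

  # Read-only filesystem check
  fsck -n /dev/md3

If the check on the overlaid array looks clean, stop it (mdadm --stop
/dev/md3), tear down the overlays (dmsetup remove, losetup -d), and repeat
the forced assemble on the real partitions.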