From: Song Liu <liu.song.a23@gmail.com>
To: Wols Lists <antlists@youngman.org.uk>
Cc: Christian Deufel <christian.deufel@inview.de>,
	linux-raid <linux-raid@vger.kernel.org>,
	NeilBrown <neilb@suse.com>
Subject: Re: Reassembling Raid5 in degraded state
Date: Mon, 13 Jan 2020 09:31:44 -0800	[thread overview]
Message-ID: <CAPhsuW45052b5OjoWgQhs0r50CeisBS3ya3nGi74Jr0_8HDB5A@mail.gmail.com> (raw)
In-Reply-To: <5E1C86F4.4070506@youngman.org.uk>

On Mon, Jan 13, 2020 at 7:04 AM Wols Lists <antlists@youngman.org.uk> wrote:
>
> On 13/01/20 09:41, Christian Deufel wrote:
> > My plan now would be to run mdadm --assemble --force /dev/md3 with 3
> > disks, to get the Raid going in a degraded state.
>
> Yup, this would almost certainly work. I would recommend overlays and
> running a fsck just to check it's all okay before actually doing it on
> the actual disks. The event counts say to me that you'll probably lose
> little to nothing.
>
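(For anyone finding this thread in the archives: a rough, untested sketch of the
overlay + forced-assembly approach; the overlay size, file locations and names
below are placeholders, and the members are assumed to be /dev/sdc1, /dev/sdd1
and /dev/sdf1 as discussed above.)

  for d in sdc1 sdd1 sdf1; do
      truncate -s 4G /tmp/overlay-$d              # sparse copy-on-write file
      loop=$(losetup -f --show /tmp/overlay-$d)   # attach it to a loop device
      sz=$(blockdev --getsz /dev/$d)              # member size in 512-byte sectors
      dmsetup create overlay-$d --table "0 $sz snapshot /dev/$d $loop P 8"
  done

  # Assemble and check the overlays; the real partitions are only read, never written.
  mdadm --assemble --force /dev/md3 \
      /dev/mapper/overlay-sdc1 /dev/mapper/overlay-sdd1 /dev/mapper/overlay-sdf1
  fsck -n /dev/md3   # read-only check, assuming the filesystem sits directly on md3

If that comes back clean, stop md3, tear down the overlays, and repeat the forced
assembly on the real partitions.
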
> > Does anyone have any experience in doing so and can recommend which 3
> > disks I should use? I would use sdc1, sdd1 and sdf1, since sdd1 and sdf1
> > are displayed as active sync in every examine, and sdc1 is also
> > displayed as active sync.
>
> Those three disks would be perfect.
>
> > Do you think that by doing it this way I have a chance to get my data
> > back, or do you have any other suggestion on how to get the data back and
> > the Raid running again?
>
> You shouldn't have any trouble, I hope. Take a look at
>
> https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn
>
> and take note of the comment about using the latest mdadm - what version
> is yours (mdadm --version)? That might assemble the array with no problem at all.
>
> Song, Neil,
>
> Just a guess as to what went wrong, but the array event count does not
> match the disk counts. Iirc this is one of the events that cause an

Which mismatch do you mean?

> assemble to stop. Is it possible that a crash at the wrong moment could
> interrupt an update and trigger this problem?

It looks like sdc1 failed first. Then sdd1 and sdf1 recorded events marking
sdc1 as failed. Based on the superblocks on sdd1 and sdf1, we already have two
failed drives, so the assembly stopped.
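
(The order is visible in the superblocks: comparing event counts and update
times on each member, roughly as below, shows it. Exact field names differ a
bit between 0.90 and 1.x metadata; the member that dropped out first, here
sdc1, has the oldest update time and the lowest event count.)

  for d in /dev/sdc1 /dev/sdd1 /dev/sdf1; do      # add the remaining member as well
      echo "== $d =="
      mdadm --examine $d | grep -E 'Update Time|Events|State|Role'
  done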

Does this answer the question?

Thanks,
Song

Thread overview: 6+ messages
2020-01-13  9:41 Reassembling Raid5 in degraded state Christian Deufel
2020-01-13 15:04 ` Wols Lists
2020-01-13 17:31   ` Song Liu [this message]
2020-01-13 18:46     ` Wol
2020-01-14 11:28 Christian Deufel
2020-01-14 16:40 ` Phil Turmel
