From: Wols Lists <antlists@youngman.org.uk>
To: Felipe Kich <fkich1977@gmail.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: Help in recovering a RAID5 volume
Date: Thu, 10 Nov 2016 18:58:28 +0000	[thread overview]
Message-ID: <5824C354.3060209@youngman.org.uk> (raw)
In-Reply-To: <CA+GmhESqEQd6J3CSteR08qw907NnC7r1ewecs3NkWUmpei6BVw@mail.gmail.com>

On 10/11/16 17:47, Felipe Kich wrote:
> Hi Anthony,
> 
> Thanks for the reply. Here's some answers to your questions and also
> another question.
> 
> It really seems that 2 disks are bad, but 2 are still good, according
> to SMART. I'll replace them ASAP.
> For now, I don't need to increase the array size. It's more than
> enough for what I need.
> 
You might find the extra price of larger drives is minimal. It's down to
you. And even 2TB drives would give you the space to go raid-6.

> About the drive duplication, I don't have spare discs available now
> for that, I only have one 4TB disk at hand, so I'd like to know if
> it's possible to create device images that I can mount and try to
> rebuild the array, to test if it would work, then I can go and buy new
> disks to replace the defective ones.

Okay, if you've got a 4TB drive ...

I can't remember what the second bad drive was ... iirc the one that was
truly dud was sdc ...

So. What I'd do is create two partitions on the 4TB that are the same
size as (or possibly slightly larger than) your sdX1 partition, and
ddrescue the 1 partition across from the better of the dud drives.
Create two partitions at least as large as your sdX2 partition, and
likewise ddrescue the 2 partition across.
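
As a sketch of that copy step - assuming sdc is the better of the dud
drives and the 4TB turns up as sde (adjust the names to match your
system) - it might look like:

```shell
# Assumed names: /dev/sdc = better dud drive, /dev/sde = new 4TB disk,
# with sde1/sde2 already created at least as large as sdc1/sdc2.
# -d uses direct disc access (bypassing the kernel cache), -r3 retries
# bad areas three times, and the mapfile lets an interrupted copy
# resume where it left off.
ddrescue -d -r3 /dev/sdc1 /dev/sde1 sdc1.map
ddrescue -d -r3 /dev/sdc2 /dev/sde2 sdc2.map
```

Keep the mapfiles somewhere safe - if the dud drive drops off the bus
mid-copy you can power-cycle and re-run the exact same command.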

Do a --force assembly, and then mount the arrays read-only. The
filesystem should be fine - look over it and see. You can run fsck in
no-change mode (fsck -n) so it reports problems without actually
writing anything; it will probably find a few.
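
A sketch of the assembly and read-only check, with md0 and the member
partitions as placeholders for your real array:

```shell
# Placeholder names: md0 is the sdX1 array; sda1/sdb1 are the two good
# members and sde1 is the ddrescue'd copy on the 4TB.
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sde1
# -n makes fsck report problems without fixing (writing) anything.
fsck -n /dev/md0
# Mount read-only and have a look around.
mount -o ro /dev/md0 /mnt/check
```

Repeat for the sdX2 array. Don't let fsck repair anything at this
stage - you want the drives untouched until you're sure the assembly
worked.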

If everything's fine, add in the other two partitions and let it rebuild.
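
The re-add step, again with hypothetical device names:

```shell
# sdd1 stands in for the partition you're adding back; mdadm will
# start rebuilding onto it straight away.
mdadm /dev/md0 --add /dev/sdd1
# Watch the rebuild progress:
cat /proc/mdstat
```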

And then replace the drives as quickly as possible. With this setup
you're critically vulnerable to the 4TB failing. Read up on the
--replace option to replace the drives with minimal risk.
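
--replace copies onto the new drive while the old one is still an
active member, so the array never drops below full redundancy during
the swap. With hypothetical names:

```shell
# Copy sde1's data onto the new drive's partition sdf1; mdadm marks
# sde1 faulty automatically once the copy completes.
mdadm /dev/md0 --replace /dev/sde1 --with /dev/sdf1
```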
> 
> And sure, I'll send you the logs you asked, no problem.
> 
> Regards.
> 
Ta muchly.

Cheers,
Wol

