From: Wol <antlists@youngman.org.uk>
To: bruce.korb+reply@gmail.com
Cc: linux-raid@vger.kernel.org
Subject: Re: Trying to rescue a RAID-1 array
Date: Fri, 1 Apr 2022 22:02:19 +0100
Message-ID: <0955d209-ab2b-dc20-a9fb-ad15c09eb884@youngman.org.uk>
In-Reply-To: <CAKRnqNL0aNWGHFgcz-KVKn3dpVpUa1oN5miu+oiv16vdJx3OHw@mail.gmail.com>

On 01/04/2022 21:23, Bruce Korb wrote:
>> Because if it did your data is probably toast, but if it didn't we might
>> be home and dry.

> If I can figure out how to mount it (read only), then I can see if a
> new FS was installed.
> If it wasn't, then I've got my data.
> 
That looks promising then. It looks like your original array may have 
been v1.0 too ... a very good sign.
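
(Before you mount anything at all, you can peek at whatever filesystem
signature is sitting at the start of sdc1 without touching the disk - a
sketch, read-only commands only:

  blkid /dev/sdc1                        # shows filesystem type and UUID, if any
  xfs_db -r -c "sb 0" -c "p" /dev/sdc1   # dump the XFS superblock, -r = read-only

If blkid still reports your old XFS filesystem, that's the good case.)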

Read up on loopback devices - it's in the wiki somewhere on recovering 
your raid ...

What that does is stick a file between the disk and whatever's running 
on top of it, so Linux diverts all your writes into the file and never 
touches the disk itself.
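
(Roughly, the setup looks like this - a sketch of the overlay-file
approach from the raid wiki. The file and device names are just
examples, and the overlay file must live on a *different* disk with
enough free space to soak up any writes:

  # sparse file on another disk to catch the writes
  truncate -s 4G /mnt/other/overlay-sdc1
  loop=$(losetup -f --show /mnt/other/overlay-sdc1)

  # copy-on-write snapshot of sdc1, backed by that file
  size=$(blockdev --getsz /dev/sdc1)
  dmsetup create sdc1-overlay --table "0 $size snapshot /dev/sdc1 $loop P 8"

  # from now on, use /dev/mapper/sdc1-overlay instead of /dev/sdc1
)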

Let's hope xfs_recover didn't actually write anything or we could be in 
trouble.

The whole point about v1.0 is - hallelujah - the file system starts at 
the start of the partition!
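
(You can double-check that from the superblock - mdadm will tell you the
metadata version and offsets, read-only:

  mdadm --examine /dev/sdc1 | grep -iE 'version|offset'
)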

So once you've got the loopback sorted, FORGET ABOUT THE RAID. Put the 
loopback over sdc1, and mount it. If it needs xfs_recover then, because 
you've got the loopback, you can let it run, and hopefully it won't do 
very much.
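
(A sketch, using the overlay device name from the example above:

  mkdir -p /mnt/rescue
  mount -o ro /dev/mapper/sdc1-overlay /mnt/rescue   # try read-only first

  # if the mount refuses and wants a repair, see what it *would* do first
  xfs_repair -n /dev/mapper/sdc1-overlay             # -n = no-modify, report only

  # if that looks sane, let it run for real - writes land in the overlay only
  xfs_repair /dev/mapper/sdc1-overlay
  mount -o ro /dev/mapper/sdc1-overlay /mnt/rescue
)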

IFF the wind is blowing in the right direction (and there's at least a 
decent chance), you've got your data back!

If it all goes pear shaped, it may well still be recoverable, but I'll 
probably be out of ideas. But the loopback will have saved your data so 
you'll be able to try again.
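
(That's the nice thing about the overlay - throwing a failed attempt
away and starting fresh is just, roughly, assuming the names and the
$loop variable from the earlier sketch:

  umount /mnt/rescue              # if it was mounted
  dmsetup remove sdc1-overlay
  losetup -d "$loop"
  rm /mnt/other/overlay-sdc1      # sdc1 itself was never written to
)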

Oh - and if it looks sort-of-okay but xfs_recover has trashed sdc1, at 
least try the same thing with sde1. There's a decent chance xfs_recover 
only trashed one drive; the other, being an exact copy, could have 
survived and be okay.

Cheers,
Wol
