From: Valentijn <v@lentijn.sess.ink>
To: Matt Callaghan <matt_callaghan@sympatico.ca>,
	Wols Lists <antlists@youngman.org.uk>,
	linux-raid@vger.kernel.org
Subject: Re: mdadm RAID6 "active" with spares and failed disks; need help
Date: Thu, 22 Jan 2015 10:47:38 +0100
Message-ID: <54C0C73A.4080409@lentijn.sess.ink>
In-Reply-To: <BLU436-SMTP1430CA88C11B0E46F15BB3F81480@phx.gbl>

Hi Matt,

As long as your data is still somewhere on these disks, all is not - 
necessarily - lost. You could still try dumpe2fs (and later e2fsck) 
with different, i.e. backup, superblocks. And even if you cannot find 
your file system by any means, you could try the "foremost" utility to 
scrape images, documents and the like off these disks.

So I still don't think all is lost. However, I do think it will cost 
more time. You may want to dedicate a spare machine to this task, 
because it will tie up resources for quite a while.

I see that your mdadm says this, somewhere along your odyssey:
mdadm: /dev/sdk1 appears to contain an ext2fs file system
        size=1695282944K  mtime=Tue Apr 12 11:10:24 1977
... which could mean (I'm not sure, I'm just guessing) that your 
filesystem superblock has been overwritten by the new array's internal 
bitmap.

Your new array in fact said:
Internal Bitmap : 8 sectors from superblock
     Update Time : Wed Jan  7 09:46:44 2015
   Bad Block Log : 512 entries available at offset 72 sectors
        Checksum : c7603819 - correct
          Events : 0
... as far as I understand, this means that starting 8 sectors past the 
md superblock, some number of sectors - whatever their size - were 
occupied by the Internal Bitmap, which, in turn, would mean your 
filesystem superblock has been overwritten.
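
If you do attempt another --create - preferably on snapshots, see the 
end of this mail - something like the following at least avoids writing 
a new bitmap. It is only a sketch: level, device count, chunk size and 
device names are placeholders that must come from your old --examine 
output.

   mdadm --create /dev/md0 --level=6 --raid-devices=N \
         --chunk=512 --bitmap=none --assume-clean \
         /dev/sdX1 /dev/sdY1 ...

The --assume-clean prevents an immediate resync, which could otherwise 
overwrite blocks that are really data if the drive order is wrong.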

The good news is: there is more than one superblock.

BTW, didn't you have the right raid drive ordering from the original 
disks? You did have "mdadm --examine" output from after the array broke 
down, didn't you? If so, your "create" command is, by definition, 
correct when a new "--examine" shows the same output - and if that is 
the case, the filesystem layout should be correct too.
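
For example (the device names are just placeholders for your member 
disks), you could compare the relevant fields of the old and the new 
metadata like this:

   mdadm --examine /dev/sd[klmn]1 | \
       egrep 'dev|Device Role|Data Offset|Chunk Size'

Drive order, device roles, data offset and chunk size all have to match 
the pre-crash output, otherwise the data won't line up.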

So please try whether "dumpe2fs -h -o superblock=32768" does anything - 
or the same with 98304, 163840 or 229376, the usual backup superblock 
locations for a 4K block size. Dumpe2fs just dumps the fs header, 
nothing more, so it is safe to run.
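
Assuming the re-created array is /dev/md0 and the filesystem used 4K 
blocks (both are guesses - adjust to your setup), you can try all the 
usual backup locations in one go:

   for sb in 32768 98304 163840 229376; do
       dumpe2fs -h -o superblock=$sb -o blocksize=4096 /dev/md0 \
           && echo "valid superblock at $sb" && break
   done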

If dumpe2fs doesn't do anything (but complain that it "Couldn't find 
valid filesystem superblock"), then you could still try whether 
"foremost" finds anything. It's not that hard to use: you simply 
dedicate some storage to it and tell it to scrape your array (see the 
example command below this list). It *will* find things, and it's up to 
you to see if
1) documents, images and the like are all 64K or 128K or smaller, 
and/or contain large blocks of rubbish. That probably means you have 
the wrong array config: in that case foremost only finds single 
"chunks" of correct data - any file longer than one chunk is either not 
found at all or comes out stuffed with random data from other files;
2) documents, images etcetera are OK. This means your array config is 
OK. You can then use foremost to scrape off everything (it may take 
weeks, but it could work), or simply try to find where the filesystem 
superblock hangs out (if the array is in good order, the fs superblock 
must be somewhere, right?).
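
A foremost run could look something like this (the paths are examples; 
the output directory must of course live on a different disk than the 
array):

   foremost -t jpg,pdf,doc -i /dev/md0 -o /mnt/rescue/foremost-out

Starting with a handful of file types gives you a quick verdict on the 
array config before you commit to scraping off everything.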

Please, please try to do as little as possible on the real disks. Use 
dmsetup to create snapshots. Copy the disks. Use hardware that is in 
good state - you don't want to lose the data you just found back 
because the memory is flaky, do you? ;-)
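
A minimal snapshot sketch, assuming /dev/sdk1 is one member disk and 
/overlays lives on a separate, healthy disk (repeat for every member; 
the overlay files must be big enough to absorb all writes):

   truncate -s 10G /overlays/sdk1.ovl
   losetup /dev/loop0 /overlays/sdk1.ovl
   dmsetup create sdk1-snap --table "0 $(blockdev --getsz /dev/sdk1) \
       snapshot /dev/sdk1 /dev/loop0 N 8"

Then do all your experiments on /dev/mapper/sdk1-snap and friends: 
every write lands in the overlay file and the real disk stays 
untouched.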

I hope it's going to work.

Best regards,

Valentijn

On 01/21/15 01:34, Matt Callaghan wrote:
> I tried again with the --bitmap=none, clearly that was a miss on my part.
> However, still even with that correction, and attempting across varying
> combinations of "drive ordering", the filesystem appears corrupt.
