From: Steve Costaras <stevecs@chaven.com>
To: Eli Morris <ermorris@ucsc.edu>
Cc: xfs@oss.sgi.com
Subject: Re: xfs_repair of critical volume
Date: Sun, 31 Oct 2010 16:10:06 -0500	[thread overview]
Message-ID: <4CCDDB2E.1080508@chaven.com> (raw)
In-Reply-To: <C17C2CB6-A695-41B2-B12A-1CBF6DAD556F@ucsc.edu>



On 2010-10-31 14:56, Eli Morris wrote:
>
> Hi guys,
>
> Thanks for all the responses. On the XFS volume that I'm trying to
> recover here, I've already re-initialized the RAID, so I've kissed that
> data goodbye. I am using LVM2. Each of the 5 RAID volumes is a physical
> volume. Then a logical volume is created out of those, and then the
> filesystem lies on top of that. So now we have, in order, 2 intact PVs,
> 1 OK, but blank PV, 2 intact PVs. On the RAID where we lost the drives,
> replacements are in place and I created a now healthy volume. Through
> LVM, I was then able to create a new PV from the re-constituted RAID
> volume and put that into our logical volume in place of the destroyed
> PV. So now, I have a logical volume that I can activate and I can see
> the filesystem. It still reports as having all the old files as before,
> although it doesn't. So the hardware is now OK. It's just what to do
> with our damaged filesystem that has a huge chunk missing out of it. I
> put the xfs_repair trial output on an http server, as suggested (good
> suggestion) and it is here:

What was your RAID stripe size (hardware)?  Did you have any
partitioning scheme on the hardware RAID volumes, or did you just use
the native devices?  When you created the volume group & LV, did you do
any striping, or just a concatenation of the LUNs?  If striping, what
were your lvcreate parameters (stripe size et al.)?
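
If it helps, something along these lines should pull most of that layout
information (the VG/LV names and mount point below are placeholders,
substitute your own):

    # physical volumes and the volume group they belong to
    pvs -o pv_name,vg_name,pv_size

    # per-segment layout of the LV: striped vs. linear, plus backing devices
    lvs --segments -o +devices vg_data

    # long form of the same, including stripe size for striped segments
    lvdisplay -m /dev/vg_data/lv_data

    # XFS geometry (AG count/size, sunit/swidth); run against the mount point
    xfs_info /mnt/data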

You mentioned that you lost only 1 of the 5 arrays.  I assume the others
did not have any failures?  You wiped the array that failed, so you have
4/5 of the data and 1/5 is zeroed, which removes the possibility of
vendor recovery/assistance.

Assuming that everything is equal, there should be an even distribution
of files across the AGs, and the AGs should have been distributed across
the 5 volumes.  Do you have the xfs_info data?  I think you may be a bit
out of luck here with xfs_repair.  I am not sure how XFS handles
files/fragmentation between AGs, or how the AGs relate to the underlying
'physical volume'.  I.e. the problem would be if a particular AG was on
a different volume than the blocks of the actual file; another
complexity would be fragmented files where the data was not contiguous.
What is the average size of the files that you had on the volume?
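
If it's useful, xfs_bmap can show roughly how a surviving file maps onto
AGs, and with agcount/agsize from xfs_info you can work out which leg of
a concatenated LV an extent fell on (the paths below are made up):

    # one line per extent, including the AG each extent lives in
    xfs_bmap -v /mnt/data/path/to/file

    # agcount/agsize from the superblock, to translate AG numbers into an
    # offset within the LV
    xfs_info /mnt/data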

In similar circumstances, if files were small enough to be on the
remaining disks and contiguous/non-fragmented, I've had some luck with
the forensic tools Foremost & Scalpel.
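
For what it's worth, the invocations are roughly like this (device and
paths are placeholders; the output directory must be empty and live on a
separate, healthy filesystem):

    # carve known file types out of the raw LV with Foremost
    foremost -t jpg,pdf,doc -i /dev/vg_data/lv_data -o /recovery/foremost

    # same idea with Scalpel; file types are enabled in scalpel.conf
    scalpel -c /etc/scalpel/scalpel.conf -o /recovery/scalpel /dev/vg_data/lv_data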

Steve



_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 35+ messages
2010-10-31 19:56 xfs_repair of critical volume Eli Morris
2010-10-31 20:40 ` Emmanuel Florac
2010-11-01  3:40   ` Eli Morris
2010-11-01 10:07     ` Emmanuel Florac
2010-10-31 21:10 ` Steve Costaras [this message]
2010-11-01 15:03 ` Stan Hoeppner
  -- strict thread matches above, loose matches on Subject: below --
2010-10-31  7:54 Eli Morris
2010-10-31  9:54 ` Stan Hoeppner
2010-11-12  8:48   ` Eli Morris
2010-11-12 13:22     ` Michael Monnerie
2010-11-12 22:14       ` Stan Hoeppner
2010-11-13  8:19         ` Emmanuel Florac
2010-11-13  9:28           ` Stan Hoeppner
2010-11-13 15:35             ` Michael Monnerie
2010-11-14  3:31               ` Stan Hoeppner
2010-12-04 10:30         ` Martin Steigerwald
2010-12-05  4:49           ` Stan Hoeppner
2010-12-05  9:44             ` Roger Willcocks
2010-11-12 23:01       ` Eli Morris
2010-11-13 15:25         ` Michael Monnerie
2010-11-14 11:05         ` Dave Chinner
2010-11-15  4:09           ` Eli Morris
2010-11-16  0:04             ` Dave Chinner
2010-11-17  7:29               ` Eli Morris
2010-11-17  7:47                 ` Dave Chinner
2010-11-30  7:22                   ` Eli Morris
2010-12-02 11:33                     ` Michael Monnerie
2010-12-03  0:58                       ` Stan Hoeppner
2010-12-04  0:43                       ` Eli Morris
2010-10-31 14:10 ` Emmanuel Florac
2010-10-31 14:41   ` Steve Costaras
2010-10-31 16:52 ` Roger Willcocks
2010-11-01 22:21 ` Eric Sandeen
2010-11-01 23:32   ` Eli Morris
2010-11-02  0:14     ` Eric Sandeen
