* RAID 6 corrupted
From: Łukasz Wróblewski @ 2017-05-17  8:27 UTC
  To: linux-btrfs

Hi,

About two years ago I created a BTRFS RAID 6 array consisting of 5 disks.
One of the disks crashed.
I started to replace it with another, but I did something wrong.
Or perhaps RAID56 support in BTRFS was simply too experimental at the time.
Either way, I ended up unable to mount the filesystem again.

I decided to put the disks aside and wait for better tools and a newer
version of BTRFS, in the hope that I could retrieve the data.

Currently the situation looks like this:
 ~ # uname -a
Linux localhost 4.10.12-coreos #1 SMP Tue Apr 25 22:08:35 UTC 2017
x86_64 AMD FX(tm)-6100 Six-Core Processor AuthenticAMD GNU/Linux
 ~ # btrfs --version
btrfs-progs v4.4.1
 ~ # btrfs fi show
warning devid 1 not found already
warning devid 4 not found already
bytenr mismatch, want=2373780258816, have=0
warning, device 7 is missing
warning, device 1 is missing
bytenr mismatch, want=2093993689088, have=0
ERROR: cannot read chunk root
Label: none  uuid: 50127310-d15c-49ca-8cdd-8798ea0fda2e
Total devices 5 FS bytes used 5.44TiB
devid    2 size 1.82TiB used 1.82TiB path /dev/sde
devid    3 size 1.82TiB used 1.82TiB path /dev/sdc
devid    5 size 1.82TiB used 1.82TiB path /dev/sdb
*** Some devices missing

Label: 'DDR'  uuid: 4a9f6a0f-e41f-48a5-a566-507349d47b30
Total devices 7 FS bytes used 477.15GiB
devid    4 size 1.82TiB used 7.00GiB path /dev/sdd
*** Some devices missing

 ~ # mount /dev/sdb /mnt/
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
 ~ # dmesg | tail
[ 2612.350751] BTRFS info (device sde): disk space caching is enabled
[ 2612.378507] BTRFS error (device sde): failed to read chunk tree: -5
[ 2612.393729] BTRFS error (device sde): open_ctree failed
 ~ # mount -o usebackuproot /dev/sdb /mnt/
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
 ~ # dmesg | tail
[ 2675.427445] BTRFS info (device sde): trying to use backup root at mount time
[ 2675.434528] BTRFS info (device sde): disk space caching is enabled
[ 2675.442031] BTRFS error (device sde): failed to read chunk tree: -5
[ 2675.457321] BTRFS error (device sde): open_ctree failed
 ~ #


"fi show" shows two systems.
It should really be one, but devid 4 should belong to
uuid: 50127310-d15c-49ca-8cdd-8798ea0fda2e


I tried
./btrfs restore /dev/sdb /mnt/restore

...
Trying another mirror
Trying another mirror
Trying another mirror
Trying another mirror
Trying another mirror
Trying another mirror
Trying another mirror
Trying another mirror
Trying another mirror
Trying another mirror
Trying another mirror
Trying another mirror
Trying another mirror
bytenr mismatch, want=2373682249728, have=0
bytenr mismatch, want=2373682233344, have=0
bytenr mismatch, want=2373682249728, have=0
Error searching -5
Error searching /mnt/....

But little data has been recovered.



Can I retrieve my data?
How can I do this?

* Re: RAID 6 corrupted
From: Adam Borowski @ 2017-05-17  9:45 UTC
  To: Łukasz Wróblewski; +Cc: linux-btrfs

On Wed, May 17, 2017 at 10:27:53AM +0200, Łukasz Wróblewski wrote:
> About two years ago I created a BTRFS RAID 6 array consisting of 5 disks.
> One of the disks crashed.
> I started to replace it with another, but I did something wrong.
> Or perhaps RAID56 support in BTRFS was simply too experimental at the time.

It still is experimental.  And not experimental as in "might have some
bugs", but as in "eats babies for breakfast".

> Either way, I ended up unable to mount the filesystem again.
> 
> I decided to put the disks aside and wait for better tools and a newer
> version of BTRFS, in the hope that I could retrieve the data.
> 
> Currently the situation looks like this:
>  ~ # uname -a
> Linux localhost 4.10.12-coreos #1 SMP Tue Apr 25 22:08:35 UTC 2017
> x86_64 AMD FX(tm)-6100 Six-Core Processor AuthenticAMD GNU/Linux

There were no RAID5/6 improvements for a long while, until a batch of fixes
got merged into 4.12-rc1 (yes, the first rc that Linus pushed just this
Sunday).

I'm not knowledgeable about this code, though -- I merely did some
superficial tests.  These tests show drastic improvement over 4.11 but I
don't even know how in-line repair is affected (I looked mostly at scrub).

>  ~ # btrfs --version
> btrfs-progs v4.4.1
>  ~ # btrfs fi show
> warning devid 1 not found already
> warning devid 4 not found already
> bytenr mismatch, want=2373780258816, have=0
> warning, device 7 is missing
> warning, device 1 is missing

So you have _two_ missing drives.  RAID6 should still recover from that, but
at least in 4.11 and earlier there were some serious problems.  No idea if
4.12-rc1 is better here.

> bytenr mismatch, want=2093993689088, have=0
> ERROR: cannot read chunk root
> Label: none  uuid: 50127310-d15c-49ca-8cdd-8798ea0fda2e
> Total devices 5 FS bytes used 5.44TiB
> devid    2 size 1.82TiB used 1.82TiB path /dev/sde
> devid    3 size 1.82TiB used 1.82TiB path /dev/sdc
> devid    5 size 1.82TiB used 1.82TiB path /dev/sdb
> *** Some devices missing
> 
> Label: 'DDR'  uuid: 4a9f6a0f-e41f-48a5-a566-507349d47b30
> Total devices 7 FS bytes used 477.15GiB
> devid    4 size 1.82TiB used 7.00GiB path /dev/sdd
> *** Some devices missing
> 
>  ~ # mount /dev/sdb /mnt/
> mount: wrong fs type, bad option, bad superblock on /dev/sdb,
>        missing codepage or helper program, or other error
> 
>        In some cases useful info is found in syslog - try
>        dmesg | tail or so.
>  ~ # dmesg | tail
> [ 2612.350751] BTRFS info (device sde): disk space caching is enabled
> [ 2612.378507] BTRFS error (device sde): failed to read chunk tree: -5
> [ 2612.393729] BTRFS error (device sde): open_ctree failed
>  ~ # mount -o usebackuproot /dev/sdb /mnt/
> mount: wrong fs type, bad option, bad superblock on /dev/sdb,
>        missing codepage or helper program, or other error
> 
>        In some cases useful info is found in syslog - try
>        dmesg | tail or so.
>  ~ # dmesg | tail
> [ 2675.427445] BTRFS info (device sde): trying to use backup root at mount time
> [ 2675.434528] BTRFS info (device sde): disk space caching is enabled
> [ 2675.442031] BTRFS error (device sde): failed to read chunk tree: -5
> [ 2675.457321] BTRFS error (device sde): open_ctree failed
>  ~ #
> 
> 
> "fi show" shows two systems.
> It should really be one, but devid 4 should belong to
> uuid: 50127310-d15c-49ca-8cdd-8798ea0fda2e

This is one of the disks that are missing, so that's ok.  (Well, why it's
not a part of the filesystem anymore is not ok, but I'm speaking about
behaviour after the failure.)

> I tried
> ./btrfs restore /dev/sdb /mnt/restore
> 
> ...
> Trying another mirror
> Trying another mirror
> Trying another mirror
> Trying another mirror
> Trying another mirror
> Trying another mirror
> Trying another mirror
> Trying another mirror
> Trying another mirror
> Trying another mirror
> Trying another mirror
> Trying another mirror
> Trying another mirror
> bytenr mismatch, want=2373682249728, have=0
> bytenr mismatch, want=2373682233344, have=0
> bytenr mismatch, want=2373682249728, have=0
> Error searching -5
> Error searching /mnt/....
> 
> But little data has been recovered.

Restore is userspace rather than kernel code.  I don't know what fixes it
has had recently, but the 4.4 you're using lacks almost two years of
development, so I'd try a more recent version before attempting more
involved steps.
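
Something like this would get you current progs without touching the
installed ones -- an untested sketch, and the build deps (autoconf plus
libuuid, libblkid, zlib and lzo headers) are from memory:

    git clone git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git
    cd btrfs-progs
    ./autogen.sh && ./configure --disable-documentation && make
    # then rerun restore with the freshly built binary:
    ./btrfs restore -v /dev/sdb /mnt/restore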

> Can I retrieve my data?

If you do, I'd recommend switching to at least -mraid1 (RAID1 metadata,
RAID5/6 data) -- a bug that hits a data block loses you a single file,
while one that hits metadata tends to result in total filesystem loss, and
RAID1 is already quite reliable.
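
On a fresh filesystem that'd be something like this (a sketch -- adjust
the device list to your drives):

    mkfs.btrfs -m raid1 -d raid6 /dev/sd[bcdef]

and an existing, mountable filesystem can be converted in place:

    btrfs balance start -mconvert=raid1 /mnt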

> How can I do this?

That's the trick question. :/


Meow!
-- 
Don't be racist.  White, amber or black, all beers should be judged based
solely on their merits.  Heck, even if occasionally a cider applies for a
beer's job, why not?
On the other hand, corpo lager is not a race.

* Re: RAID 6 corrupted
From: Duncan @ 2017-05-17  9:52 UTC
  To: linux-btrfs

Łukasz Wróblewski posted on Wed, 17 May 2017 10:27:53 +0200 as excerpted:

> About two years ago I created a BTRFS RAID 6 array consisting of 5 disks.
> One of the disks crashed.
> I started to replace it with another, but I did something wrong.
> Or perhaps RAID56 support in BTRFS was simply too experimental at the time.
> Either way, I ended up unable to mount the filesystem again.
> 
> I decided to put the disks aside and wait for better tools and a newer
> version of BTRFS, in the hope that I could retrieve the data.
> 
> Currently the situation looks like this:
>  ~ # uname -a
> Linux localhost 4.10.12-coreos

For anything raid56 related, you'll need at /least/ 4.11, as it has some 
major raid56 stability patch updates.

There are still some issues with raid56, the biggest being that unlike
most of btrfs, the parity isn't COW, meaning the traditional parity-raid
write hole remains an issue -- a BIG issue for btrfs due to its design.
That's not really fixable without a full raid56 rewrite, so it'll likely
remain the case with the existing raid56 mode forever; a new
implementation that COWs parity as well may eventually happen.

But the patches that went into 4.11 fix the known existing issues other
than the usual write hole, and they're effectively mandatory for any
attempt at repair or recovery of an existing raid56 once there are issues
of any sort.  They're not a guarantee of a fix, but before them, any
attempt at a fix had a rather good chance of making the problem worse
instead of better, so you really do want 4.11 if you're doing btrfs
raid56 at all.

Beyond that, I'll let others more familiar with btrfs raid56 help, but
I'd still not recommend it, even with those fixes.  IMO, the parity-raid
write hole simply doesn't square with btrfs' general assumptions, and
can't and won't until there's a COW-parity-based btrfs parity-raid
solution, so I wouldn't use it myself, nor can I in good faith recommend
it to others.

The best I can recommend is getting off of it, possibly by blowing away
the existing btrfs raid56 mode filesystem, recreating it as whatever you
prefer, and restoring from backups.  Of course, if you didn't have backups
of data on a known-experimental btrfs raid56 mode filesystem, then -- as
is always the case even on mature filesystems on known-good hardware, but
even more so given the experimental nature of btrfs raid56 -- you were
effectively defining the data as not worth the time and trouble of
backups.  In that case there's nothing really lost, since you already
saved what you self-evidently considered more important: the time and
resources you would otherwise have put into making that backup and
keeping it current.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


* Re: RAID 6 corrupted
From: Duncan @ 2017-05-17 10:05 UTC
  To: linux-btrfs

Duncan posted on Wed, 17 May 2017 09:52:28 +0000 as excerpted:

> For anything raid56 related, you'll need at /least/ 4.11, as it has some
> major raid56 stability patch updates.

Oops.  Looks like it's even fresher than that, 4.12-rc1, according to 
Adam's reply.  I thought I read that it hit 4.11.  My bad!

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


* Re: RAID 6 corrupted
From: Adam Borowski @ 2017-05-17 10:15 UTC
  To: Duncan; +Cc: linux-btrfs

On Wed, May 17, 2017 at 09:52:28AM +0000, Duncan wrote:
> Łukasz Wróblewski posted on Wed, 17 May 2017 10:27:53 +0200 as excerpted:
> > About two years ago I created a BTRFS RAID 6 array consisting of 5 disks.
> > One of the disks crashed.
> > I started to replace it with another, but I did something wrong.
> > Or perhaps RAID56 support in BTRFS was simply too experimental at the time.

> > Linux localhost 4.10.12-coreos
> 
> For anything raid56 related, you'll need at /least/ 4.11, as it has some 
> major raid56 stability patch updates.

Uhm, am I missing something?
    git shortlog v4.10..v4.11 -- fs/btrfs/
shows nothing related to RAID5/6 (save for a single commit that removes an
unused variable).

> There are still some issues with raid56, the biggest being that unlike
> most of btrfs, the parity isn't COW, meaning the traditional parity-raid
> write hole remains an issue -- a BIG issue for btrfs due to its design.
> That's not really fixable without a full raid56 rewrite, so it'll likely
> remain the case with the existing raid56 mode forever; a new
> implementation that COWs parity as well may eventually happen.

Ideas like plug extents (real or virtual) would fix this without a format
change.  In fact, I don't see how a format change would help: there's no
way around having every stripe belong to exactly zero or one transaction,
as otherwise it's RMW rather than COW.  That'd require some auto reclaim,
but reclaim in general sounds like a thing we want, given the recent
research knorrie did on SSDs.

Well, there's RAIDz, but that's not so much RAID5 as a hybrid between it
and RAID1, and it doesn't play well with btrfs' concept of block groups.

> But the patches that went into 4.11 fix the known existing issues other
> than the usual write hole, and they're effectively mandatory for any
> attempt at repair or recovery of an existing raid56 once there are issues
> of any sort.  They're not a guarantee of a fix, but before them, any
> attempt at a fix had a rather good chance of making the problem worse
> instead of better, so you really do want 4.11 if you're doing btrfs
> raid56 at all.

If you mean 4.12-rc1, then yeah, these patches are a big step forward.


Meow!
-- 
Don't be racist.  White, amber or black, all beers should be judged based
solely on their merits.  Heck, even if occasionally a cider applies for a
beer's job, why not?
On the other hand, corpo lager is not a race.

* Re: RAID 6 corrupted
From: Łukasz Wróblewski @ 2017-05-18  2:09 UTC
  To: Adam Borowski; +Cc: Duncan, linux-btrfs

Thanks guys.

I will try when stable 4.12 comes out.
Unfortunately I do not have a backup.
Fortunately, the data is not that critical: some private photos and
videos from my youth.
Still, I would be very happy to get it back.

Theoretically, if the devid 4 drive were returned to
50127310-d15c-49ca-8cdd-8798ea0fda2e, then with only a single disk
missing, recovery should be easy.

Can I change the UUID of the filesystem this disk belongs to -- without
reformatting it and losing the data, of course?

* Re: RAID 6 corrupted
From: Adam Borowski @ 2017-05-18  3:29 UTC
  To: Łukasz Wróblewski; +Cc: linux-btrfs

On Thu, May 18, 2017 at 04:09:38AM +0200, Łukasz Wróblewski wrote:
> Thanks guys.
> 
> I will try when stable 4.12 comes out.

It won't come out for ~2.5 months.  I'd recommend building the -rc,
recovering, then going back to a stable kernel.
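
Roughly like this -- a sketch that assumes a typical self-built-kernel
setup (with your current config in /boot) rather than CoreOS's managed
images:

    git clone --depth 1 -b v4.12-rc1 \
        git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    cd linux
    cp /boot/config-$(uname -r) .config && make olddefconfig
    make -j$(nproc) && make modules_install install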

> Unfortunately I do not have a backup.
> Fortunately, the data is not that critical: some private photos and
> videos from my youth.
> Still, I would be very happy to get it back.

Usually, the first step should be duplicating the entire filesystem, so
attempts at recovery can't break things further.  As it's a home setup,
I doubt you have 5 spare disks at hand or can readily obtain them, so it
depends on how much you care about that data.  Without copies, I'd
suggest doing safe and safe-ish things first.
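
If you can scrape together the space, imaging each surviving member with
GNU ddrescue first would look roughly like this (a sketch -- needs about
5.5TiB of scratch space; the .map file lets an interrupted copy resume):

    for d in sdb sdc sde; do
        ddrescue -d /dev/$d /big/scratch/$d.img /big/scratch/$d.map
    done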

> Theoretically, if the devid 4 drive were returned to
> 50127310-d15c-49ca-8cdd-8798ea0fda2e, then with only a single disk
> missing, recovery should be easy.

The drive's fsid being wrong makes it very likely not to be helpful for
the recovery at all, so forcing the fsid back sounds like one of the less
safe things to do.  In theory, 3/5 healthy disks should be enough to
recover raid6 (assuming no write hole nastiness), so it's only a matter
of whether the implementation is up to scratch.  It'd be interesting to
see how 4.12 fares.
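
Once you're on 4.12-rc1, a read-only degraded mount would be the obvious
low-risk thing to try first (a sketch; ro means nothing gets written even
if it succeeds):

    mount -o ro,degraded,usebackuproot /dev/sdb /mnt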


As the Master Species say, meow!
-- 
Don't be racist.  White, amber or black, all beers should be judged based
solely on their merits.  Heck, even if occasionally a cider applies for a
beer's job, why not?
On the other hand, corpo lager is not a race.

* Re: RAID 6 corrupted
From: Roman Mamedov @ 2017-05-18  5:17 UTC
  To: Łukasz Wróblewski; +Cc: Adam Borowski, Duncan, linux-btrfs

On Thu, 18 May 2017 04:09:38 +0200
Łukasz Wróblewski <lw@nri.pl> wrote:

> I will try when stable 4.12 comes out.
> Unfortunately I do not have a backup.
> Fortunately, the data is not that critical: some private photos and
> videos from my youth.
> Still, I would be very happy to get it back.

Try saving your data with "btrfs restore" first (i.e. you can do it right now,
as it doesn't depend on kernel versions), after you have your data recovered
and reliably backed up, then you can proceed with experiments on new kernel,
patches and whatnot.
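
With current progs, a dry run first shows what's reachable before you
commit any disk space -- a sketch (-D is dry-run, -i ignores errors, and
-s/-x/-m also grab snapshots, xattrs and ownership/timestamps):

    btrfs restore -D -v -i /dev/sdb /tmp/unused
    btrfs restore -v -i -s -x -m /dev/sdb /mnt/restore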

-- 
With respect,
Roman

* Re: RAID 6 corrupted
From: Duncan @ 2017-05-18  6:08 UTC
  To: linux-btrfs

Adam Borowski posted on Thu, 18 May 2017 05:29:57 +0200 as excerpted:

> On Thu, May 18, 2017 at 04:09:38AM +0200, Łukasz Wróblewski wrote:
>> Thanks guys.
>> 
>> I will try when stable 4.12 comes out.
> 
> It won't come out for ~2.5 months.  I'd recommend building the -rc,
> recovering, then going back to a stable kernel.

??  4.12-rc1 is already out.  rc-s normally come out weekly and there's 
almost always 7-8 of them, depending on how Linus feels about the 
stability of things in the last few rc-s and what his coming schedule 
looks like.

So we should only be looking at another ~7 weeks until 4.12.0 release, 
now.  Where'd ~2.5 months come from?  Were you including the two weeks of 
commit window pre-rc1 that's already passed?  That'd make it about 9-10 
weeks, so about 2.5 months, but about half a month of that's already gone.

Or does Linus have a couple weeks of no-release holiday coming up?  IIRC 
he usually does releases even on holiday, tho, but he may delay opening 
the commit window or delay an rc a couple days.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


* Re: RAID 6 corrupted
From: Duncan @ 2017-05-18  6:12 UTC
  To: linux-btrfs

Roman Mamedov posted on Thu, 18 May 2017 10:17:19 +0500 as excerpted:

> On Thu, 18 May 2017 04:09:38 +0200 Łukasz Wróblewski <lw@nri.pl> wrote:
> 
>> I will try when stable 4.12 comes out.
>> Unfortunately I do not have a backup.
>> Fortunately, the data is not that critical: some private photos and
>> videos from my youth.
>> Still, I would be very happy to get it back.
> 
> Try saving your data with "btrfs restore" first 

First post, he tried that.  No luck.  Tho that was with 4.4 userspace.  
It might be worth trying with the 4.11-rc or soon to be released 4.11 
userspace, tho...

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


* Re: RAID 6 corrupted
From: Adam Borowski @ 2017-05-18  8:34 UTC
  To: Duncan; +Cc: linux-btrfs

On Thu, May 18, 2017 at 06:08:56AM +0000, Duncan wrote:
> Adam Borowski posted on Thu, 18 May 2017 05:29:57 +0200 as excerpted:
> > On Thu, May 18, 2017 at 04:09:38AM +0200, Łukasz Wróblewski wrote:
> >> I will try when stable 4.12 comes out.
> > 
> > It won't come out for ~2.5 months.
> 
> ??  4.12-rc1 is already out.  rc-s normally come out weekly and there's 
> almost always 7-8 of them, depending on how Linus feels about the 
> stability of things in the last few rc-s and what his coming schedule 
> looks like.
> 
> So we should only be looking at another ~7 weeks until 4.12.0 release, 
> now.  Where'd ~2.5 months come from?

D'oh.

-- 
Don't be racist.  White, amber or black, all beers should be judged based
solely on their merits.  Heck, even if occasionally a cider applies for a
beer's job, why not?
On the other hand, corpo lager is not a race.

* Re: RAID 6 corrupted
From: Pasi Kärkkäinen @ 2017-05-19  8:55 UTC
  To: Duncan; +Cc: linux-btrfs

On Thu, May 18, 2017 at 06:12:22AM +0000, Duncan wrote:
> Roman Mamedov posted on Thu, 18 May 2017 10:17:19 +0500 as excerpted:
> 
> > On Thu, 18 May 2017 04:09:38 +0200 Łukasz Wróblewski <lw@nri.pl> wrote:
> > 
> >> I will try when stable 4.12 comes out.
> >> Unfortunately I do not have a backup.
> >> Fortunately, the data is not that critical: some private photos and
> >> videos from my youth.
> >> Still, I would be very happy to get it back.
> > 
> > Try saving your data with "btrfs restore" first 
> 
> First post, he tried that.  No luck.  Tho that was with 4.4 userspace.  
> It might be worth trying with the 4.11-rc or soon to be released 4.11 
> userspace, tho...
> 

Try with 4.12-rc, I assume :)


-- Pasi



* Re: RAID 6 corrupted
From: Roman Mamedov @ 2017-05-19  9:09 UTC
  To: Pasi Kärkkäinen; +Cc: Duncan, linux-btrfs

On Fri, 19 May 2017 11:55:27 +0300
Pasi Kärkkäinen <pasik@iki.fi> wrote:

> > > Try saving your data with "btrfs restore" first 
> > 
> > First post, he tried that.  No luck.  Tho that was with 4.4 userspace.  
> > It might be worth trying with the 4.11-rc or soon to be released 4.11 
> > userspace, tho...
> > 
> 
> Try with 4.12-rc, I assume :)

No, actually I missed that this was already tried, and a newer kernel
will not help with "btrfs restore" -- AFAIU it works entirely in
userspace, not in the kernel.

Newer btrfs-progs could be something to try though, as the version used seems
pretty old -- btrfs-progs v4.4.1, while the current one is v4.11.
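
Newer progs also make it easier to point restore at an alternative tree
root if the default one is unreadable -- a sketch, with <bytenr> standing
in for whatever btrfs-find-root digs up (purely illustrative; no idea
whether it helps when the chunk root itself is gone):

    btrfs-find-root /dev/sdb
    btrfs restore -t <bytenr> -v -i /dev/sdb /mnt/restore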

-- 
With respect,
Roman

* Re: RAID 6 corrupted
From: Duncan @ 2017-05-20  2:30 UTC
  To: linux-btrfs

Pasi Kärkkäinen posted on Fri, 19 May 2017 11:55:27 +0300 as excerpted:

> On Thu, May 18, 2017 at 06:12:22AM +0000, Duncan wrote:
>> Roman Mamedov posted on Thu, 18 May 2017 10:17:19 +0500 as excerpted:
>> 
>> > On Thu, 18 May 2017 04:09:38 +0200 Łukasz Wróblewski <lw@nri.pl>
>> > wrote:
>> > 
>> >> I will try when stable 4.12 comes out.
>> >> Unfortunately I do not have a backup.
>> >> Fortunately, the data is not that critical: some private photos and
>> >> videos from my youth.
>> >> Still, I would be very happy to get it back.
>> > 
>> > Try saving your data with "btrfs restore" first
>> 
>> First post, he tried that.  No luck.  Tho that was with 4.4 userspace.
>> It might be worth trying with the 4.11-rc or soon to be released 4.11
>> userspace, tho...
>> 
>> 
> Try with 4.12-rc, I assume :)

No.  btrfs restore is 100% userspace, so we're talking btrfs-progs (thus 
the "userspace" specifier) version here, not kernel version.

And at the time 4.11-rc was the latest progs version, with 4.11 due 
within a day or two, so "4.11-rc or soon to be released 4.11 userspace" 
was correct.

Of course since then 4.11 userspace /has/ been released, so it'd be 4.11 
without the -rc, now.

(When I got the CCed email I couldn't make heads or tails of it, since I
was missing the thread context, tho the fact that I was dead tired
probably didn't help.  After getting home from work today I checked the
list -- which I interface with as a newsgroup via gmane.org's list2news
service -- and there the entire thread is visible, as one would expect
for a newsgroup.  With the context I could see it was a case of kernel
version vs. userspace version confusion.  Easy enough to clarify... =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


Thread overview: 14+ messages
2017-05-17  8:27 RAID 6 corrupted Łukasz Wróblewski
2017-05-17  9:45 ` Adam Borowski
2017-05-17  9:52 ` Duncan
2017-05-17 10:05   ` Duncan
2017-05-17 10:15   ` Adam Borowski
2017-05-18  2:09     ` Łukasz Wróblewski
2017-05-18  3:29       ` Adam Borowski
2017-05-18  6:08         ` Duncan
2017-05-18  8:34           ` Adam Borowski
2017-05-18  5:17       ` Roman Mamedov
2017-05-18  6:12         ` Duncan
2017-05-19  8:55           ` Pasi Kärkkäinen
2017-05-19  9:09             ` Roman Mamedov
2017-05-20  2:30             ` Duncan
