* btrfs_log2phys: cannot lookup extent mapping
@ 2016-12-20 15:52 David Hanke
  2016-12-20 23:24 ` Duncan
  0 siblings, 1 reply; 11+ messages in thread
From: David Hanke @ 2016-12-20 15:52 UTC (permalink / raw)
  To: linux-btrfs

Greetings!

I've been using a btrfs-based volume for backups, but lately the 
system's been filling the syslog with errors like "btrfs_log2phys: 
cannot lookup extent mapping for 7129125486592" at the rate of hundreds 
per second. (Please see output below for more details.) Despite the 
errors, the files I've looked at appear to be written and read successfully.

I'm wondering if the contents of the volume are trustworthy and whether 
this problem is resolvable without backing up, erasing and starting over?

Thank you!

David


# uname -a
Linux backup2 3.0.101.RNx86_64.3 #1 SMP Wed Apr 1 16:02:14 PDT 2015 
x86_64 GNU/Linux

# btrfs --version
Btrfs v3.17.3

#   btrfs fi show
Label: '43f66d40:root'  uuid: 6e546e1a-15a3-4e9d-97f7-9693659a5e7e
     Total devices 1 FS bytes used 571.47MiB
     devid    1 size 4.00GiB used 3.36GiB path /dev/md0

Label: '43f66d40:data'  uuid: 0900d3c7-fda1-463a-81e5-19c04e68a0cb
     Total devices 1 FS bytes used 27.61TiB
     devid    1 size 36.34TiB used 27.99TiB path /dev/md127

Btrfs v3.17.3

# dmesg | head
s_log2phys: cannot lookup extent mapping for 7168108658688
btrfs_log2phys: cannot lookup extent mapping for 7168108658688
btrfs_log2phys: cannot lookup extent mapping for 7168108658688
btrfs_log2phys: cannot lookup extent mapping for 7168108658688
btrfs_log2phys: cannot lookup extent mapping for 7168108658688
btrfs_log2phys: cannot lookup extent mapping for 7168108658688
btrfs_log2phys: cannot lookup extent mapping for 7168108658688
btrfs_log2phys: cannot lookup extent mapping for 7168108658688
btrfs_log2phys: cannot lookup extent mapping for 7168108658688
btrfs_log2phys: cannot lookup extent mapping for 7168108658688

# dmesg | tail
btrfs_log2phys: cannot lookup extent mapping for 7168110034944
btrfs_log2phys: cannot lookup extent mapping for 7168110034944
btrfs_log2phys: cannot lookup extent mapping for 7168110034944
btrfs_log2phys: cannot lookup extent mapping for 7168110034944
btrfs_log2phys: cannot lookup extent mapping for 7168110034944
btrfs_log2phys: cannot lookup extent mapping for 7168110034944
btrfs_log2phys: cannot lookup extent mapping for 7168110034944
btrfs_log2phys: cannot lookup extent mapping for 7168110034944
btrfs_log2phys: cannot lookup extent mapping for 7168110034944
btrfs_log2phys: cannot lookup extent mapping for 7168110034944

[dmesg.log is filled with 3951 lines of the same type of errors]



* Re: btrfs_log2phys: cannot lookup extent mapping
  2016-12-20 15:52 btrfs_log2phys: cannot lookup extent mapping David Hanke
@ 2016-12-20 23:24 ` Duncan
  2016-12-21 14:50   ` David Hanke
  0 siblings, 1 reply; 11+ messages in thread
From: Duncan @ 2016-12-20 23:24 UTC (permalink / raw)
  To: linux-btrfs

David Hanke posted on Tue, 20 Dec 2016 09:52:25 -0600 as excerpted:

> I've been using a btrfs-based volume for backups, but lately the
> system's been filling the syslog with errors like "btrfs_log2phys:
> cannot lookup extent mapping for 7129125486592" at the rate of hundreds
> per second. (Please see output below for more details.) Despite the
> errors, the files I've looked at appear to be written and read
> successfully.
> 
> I'm wondering if the contents of the volume are trustworthy and whether
> this problem is resolvable without backing up, erasing and starting
> over?
> 
> Thank you!
> 
> David
> 
> 
> # uname -a
> Linux backup2 3.0.101.RNx86_64.3 #1 SMP Wed Apr 1 16:02:14 PDT 2015
> x86_64 GNU/Linux
> 
> # btrfs --version
> Btrfs v3.17.3

FWIW...

[TL;DR: see the four bottom line choices, at the bottom.]

This is the upstream btrfs development and discussion list for a 
filesystem that's still stabilizing (that is, not fully stable and 
mature) and that remains under heavy development and bug fixing.  As 
such, list focus is heavily forward looking, with an extremely strong 
recommendation to use current kernels (and to a lesser extent btrfs 
userspace) if you're going to be running btrfs, as these have all the 
latest bugfixes.

Put a different way: btrfs is still under heavy development, with bug 
fixes, some more major than others, every kernel cycle.  While we 
recognize that choosing to run old and stale^H^Hble kernels and userspace 
is a legitimate choice on its own, that choice of stability over support 
for the latest and greatest is viewed as incompatible with choosing to 
run a filesystem still under heavy development.  Choosing one OR the 
other is strongly recommended.

For list purposes, we recommend and best support the last two kernel 
release series in two tracks, LTS/long-term-stable, or current release 
track.  On the LTS track, that's the LTS 4.4 and 4.1 series.  On the 
current track, 4.9 is the latest release, so 4.9 and 4.8 are best 
supported.

Meanwhile, it's worth keeping in mind that the experimental label and 
accompanying extremely strong "eat your babies" level warnings weren't 
peeled off until IIRC 3.12 or so, meaning anything before that is not 
only ancient history in list terms, but was also still labeled "eat your 
babies" level experimental.  Why anyone would choose to run an ancient, 
experimental version of a filesystem that's now rather more stable and 
mature, tho not yet fully stabilized, is beyond me.  If they're 
interested in newer filesystems, running newer and less buggy versions 
is reasonable; if they're interested in years-stale stability, then 
running a filesystem that was still labeled eat-your-babies level 
experimental back then seems an extremely odd choice indeed.

Of course, on-list we do recognize that various distros did and do offer 
support at some level for older than list-recommended version btrfs, in 
part because they backport fixes from newer versions.  However, because 
we're forward development focused we don't track what patches these 
distros may or may not have backported and thus aren't in a good position 
to provide good support for them.  Instead, users choosing to use such 
kernels are generally asked to choose between upgrading to something 
reasonably supportable on-list if they wish to go that route, or referred 
back to their distros for the support they're in a far better position to 
offer, since they know what they've backported and what they haven't, 
while we don't.

As for btrfs userspace: during normal runtime, userspace primarily calls 
the kernel to do the real work, so the userspace version isn't as big a 
deal unless you're trying to use a feature only supported by newer 
versions.  (That said, if it's /too/ old, the impedance mismatch between 
the commands as they were then and the commands in current versions 
makes support rather more difficult.)  However, once there's a problem, 
the age of the userspace code becomes more vital, as then it's actually 
the userspace code doing the work, and only newer versions of btrfs 
check and btrfs restore, for instance, can detect and fix problems for 
which the detection and repair code was only recently added.
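(As a concrete example, the kind of read-only check referred to here 
might be run as follows.  This is only a sketch: the device path comes 
from the btrfs fi show output above, the /mnt/data mountpoint is 
hypothetical, and btrfs check should be run against an unmounted 
filesystem; it makes no changes unless --repair is passed.)

```shell
# Unmount first; btrfs check wants the filesystem offline:
umount /mnt/data

# Read-only consistency check of the metadata trees.  With progs as
# old as v3.17 this may miss problems newer versions can catch,
# which is part of the reason to upgrade userspace before checking:
btrfs check /dev/md127
```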

In general, then, with btrfs-progs releases and versioning synced to 
those of the kernel, a reasonable rule of thumb is to run userspace of a 
similar version to your kernel, tho unless you're experiencing problems, 
getting a version or two behind on your userspace isn't a big deal.  
That way, userspace command formats and output will be close enough to 
current for easier support.  And if newer userspace carries a fix for a 
specific problem you've posted, the problem and fix will still be fresh 
enough in people's minds that someone will probably recognize it and 
point out that a newer version can handle it, and you can worry about 
upgrading to the latest and greatest at that point.

So bottom line, you have four choices:

1) Upgrade to something reasonably current to get better on-list support.

This would be LTS kernel 4.4 (preferred) or 4.1 (acceptable), or current 
kernels 4.9 or 4.8, with similarly versioned userspace, so no older than 
btrfs-progs 4.0.

2) Choose to stay with your distro's versions and get support from them.

Particularly if you are already paying for that support, might as well 
use it.

3) Recognize the fundamental incompatibility between wanting to run old 
and stale/stable for the stability it is supposed to offer, and wanting 
to run a still-under-heavy-development, not fully stable and mature 
filesystem like btrfs.  Either switch to a more stable and mature 
filesystem that better meets your needs for those qualities, or upgrade 
to a distro or distro version that better tracks the current software 
best supported by current upstreams like this btrfs list.

4) Stay with what you have, and muddle through as best you can.

After all, it's not like we /refuse/ to offer support for btrfs that 
old.  If we recognize a problem that we know can be fixed by code that 
old, we'll still tell you, and if we know there's a fix in newer 
versions, we'll still tell you and try to point you at the appropriate 
patch to apply to your old version if possible.  But we simply recognize 
that for something that old, our support will be rather limited, at best.

But it remains your system and your data, so your choice, even if it's 
against everything we normally recommend.


Finally, a personal disclosure: I'm a btrfs user and list regular, not a 
dev.  As such, my own answers will rarely get code-level technical or 
point to specific patches, but because I /am/ a regular, I can still 
answer the stuff that comes up regularly, leaving the real devs and more 
expert replies to cover detailed content that's beyond me.  So while 
it's quite possible someone else will recognize a specific bug and be 
able to point you toward a specific fix (tho honestly I don't expect it 
for something as old as what you're posting about), general 
list-recommended upgrades and alternatives for people posting with 
positively ancient versions are squarely within my reply territory. =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: btrfs_log2phys: cannot lookup extent mapping
  2016-12-20 23:24 ` Duncan
@ 2016-12-21 14:50   ` David Hanke
  2016-12-22 10:11     ` Duncan
                       ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: David Hanke @ 2016-12-21 14:50 UTC (permalink / raw)
  To: linux-btrfs

Hi Duncan,

Thank you for your reply. If I've emailed the wrong list, please let me 
know. What I hear you saying, in short, is that btrfs is not yet fully 
stable but current 4.x versions may work better. I'm willing to upgrade, 
but I'm told that the upgrade process may result in total failure, and 
I'm not sure I can trust the contents of the volume either way. Given 
that, it seems I must backup the backup, erase and start over. What 
would you do?

Thank you,

David


On 12/20/16 17:24, Duncan wrote:
> [full quote of Duncan's reply snipped; see the previous message in
> this thread]



* Re: btrfs_log2phys: cannot lookup extent mapping
  2016-12-21 14:50   ` David Hanke
@ 2016-12-22 10:11     ` Duncan
  2016-12-22 15:14       ` Adam Borowski
  2016-12-22 23:38     ` Xin Zhou
  2016-12-27 16:22     ` David Hanke
  2 siblings, 1 reply; 11+ messages in thread
From: Duncan @ 2016-12-22 10:11 UTC (permalink / raw)
  To: linux-btrfs

David Hanke posted on Wed, 21 Dec 2016 08:50:02 -0600 as excerpted:

> Thank you for your reply. If I've emailed the wrong list, please let me
> know.

Well, it's the right list... for /current/ btrfs.  For 3.0, as I said 
your distro lists may be more appropriate.  But from the below you do 
seem willing to upgrade, so...

> What I hear you saying, in short, is that btrfs is not yet fully
> stable but current 4.x versions may work better.

Yes.

> I'm willing to upgrade,
> but I'm told that the upgrade process may result in total failure, and
> I'm not sure I can trust the contents of the volume either way. Given
> that, it seems I must backup the backup, erase and start over. What
> would you do?

That's exactly what I'd do, but...

Given the maturing-but-not-yet-fully-stable-and-mature state of btrfs 
today, being no further from a usable current backup than the data 
you're willing to lose, at least worst-case, remains an even stronger 
recommendation than it is on a fully mature and stable filesystem, 
kernel and hardware.  (And even on such a stable system, any sysadmin 
worth the name defines the real value of data by the extent to which it 
is backed up.  No backup means the data simply wasn't worth the trouble: 
the loss of the data is a smaller loss than the resources and hassle 
required to back it up as insurance against losing it, regardless of any 
claims to the contrary.)

Knowing that, I do have reasonable backups, and while they aren't always 
current, I take seriously the backup-defines-data-value principle 
discussed above, so if I lose something for lack of a backup, I swallow 
hard and know I must have considered the time saved worth it...

Which is a long way of saying I keep my backups closer at hand, and am 
more willing than some to use them and lose what wasn't backed up.  So 
it's easier for me to say that's what I'd do than it would be for some.  
I actually make it a point to keep my data in reasonably sized, 
manageable partitions, with equivalently sized partitions elsewhere for 
the backups, to multiple levels in many cases, tho some are rather old.  
So freshening or restoring a backup is simply a matter of copying from 
one partition (or pair of partitions, given that many of them are btrfs 
raid1 pair-mirrors) to another, deliberately pre-provisioned to the same 
size, for use /as/ the working and backup copies.  Similarly, falling 
back to a backup is simply a matter of ensuring the appropriate physical 
media is connected, and either mounting it as a backup, or switching a 
couple of entries in fstab and mounting it in place of the original.

So it's relatively easy here, but only because I've taken pains to set it 
up to make it so.
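(The partition-and-fstab arrangement described above might look roughly 
like this; a sketch with hypothetical labels and mountpoints, where 
noauto keeps the backup copy unmounted until it's needed, and falling 
back means swapping the two mountpoints:)

```
# /etc/fstab -- working copy and same-sized backup copy:
LABEL=data       /data          btrfs  defaults         0  0
LABEL=data-bak   /backup/data   btrfs  defaults,noauto  0  0
```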

Meanwhile, btrfs does have some tools that can /sometimes/ help recover 
data off of unmountable filesystems that would otherwise be "in the 
backup gap".  Btrfs restore has helped me save that "backup gap" data a 
few times -- it may not have been worth the trouble of a backup while 
the risk was still theoretical, and I'd have accepted the loss if it 
came to it, but that didn't mean it wasn't worth spending a bit more 
time trying to save it, successfully in my case, once I knew I was 
actually in the recovery-or-loss situation.
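(That btrfs restore workflow might look like the following sketch; the 
device path is from the fi show output earlier in the thread, the 
destination directory is hypothetical, and option names should be 
verified against your btrfs-progs version with btrfs restore --help:)

```shell
# Dry run: list what restore thinks it can recover, writing
# nothing (-D is dry-run, -v is verbose):
btrfs restore -D -v /dev/md127 /tmp

# If that listing looks sane, extract into an empty directory on
# a different, healthy filesystem:
mkdir -p /mnt/recovered
btrfs restore -v /dev/md127 /mnt/recovered
```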

Tho in your case it looks like you're seeing the warnings before it gets 
to that point.  And the volume is a backup already, so you presumably 
still have the live data in most cases, and you can still mount and read 
most or all of what's on it, so it's mostly a question of the time and 
hassle of redoing it.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: btrfs_log2phys: cannot lookup extent mapping
  2016-12-22 10:11     ` Duncan
@ 2016-12-22 15:14       ` Adam Borowski
  2016-12-22 18:28         ` Austin S. Hemmelgarn
  0 siblings, 1 reply; 11+ messages in thread
From: Adam Borowski @ 2016-12-22 15:14 UTC (permalink / raw)
  To: linux-btrfs

On Thu, Dec 22, 2016 at 10:11:35AM +0000, Duncan wrote:
> Given the maturing-but-not-yet-fully-stable-and-mature state of btrfs 
> today, being no further from a usable current backup than the data you're 
> willing to lose, at least worst-case, remains an even stronger 
> recommendation than it is on fully mature and stable filesystem, kernel 
> and hardware.

The usual rant about backups, which I snipped, is 110%[1] right; 
however, I disagree that btrfs is worse than other filesystems for data 
safety.

On one hand, btrfs:
* is buggy
* fails the KISS principle to a ridiculous degree
* lacks logic people take for granted (especially on RAID)
On the other, other filesystems:
* suffer from silent data loss every time the disk doesn't notice an error!
  Allowing silent data loss fails the most basic requirement for a
  filesystem.  Btrfs at least makes that loss noisy (single) so you can
  recover from backups, or handles it (redundant RAID).
* don't have frequent snapshots to save you from human error (including
  other software)
* make backups time-costly.  rsync needs to at least stat everything; on
  a populated disk that's often half an hour or more, while on btrfs a
  no-op incremental backup takes O(1) time.

So sorry, but I had enough woe with those "fully mature and stable"
filesystems.  Thus I use btrfs pretty much everywhere, backing up my crap
every 24 hours, important bits every 3 hours.


Meow!

[1]. Above 100% as it's more true than people read it.
-- 
Autotools hint: to do a zx-spectrum build on a pdp11 host, type:
  ./configure --host=zx-spectrum --build=pdp11


* Re: btrfs_log2phys: cannot lookup extent mapping
  2016-12-22 15:14       ` Adam Borowski
@ 2016-12-22 18:28         ` Austin S. Hemmelgarn
  2016-12-23  8:14           ` Adam Borowski
  0 siblings, 1 reply; 11+ messages in thread
From: Austin S. Hemmelgarn @ 2016-12-22 18:28 UTC (permalink / raw)
  To: Adam Borowski, linux-btrfs

On 2016-12-22 10:14, Adam Borowski wrote:
> On Thu, Dec 22, 2016 at 10:11:35AM +0000, Duncan wrote:
>> Given the maturing-but-not-yet-fully-stable-and-mature state of btrfs
>> today, being no further from a usable current backup than the data you're
>> willing to lose, at least worst-case, remains an even stronger
>> recommendation than it is on fully mature and stable filesystem, kernel
>> and hardware.
>
> The usual rant about backups which I snipped is 110%[1] right, however I
> disagree that btrfs is worse than other filesystems for data safety.
>
> On one hand, btrfs:
> * is buggy
> * fails the KISS principle to a ridiculous degree
> * lacks logic people take for granted (especially on RAID)
> On the other, other filesystems:
> * suffer from silent data loss every time the disk doesn't notice an error!
>   Allowing silent data loss fails the most basic requirement for a
>   filesystem.  Btrfs at least makes that loss noisy (single) so you can
>   recover from backups, or handles it (redundant RAID).
No, allowing silent data loss fails the most basic requirement for a 
_storage system_.  A filesystem is generally a key component in a data 
storage system, but people regularly conflate the two as having the same 
meaning, which is absolutely wrong.  Most traditional filesystems are 
designed under the assumption that if someone cares about at-rest data 
integrity, they will purchase hardware to ensure it.  This is a 
perfectly reasonable stance, especially considering that ensuring 
at-rest data integrity is _hard_ (BTRFS is better at it than most 
filesystems, but it still can't do it to the degree that most of the 
people who actually require it need).  A filesystem's job is 
traditionally to organize things, not verify them or provide redundancy.
> * don't have frequent snapshots to save you from human error (including
>   other software)
> * make backups time-costly.  rsync needs to at least stat everything, on a
>   populated disk that's often half an hour or more, on btrfs a no-op backup
>   takes O(1).
These two points I agree on, despite me not using snapshots or send/receive.
>
> So sorry, but I had enough woe with those "fully mature and stable"
> filesystems.  Thus I use btrfs pretty much everywhere, backing up my crap
> every 24 hours, important bits every 3 hours.
I use BTRFS pretty much everywhere too.  I've also had more catastrophic 
failures from BTRFS than from any other filesystem I've used except FAT 
(NTFS is a close third).  And I've recovered sanely, without needing a 
new filesystem and a full data restoration, on ext4, FAT, and even XFS 
more often than I have on BTRFS; ext4 and FAT are well enough documented 
that I can put a broken filesystem back together by hand if needed, and 
have done so on multiple occasions.

That said, the two of us and most of the other list regulars have a much 
better understanding of the involved risks than a significant majority 
of 'normal' users, partly because we have done our research regarding 
this, and partly because we're watching the list regularly.  For us, the 
risk is a calculated one, for anyone who's just trying it out for 
laughs, or happened to get it because the distro they picked happened to 
use it by default though, it's a very much unknown risk.

Ignoring the checksumming, COW, and multi-device support in BTRFS, 
pretty much every other filesystem wins in terms of reliability by a 
pretty significant margin (and in terms of performance too: even with 
BTRFS mounted with no checksumming and no COW for everything but 
metadata, ext4 and XFS still beat the tar out of it).  BTRFS crashes 
more, and fails harder, than any other first-class (listed on the main 
'Filesystems' menu, not in 'Misc Filesystems') filesystem in the 
mainline Linux kernel right now.  For it to be reliable, the devices 
need to be monitored, the filesystems need to be curated, and you 
absolutely have to understand the risks.  Given this, for a vast 
majority of users, BTRFS _is_ worse on average for data safety than 
almost any other filesystem in the kernel.
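(The no-checksumming, no-COW configuration mentioned here corresponds to 
mount options along these lines; a sketch, noting that per btrfs 
documentation nodatacow implies nodatasum and affects only newly written 
file data, while metadata stays COW-protected and checksummed:)

```shell
# Disable COW and data checksumming for file data:
mount -o nodatacow /dev/md127 /mnt/data
```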


* Re: btrfs_log2phys: cannot lookup extent mapping
  2016-12-21 14:50   ` David Hanke
  2016-12-22 10:11     ` Duncan
@ 2016-12-22 23:38     ` Xin Zhou
  2016-12-23 12:45       ` Austin S. Hemmelgarn
  2016-12-27 16:22     ` David Hanke
  2 siblings, 1 reply; 11+ messages in thread
From: Xin Zhou @ 2016-12-22 23:38 UTC (permalink / raw)
  To: David Hanke; +Cc: linux-btrfs

Hi,
If the change of on-disk format between versions were precisely 
documented, it would be plausible to create a utility to convert an old 
volume to the new format: trigger the conversion workflow, upgrade the 
kernel, and boot up to mount the converted volume.
Currently, the btrfs wiki documents only part of the on-disk format.
Thanks,
Xin
 
 

Sent: Wednesday, December 21, 2016 at 6:50 AM
From: "David Hanke" <hanke.list@ece.wisc.edu>
To: linux-btrfs@vger.kernel.org
Subject: Re: btrfs_log2phys: cannot lookup extent mapping
Hi Duncan,

Thank you for your reply. If I've emailed the wrong list, please let me
know. What I hear you saying, in short, is that btrfs is not yet fully
stable but current 4.x versions may work better. I'm willing to upgrade,
but I'm told that the upgrade process may result in total failure, and
I'm not sure I can trust the contents of the volume either way. Given
that, it seems I must backup the backup, erase and start over. What
would you do?

Thank you,

David


On 12/20/16 17:24, Duncan wrote:
> David Hanke posted on Tue, 20 Dec 2016 09:52:25 -0600 as excerpted:
>
>> I've been using a btrfs-based volume for backups, but lately the
>> system's been filling the syslog with errors like "btrfs_log2phys:
>> cannot lookup extent mapping for 7129125486592" at the rate of hundreds
>> per second. (Please see output below for more details.) Despite the
>> errors, the files I've looked at appear to be written and read
>> successfully.
>>
>> I'm wondering if the contents of the volume are trustworthy and whether
>> this problem is resolvable without backing up, erasing and starting
>> over?
>>
>> Thank you!
>>
>> David
>>
>>
>> # uname -a
>> Linux backup2 3.0.101.RNx86_64.3 #1 SMP Wed Apr 1 16:02:14 PDT 2015
>> x86_64 GNU/Linux
>>
>> # btrfs --version
>> Btrfs v3.17.3
> FWIW...
>
> [TL;DR: see the four bottom line choices, at the bottom.]
>
> This is the upstream btrfs development and discussion list for a
> filesystem that's still stabilizing (that is, not fully stable and
> mature) and that remains under heavy development and bug fixing. As
> such, list focus is heavily forward looking, with an extremely strong
> recommendation to use current kernels (and to a lessor extent btrfs
> userspace) if you're going to be running btrfs, as these have all the
> latest bugfixes.
>
> Put a different way, the general view and strong recommendation of the
> list is that because btrfs is still under heavy development, with bug
> fixes, some more major than others, every kernel cycle, while we
> recognize that choosing to run old and stale^H^Hble kernels and userspace
> is a legitimate choice on its own, that choice of stability over support
> for the latest and greatest, is viewed as incompatible with choosing to
> run a still under heavy development filesystem. Choosing one OR the
> other is strongly recommended.
>
> For list purposes, we recommend and best support the last two kernel
> release series in two tracks, LTS/long-term-stable, or current release
> track. On the LTS track, that's the LTS 4.4 and 4.1 series. On the
> current track, 4.9 is the latest release, so 4.9 and 4.8 are best
> supported.
>
> Meanwhile, it's worth keeping in mind that the experimental label and
> accompanying extremely strong "eat your babies" level warnings weren't
> peeled off until IIRC 3.12 or so, meaning anything before that is not
> only ancient history in list terms, but also still labeled as "eat your
> babies" level experimental. Why anyone would choose to run an ancient
> eat-your-babies level experimental version of a filesystem that's now
> rather more stable and mature, tho not yet fully stabilized, is beyond
> me. If
> they're interested in newer filesystems, running newer and less buggy
> versions is reasonable; if they're interested in years-stale level of
> stability, then running such filesystems, especially when still labeled
> eat-your-babies level experimental back then, seems an extremely odd
> choice indeed.
>
> Of course, on-list we do recognize that various distros did and do offer
> support at some level for older than list-recommended version btrfs, in
> part because they backport fixes from newer versions. However, because
> we're forward development focused we don't track what patches these
> distros may or may not have backported and thus aren't in a good position
> to provide good support for them. Instead, users choosing to use such
> kernels are generally asked to choose between upgrading to something
> reasonably supportable on-list if they wish to go that route, or referred
> back to their distros for the support they're in a far better position to
> offer, since they know what they've backported and what they haven't,
> while we don't.
>
> As for btrfs userspace, the way btrfs works, during normal runtime,
> userspace primarily calls the kernel to do the real work, so userspace
> version isn't as big a deal unless you're trying to use a feature only
> supported by newer versions, except that if it's /too/ old, the impedance
> mismatch between the commands as they were then and the commands in
> current versions makes support rather more difficult. However, once
> there's a problem, then the age of userspace code becomes more vital, as
> then it's actually the userspace code doing the work, and only newer
> versions of btrfs check and btrfs restore, for instance, can detect and
> fix problems where code has only recently been added to do so.
>
> In general, then, with btrfs-progs releases and versioning synced to that
> of the kernel, a reasonable rule of thumb is to run userspace of a
> similar version to your kernel, tho unless you're experiencing problems,
> getting a version or two behind on your userspace isn't a big deal. That
> way, userspace command formats and output will be close enough to current
> for easier support, and if there's a fix for a specific problem you've
> posted in newer userspace, the problem and fix are still fresh enough in
> people's minds that someone will probably recognize it and point out that
> a newer version can handle that, and you can worry about upgrading to the
> latest and greatest at that point.
>
> So bottom line, you have four choices:
>
> 1) Upgrade to something reasonably current to get better on-list support.
>
> This would be LTS kernels 4.4 preferred, or 4.1, acceptable, or current
> kernels 4.9 or 4.8, and similarly versioned userspace, so no older than
> btrfs-progs 4.0.
>
> 2) Choose to stay with your distro's versions and get support from them.
>
> Particularly if you are already paying for that support, might as well
> use it.
>
> 3) Recognize the fundamental incompatibility between wanting to run old
> and stale/stable for the stability it is supposed to offer, and wanting
> to run a still under heavy development not fully stable and mature
> filesystem like btrfs, and either switch to a more stable and mature
> filesystem that better meets your needs for those qualities, or upgrade
> to a distro or distro version that better meets your needs for current
> software better supported by current upstreams like this btrfs list.
>
> 4) Stay with what you have, and muddle through as best you can.
>
> After all, it's not like we /refuse/ to offer support to btrfs that old,
> if we recognize a problem that we know can be fixed by code that old
> we'll still tell you, and if we know there's a fix in newer versions
> we'll still tell you and try to point you at the appropriate patch for
> you to apply to your old version if possible, but we simply recognize
> that for something that old, our support will be rather limited, at best.
>
> But it remains your system and your data, so your choice, even if it's
> against everything we normally recommend.
>
>
> Finally, a personal disclosure. I'm a btrfs user and list regular, not a
> dev. As such, my own answers will rarely get code-level technical or
> point to specific patches, but because I /am/ a regular, I can still
> answer the stuff that comes up regularly, leaving the real devs and more
> expert replies to cover detailed content that's beyond me. So while it's
> quite possible someone else will recognize a specific bug and be able to
> point you toward a specific fix, tho honestly I don't expect it for
> something as old as what you're posting about, general list-recommended
> upgrades and alternatives for people posting with positively ancient
> versions is squarely within my reply territory. =:^)
>
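
A practical footnote to the version advice quoted above: checking what is actually installed looks roughly like this. The `btrfs check` line is commented out because it must only be run on an unmounted device; `/dev/md127` is the device from the original report:

```shell
# Kernel release (list advice above: LTS 4.4/4.1 or current 4.9/4.8)
uname -r

# btrfs-progs version (list advice above: no older than v4.0)
command -v btrfs >/dev/null && btrfs --version || echo "btrfs-progs not installed"

# With reasonably current progs, a read-only consistency check would be
# (commented out: the device must be unmounted first):
#   btrfs check --readonly /dev/md127
```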

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: btrfs_log2phys: cannot lookup extent mapping
  2016-12-22 18:28         ` Austin S. Hemmelgarn
@ 2016-12-23  8:14           ` Adam Borowski
  2016-12-23 12:43             ` Austin S. Hemmelgarn
  0 siblings, 1 reply; 11+ messages in thread
From: Adam Borowski @ 2016-12-23  8:14 UTC (permalink / raw)
  To: Austin S. Hemmelgarn; +Cc: linux-btrfs

On Thu, Dec 22, 2016 at 01:28:37PM -0500, Austin S. Hemmelgarn wrote:
> On 2016-12-22 10:14, Adam Borowski wrote:
> > On the other, other filesystems:
> > * suffer from silent data loss every time the disk doesn't notice an error!
> >   Allowing silent data loss fails the most basic requirement for a
> >   filesystem.  Btrfs at least makes that loss noisy (single) so you can
> >   recover from backups, or handles it (redundant RAID).
> No, allowing silent data loss fails the most basic requirement for a
> _storage system_.  A filesystem is generally a key component in a data
> storage system, but people regularly conflate the two as having the same
> meaning, which is absolutely wrong.  Most traditional filesystems are
> designed under the assumption that if someone cares about at-rest data
> integrity, they will purchase hardware to ensure at-rest data integrity.

You mean, like the per-sector checksums even the cheapest disks are
supposed to have?  I don't think storage-side hardware can possibly
ensure such integrity; at best it can be better made than
bottom-of-the-barrel disks.

There's a difference between detecting corruption (checksums) and rectifying
it; the latter relies on the former being done reliably.

> This is a perfectly reasonable stance, especially considering that ensuring
> at-rest data integrity is _hard_ (BTRFS is better at it than most
> filesystems, but it still can't do it to the degree that most of the people
> who actually require it need).  A filesystem's job is traditionally to
> organize things, not verify them or provide redundancy.

Which layer do you propose to verify integrity of the data then?  Anything
even remotely complete would need to be closely integrated with the
filesystem -- and thus it might be done outright as a part of the filesystem
rather than as an afterthought.
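
(The kind of in-filesystem verification under discussion is what `btrfs scrub` exercises: it reads everything back, validates checksums, and repairs from a redundant copy where one exists. A sketch, with an illustrative mount point and guarded so it is a no-op elsewhere:)

```shell
MNT=/mnt/data   # illustrative mount point of some btrfs filesystem
if command -v btrfs >/dev/null && mountpoint -q "$MNT" 2>/dev/null; then
    btrfs scrub start "$MNT"    # read everything, verify checksums
    btrfs scrub status "$MNT"   # progress and checksum-error counters
else
    echo "no btrfs mounted at $MNT; nothing to scrub"
fi
```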

> > So sorry, but I had enough woe with those "fully mature and stable"
> > filesystems.  Thus I use btrfs pretty much everywhere, backing up my crap
> > every 24 hours, important bits every 3 hours.
> I use BTRFS pretty much everywhere too.  I've also had more catastrophic
> failures from BTRFS than any other filesystem I've used except FAT (NTFS is
> a close third).

Perhaps it's just a matter of luck, but my personal experience doesn't paint
btrfs in such a bad light.  Non-dev woes that I suffered are:

* 2.6.31: ENOSPC that no deletion/etc could recover from, had to backup and
  restore

* 3.14: deleting ~100k daily snapshots in one go on a box with only 3G RAM
  OOMed (slab allocation, despite lots of free swap user pages could be
  swapped to).  I aborted mount after several hours, dmesg suggested it was
  making progress, but I didn't wait and instead nuked it and restored from
  the originals (these were backups).

* 3.8 vendor kernel: on an arm SoC[1] that's been pounded for ~3 years with
  heavy load (3 jobs doing snapshot+dpkg+compile+teardown) I once hit
  unrecoverable corruption somewhere on a snapshot, had to copy base images
  (less work than recreating, they were ok), nuke and re-mkfs.  Had this
  been real data rather than transient retryable working copy, it'd be lost.

(Obviously not counting regular hardware failures.)

> I've also recovered sanely without needing a new filesystem and a full
> data restoration on ext4, FAT, and even XFS more than I have on BTRFS

Right; though I did have one case when btrfs saved me when ext4 would have
not -- previous generation was readily available when the most recent write
hit a newly bad sector.

And being recently burned by ext4 silently losing data, then shortly later
btrfs nicely informing me about such loss (immediately rectified by taking
from backups and replacing the disk), I'm really reluctant about using any
filesystem without checksums.

> That said, the two of us and most of the other list regulars have a much
> better understanding of the involved risks than a significant majority of
> 'normal' users

True that.  BTRFS is... quirky.

> and in terms of performance too, even mounted with no checksumming
> and no COW for everything but metadata, ext4 and XFS still beat the tar out
> of BTRFS in terms of performance)

Pine64, class 4 SD card (quoting numbers from memory, 3 tries each):
* git reset --hard of a big tree: btrfs 3m45s, f2fs 4m, ext4 12m, xfs 16-18m
  (big variance)
* ./configure && make -j4 && make test of a shit package with only ~2MB of
  persistent writes: f2fs 95s, btrfs 97s, xfs 120s, ext4 122s.  I don't even
  understand where the difference comes from, on a CPU-bound task with
  virtually no writeout...
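
(Numbers like these are just repeated wall-clock timings of each workload on each filesystem; a trivial harness of that shape, with the workload itself as a placeholder:)

```shell
# Run a workload three times and report wall-clock seconds per run;
# workload() here is a stand-in for the real job (git reset, make, ...)
workload() { sync; }
for i in 1 2 3; do
    start=$(date +%s)
    workload
    echo "run $i: $(( $(date +%s) - start ))s"
done
```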


Meow!

[1]. Using Samsung's fancy-schmancy über eMMC -- like Ukrainian brewers, too
backward to know corpo beer is supposed to be made from urine, no one told
those guys flash is supposed to have sharply limited write endurance.
-- 
Autotools hint: to do a zx-spectrum build on a pdp11 host, type:
  ./configure --host=zx-spectrum --build=pdp11

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: btrfs_log2phys: cannot lookup extent mapping
  2016-12-23  8:14           ` Adam Borowski
@ 2016-12-23 12:43             ` Austin S. Hemmelgarn
  0 siblings, 0 replies; 11+ messages in thread
From: Austin S. Hemmelgarn @ 2016-12-23 12:43 UTC (permalink / raw)
  To: Adam Borowski; +Cc: linux-btrfs

On 2016-12-23 03:14, Adam Borowski wrote:
> On Thu, Dec 22, 2016 at 01:28:37PM -0500, Austin S. Hemmelgarn wrote:
>> On 2016-12-22 10:14, Adam Borowski wrote:
>>> On the other, other filesystems:
>>> * suffer from silent data loss every time the disk doesn't notice an error!
>>>   Allowing silent data loss fails the most basic requirement for a
>>>   filesystem.  Btrfs at least makes that loss noisy (single) so you can
>>>   recover from backups, or handles it (redundant RAID).
>> No, allowing silent data loss fails the most basic requirement for a
>> _storage system_.  A filesystem is generally a key component in a data
>> storage system, but people regularly conflate the two as having the same
>> meaning, which is absolutely wrong.  Most traditional filesystems are
>> designed under the assumption that if someone cares about at-rest data
>> integrity, they will purchase hardware to ensure at-rest data integrity.
>
> You mean, like per-sector checksums even cheapest disks are supposed to
> have?  I don't think storage-side hardware can possibly ensure such
> integrity, they can at most be better made than bottom-of-the-barrel disks.
Or RAID arrays, or some other setup.
>
> There's a difference between detecting corruption (checksums) and rectifying
> it; the latter relies on the former done reliably.
Agreed, but there are situations in which even BTRFS can't detect things 
reliably.
>
>> This is a perfectly reasonable stance, especially considering that ensuring
>> at-rest data integrity is _hard_ (BTRFS is better at it than most
>> filesystems, but it still can't do it to the degree that most of the people
>> who actually require it need).  A filesystem's job is traditionally to
>> organize things, not verify them or provide redundancy.
>
> Which layer do you propose to verify integrity of the data then?  Anything
> even remotely complete would need to be closely integrated with the
> filesystem -- and thus it might be done outright as a part of the filesystem
> rather than as an afterthought.
I'm not saying a filesystem shouldn't verify data integrity, I'm saying 
that many don't because they rely on another layer (usually between them 
and the block device) to do so, which is a perfectly reasonable approach.
>
>>> So sorry, but I had enough woe with those "fully mature and stable"
>>> filesystems.  Thus I use btrfs pretty much everywhere, backing up my crap
>>> every 24 hours, important bits every 3 hours.
>> I use BTRFS pretty much everywhere too.  I've also had more catastrophic
>> failures from BTRFS than any other filesystem I've used except FAT (NTFS is
>> a close third).
>
> Perhaps it's just a matter of luck, but my personal experience doesn't paint
> btrfs in such a bad light.  Non-dev woes that I suffered are:
>
> * 2.6.31: ENOSPC that no deletion/etc could recover from, had to backup and
>   restore
>
> * 3.14: deleting ~100k daily snapshots in one go on a box with only 3G RAM
>   OOMed (slab allocation, despite lots of free swap user pages could be
>   swapped to).  I aborted mount after several hours, dmesg suggested it was
>   making progress, but I didn't wait and instead nuked it and restored from
>   the originals (these were backups).
>
> * 3.8 vendor kernel: on an arm SoC[1] that's been pounded for ~3 years with
>   heavy load (3 jobs doing snapshot+dpkg+compile+teardown) I once hit
>   unrecoverable corruption somewhere on a snapshot, had to copy base images
>   (less work than recreating, they were ok), nuke and re-mkfs.  Had this
>   been real data rather than transient retryable working copy, it'd be lost.
I've lost about 6 filesystems to various issues since I started using 
BTRFS.  That's 6 filesystems since about 3.10, which works out to about 
2 filesystems a year (still not counting hardware failures or issues I 
caused myself while poking around at things I shouldn't have been). 
Compare that to about 4 losses in 10 years aggregated over every other 
filesystem I've ever used (NTFS, FAT32, exFAT, XFS, JFS, NILFS2, 
ext{2,3,4}, HFS+, SquashFS, and a couple of others), which works out to 
1 every 2.5 years.  BTRFS has a pretty blatantly worse track record 
than anything else I've used.

That said, I have not lost a single FS since 3.18 using BTRFS, but most 
of that is that the parts I actually use (raid1 mode, checksumming, 
single snapshots per subvolume) are functionally stable, and that I've 
gotten much smarter about keeping things from getting into states where 
the filesystem will get irreversibly wedged into a corner.
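
(For anyone wanting the same conservative setup: the raid1 data/metadata profiles are chosen at mkfs time. Device names below are placeholders, and the mkfs line is commented out because it destroys the named devices:)

```shell
# Creating a two-device btrfs with raid1 data and metadata profiles
# (placeholders /dev/sdX /dev/sdY; the command wipes them, so it is
# commented out):
#   mkfs.btrfs -m raid1 -d raid1 /dev/sdX /dev/sdY

# On an existing mount the profiles in use can be inspected safely:
MNT=/mnt/data   # illustrative mount point
command -v btrfs >/dev/null && btrfs filesystem df "$MNT" || echo "btrfs tools not available here"
```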
>
> (Obviously not counting regular hardware failures.)
>
>> I've also recovered sanely without needing a new filesystem and a full
>> data restoration on ext4, FAT, and even XFS more than I have on BTRFS
>
> Right; though I did have one case when btrfs saved me when ext4 would have
> not -- previous generation was readily available when the most recent write
> hit a newly bad sector.
Same, but I also wouldn't have been using ext4 by itself; I would have 
been using it on top of LVM-based RAID, and thus would have survived 
anyway with a better than 50% chance of having the correct data.  You 
can't compare BTRFS as-is, with its default feature set, to ext4 or XFS 
by themselves in terms of reliability, because BTRFS tries to do more. 
You need to compare against an equivalent storage setup (so either ZFS, 
or ext4/XFS on top of a good RAID array), in which case it generally 
loses pretty badly.
>
> And being recently burned by ext4 silently losing data, then shortly later
> btrfs nicely informing me about such loss (immediately rectified by taking
> from backups and replacing the disk), I'm really reluctant about using any
> filesystem without checksums.
>
>> That said, the two of us and most of the other list regulars have a much
>> better understanding of the involved risks than a significant majority of
>> 'normal' users
>
> True that.  BTRFS is... quirky.
I think the bigger issues are that it's significantly different from ZFS 
in many respects (which is the closest experience most seasoned 
sysadmins will have had), and many distros started shipping 'support' 
for it way sooner than they should have.
>
>> and in terms of performance too, even mounted with no checksumming
>> and no COW for everything but metadata, ext4 and XFS still beat the tar out
>> of BTRFS in terms of performance)
>
> Pine64, class 4 SD card (quoting numbers from memory, 3 tries each):
> * git reset --hard of a big tree: btrfs 3m45s, f2fs 4m, ext4 12m, xfs 16-18m
>   (big variance)
> * ./configure && make -j4 && make test of a shit package with only ~2MB of
>   persistent writes: f2fs 95s, btrfs 97s, xfs 120s, ext4 122s.  I don't even
>   understand where the difference comes from, on a CPU-bound task with
>   virtually no writeout...
An SD card benefits very significantly from the COW nature of BTRFS 
though because it makes the firmware's job of wear-leveling easier. 
Doing something similar on an x86 system with a good SSD (high-quality 
wear-leveling, no built-in deduplication, no built-in compression, only 
about 5% difference between read and write speed) or a decent consumer 
HDD (7200 RPM 1TB SATA 3), I see BTRFS do roughly 10-20% worse than XFS 
and ext4 (I've not tested F2FS much, it holds little interest for me for 
multiple reasons).  Same storage stack, I see similar relative 
performance for runs of iozone and fio, and roughly similar relative 
performance for xfstests restricted to just the stuff that runs on all 
three filesystems.  Now, part of this may be because it's x86, but I 
doubt it since it's a recent 64-bit processor.
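
(For reference, a minimal fio invocation of the sort used in comparisons like this. The job parameters are illustrative, not the exact ones used above, and the run is guarded so it only fires where fio and the target directory exist:)

```shell
TESTDIR=/mnt/test   # illustrative: a directory on the filesystem under test
if command -v fio >/dev/null && [ -d "$TESTDIR" ]; then
    # 4k random writes against a file in TESTDIR for 60 seconds
    fio --name=randwrite --directory="$TESTDIR" \
        --rw=randwrite --bs=4k --size=256M \
        --ioengine=psync --runtime=60 --time_based
else
    echo "fio or $TESTDIR not available; skipping"
fi
```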
>
>
> Meow!
>
> [1]. Using Samsung's fancy-schmancy über eMMC -- like Ukrainian brewers, too
> backward to know corpo beer is supposed to be made from urine, no one told
> those guys flash is supposed to have sharply limited write endurance.
>


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: btrfs_log2phys: cannot lookup extent mapping
  2016-12-22 23:38     ` Xin Zhou
@ 2016-12-23 12:45       ` Austin S. Hemmelgarn
  0 siblings, 0 replies; 11+ messages in thread
From: Austin S. Hemmelgarn @ 2016-12-23 12:45 UTC (permalink / raw)
  To: Xin Zhou, David Hanke; +Cc: linux-btrfs

On 2016-12-22 18:38, Xin Zhou wrote:
> Hi,
> If the change of disk format between versions is precisely documented,
> it is plausible to create a utility to convert the old volume to new ones,
> trigger the workflow, upgrade the kernel, and boot up to mount the new volume.
> Currently, the btrfs wiki documents only part of the on-disk format.
> Thanks,
> Xin
Last I checked, the on-disk format has not changed since some time in 
2.6.  The only potential issues going back are if he creates a new 
filesystem with newer btrfs-progs, or tries to use the newer format 
free-space cache (which I would still avoid even on current versions for 
other reasons).
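
(The "newer format free-space cache" here is the v2 cache, selected as a mount option. It also requires a newer kernel, which is exactly why it is a hazard in a setup like the original poster's. A sketch, using the device from the original report; the mount line is commented out deliberately:)

```shell
DEV=/dev/md127   # device from the original report
# The v2 cache is opt-in at mount time and needs a 4.5+ kernel, so on
# an old kernel it is precisely the thing to avoid (hence commented out):
#   mount -o space_cache=v2 "$DEV" /mnt/data
echo "free-space cache format is chosen at mount time for $DEV"
```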
> [...]


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: btrfs_log2phys: cannot lookup extent mapping
  2016-12-21 14:50   ` David Hanke
  2016-12-22 10:11     ` Duncan
  2016-12-22 23:38     ` Xin Zhou
@ 2016-12-27 16:22     ` David Hanke
  2 siblings, 0 replies; 11+ messages in thread
From: David Hanke @ 2016-12-27 16:22 UTC (permalink / raw)
  To: linux-btrfs, xin.zhou, ahferroin7, kilobyte, 1i5t5.duncan

Belated thanks to Duncan, Adam, Austin and Xin for your replies and 
thank you to everyone who's working on btrfs!

Sincerely,

David


On 12/21/16 08:50, David Hanke wrote:
> Hi Duncan,
>
> Thank you for your reply. If I've emailed the wrong list, please let 
> me know. What I hear you saying, in short, is that btrfs is not yet 
> fully stable but current 4.x versions may work better. I'm willing to 
> upgrade, but I'm told that the upgrade process may result in total 
> failure, and I'm not sure I can trust the contents of the volume 
> either way. Given that, it seems I must backup the backup, erase and 
> start over. What would you do?
>
> Thank you,
>
> David
>
>
> On 12/20/16 17:24, Duncan wrote:
>> David Hanke posted on Tue, 20 Dec 2016 09:52:25 -0600 as excerpted:
>>
>>> I've been using a btrfs-based volume for backups, but lately the
>>> system's been filling the syslog with errors like "btrfs_log2phys:
>>> cannot lookup extent mapping for 7129125486592" at the rate of hundreds
>>> per second. (Please see output below for more details.) Despite the
>>> errors, the files I've looked at appear to be written and read
>>> successfully.
>>>
>>> I'm wondering if the contents of the volume are trustworthy and whether
>>> this problem is resolvable without backing up, erasing and starting
>>> over?
>>>
>>> Thank you!
>>>
>>> David
>>>
>>>
>>> # uname -a
>>> Linux backup2 3.0.101.RNx86_64.3 #1 SMP Wed Apr 1 16:02:14 PDT 2015
>>> x86_64 GNU/Linux
>>>
>>> # btrfs --version
>>> Btrfs v3.17.3
>> FWIW...
>>
>> [TL;DR: see the four bottom line choices, at the bottom.]
>>
>> This is the upstream btrfs development and discussion list for a
>> filesystem that's still stabilizing (that is, not fully stable and
>> mature) and that remains under heavy development and bug fixing.  As
>> such, list focus is heavily forward looking, with an extremely strong
>> recommendation to use current kernels (and to a lesser extent btrfs
>> userspace) if you're going to be running btrfs, as these have all the
>> latest bugfixes.
>>
>> Put a different way: we recognize that choosing to run old and
>> stale^H^Hble kernels and userspace is a legitimate choice on its own,
>> but btrfs is still under heavy development, with bug fixes, some more
>> major than others, every kernel cycle.  That choice of stability over
>> support for the latest and greatest is therefore viewed as incompatible
>> with choosing to run a filesystem still under heavy development.
>> Choosing one OR the other is strongly recommended.
>>
>> For list purposes, we recommend and best support the last two kernel
>> release series in two tracks, LTS/long-term-stable, or current release
>> track.  On the LTS track, that's the LTS 4.4 and 4.1 series.  On the
>> current track, 4.9 is the latest release, so 4.9 and 4.8 are best
>> supported.
>>
>> Meanwhile, it's worth keeping in mind that the experimental label and
>> accompanying extremely strong "eat your babies" level warnings weren't
>> peeled off until IIRC 3.12 or so, meaning anything before that is not
>> only ancient history in list terms, but also still labeled as "eat your
>> babies" level experimental.  Why anyone would choose to run an ancient
>> eat-your-babies-level experimental version of a filesystem that's now
>> rather more stable and mature, tho not yet fully stabilized, is beyond
>> me.  If
>> they're interested in newer filesystems, running newer and less buggy
>> versions is reasonable; if they're interested in years-stale level of
>> stability, then running such filesystems, especially when still labeled
>> eat-your-babies level experimental back then, seems an extremely odd
>> choice indeed.
>>
>> Of course, on-list we do recognize that various distros did and do offer
>> support at some level for older than list-recommended version btrfs, in
>> part because they backport fixes from newer versions.  However, because
>> we're forward development focused we don't track what patches these
>> distros may or may not have backported and thus aren't in a good 
>> position
>> to provide good support for them.  Instead, users choosing to run such
>> kernels are generally either asked to upgrade to something reasonably
>> supportable on-list, or referred back to their distros for the support
>> they're in a far better position to offer, since they know what they've
>> backported and what they haven't, while we don't.
>>
>> As for btrfs userspace, the way btrfs works, during normal runtime,
>> userspace primarily calls the kernel to do the real work, so userspace
>> version isn't as big a deal unless you're trying to use a feature only
>> supported by newer versions, except that if it's /too/ old, the 
>> impedance
>> mismatch between the commands as they were then and the commands in
>> current versions makes support rather more difficult.  However, once
>> there's a problem, then the age of userspace code becomes more vital, as
>> then it's actually the userspace code doing the work, and only newer
>> versions of btrfs check and btrfs restore, for instance, can detect and
>> fix problems where code has only recently been added to do so.
>>
>> In general, then, with btrfs-progs releases and versioning synced to 
>> that
>> of the kernel, a reasonable rule of thumb is to run userspace of a
>> similar version to your kernel, tho unless you're experiencing problems,
>> getting a version or two behind on your userspace isn't a big deal.  
>> That
>> way, userspace command formats and output will be close enough to 
>> current
>> for easier support, and if there's a fix for a specific problem you've
>> posted in newer userspace, the problem and fix are still fresh enough in
>> people's minds that someone will probably recognize it and point out 
>> that
>> a newer version can handle that, and you can worry about upgrading to 
>> the
>> latest and greatest at that point.
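
That rule of thumb (btrfs-progs within a release or two of the kernel,
since their releases are version-synced) can be sketched as a quick
check.  This is an illustrative helper written for this discussion, not
part of btrfs-progs, and the two-release slack is an assumption drawn
from the paragraph above:

```python
import re

def release_tuple(version_string):
    """Pull (major, minor) out of strings like '4.9.0' or 'Btrfs v3.17.3'."""
    m = re.search(r"(\d+)\.(\d+)", version_string)
    if m is None:
        raise ValueError(f"no version number in {version_string!r}")
    return int(m.group(1)), int(m.group(2))

def progs_close_enough(kernel, progs, slack=2):
    """Heuristic for the rule of thumb: within the same major series,
    btrfs-progs should not trail the kernel by more than `slack`
    releases.  A progs major series older than the kernel's is treated
    as too old; a newer one is fine."""
    k, p = release_tuple(kernel), release_tuple(progs)
    if k[0] != p[0]:
        return p[0] > k[0]          # different major series
    return k[1] - p[1] <= slack     # same series: at most `slack` behind
```

On the versions in this thread, a 4.9 kernel paired with Btrfs v3.17.3
userspace fails the check, which matches the advice above to bring
both up to something current together.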
>>
>> So bottom line, you have four choices:
>>
>> 1) Upgrade to something reasonably current to get better on-list 
>> support.
>>
>> This would be LTS kernel 4.4 (preferred) or 4.1 (acceptable), or
>> current kernels 4.9 or 4.8, with similarly versioned userspace, so no
>> older than btrfs-progs 4.0.
>>
>> 2) Choose to stay with your distro's versions and get support from them.
>>
>> Particularly if you're already paying for that support, you might as
>> well use it.
>>
>> 3) Recognize the fundamental incompatibility between wanting the
>> stability that old and stale/stable software is supposed to offer, and
>> wanting to run a still-under-heavy-development, not fully stable and
>> mature filesystem like btrfs.  Either switch to a more stable and
>> mature filesystem that better meets your needs for those qualities, or
>> upgrade to a distro or distro version that ships current software,
>> better supported by current upstreams like this btrfs list.
>>
>> 4) Stay with what you have, and muddle through as best you can.
>>
>> After all, it's not like we /refuse/ to offer support to btrfs that
>> old.  If we recognize a problem that we know can be fixed on code that
>> old, we'll still tell you; if we know there's a fix in newer versions,
>> we'll still say so and try to point you at the appropriate patch to
>> apply to your old version if possible.  We simply recognize that for
>> something that old, our support will be rather limited, at best.
>>
>> But it remains your system and your data, so your choice, even if it's
>> against everything we normally recommend.
>>
>>
>> Finally, a personal disclosure.  I'm a btrfs user and list regular, 
>> not a
>> dev.  As such, my own answers will rarely get code-level technical or
>> point to specific patches, but because I /am/ a regular, I can still
>> answer the stuff that comes up regularly, leaving the real devs and more
>> expert replies to cover detailed content that's beyond me.  So while
>> it's quite possible someone else will recognize a specific bug and
>> point you toward a specific fix (tho honestly I don't expect it for
>> something as old as what you're posting about), general list-recommended
>> upgrades and alternatives for people posting with positively ancient
>> versions are squarely within my reply territory. =:^)
>>
>
> -- 
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



end of thread, other threads:[~2016-12-27 16:23 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
2016-12-20 15:52 btrfs_log2phys: cannot lookup extent mapping David Hanke
2016-12-20 23:24 ` Duncan
2016-12-21 14:50   ` David Hanke
2016-12-22 10:11     ` Duncan
2016-12-22 15:14       ` Adam Borowski
2016-12-22 18:28         ` Austin S. Hemmelgarn
2016-12-23  8:14           ` Adam Borowski
2016-12-23 12:43             ` Austin S. Hemmelgarn
2016-12-22 23:38     ` Xin Zhou
2016-12-23 12:45       ` Austin S. Hemmelgarn
2016-12-27 16:22     ` David Hanke
