All of lore.kernel.org
* btrfs dev sta not updating
@ 2020-06-23  2:09 Russell Coker
  2020-06-23  6:03 ` Nikolay Borisov
  0 siblings, 1 reply; 11+ messages in thread
From: Russell Coker @ 2020-06-23  2:09 UTC (permalink / raw)
  To: linux-btrfs

[395198.926320] BTRFS warning (device sdc1): csum failed root 5 ino 276 off 
19267584 csum 0x8941f998 expected csum 0xccd545e0 mirror 1
[395199.147439] BTRFS warning (device sdc1): csum failed root 5 ino 276 off 
20611072 csum 0x8941f998 expected csum 0xdaf657cb mirror 1
[395199.183680] BTRFS warning (device sdc1): csum failed root 5 ino 276 off 
24190976 csum 0x8941f998 expected csum 0xcddce0b1 mirror 1
[395199.185172] BTRFS warning (device sdc1): csum failed root 5 ino 276 off 
19267584 csum 0x8941f998 expected csum 0xccd545e0 mirror 1
[395199.330841] BTRFS warning (device sdc1): csum failed root 5 ino 277 off 0 
csum 0x8941f998 expected csum 0xa54d865c mirror 1

I have a USB stick that's corrupted; I get the above kernel messages when I 
try to copy files from it.  But according to btrfs dev sta it has had 0 read 
and 0 corruption errors.

root@xev:/mnt/tmp# btrfs dev sta .
[/dev/sdc1].write_io_errs    0
[/dev/sdc1].read_io_errs     0
[/dev/sdc1].flush_io_errs    0
[/dev/sdc1].corruption_errs  0
[/dev/sdc1].generation_errs  0
root@xev:/mnt/tmp# uname -a
Linux xev 5.6.0-2-amd64 #1 SMP Debian 5.6.14-1 (2020-05-23) x86_64 GNU/Linux

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/




^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: btrfs dev sta not updating
  2020-06-23  2:09 btrfs dev sta not updating Russell Coker
@ 2020-06-23  6:03 ` Nikolay Borisov
  2020-06-23  6:17   ` waxhead
  2020-06-23  8:00   ` Russell Coker
  0 siblings, 2 replies; 11+ messages in thread
From: Nikolay Borisov @ 2020-06-23  6:03 UTC (permalink / raw)
  To: Russell Coker, linux-btrfs



On 23.06.20 г. 5:09 ч., Russell Coker wrote:
> [395198.926320] BTRFS warning (device sdc1): csum failed root 5 ino 276 off 
> 19267584 csum 0x8941f998 expected csum 0xccd545e0 mirror 1
> [395199.147439] BTRFS warning (device sdc1): csum failed root 5 ino 276 off 
> 20611072 csum 0x8941f998 expected csum 0xdaf657cb mirror 1
> [395199.183680] BTRFS warning (device sdc1): csum failed root 5 ino 276 off 
> 24190976 csum 0x8941f998 expected csum 0xcddce0b1 mirror 1
> [395199.185172] BTRFS warning (device sdc1): csum failed root 5 ino 276 off 
> 19267584 csum 0x8941f998 expected csum 0xccd545e0 mirror 1
> [395199.330841] BTRFS warning (device sdc1): csum failed root 5 ino 277 off 0 
> csum 0x8941f998 expected csum 0xa54d865c mirror 1
> 
> I have a USB stick that's corrupted, I get the above kernel messages when I 
> try to copy files from it.  But according to btrfs dev sta it has had 0 read 
> and 0 corruption errors.
> 
> root@xev:/mnt/tmp# btrfs dev sta .
> [/dev/sdc1].write_io_errs    0
> [/dev/sdc1].read_io_errs     0
> [/dev/sdc1].flush_io_errs    0
> [/dev/sdc1].corruption_errs  0
> [/dev/sdc1].generation_errs  0
> root@xev:/mnt/tmp# uname -a
> Linux xev 5.6.0-2-amd64 #1 SMP Debian 5.6.14-1 (2020-05-23) x86_64 GNU/Linux
> 

The read/write io error counters are only updated when even the repair bio 
has failed. So in your case you had some checksum errors, but btrfs managed 
to repair them by reading from a different mirror. Those aren't counted as 
io errors, since in the end you did get the correct data.
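That counting rule can be sketched with a toy model (plain Python, purely
illustrative - this is my reading of the behaviour described above, not the
actual kernel code): a read only bumps a counter when no mirror can supply
data that passes its checksum.

```python
# Toy model of the counting rule described above (illustrative only, not
# the kernel logic): a checksum failure is repaired silently if any other
# mirror holds a good copy; the counter only moves when every mirror fails.
class DevStats:
    def __init__(self):
        self.read_io_errs = 0

    def read_block(self, mirrors_ok):
        """mirrors_ok: one bool per mirror copy (True = csum matches)."""
        for data_ok in mirrors_ok:
            if data_ok:
                return "good data"   # repaired from a mirror, nothing counted
        self.read_io_errs += 1       # no mirror left to repair from
        return None                  # caller sees EIO
```

With a second good mirror available the counter stays at 0, matching the
zeroed output above; with only a single bad copy it should increment -
which, per the rest of the thread, is where the reported behaviour diverges.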


* Re: btrfs dev sta not updating
  2020-06-23  6:03 ` Nikolay Borisov
@ 2020-06-23  6:17   ` waxhead
  2020-06-23  7:11     ` Nikolay Borisov
  2020-06-23  8:00   ` Russell Coker
  1 sibling, 1 reply; 11+ messages in thread
From: waxhead @ 2020-06-23  6:17 UTC (permalink / raw)
  To: Nikolay Borisov, Russell Coker, linux-btrfs



Nikolay Borisov wrote:
> 
> 
> On 23.06.20 г. 5:09 ч., Russell Coker wrote:
>> [395198.926320] BTRFS warning (device sdc1): csum failed root 5 ino 276 off
>> 19267584 csum 0x8941f998 expected csum 0xccd545e0 mirror 1
>> [395199.147439] BTRFS warning (device sdc1): csum failed root 5 ino 276 off
>> 20611072 csum 0x8941f998 expected csum 0xdaf657cb mirror 1
>> [395199.183680] BTRFS warning (device sdc1): csum failed root 5 ino 276 off
>> 24190976 csum 0x8941f998 expected csum 0xcddce0b1 mirror 1
>> [395199.185172] BTRFS warning (device sdc1): csum failed root 5 ino 276 off
>> 19267584 csum 0x8941f998 expected csum 0xccd545e0 mirror 1
>> [395199.330841] BTRFS warning (device sdc1): csum failed root 5 ino 277 off 0
>> csum 0x8941f998 expected csum 0xa54d865c mirror 1
>>
>> I have a USB stick that's corrupted, I get the above kernel messages when I
>> try to copy files from it.  But according to btrfs dev sta it has had 0 read
>> and 0 corruption errors.
>>
>> root@xev:/mnt/tmp# btrfs dev sta .
>> [/dev/sdc1].write_io_errs    0
>> [/dev/sdc1].read_io_errs     0
>> [/dev/sdc1].flush_io_errs    0
>> [/dev/sdc1].corruption_errs  0
>> [/dev/sdc1].generation_errs  0
>> root@xev:/mnt/tmp# uname -a
>> Linux xev 5.6.0-2-amd64 #1 SMP Debian 5.6.14-1 (2020-05-23) x86_64 GNU/Linux
>>
> 
> The read/write io err counters are updated when even repair bio have
> failed. So in your case you had some checksum errors, but btrfs managed
> to repair them by reading from a different mirror. In this case those
> aren't really counted as io errs since in the end you did get the
> correct data.
> 
I don't think this is what most people expect.
A simple way to solve this could be to put the non-fatal errors in 
parentheses, if that can be done easily.

For example:
[/dev/sdc1].write_io_errs    0 (5)

IMHO this would be more readable and more useful.


* Re: btrfs dev sta not updating
  2020-06-23  6:17   ` waxhead
@ 2020-06-23  7:11     ` Nikolay Borisov
  0 siblings, 0 replies; 11+ messages in thread
From: Nikolay Borisov @ 2020-06-23  7:11 UTC (permalink / raw)
  To: waxhead, Russell Coker, linux-btrfs



On 23.06.20 г. 9:17 ч., waxhead wrote:
> 
> 
> Nikolay Borisov wrote:
>>
>>
>> On 23.06.20 г. 5:09 ч., Russell Coker wrote:
>>> [395198.926320] BTRFS warning (device sdc1): csum failed root 5 ino
>>> 276 off
>>> 19267584 csum 0x8941f998 expected csum 0xccd545e0 mirror 1
>>> [395199.147439] BTRFS warning (device sdc1): csum failed root 5 ino
>>> 276 off
>>> 20611072 csum 0x8941f998 expected csum 0xdaf657cb mirror 1
>>> [395199.183680] BTRFS warning (device sdc1): csum failed root 5 ino
>>> 276 off
>>> 24190976 csum 0x8941f998 expected csum 0xcddce0b1 mirror 1
>>> [395199.185172] BTRFS warning (device sdc1): csum failed root 5 ino
>>> 276 off
>>> 19267584 csum 0x8941f998 expected csum 0xccd545e0 mirror 1
>>> [395199.330841] BTRFS warning (device sdc1): csum failed root 5 ino
>>> 277 off 0
>>> csum 0x8941f998 expected csum 0xa54d865c mirror 1
>>>
>>> I have a USB stick that's corrupted, I get the above kernel messages
>>> when I
>>> try to copy files from it.  But according to btrfs dev sta it has had
>>> 0 read
>>> and 0 corruption errors.
>>>
>>> root@xev:/mnt/tmp# btrfs dev sta .
>>> [/dev/sdc1].write_io_errs    0
>>> [/dev/sdc1].read_io_errs     0
>>> [/dev/sdc1].flush_io_errs    0
>>> [/dev/sdc1].corruption_errs  0
>>> [/dev/sdc1].generation_errs  0
>>> root@xev:/mnt/tmp# uname -a
>>> Linux xev 5.6.0-2-amd64 #1 SMP Debian 5.6.14-1 (2020-05-23) x86_64
>>> GNU/Linux
>>>
>>
>> The read/write io err counters are updated when even repair bio have
>> failed. So in your case you had some checksum errors, but btrfs managed
>> to repair them by reading from a different mirror. In this case those
>> aren't really counted as io errs since in the end you did get the
>> correct data.
>>
> I don't think this is what most people expect.
> A simple way to solve this could be to put the non-fatal errors in
> parentheses if this can be done easily.
> 
> For example:
> [/dev/sdc1].write_io_errs    0 (5)
> 
> IMHO this would be more readable and more useful.

Frankly, just by looking at this example output, without having read any
accompanying documentation, it would be hard to deduce what the difference
between the numbers is. Furthermore, those error counters are persisted on
disk, so if we want to add new persistent error counters the disk format
would have to be changed. On the other hand we *could* make even transient
errors be counted as persistent ones, e.g. in read_io_errs. But this leads
to a different can of worms - if a user sees read_io_errs, should they be
worried that some data is potentially stale or not (given we won't be
distinguishing between unrepairable and transient errors)?

Weighing the pros and cons of adding "transient" errors, I'd say the effort
would be better invested in clearly documenting how errors are counted -
admittedly, that's a department we are severely lacking in!



* Re: btrfs dev sta not updating
  2020-06-23  6:03 ` Nikolay Borisov
  2020-06-23  6:17   ` waxhead
@ 2020-06-23  8:00   ` Russell Coker
  2020-06-23  8:17     ` Nikolay Borisov
  1 sibling, 1 reply; 11+ messages in thread
From: Russell Coker @ 2020-06-23  8:00 UTC (permalink / raw)
  To: linux-btrfs

On Tuesday, 23 June 2020 4:03:37 PM AEST Nikolay Borisov wrote:
> > I have a USB stick that's corrupted, I get the above kernel messages when
> > I
> > try to copy files from it.  But according to btrfs dev sta it has had 0
> > read and 0 corruption errors.
> > 
> > root@xev:/mnt/tmp# btrfs dev sta .
> > [/dev/sdc1].write_io_errs    0
> > [/dev/sdc1].read_io_errs     0
> > [/dev/sdc1].flush_io_errs    0
> > [/dev/sdc1].corruption_errs  0
> > [/dev/sdc1].generation_errs  0
> > root@xev:/mnt/tmp# uname -a
> > Linux xev 5.6.0-2-amd64 #1 SMP Debian 5.6.14-1 (2020-05-23) x86_64
> > GNU/Linux
> The read/write io err counters are updated when even repair bio have
> failed. So in your case you had some checksum errors, but btrfs managed
> to repair them by reading from a different mirror. In this case those
> aren't really counted as io errs since in the end you did get the
> correct data.

In this case I'm getting application IO errors and lost data, so even if the 
error count is designed not to count recovered errors, it's still not doing 
the right thing.

# md5sum *
md5sum: 'Rise of the Machines S1 Ep6 - Mega Digger-qcOpMtIWsrgN.mp4': Input/
output error
md5sum: 'Rise of the Machines S1 Ep7 - Ultimate Dragster-Ke9hMplfQAWF.mp4': 
Input/output error
md5sum: 'Rise of the Machines S1 Ep8 - Aircraft Carrier-Qxht6qMEwMKr.mp4': 
Input/output error
^C
# btrfs dev sta .
[/dev/sdc1].write_io_errs    0
[/dev/sdc1].read_io_errs     0
[/dev/sdc1].flush_io_errs    0
[/dev/sdc1].corruption_errs  0
[/dev/sdc1].generation_errs  0
# tail /var/log/kern.log
Jun 23 17:48:40 xev kernel: [417603.547748] BTRFS warning (device sdc1): csum 
failed root 5 ino 275 off 59580416 csum 0x8941f998 expected csum 0xb5b581fc 
mirror 1
Jun 23 17:48:40 xev kernel: [417603.609861] BTRFS warning (device sdc1): csum 
failed root 5 ino 275 off 60628992 csum 0x8941f998 expected csum 0x4b6c9883 
mirror 1
Jun 23 17:48:40 xev kernel: [417603.672251] BTRFS warning (device sdc1): csum 
failed root 5 ino 275 off 61677568 csum 0x8941f998 expected csum 0x89f5fb68 
mirror 1
# uname -a
Linux xev 5.6.0-2-amd64 #1 SMP Debian 5.6.14-1 (2020-05-23) x86_64 GNU/Linux

On Tuesday, 23 June 2020 4:17:55 PM AEST waxhead wrote:
> I don't think this is what most people expect.
> A simple way to solve this could be to put the non-fatal errors in
> parentheses if this can be done easily.

I think most people would expect a "device stats" command to just give stats 
for the device and not refer to what happens at a higher level.  If a device 
is producing corruption or read errors, then "device stats" should report 
that.

On Tuesday, 23 June 2020 5:11:05 PM AEST Nikolay Borisov wrote:
> read_io_errs. But this leads to a different can of worms - if a user
> sees read_io_errs should they be worried because potentially some data
> is stale or not (give we won't be distinguishing between unrepairable vs
> transient ones).

If a user sees errors reported, their degree of worry should be based on the 
degree to which they use RAID and have decent backups.  If you have RAID-1 
and only 1 device has errors, then you are OK.  If you have 2 devices with 
errors, then you have a problem.

Below is an example of a zpool having errors that were corrected.  The DEVICE 
had an unrecoverable error, but the RAID-Z pool recovered it from other 
devices.  It states that "Applications are unaffected", so the user knows 
how much worry is warranted.

# zpool status
  pool: pet630
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub repaired 380K in 156h39m with 0 errors on Sat Jun 20 13:03:26 
2020
config:

        NAME           STATE     READ WRITE CKSUM
        pet630         ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            sdf        ONLINE       0     0     0
            sdq        ONLINE       0     0     0
            sdd        ONLINE       0     0     0
            sdh        ONLINE       0     0     0
            sdi        ONLINE      41     0     1


-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/





* Re: btrfs dev sta not updating
  2020-06-23  8:00   ` Russell Coker
@ 2020-06-23  8:17     ` Nikolay Borisov
  2020-06-23  9:48       ` Russell Coker
  0 siblings, 1 reply; 11+ messages in thread
From: Nikolay Borisov @ 2020-06-23  8:17 UTC (permalink / raw)
  To: Russell Coker, linux-btrfs



On 23.06.20 г. 11:00 ч., Russell Coker wrote:
> On Tuesday, 23 June 2020 4:03:37 PM AEST Nikolay Borisov wrote:
>>> I have a USB stick that's corrupted, I get the above kernel messages when
>>> I
>>> try to copy files from it.  But according to btrfs dev sta it has had 0
>>> read and 0 corruption errors.
>>>
>>> root@xev:/mnt/tmp# btrfs dev sta .
>>> [/dev/sdc1].write_io_errs    0
>>> [/dev/sdc1].read_io_errs     0
>>> [/dev/sdc1].flush_io_errs    0
>>> [/dev/sdc1].corruption_errs  0
>>> [/dev/sdc1].generation_errs  0
>>> root@xev:/mnt/tmp# uname -a
>>> Linux xev 5.6.0-2-amd64 #1 SMP Debian 5.6.14-1 (2020-05-23) x86_64
>>> GNU/Linux
>> The read/write io err counters are updated when even repair bio have
>> failed. So in your case you had some checksum errors, but btrfs managed
>> to repair them by reading from a different mirror. In this case those
>> aren't really counted as io errs since in the end you did get the
>> correct data.
> 
> In this case I'm getting application IO errors and lost data, so if the error 
> count is designed to not count recovered errors then it's still not doing the 
> right thing.

In this case, yes; however, this was not at all clear from your initial
email. In fact, it seems you have omitted quite a lot of information, so
let's step back and start afresh. First, give information about your
current btrfs setup by providing the output of:

btrfs fi usage /path/to/btrfs

From the output provided it seems the affected mirror is '1', which to
me implies you have at least one other disk containing the same data. So
unless you have errors in mirror 0 as well, those errors should be
recovered by simply reading from that mirror.

> 
> # md5sum *
> md5sum: 'Rise of the Machines S1 Ep6 - Mega Digger-qcOpMtIWsrgN.mp4': Input/
> output error
> md5sum: 'Rise of the Machines S1 Ep7 - Ultimate Dragster-Ke9hMplfQAWF.mp4': 
> Input/output error
> md5sum: 'Rise of the Machines S1 Ep8 - Aircraft Carrier-Qxht6qMEwMKr.mp4': 
> Input/output error
> ^C

You are trying to md5sum 3 distinct files....

> # btrfs dev sta .
> [/dev/sdc1].write_io_errs    0
> [/dev/sdc1].read_io_errs     0
> [/dev/sdc1].flush_io_errs    0
> [/dev/sdc1].corruption_errs  0
> [/dev/sdc1].generation_errs  0
> # tail /var/log/kern.log
> Jun 23 17:48:40 xev kernel: [417603.547748] BTRFS warning (device sdc1): csum 
> failed root 5 ino 275 off 59580416 csum 0x8941f998 expected csum 0xb5b581fc 
> mirror 1
> Jun 23 17:48:40 xev kernel: [417603.609861] BTRFS warning (device sdc1): csum 
> failed root 5 ino 275 off 60628992 csum 0x8941f998 expected csum 0x4b6c9883 
> mirror 1
> Jun 23 17:48:40 xev kernel: [417603.672251] BTRFS warning (device sdc1): csum 
> failed root 5 ino 275 off 61677568 csum 0x8941f998 expected csum 0x89f5fb68 
> mirror 1

Yet here all the errors happen in one inode, namely 275, so the md5sum
commands do not correspond to those errors specifically. Also provide
the name of inode 275. And for good measure, also provide the output of
"btrfs check /dev/sdc1" - this is a read-only command, so if there is
some metadata corruption it will at least not make it any worse.


> # uname -a
> Linux xev 5.6.0-2-amd64 #1 SMP Debian 5.6.14-1 (2020-05-23) x86_64 GNU/Linux
> 
> On Tuesday, 23 June 2020 4:17:55 PM AEST waxhead wrote:
>> I don't think this is what most people expect.
>> A simple way to solve this could be to put the non-fatal errors in
>> parentheses if this can be done easily.
> 
> I think that most people would expect a "device stats" command to just give 
> stats of the device and not refer to what happens at the higher level.  If a 
> device is giving corruption or read errors then "device stats" should tell 
> that.

That's a fair point.

> 
> On Tuesday, 23 June 2020 5:11:05 PM AEST Nikolay Borisov wrote:
>> read_io_errs. But this leads to a different can of worms - if a user
>> sees read_io_errs should they be worried because potentially some data
>> is stale or not (give we won't be distinguishing between unrepairable vs
>> transient ones).
> 
> If a user sees errors reported their degree of worry should be based on the 
> degree to which they use RAID and have decent backups.  If you have RAID-1 and 
> only 1 device has errors then you are OK.  If you have 2 devices with errors 
> then you have a problem.
> 
> Below is an example of a zpool having errors that were corrected.  The DEVICE 
> had an unrecoverable error, but the RAID-Z pool recovered it from other 
> devices.  It states that "Applications are unaffected" so the user knows the 
> degree of worry that should be involved.

BTRFS' internal structure is very different from ZFS', so we don't have
this notion of a vdev consisting of multiple child devices; each physical
device can be considered a separate device. So again, without extending
the on-disk format, i.e. introducing new items or changing the format of
existing ones, we can't accommodate the exact same reports. And while the
on-disk format can be changed (which of course comes with added
complexity), there should be a very good reason to do so. Clearly
something is amiss in your case, but I would like to root-cause it
properly before jumping to conclusions.

> 
> # zpool status
>   pool: pet630
>  state: ONLINE
> status: One or more devices has experienced an unrecoverable error.  An
>         attempt was made to correct the error.  Applications are unaffected.
> action: Determine if the device needs to be replaced, and clear the errors
>         using 'zpool clear' or replace the device with 'zpool replace'.
>    see: http://zfsonlinux.org/msg/ZFS-8000-9P
>   scan: scrub repaired 380K in 156h39m with 0 errors on Sat Jun 20 13:03:26 
> 2020
> config:
> 
>         NAME           STATE     READ WRITE CKSUM
>         pet630         ONLINE       0     0     0
>           raidz1-0     ONLINE       0     0     0
>             sdf        ONLINE       0     0     0
>             sdq        ONLINE       0     0     0
>             sdd        ONLINE       0     0     0
>             sdh        ONLINE       0     0     0
>             sdi        ONLINE      41     0     1
> 
> 


* Re: btrfs dev sta not updating
  2020-06-23  8:17     ` Nikolay Borisov
@ 2020-06-23  9:48       ` Russell Coker
  2020-06-23 11:13         ` Nikolay Borisov
  0 siblings, 1 reply; 11+ messages in thread
From: Russell Coker @ 2020-06-23  9:48 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: linux-btrfs

On Tuesday, 23 June 2020 6:17:00 PM AEST Nikolay Borisov wrote:
> > In this case I'm getting application IO errors and lost data, so if the
> > error count is designed to not count recovered errors then it's still not
> > doing the right thing.
> 
> In this case yes, however this was utterly not clear from your initial
> email. In fact it seems you have omitted quite a lot of information. So
> let's step back and start afresh. So first give information about your
> current btrfs setup by giving the output of:
> 
> btrfs fi usage /path/to/btrfs

# btrfs fi usa .
Overall:
    Device size:                  62.50GiB
    Device allocated:             19.02GiB
    Device unallocated:           43.48GiB
    Device missing:                  0.00B
    Used:                         16.26GiB
    Free (estimated):             44.25GiB      (min: 22.51GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:               17.06MiB      (used: 0.00B)

Data,single: Size:17.01GiB, Used:16.23GiB (95.43%)
   /dev/sdc1      17.01GiB

Metadata,DUP: Size:1.00GiB, Used:17.19MiB (1.68%)
   /dev/sdc1       2.00GiB

System,DUP: Size:8.00MiB, Used:16.00KiB (0.20%)
   /dev/sdc1      16.00MiB

Unallocated:
   /dev/sdc1      43.48GiB

> From the output provided it seems the affected mirror is '1', which to
> me implies you have at least another disk containing the same data. So
> unless you have errors in mirror 0 as well those errors should be
> recovered from by simply reading from that mirror.
> 
> > # md5sum *
> > md5sum: 'Rise of the Machines S1 Ep6 - Mega Digger-qcOpMtIWsrgN.mp4':
> > Input/ output error
> > md5sum: 'Rise of the Machines S1 Ep7 - Ultimate
> > Dragster-Ke9hMplfQAWF.mp4':
> > Input/output error
> > md5sum: 'Rise of the Machines S1 Ep8 - Aircraft Carrier-Qxht6qMEwMKr.mp4':
> > Input/output error
> > ^C
> 
> You are trying to md5sum 3 distinct files....

There are more files; some of them were read correctly.

> > # btrfs dev sta .
> > [/dev/sdc1].write_io_errs    0
> > [/dev/sdc1].read_io_errs     0
> > [/dev/sdc1].flush_io_errs    0
> > [/dev/sdc1].corruption_errs  0
> > [/dev/sdc1].generation_errs  0
> > # tail /var/log/kern.log
> > Jun 23 17:48:40 xev kernel: [417603.547748] BTRFS warning (device sdc1):
> > csum failed root 5 ino 275 off 59580416 csum 0x8941f998 expected csum
> > 0xb5b581fc mirror 1
> > Jun 23 17:48:40 xev kernel: [417603.609861] BTRFS warning (device sdc1):
> > csum failed root 5 ino 275 off 60628992 csum 0x8941f998 expected csum
> > 0x4b6c9883 mirror 1
> > Jun 23 17:48:40 xev kernel: [417603.672251] BTRFS warning (device sdc1):
> > csum failed root 5 ino 275 off 61677568 csum 0x8941f998 expected csum
> > 0x89f5fb68 mirror 1
> 
> Yet here all the errors happen in one inode, namely 275. So the md5sum
> commands do not correspond to those errors specifically. Also provide
> the name of inode 275. And for good measure also provide the output of
> "btrfs check /dev/sdc1" - this is a read only command so if there is
> some metadata corruption it will at least not make it any worse.

# ls -li /mnt/tmp|grep 275
275 -rw-r--r--. 1 root root  507979219 Jun  3 11:05 Rise of the Machines S1 
Ep8 - Aircraft Carrier-Qxht6qMEwMKr.mp4
# umount /mnt/tmp
# btrfs check /dev/sdc1
Opening filesystem to check...
Checking filesystem on /dev/sdc1
UUID: 841b569f-63ab-477f-b603-64e4e4339146
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 17446019072 bytes used, no error found
total csum bytes: 17014904
total tree bytes: 18038784
total fs tree bytes: 81920
total extent tree bytes: 114688
btree space waste bytes: 669647
file data blocks allocated: 17427980288
 referenced 17427980288

I don't mind making problems worse; there is no precious data on that 
device, just downloads of some TV shows which are also stored elsewhere.  
But I don't want such problems happening to more important data.

> > On Tuesday, 23 June 2020 5:11:05 PM AEST Nikolay Borisov wrote:
> >> read_io_errs. But this leads to a different can of worms - if a user
> >> sees read_io_errs should they be worried because potentially some data
> >> is stale or not (give we won't be distinguishing between unrepairable vs
> >> transient ones).
> > 
> > If a user sees errors reported their degree of worry should be based on
> > the
> > degree to which they use RAID and have decent backups.  If you have RAID-1
> > and only 1 device has errors then you are OK.  If you have 2 devices with
> > errors then you have a problem.
> > 
> > Below is an example of a zpool having errors that were corrected.  The
> > DEVICE had an unrecoverable error, but the RAID-Z pool recovered it from
> > other devices.  It states that "Applications are unaffected" so the user
> > knows the degree of worry that should be involved.
> 
> BTRFS' internal structure is very different from ZFS' so we don't have
> this notion of vdev, consisting of multiple child devices. And so each
> physical + vdev can be considered a separate device. So again, without
> extending the on-disk format i.e introducing new items or changing the
> format of existing ones we can't accommodate the exact same reports. And
> while the on-disk format can be changed (which of course comes with
> added complexity) there should be a very good reason to do so. Clearly
> something is amiss in your case, but I would like to first properly root
> cause it before jumping to conclusions.

ZFS gives fewer numbers when asked for device status, but the numbers 
provide more useful information.

Also, it would be possible to keep numbers only in RAM, where they get lost 
when the filesystem is unmounted.  That would still be very useful for 
monitoring systems.  I want to know about problems and replace disks before 
the problems become significant enough to lose data.
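For monitoring purposes the existing output is already easy to scrape; a
minimal sketch (plain Python, the expected input layout is simply the
"[device].counter value" format shown earlier in this thread):

```python
# Minimal parser for `btrfs device stats` output, suitable for feeding a
# monitoring system (sketch; assumes the "[dev].counter value" layout
# shown earlier in the thread).
def parse_dev_stats(text):
    stats = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        key, value = line.split()            # "[/dev/sdc1].read_io_errs", "0"
        dev, counter = key.rsplit(".", 1)    # split device from counter name
        stats.setdefault(dev.strip("[]"), {})[counter] = int(value)
    return stats

def devices_with_errors(stats):
    """Return devices whose counters are not all zero."""
    return [dev for dev, counters in stats.items()
            if any(v != 0 for v in counters.values())]
```

Recent btrfs-progs also accept `btrfs device stats -c`, which exits
non-zero when any counter is set, so a cron job can alert without parsing
at all - though of course either approach only helps once the counters
actually get incremented.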

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/





* Re: btrfs dev sta not updating
  2020-06-23  9:48       ` Russell Coker
@ 2020-06-23 11:13         ` Nikolay Borisov
  2020-06-23 11:21           ` Russell Coker
  2020-06-24 11:39           ` Zygo Blaxell
  0 siblings, 2 replies; 11+ messages in thread
From: Nikolay Borisov @ 2020-06-23 11:13 UTC (permalink / raw)
  To: Russell Coker; +Cc: linux-btrfs



On 23.06.20 г. 12:48 ч., Russell Coker wrote:
> On Tuesday, 23 June 2020 6:17:00 PM AEST Nikolay Borisov wrote:
>>> In this case I'm getting application IO errors and lost data, so if the
>>> error count is designed to not count recovered errors then it's still not
>>> doing the right thing.
>>
>> In this case yes, however this was utterly not clear from your initial
>> email. In fact it seems you have omitted quite a lot of information. So
>> let's step back and start afresh. So first give information about your
>> current btrfs setup by giving the output of:
>>
>> btrfs fi usage /path/to/btrfs
> 
> # btrfs fi usa .
> Overall:
>     Device size:                  62.50GiB
>     Device allocated:             19.02GiB
>     Device unallocated:           43.48GiB
>     Device missing:                  0.00B
>     Used:                         16.26GiB
>     Free (estimated):             44.25GiB      (min: 22.51GiB)
>     Data ratio:                       1.00
>     Metadata ratio:                   2.00
>     Global reserve:               17.06MiB      (used: 0.00B)
> 
> Data,single: Size:17.01GiB, Used:16.23GiB (95.43%)
>    /dev/sdc1      17.01GiB
> 
> Metadata,DUP: Size:1.00GiB, Used:17.19MiB (1.68%)
>    /dev/sdc1       2.00GiB
> 
> System,DUP: Size:8.00MiB, Used:16.00KiB (0.20%)
>    /dev/sdc1      16.00MiB
> 
> Unallocated:
>    /dev/sdc1      43.48GiB

Do you use compression on this filesystem, i.e. have you mounted it with
the -o compress= option?

Based on this data alone it's evident that you don't really have mirrors
of the data; in that case, having experienced the checksum errors should
indeed have resulted in the error counters being incremented. I'll look
into this.

<snip>


* Re: btrfs dev sta not updating
  2020-06-23 11:13         ` Nikolay Borisov
@ 2020-06-23 11:21           ` Russell Coker
  2020-06-24 11:39           ` Zygo Blaxell
  1 sibling, 0 replies; 11+ messages in thread
From: Russell Coker @ 2020-06-23 11:21 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: linux-btrfs

On Tuesday, 23 June 2020 9:13:04 PM AEST Nikolay Borisov wrote:
> > # btrfs fi usa .
> > Overall:
> > Device size:                  62.50GiB
> > Device allocated:             19.02GiB
> > Device unallocated:           43.48GiB
> > Device missing:                  0.00B
> > Used:                         16.26GiB
> > Free (estimated):             44.25GiB      (min: 22.51GiB)
> > Data ratio:                       1.00
> > Metadata ratio:                   2.00
> > Global reserve:               17.06MiB      (used: 0.00B)
> > 
> > Data,single: Size:17.01GiB, Used:16.23GiB (95.43%)
> > /dev/sdc1      17.01GiB
> > 
> > Metadata,DUP: Size:1.00GiB, Used:17.19MiB (1.68%)
> > /dev/sdc1       2.00GiB
> > 
> > System,DUP: Size:8.00MiB, Used:16.00KiB (0.20%)
> > /dev/sdc1      16.00MiB
> > 
> > Unallocated:
> > /dev/sdc1      43.48GiB
> 
> Do you use compression on this filesystem i.e have you mounted with
> -ocompression= option ?

No, I used the default mount options with the Debian build of kernel 5.6.14.  
Everything was pretty much default: I made a filesystem, copied a bunch of 
large files to it, tried to read them, and got problems.

It was a storage device I suspected of having errors; copying files to/from 
it with BTRFS is a good way of exposing them.
 
> Based on this data alone it's evident that you don't really have mirrors
> of the data, in this case having experienced the checksum errors should
> have indeed resulted in error counters being incremented. I'll look into
> this.

Thanks.

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/





* Re: btrfs dev sta not updating
  2020-06-23 11:13         ` Nikolay Borisov
  2020-06-23 11:21           ` Russell Coker
@ 2020-06-24 11:39           ` Zygo Blaxell
  2020-06-24 13:04             ` Nikolay Borisov
  1 sibling, 1 reply; 11+ messages in thread
From: Zygo Blaxell @ 2020-06-24 11:39 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: Russell Coker, linux-btrfs

On Tue, Jun 23, 2020 at 02:13:04PM +0300, Nikolay Borisov wrote:
> 
> 
> On 23.06.20 г. 12:48 ч., Russell Coker wrote:
> > On Tuesday, 23 June 2020 6:17:00 PM AEST Nikolay Borisov wrote:
> >>> In this case I'm getting application IO errors and lost data, so if the
> >>> error count is designed to not count recovered errors then it's still not
> >>> doing the right thing.
> >>
> >> In this case yes, however this was utterly not clear from your initial
> >> email. In fact it seems you have omitted quite a lot of information. So
> >> let's step back and start afresh. So first give information about your
> >> current btrfs setup by giving the output of:
> >>
> >> btrfs fi usage /path/to/btrfs
> > 
> > # btrfs fi usa .
> > Overall:
> >     Device size:                  62.50GiB
> >     Device allocated:             19.02GiB
> >     Device unallocated:           43.48GiB
> >     Device missing:                  0.00B
> >     Used:                         16.26GiB
> >     Free (estimated):             44.25GiB      (min: 22.51GiB)
> >     Data ratio:                       1.00
> >     Metadata ratio:                   2.00
> >     Global reserve:               17.06MiB      (used: 0.00B)
> > 
> > Data,single: Size:17.01GiB, Used:16.23GiB (95.43%)
> >    /dev/sdc1      17.01GiB
> > 
> > Metadata,DUP: Size:1.00GiB, Used:17.19MiB (1.68%)
> >    /dev/sdc1       2.00GiB
> > 
> > System,DUP: Size:8.00MiB, Used:16.00KiB (0.20%)
> >    /dev/sdc1      16.00MiB
> > 
> > Unallocated:
> >    /dev/sdc1      43.48GiB
> 
> Do you use compression on this filesystem i.e have you mounted with
> -ocompression= option ?
> 
> Based on this data alone it's evident that you don't really have mirrors
> of the data, in this case having experienced the checksum errors should
> have indeed resulted in error counters being incremented. I'll look into
> this.

In commit 0cc068e6ee59 "btrfs: don't report readahead errors and don't
update statistics" we stopped counting errors if they occur during
readahead.  If there's a mirror available, we do still correct errors
in that case.  Errors in readahead are fairly common, e.g. there are
usually a few during lvm pvmove operations, so it may make sense
not to count them; however, if the errors are not counted, they should
also not be repaired.  Instead, they should be repaired only during
non-readahead reads (i.e. when the repairs will be counted in dev stats).
Repairing errors without counting is bad because it hides an important
indicator of device failure.

This thread might be a different issue since there aren't any mirrors
with single data, but if you're looking at dev stats correctness anyway...
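
The counting policy described above can be sketched as a toy model; the flag, the counter, and the repair logic are illustrative stand-ins, not the kernel's actual structures:

```python
from dataclasses import dataclass, field

@dataclass
class DevStats:
    # Models one of the counters shown by `btrfs dev stats`
    corruption_errs: int = 0

@dataclass
class Device:
    stats: DevStats = field(default_factory=DevStats)
    has_mirror: bool = False

def on_csum_error(dev: Device, readahead: bool) -> bool:
    """Return True if the error was repaired.

    Proposed policy: readahead errors are neither counted nor repaired;
    a later non-readahead read hits the same error, counts it, and
    repairs it from a mirror if one exists.
    """
    if readahead:
        return False                 # ignore: don't count, don't repair
    dev.stats.corruption_errs += 1   # every counted error is visible in dev stats
    return dev.has_mirror            # repair is only possible with a mirror

dev = Device(has_mirror=True)
on_csum_error(dev, readahead=True)              # readahead: not counted
repaired = on_csum_error(dev, readahead=False)  # normal read: counted + repaired
assert dev.stats.corruption_errs == 1 and repaired
```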

> <snip>


* Re: btrfs dev sta not updating
  2020-06-24 11:39           ` Zygo Blaxell
@ 2020-06-24 13:04             ` Nikolay Borisov
  0 siblings, 0 replies; 11+ messages in thread
From: Nikolay Borisov @ 2020-06-24 13:04 UTC (permalink / raw)
  To: Zygo Blaxell; +Cc: Russell Coker, linux-btrfs



On 24.06.20 г. 14:39 ч., Zygo Blaxell wrote:
> On Tue, Jun 23, 2020 at 02:13:04PM +0300, Nikolay Borisov wrote:
>>
>>
>> On 23.06.20 г. 12:48 ч., Russell Coker wrote:
>>> On Tuesday, 23 June 2020 6:17:00 PM AEST Nikolay Borisov wrote:
>>>>> In this case I'm getting application IO errors and lost data, so if the
>>>>> error count is designed to not count recovered errors then it's still not
>>>>> doing the right thing.
>>>>
>>>> In this case yes, however this was utterly not clear from your initial
>>>> email. In fact it seems you have omitted quite a lot of information. So
>>>> let's step back and start afresh. So first give information about your
>>>> current btrfs setup by giving the output of:
>>>>
>>>> btrfs fi usage /path/to/btrfs
>>>
>>> # btrfs fi usa .
>>> Overall:
>>>     Device size:                  62.50GiB
>>>     Device allocated:             19.02GiB
>>>     Device unallocated:           43.48GiB
>>>     Device missing:                  0.00B
>>>     Used:                         16.26GiB
>>>     Free (estimated):             44.25GiB      (min: 22.51GiB)
>>>     Data ratio:                       1.00
>>>     Metadata ratio:                   2.00
>>>     Global reserve:               17.06MiB      (used: 0.00B)
>>>
>>> Data,single: Size:17.01GiB, Used:16.23GiB (95.43%)
>>>    /dev/sdc1      17.01GiB
>>>
>>> Metadata,DUP: Size:1.00GiB, Used:17.19MiB (1.68%)
>>>    /dev/sdc1       2.00GiB
>>>
>>> System,DUP: Size:8.00MiB, Used:16.00KiB (0.20%)
>>>    /dev/sdc1      16.00MiB
>>>
>>> Unallocated:
>>>    /dev/sdc1      43.48GiB
>>
>> Do you use compression on this filesystem i.e have you mounted with
>> -ocompression= option ?
>>
>> Based on this data alone it's evident that you don't really have mirrors
>> of the data, in this case having experienced the checksum errors should
>> have indeed resulted in error counters being incremented. I'll look into
>> this.
> 
> In commit 0cc068e6ee59 "btrfs: don't report readahead errors and don't
> update statistics" we stopped counting errors if they occur during
> readahead.  If there's a mirror available, we do still correct errors
> in that case.  Errors in readahead are fairly common, e.g. there are
> usually a few during lvm pvmove operations, so it may make sense
> not to count them; however, if the errors are not counted, they should
> also not be repaired.  Instead, they should be repaired only during
> non-readahead reads (i.e. when the repairs will be counted in dev stats).
> Repairing errors without counting is bad because it hides an important
> indicator of device failure.
> 
> This thread might be a different issue since there aren't any mirrors
> with single data, but if you're looking at dev stats correctness anyway...

Turns out this is a genuine bug: error stats are only ever updated in
btrfs_end_bio, which happens well before checksums are checked. In fact,
at the time we check checksums in
end_bio_extent_readpage->readpage_end_io_hook
(btrfs_readpage_end_io_hook) we don't (currently) have enough context to
increment the error counters. I'm currently testing a tentative fix for this.
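
The two completion stages can be modeled in a few lines to show the effect: device stats are bumped only at bio completion, which runs before checksum verification, while the later csum check has no device to charge the error to. All names here are an illustrative sketch, not the kernel's actual code:

```python
import zlib

class Device:
    """Per-device counters, as shown by `btrfs dev stats` (illustrative)."""
    def __init__(self):
        self.read_io_errs = 0
        self.corruption_errs = 0

def end_bio(dev: Device, io_error: bool) -> None:
    # Stage 1, models btrfs_end_bio: runs at bio completion, before any
    # checksum verification, so only transport-level I/O errors are
    # visible here and only those get counted.
    if io_error:
        dev.read_io_errs += 1

def readpage_end_io(data: bytes, stored_csum: int) -> bool:
    # Stage 2, models readpage_end_io_hook: the csum check detects
    # corruption, but in the buggy path no device reference is in scope
    # here, so corruption_errs cannot be incremented.
    return (zlib.crc32(data) & 0xFFFFFFFF) == stored_csum

dev = Device()
end_bio(dev, io_error=False)                       # the read itself succeeded
stored = zlib.crc32(b"\x00" * 4096) & 0xFFFFFFFF   # csum recorded at write time
bad = b"\x01" + b"\x00" * 4095                     # single-bit corruption on read
assert not readpage_end_io(bad, stored)            # csum failed...
assert dev.corruption_errs == 0                    # ...yet dev stats stay at zero
```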

> 
>> <snip>
> 

