* btrfs-cleaner 100% busy on an idle filesystem with 4.19.3
@ 2018-11-22 12:31 Tomasz Chmielewski
  2018-11-22 12:46 ` Nikolay Borisov
  0 siblings, 1 reply; 6+ messages in thread
From: Tomasz Chmielewski @ 2018-11-22 12:31 UTC (permalink / raw)
  To: Btrfs BTRFS

Yet another system upgraded to 4.19 and showing strange issues.

btrfs-cleaner is showing as ~90-100% busy in iotop:

    TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
   1340 be/4 root        0.00 B/s    0.00 B/s  0.00 % 92.88 % 
[btrfs-cleaner]


Note disk read, disk write are 0.00 B/s.


iostat -mx 1 shows all disks are ~100% busy, yet there are 0 reads and 0 
writes to them:

Device            r/s     w/s     rMB/s     wMB/s   rrqm/s   wrqm/s  
%rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda              0.00    0.00      0.00      0.00     0.00     0.00   
0.00   0.00    0.00    0.00   0.91     0.00     0.00   0.00  90.80
sdb              0.00    0.00      0.00      0.00     0.00     0.00   
0.00   0.00    0.00    0.00   1.00     0.00     0.00   0.00 100.00
sdc              0.00    0.00      0.00      0.00     0.00     0.00   
0.00   0.00    0.00    0.00   1.00     0.00     0.00   0.00 100.00



btrfs-cleaner stays ~100% busy even after reboots. The system is
generally not very responsive.



# echo w > /proc/sysrq-trigger

# dmesg -c
[  931.585611] sysrq: SysRq : Show Blocked State
[  931.585715]   task                        PC stack   pid father
[  931.590168] btrfs-cleaner   D    0  1340      2 0x80000000
[  931.590175] Call Trace:
[  931.590190]  __schedule+0x29e/0x840
[  931.590195]  schedule+0x2c/0x80
[  931.590199]  schedule_timeout+0x258/0x360
[  931.590204]  io_schedule_timeout+0x1e/0x50
[  931.590208]  wait_for_completion_io+0xb7/0x140
[  931.590214]  ? wake_up_q+0x80/0x80
[  931.590219]  submit_bio_wait+0x61/0x90
[  931.590225]  blkdev_issue_discard+0x7a/0xd0
[  931.590266]  btrfs_issue_discard+0x123/0x160 [btrfs]
[  931.590299]  btrfs_discard_extent+0xd8/0x160 [btrfs]
[  931.590335]  btrfs_finish_extent_commit+0xe2/0x240 [btrfs]
[  931.590382]  btrfs_commit_transaction+0x573/0x840 [btrfs]
[  931.590415]  ? btrfs_block_rsv_check+0x25/0x70 [btrfs]
[  931.590456]  __btrfs_end_transaction+0x2be/0x2d0 [btrfs]
[  931.590493]  btrfs_end_transaction_throttle+0x13/0x20 [btrfs]
[  931.590530]  btrfs_drop_snapshot+0x489/0x800 [btrfs]
[  931.590567]  btrfs_clean_one_deleted_snapshot+0xbb/0xf0 [btrfs]
[  931.590607]  cleaner_kthread+0x136/0x160 [btrfs]
[  931.590612]  kthread+0x120/0x140
[  931.590646]  ? btree_submit_bio_start+0x20/0x20 [btrfs]
[  931.590658]  ? kthread_bind+0x40/0x40
[  931.590661]  ret_from_fork+0x22/0x40




After rebooting to 4.19.3, I've started seeing read errors. There were
no errors with the previous kernel (4.17.14); the disks are healthy
according to SMART, and no errors were reported when we read the whole
surface with dd.

# grep -i btrfs.*corrected /var/log/syslog|wc -l
156

Things like:

Nov 22 12:17:43 lxd05 kernel: [  711.538836] BTRFS info (device sda2): 
read error corrected: ino 0 off 3739083145216 (dev /dev/sdc2 sector 
101088384)
Nov 22 12:17:43 lxd05 kernel: [  711.538905] BTRFS info (device sda2): 
read error corrected: ino 0 off 3739083149312 (dev /dev/sdc2 sector 
101088392)
Nov 22 12:17:43 lxd05 kernel: [  711.538958] BTRFS info (device sda2): 
read error corrected: ino 0 off 3739083153408 (dev /dev/sdc2 sector 
101088400)
Nov 22 12:17:43 lxd05 kernel: [  711.539006] BTRFS info (device sda2): 
read error corrected: ino 0 off 3739083157504 (dev /dev/sdc2 sector 
101088408)


Yet device stats report 0 errors; not sure whether that's expected:

# btrfs device stats /data/lxd
[/dev/sda2].write_io_errs    0
[/dev/sda2].read_io_errs     0
[/dev/sda2].flush_io_errs    0
[/dev/sda2].corruption_errs  0
[/dev/sda2].generation_errs  0
[/dev/sdb2].write_io_errs    0
[/dev/sdb2].read_io_errs     0
[/dev/sdb2].flush_io_errs    0
[/dev/sdb2].corruption_errs  0
[/dev/sdb2].generation_errs  0
[/dev/sdc2].write_io_errs    0
[/dev/sdc2].read_io_errs     0
[/dev/sdc2].flush_io_errs    0
[/dev/sdc2].corruption_errs  0
[/dev/sdc2].generation_errs  0


# btrfs fi usage /data/lxd
Overall:
     Device size:                   5.18TiB
     Device allocated:              2.36TiB
     Device unallocated:            2.83TiB
     Device missing:                  0.00B
     Used:                          1.93TiB
     Free (estimated):              1.62TiB      (min: 1.62TiB)
     Data ratio:                       2.00
     Metadata ratio:                   2.00
     Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID1: Size:1.17TiB, Used:983.87GiB
    /dev/sda2     796.00GiB
    /dev/sdb2     799.00GiB
    /dev/sdc2     797.00GiB

Metadata,RAID1: Size:10.00GiB, Used:4.09GiB
    /dev/sda2       8.00GiB
    /dev/sdb2       5.00GiB
    /dev/sdc2       7.00GiB

System,RAID1: Size:64.00MiB, Used:224.00KiB
    /dev/sda2      32.00MiB
    /dev/sdb2      64.00MiB
    /dev/sdc2      32.00MiB

Unallocated:
    /dev/sda2     964.46GiB
    /dev/sdb2     964.43GiB
    /dev/sdc2     964.46GiB


# btrfs fi df /data/lxd
Data, RAID1: total=1.17TiB, used=983.04GiB
System, RAID1: total=64.00MiB, used=224.00KiB
Metadata, RAID1: total=10.00GiB, used=4.08GiB
GlobalReserve, single: total=512.00MiB, used=0.00B


# btrfs fi show /data/lxd
Label: 'lxd5'  uuid: 2b77b498-a644-430b-9dd9-2ad3d381448a
         Total devices 3 FS bytes used 987.12GiB
         devid    1 size 1.73TiB used 804.03GiB path /dev/sda2
         devid    2 size 1.73TiB used 804.06GiB path /dev/sdb2
         devid    3 size 1.73TiB used 804.03GiB path /dev/sdc2


Tomasz Chmielewski
https://lxadm.com

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: btrfs-cleaner 100% busy on an idle filesystem with 4.19.3
  2018-11-22 12:31 btrfs-cleaner 100% busy on an idle filesystem with 4.19.3 Tomasz Chmielewski
@ 2018-11-22 12:46 ` Nikolay Borisov
  2018-11-22 13:07   ` Tomasz Chmielewski
  0 siblings, 1 reply; 6+ messages in thread
From: Nikolay Borisov @ 2018-11-22 12:46 UTC (permalink / raw)
  To: Tomasz Chmielewski, Btrfs BTRFS



On 22.11.18, 14:31, Tomasz Chmielewski wrote:
> Yet another system upgraded to 4.19 and showing strange issues.
> 
> btrfs-cleaner is showing as ~90-100% busy in iotop:
> 
>    TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
>   1340 be/4 root        0.00 B/s    0.00 B/s  0.00 % 92.88 %
> [btrfs-cleaner]
> 
> 
> Note disk read, disk write are 0.00 B/s.
> 
> 
> iostat -mx 1 shows all disks are ~100% busy, yet there are 0 reads and 0
> writes to them:
> 
> Device            r/s     w/s     rMB/s     wMB/s   rrqm/s   wrqm/s 
> %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
> sda              0.00    0.00      0.00      0.00     0.00     0.00  
> 0.00   0.00    0.00    0.00   0.91     0.00     0.00   0.00  90.80
> sdb              0.00    0.00      0.00      0.00     0.00     0.00  
> 0.00   0.00    0.00    0.00   1.00     0.00     0.00   0.00 100.00
> sdc              0.00    0.00      0.00      0.00     0.00     0.00  
> 0.00   0.00    0.00    0.00   1.00     0.00     0.00   0.00 100.00
> 
> 
> 
> btrfs-cleaner persists 100% busy after reboots. The system is generally
> not very responsive.
> 
> 
> 
> # echo w > /proc/sysrq-trigger
> 
> # dmesg -c
> [  931.585611] sysrq: SysRq : Show Blocked State
> [  931.585715]   task                        PC stack   pid father
> [  931.590168] btrfs-cleaner   D    0  1340      2 0x80000000
> [  931.590175] Call Trace:
> [  931.590190]  __schedule+0x29e/0x840
> [  931.590195]  schedule+0x2c/0x80
> [  931.590199]  schedule_timeout+0x258/0x360
> [  931.590204]  io_schedule_timeout+0x1e/0x50
> [  931.590208]  wait_for_completion_io+0xb7/0x140
> [  931.590214]  ? wake_up_q+0x80/0x80
> [  931.590219]  submit_bio_wait+0x61/0x90
> [  931.590225]  blkdev_issue_discard+0x7a/0xd0
> [  931.590266]  btrfs_issue_discard+0x123/0x160 [btrfs]
> [  931.590299]  btrfs_discard_extent+0xd8/0x160 [btrfs]
> [  931.590335]  btrfs_finish_extent_commit+0xe2/0x240 [btrfs]
> [  931.590382]  btrfs_commit_transaction+0x573/0x840 [btrfs]
> [  931.590415]  ? btrfs_block_rsv_check+0x25/0x70 [btrfs]
> [  931.590456]  __btrfs_end_transaction+0x2be/0x2d0 [btrfs]
> [  931.590493]  btrfs_end_transaction_throttle+0x13/0x20 [btrfs]
> [  931.590530]  btrfs_drop_snapshot+0x489/0x800 [btrfs]
> [  931.590567]  btrfs_clean_one_deleted_snapshot+0xbb/0xf0 [btrfs]
> [  931.590607]  cleaner_kthread+0x136/0x160 [btrfs]
> [  931.590612]  kthread+0x120/0x140
> [  931.590646]  ? btree_submit_bio_start+0x20/0x20 [btrfs]
> [  931.590658]  ? kthread_bind+0x40/0x40
> [  931.590661]  ret_from_fork+0x22/0x40
> 



It seems your filesystem is mounted with the DISCARD option, meaning
every delete results in a discard; this is highly suboptimal for SSDs.
Try remounting the fs without the discard option and see if it helps.
Generally, for discard you want to submit it in big batches (which is
what fstrim does) so that the FTL on the SSD can apply any
optimisations it might have up its sleeve.
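
To make that concrete, here is a minimal sketch. The fstab line below
is hypothetical (the options field is invented for illustration; only
the mount point and filesystem UUID come from this thread), and the
commented remount/fstrim commands need root on a live system:

```shell
# Drop "discard" from an fstab options field with sed
# (covers both the "discard," and ",discard" placements):
line='UUID=2b77b498-a644-430b-9dd9-2ad3d381448a /data/lxd btrfs defaults,discard 0 0'
printf '%s\n' "$line" | sed 's/discard,//; s/,discard//'
# prints: UUID=2b77b498-a644-430b-9dd9-2ad3d381448a /data/lxd btrfs defaults 0 0

# Then remount without discard and batch the trims instead (root only):
#   mount -o remount,nodiscard /data/lxd
#   fstrim -v /data/lxd
```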


Also, would you care to share the SMART data, plus the model and make
of the SSDs?

> 
> 
> 
> After rebooting to 4.19.3, I've started seeing read errors. There were
> no errors with a previous kernel (4.17.14); disks are healthy according
> to SMART; no errors reported when we read the whole surface with dd.
> 
> # grep -i btrfs.*corrected /var/log/syslog|wc -l
> 156
> 
> Things like:
> 
> Nov 22 12:17:43 lxd05 kernel: [  711.538836] BTRFS info (device sda2):
> read error corrected: ino 0 off 3739083145216 (dev /dev/sdc2 sector
> 101088384)
> Nov 22 12:17:43 lxd05 kernel: [  711.538905] BTRFS info (device sda2):
> read error corrected: ino 0 off 3739083149312 (dev /dev/sdc2 sector
> 101088392)
> Nov 22 12:17:43 lxd05 kernel: [  711.538958] BTRFS info (device sda2):
> read error corrected: ino 0 off 3739083153408 (dev /dev/sdc2 sector
> 101088400)
> Nov 22 12:17:43 lxd05 kernel: [  711.539006] BTRFS info (device sda2):
> read error corrected: ino 0 off 3739083157504 (dev /dev/sdc2 sector
> 101088408)
> 
> 
> Yet - 0 errors, according to stats, not sure if it's expected or not:

Since btrfs was able to fix the read failures on the fly, it doesn't
record them in the stats.  IMO you have problems with your storage
below the filesystem level.
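
Since each corrected-read message names the underlying device, a quick
per-device tally shows where to look below the filesystem. A sketch,
using two of the sample lines you quoted as stand-in input (on the
live system you'd grep /var/log/syslog instead):

```shell
# Count "read error corrected" messages per underlying device.
# Input: two sample syslog lines quoted earlier in this thread.
printf '%s\n' \
  'Nov 22 12:17:43 lxd05 kernel: [  711.538836] BTRFS info (device sda2): read error corrected: ino 0 off 3739083145216 (dev /dev/sdc2 sector 101088384)' \
  'Nov 22 12:17:43 lxd05 kernel: [  711.538905] BTRFS info (device sda2): read error corrected: ino 0 off 3739083149312 (dev /dev/sdc2 sector 101088392)' |
  grep -o 'dev /dev/[a-z0-9]*' | sort | uniq -c
# prints a count per device, here: 2 dev /dev/sdc2
```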

> 
> # btrfs device stats /data/lxd
> [/dev/sda2].write_io_errs    0
> [/dev/sda2].read_io_errs     0
> [/dev/sda2].flush_io_errs    0
> [/dev/sda2].corruption_errs  0
> [/dev/sda2].generation_errs  0
> [/dev/sdb2].write_io_errs    0
> [/dev/sdb2].read_io_errs     0
> [/dev/sdb2].flush_io_errs    0
> [/dev/sdb2].corruption_errs  0
> [/dev/sdb2].generation_errs  0
> [/dev/sdc2].write_io_errs    0
> [/dev/sdc2].read_io_errs     0
> [/dev/sdc2].flush_io_errs    0
> [/dev/sdc2].corruption_errs  0
> [/dev/sdc2].generation_errs  0
> 
> 

<snip>


* Re: btrfs-cleaner 100% busy on an idle filesystem with 4.19.3
  2018-11-22 12:46 ` Nikolay Borisov
@ 2018-11-22 13:07   ` Tomasz Chmielewski
  2018-11-22 14:03     ` Roman Mamedov
  2018-11-23  6:47     ` Chris Murphy
  0 siblings, 2 replies; 6+ messages in thread
From: Tomasz Chmielewski @ 2018-11-22 13:07 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: Btrfs BTRFS

On 2018-11-22 21:46, Nikolay Borisov wrote:

>> # echo w > /proc/sysrq-trigger
>> 
>> # dmesg -c
>> [  931.585611] sysrq: SysRq : Show Blocked State
>> [  931.585715]   task                        PC stack   pid father
>> [  931.590168] btrfs-cleaner   D    0  1340      2 0x80000000
>> [  931.590175] Call Trace:
>> [  931.590190]  __schedule+0x29e/0x840
>> [  931.590195]  schedule+0x2c/0x80
>> [  931.590199]  schedule_timeout+0x258/0x360
>> [  931.590204]  io_schedule_timeout+0x1e/0x50
>> [  931.590208]  wait_for_completion_io+0xb7/0x140
>> [  931.590214]  ? wake_up_q+0x80/0x80
>> [  931.590219]  submit_bio_wait+0x61/0x90
>> [  931.590225]  blkdev_issue_discard+0x7a/0xd0
>> [  931.590266]  btrfs_issue_discard+0x123/0x160 [btrfs]
>> [  931.590299]  btrfs_discard_extent+0xd8/0x160 [btrfs]
>> [  931.590335]  btrfs_finish_extent_commit+0xe2/0x240 [btrfs]
>> [  931.590382]  btrfs_commit_transaction+0x573/0x840 [btrfs]
>> [  931.590415]  ? btrfs_block_rsv_check+0x25/0x70 [btrfs]
>> [  931.590456]  __btrfs_end_transaction+0x2be/0x2d0 [btrfs]
>> [  931.590493]  btrfs_end_transaction_throttle+0x13/0x20 [btrfs]
>> [  931.590530]  btrfs_drop_snapshot+0x489/0x800 [btrfs]
>> [  931.590567]  btrfs_clean_one_deleted_snapshot+0xbb/0xf0 [btrfs]
>> [  931.590607]  cleaner_kthread+0x136/0x160 [btrfs]
>> [  931.590612]  kthread+0x120/0x140
>> [  931.590646]  ? btree_submit_bio_start+0x20/0x20 [btrfs]
>> [  931.590658]  ? kthread_bind+0x40/0x40
>> [  931.590661]  ret_from_fork+0x22/0x40
>> 
> 
> It seems your filesystem is mounted with the DISCARD option, meaning
> every delete results in a discard; this is highly suboptimal for
> SSDs. Try remounting the fs without the discard option and see if it
> helps. Generally, for discard you want to submit it in big batches
> (which is what fstrim does) so that the FTL on the SSD can apply any
> optimisations it might have up its sleeve.

Spot on!

Removed "discard" from fstab and added "ssd", rebooted - no more 
btrfs-cleaner running.

Do you know if the issue you described (discard being highly
suboptimal for SSDs) affects other filesystems to a similar extent,
e.g. when using ext4 on an SSD?


> Would you finally care to share the smart data + the model and make of
> the ssd?

2x these:

Model Family:     Samsung based SSDs
Device Model:     SAMSUNG MZ7LM1T9HCJM-00005
Firmware Version: GXT1103Q
User Capacity:    1,920,383,410,176 bytes [1.92 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device

1x this:

Device Model:     Micron_5200_MTFDDAK1T9TDC
Firmware Version: D1MU004
User Capacity:    1,920,383,410,176 bytes [1.92 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches


But since the issue seems to have been the unneeded discard option,
I'm not pasting the full SMART data. Thanks for finding this out.


Tomasz Chmielewski


* Re: btrfs-cleaner 100% busy on an idle filesystem with 4.19.3
  2018-11-22 13:07   ` Tomasz Chmielewski
@ 2018-11-22 14:03     ` Roman Mamedov
  2018-11-22 14:13       ` Qu Wenruo
  2018-11-23  6:47     ` Chris Murphy
  1 sibling, 1 reply; 6+ messages in thread
From: Roman Mamedov @ 2018-11-22 14:03 UTC (permalink / raw)
  To: Tomasz Chmielewski; +Cc: Nikolay Borisov, Btrfs BTRFS

On Thu, 22 Nov 2018 22:07:25 +0900
Tomasz Chmielewski <tch@virtall.com> wrote:

> Spot on!
> 
> Removed "discard" from fstab and added "ssd", rebooted - no more 
> btrfs-cleaner running.

Recently there has been a bugfix for TRIM in Btrfs:
  
  btrfs: Ensure btrfs_trim_fs can trim the whole fs
  https://patchwork.kernel.org/patch/10579539/

Perhaps your upgraded kernel is the first one to contain it, and for
the first time you're seeing TRIM actually *work*, with its real
performance impact on a large fragmented FS, instead of trimming only
a few contiguous unallocated areas.

-- 
With respect,
Roman


* Re: btrfs-cleaner 100% busy on an idle filesystem with 4.19.3
  2018-11-22 14:03     ` Roman Mamedov
@ 2018-11-22 14:13       ` Qu Wenruo
  0 siblings, 0 replies; 6+ messages in thread
From: Qu Wenruo @ 2018-11-22 14:13 UTC (permalink / raw)
  To: Roman Mamedov, Tomasz Chmielewski; +Cc: Nikolay Borisov, Btrfs BTRFS



On 2018/11/22 10:03 PM, Roman Mamedov wrote:
> On Thu, 22 Nov 2018 22:07:25 +0900
> Tomasz Chmielewski <tch@virtall.com> wrote:
> 
>> Spot on!
>>
>> Removed "discard" from fstab and added "ssd", rebooted - no more 
>> btrfs-cleaner running.
> 
> Recently there has been a bugfix for TRIM in Btrfs:
>   
>   btrfs: Ensure btrfs_trim_fs can trim the whole fs
>   https://patchwork.kernel.org/patch/10579539/
> 
> Perhaps your upgraded kernel is the first one to contain it, and for
> the first time you're seeing TRIM actually *work*, with its real
> performance impact on a large fragmented FS, instead of trimming only
> a few contiguous unallocated areas.
> 
That only affects btrfs_trim_fs(), and you can see it's only called
from the ioctl interface, so it's definitely not the case here.

Thanks,
Qu


* Re: btrfs-cleaner 100% busy on an idle filesystem with 4.19.3
  2018-11-22 13:07   ` Tomasz Chmielewski
  2018-11-22 14:03     ` Roman Mamedov
@ 2018-11-23  6:47     ` Chris Murphy
  1 sibling, 0 replies; 6+ messages in thread
From: Chris Murphy @ 2018-11-23  6:47 UTC (permalink / raw)
  To: Tomasz Chmielewski; +Cc: Nikolay Borisov, Btrfs BTRFS

On Thu, Nov 22, 2018 at 6:07 AM Tomasz Chmielewski <tch@virtall.com> wrote:
>
> On 2018-11-22 21:46, Nikolay Borisov wrote:
>
> >> # echo w > /proc/sysrq-trigger
> >>
> >> # dmesg -c
> >> [  931.585611] sysrq: SysRq : Show Blocked State
> >> [  931.585715]   task                        PC stack   pid father
> >> [  931.590168] btrfs-cleaner   D    0  1340      2 0x80000000
> >> [  931.590175] Call Trace:
> >> [  931.590190]  __schedule+0x29e/0x840
> >> [  931.590195]  schedule+0x2c/0x80
> >> [  931.590199]  schedule_timeout+0x258/0x360
> >> [  931.590204]  io_schedule_timeout+0x1e/0x50
> >> [  931.590208]  wait_for_completion_io+0xb7/0x140
> >> [  931.590214]  ? wake_up_q+0x80/0x80
> >> [  931.590219]  submit_bio_wait+0x61/0x90
> >> [  931.590225]  blkdev_issue_discard+0x7a/0xd0
> >> [  931.590266]  btrfs_issue_discard+0x123/0x160 [btrfs]
> >> [  931.590299]  btrfs_discard_extent+0xd8/0x160 [btrfs]
> >> [  931.590335]  btrfs_finish_extent_commit+0xe2/0x240 [btrfs]
> >> [  931.590382]  btrfs_commit_transaction+0x573/0x840 [btrfs]
> >> [  931.590415]  ? btrfs_block_rsv_check+0x25/0x70 [btrfs]
> >> [  931.590456]  __btrfs_end_transaction+0x2be/0x2d0 [btrfs]
> >> [  931.590493]  btrfs_end_transaction_throttle+0x13/0x20 [btrfs]
> >> [  931.590530]  btrfs_drop_snapshot+0x489/0x800 [btrfs]
> >> [  931.590567]  btrfs_clean_one_deleted_snapshot+0xbb/0xf0 [btrfs]
> >> [  931.590607]  cleaner_kthread+0x136/0x160 [btrfs]
> >> [  931.590612]  kthread+0x120/0x140
> >> [  931.590646]  ? btree_submit_bio_start+0x20/0x20 [btrfs]
> >> [  931.590658]  ? kthread_bind+0x40/0x40
> >> [  931.590661]  ret_from_fork+0x22/0x40
> >>
> >
> > It seems your filesystem is mounted with the DISCARD option,
> > meaning every delete results in a discard; this is highly
> > suboptimal for SSDs. Try remounting the fs without the discard
> > option and see if it helps. Generally, for discard you want to
> > submit it in big batches (which is what fstrim does) so that the
> > FTL on the SSD can apply any optimisations it might have up its
> > sleeve.
>
> Spot on!
>
> Removed "discard" from fstab and added "ssd", rebooted - no more
> btrfs-cleaner running.
>
> Do you know if the issue you described (discard being highly
> suboptimal for SSDs) affects other filesystems to a similar extent,
> e.g. when using ext4 on an SSD?

Quite a lot of the activity on ext4 and XFS is overwrites, so discard
isn't needed as often, and it may be that discard is subject to delays
there. On Btrfs, it's almost immediate, to the degree that on a couple
of SSDs I've tested, stale trees referenced exclusively by the most
recent backup tree entries in the superblock are already zeros. That
functionally means no automatic recoveries at mount time if there's a
problem with any of the current trees.

I was using it for about a year with no ill effect, BUT not a lot of
file deletions either. I wouldn't recommend it, and instead suggest
enabling fstrim.timer, which by default runs fstrim.service once a
week (which in turn issues fstrim, I think on all mounted volumes).
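
For reference, on systemd-based distributions that timer ships with
util-linux; the unit is roughly the following (a sketch from memory,
so check your distro's copy), and it's enabled with
`systemctl enable --now fstrim.timer`:

```ini
# /usr/lib/systemd/system/fstrim.timer (approximate; verify locally)
[Unit]
Description=Discard unused blocks once a week

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target
```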

I am a bit more concerned about the read errors you had that were
being corrected automatically. The corruption suggests a firmware bug
related to trim. I'd check the affected SSD's firmware revision and
consider updating it (only after a backup; it's plausible the firmware
update is not guaranteed to be data safe). Does the volume use DUP or
raid1 metadata? I'm not sure how it could be correcting these problems
otherwise.


-- 
Chris Murphy

