* fstrim segmentation fault and btrfs crash on vanilla 5.4.14
@ 2020-01-26 15:54 Raviu
2020-01-26 16:09 ` Raviu
0 siblings, 1 reply; 7+ messages in thread
From: Raviu @ 2020-01-26 15:54 UTC (permalink / raw)
To: linux-btrfs
Hi,
I have two btrfs filesystems, / and /home, on the same NVMe disk.
I run `fstrim -va` daily as a cron job.
Today, when fstrim ran, apps writing to /home froze. Reviewing dmesg showed a bug message related to fstrim and btrfs.
I rebooted the system (forcibly, as it couldn't unmount /home) and ran fstrim manually on each filesystem; on / it worked fine, but on /home I got the same error.
Here are the dmesg errors:
http://cwillu.com:8080/38.132.118.66/1
Here is the output of `btrfs check --readonly` with /home unmounted:
http://cwillu.com:8080/38.132.118.66/2
I've run scrub with /home mounted; it said `Error summary: no errors found`.
The fstrim kernel error is reproducible on my machine; it occurs every time I run it on /home. So I can test a fix; I just hope it doesn't cause data loss.
* Re: fstrim segmentation fault and btrfs crash on vanilla 5.4.14
2020-01-26 15:54 fstrim segmentation fault and btrfs crash on vanilla 5.4.14 Raviu
@ 2020-01-26 16:09 ` Raviu
2020-01-26 16:54 ` Nikolay Borisov
0 siblings, 1 reply; 7+ messages in thread
From: Raviu @ 2020-01-26 16:09 UTC (permalink / raw)
To: linux-btrfs
Here is dmesg output:
[ 237.525947] assertion failed: prev, in ../fs/btrfs/extent_io.c:1595
[ 237.525984] ------------[ cut here ]------------
[ 237.525985] kernel BUG at ../fs/btrfs/ctree.h:3117!
[ 237.525992] invalid opcode: 0000 [#1] SMP PTI
[ 237.525998] CPU: 4 PID: 4423 Comm: fstrim Tainted: G U OE 5.4.14-8-vanilla #1
[ 237.526001] Hardware name: ASUSTeK COMPUTER INC.
[ 237.526044] RIP: 0010:assfail.constprop.58+0x18/0x1a [btrfs]
[ 237.526048] Code: 0b 0f 1f 44 00 00 48 8b 3d 15 9e 07 00 e9 70 20 ce e2 89 f1 48 c7 c2 ae 27 77 c0 48 89 fe 48 c7 c7 20 87 77 c0 e8 56 c5 ba e2 <0f> 0b 0f 1f 44 00 00 e8 9c 1b bc e2 48 8b 3d 7d 9f 07 00 e8 40 20
[ 237.526053] RSP: 0018:ffffae2cc2befc20 EFLAGS: 00010282
[ 237.526056] RAX: 0000000000000037 RBX: 0000000000000021 RCX: 0000000000000000
[ 237.526059] RDX: 0000000000000000 RSI: ffff94221eb19a18 RDI: ffff94221eb19a18
[ 237.526062] RBP: ffff942216ce6e00 R08: 0000000000000403 R09: 0000000000000001
[ 237.526064] R10: ffffae2cc2befc38 R11: 0000000000000001 R12: ffffae2cc2befca0
[ 237.526067] R13: ffff942216ce6e24 R14: ffffae2cc2befc98 R15: 0000000000100000
[ 237.526071] FS: 00007fa1b8087fc0(0000) GS:ffff94221eb00000(0000) knlGS:0000000000000000
[ 237.526074] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 237.526076] CR2: 000032eed3b5a000 CR3: 00000007c33bc002 CR4: 00000000003606e0
[ 237.526079] Call Trace:
[ 237.526120] find_first_clear_extent_bit+0x13d/0x150 [btrfs]
[ 237.526148] btrfs_trim_fs+0x211/0x3f0 [btrfs]
[ 237.526184] btrfs_ioctl_fitrim+0x103/0x170 [btrfs]
[ 237.526219] btrfs_ioctl+0x129a/0x2ed0 [btrfs]
[ 237.526227] ? filemap_map_pages+0x190/0x3d0
[ 237.526232] ? do_filp_open+0xaf/0x110
[ 237.526238] ? _copy_to_user+0x22/0x30
[ 237.526242] ? cp_new_stat+0x150/0x180
[ 237.526247] ? do_vfs_ioctl+0xa4/0x640
[ 237.526278] ? btrfs_ioctl_get_supported_features+0x30/0x30 [btrfs]
[ 237.526283] do_vfs_ioctl+0xa4/0x640
[ 237.526288] ? __do_sys_newfstat+0x3c/0x60
[ 237.526292] ksys_ioctl+0x70/0x80
[ 237.526297] __x64_sys_ioctl+0x16/0x20
[ 237.526303] do_syscall_64+0x5a/0x1c0
[ 237.526310] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 237.526315] RIP: 0033:0x7fa1b797d587
[ 237.526319] Code: b3 66 90 48 8b 05 11 49 2c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d e1 48 2c 00 f7 d8 64 89 01 48
[ 237.526325] RSP: 002b:00007ffc4b977f98 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 237.526330] RAX: ffffffffffffffda RBX: 00007ffc4b978100 RCX: 00007fa1b797d587
[ 237.526333] RDX: 00007ffc4b977fa0 RSI: 00000000c0185879 RDI: 0000000000000003
[ 237.526337] RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000
[ 237.526340] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffc4b978ede
[ 237.526344] R13: 00007ffc4b978ede R14: 0000000000000000 R15: 00007fa1b8087f38
[ 237.526348] Modules linked in: loop tun ccm fuse af_packet vboxnetadp(OE) vboxnetflt(OE) scsi_transport_iscsi vboxdrv(OE) dmi_sysfs snd_hda_codec_hdmi snd_hda_codec_realtek intel_rapl_msr snd_hda_codec_generic intel_rapl_common ledtrig_audio snd_hda_intel snd_intel_nhlt ip6t_REJECT nf_reject_ipv6 iwlmvm snd_hda_codec ip6t_rt snd_hda_core snd_hwdep msr mac80211 ipt_REJECT nf_reject_ipv4 iTCO_wdt x86_pkg_temp_thermal intel_powerclamp libarc4 snd_pcm xt_multiport mei_hdcp hid_multitouch iTCO_vendor_support coretemp nls_iso8859_1 kvm_intel nls_cp437 snd_timer vfat kvm iwlwifi pcspkr irqbypass snd asus_nb_wmi wmi_bmof fat xt_limit r8169 soundcore i2c_i801 mei_me realtek rtsx_pci_ms joydev xt_hl cfg80211 libphy memstick intel_lpss_pci intel_pch_thermal intel_lpss mei idma64 xt_tcpudp thermal xt_addrtype xt_conntrack ac asus_wireless acpi_pad ip6table_filter ip6_tables nf_conntrack_netbios_ns nf_conntrack_broadcast nf_nat_ftp nf_nat nf_conntrack_ftp nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4
[ 237.526378] iptable_filter ip_tables x_tables bpfilter btrfs libcrc32c xor raid6_pq dm_crypt algif_skcipher af_alg hid_asus asus_wmi sparse_keymap rfkill hid_generic usbhid crct10dif_pclmul crc32_pclmul i915 crc32c_intel ghash_clmulni_intel rtsx_pci_sdmmc mmc_core mxm_wmi xhci_pci xhci_hcd i2c_algo_bit aesni_intel drm_kms_helper glue_helper syscopyarea sysfillrect sysimgblt crypto_simd fb_sys_fops cryptd drm serio_raw rtsx_pci usbcore i2c_hid wmi battery video button sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua efivarfs
[ 237.526431] ---[ end trace c78dad92fa11be80 ]---
[ 237.526467] RIP: 0010:assfail.constprop.58+0x18/0x1a [btrfs]
[ 237.526472] Code: 0b 0f 1f 44 00 00 48 8b 3d 15 9e 07 00 e9 70 20 ce e2 89 f1 48 c7 c2 ae 27 77 c0 48 89 fe 48 c7 c7 20 87 77 c0 e8 56 c5 ba e2 <0f> 0b 0f 1f 44 00 00 e8 9c 1b bc e2 48 8b 3d 7d 9f 07 00 e8 40 20
[ 237.526477] RSP: 0018:ffffae2cc2befc20 EFLAGS: 00010282
[ 237.526481] RAX: 0000000000000037 RBX: 0000000000000021 RCX: 0000000000000000
[ 237.526485] RDX: 0000000000000000 RSI: ffff94221eb19a18 RDI: ffff94221eb19a18
[ 237.526489] RBP: ffff942216ce6e00 R08: 0000000000000403 R09: 0000000000000001
[ 237.526492] R10: ffffae2cc2befc38 R11: 0000000000000001 R12: ffffae2cc2befca0
[ 237.526496] R13: ffff942216ce6e24 R14: ffffae2cc2befc98 R15: 0000000000100000
[ 237.526500] FS: 00007fa1b8087fc0(0000) GS:ffff94221eb00000(0000) knlGS:0000000000000000
[ 237.526504] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 237.526507] CR2: 000032eed3b5a000 CR3: 00000007c33bc002 CR4: 00000000003606e0
Here is the btrfs check --readonly output:
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 434638807040 bytes used, no error found
total csum bytes: 327446132
total tree bytes: 1604894720
total fs tree bytes: 1104494592
total extent tree bytes: 144556032
btree space waste bytes: 237406069
file data blocks allocated: 5616596910080
referenced 823187898368
Posted for archiving as recommended.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Sunday, January 26, 2020 5:54 PM, Raviu <raviu@protonmail.com> wrote:
> Hi,
> I have two btrfs filesystems, / and /home, on the same NVMe disk.
> I run `fstrim -va` daily as a cron job.
> Today, when fstrim ran, apps writing to /home froze. Reviewing dmesg showed a bug message related to fstrim and btrfs.
> I rebooted the system (forcibly, as it couldn't unmount /home) and ran fstrim manually on each filesystem; on / it worked fine, but on /home I got the same error.
> Here are the dmesg errors:
>
> http://cwillu.com:8080/38.132.118.66/1
>
> Here is the output of btrfs check --readonly with home unmounted:
>
> http://cwillu.com:8080/38.132.118.66/2
>
> I've run scrub with /home mounted; it said `Error summary: no errors found`.
>
> The fstrim kernel error is reproducible on my machine; it occurs every time I run it on /home. So I can test a fix; I just hope it doesn't cause data loss.
* Re: fstrim segmentation fault and btrfs crash on vanilla 5.4.14
2020-01-26 16:09 ` Raviu
@ 2020-01-26 16:54 ` Nikolay Borisov
2020-01-26 17:10 ` Raviu
0 siblings, 1 reply; 7+ messages in thread
From: Nikolay Borisov @ 2020-01-26 16:54 UTC (permalink / raw)
To: Raviu, linux-btrfs
On 26.01.20 at 18:09, Raviu wrote:
> Here is dmesg output:
>
> [ 237.525947] assertion failed: prev, in ../fs/btrfs/extent_io.c:1595
> [ 237.525984] ------------[ cut here ]------------
> [ 237.525985] kernel BUG at ../fs/btrfs/ctree.h:3117!
> [ 237.525992] invalid opcode: 0000 [#1] SMP PTI
> [ 237.525998] CPU: 4 PID: 4423 Comm: fstrim Tainted: G U OE 5.4.14-8-vanilla #1
> [ 237.526001] Hardware name: ASUSTeK COMPUTER INC.
> [ 237.526044] RIP: 0010:assfail.constprop.58+0x18/0x1a [btrfs]
> [ 237.526048] Code: 0b 0f 1f 44 00 00 48 8b 3d 15 9e 07 00 e9 70 20 ce e2 89 f1 48 c7 c2 ae 27 77 c0 48 89 fe 48 c7 c7 20 87 77 c0 e8 56 c5 ba e2 <0f> 0b 0f 1f 44 00 00 e8 9c 1b bc e2 48 8b 3d 7d 9f 07 00 e8 40 20
> [ 237.526053] RSP: 0018:ffffae2cc2befc20 EFLAGS: 00010282
> [ 237.526056] RAX: 0000000000000037 RBX: 0000000000000021 RCX: 0000000000000000
> [ 237.526059] RDX: 0000000000000000 RSI: ffff94221eb19a18 RDI: ffff94221eb19a18
> [ 237.526062] RBP: ffff942216ce6e00 R08: 0000000000000403 R09: 0000000000000001
> [ 237.526064] R10: ffffae2cc2befc38 R11: 0000000000000001 R12: ffffae2cc2befca0
> [ 237.526067] R13: ffff942216ce6e24 R14: ffffae2cc2befc98 R15: 0000000000100000
> [ 237.526071] FS: 00007fa1b8087fc0(0000) GS:ffff94221eb00000(0000) knlGS:0000000000000000
> [ 237.526074] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 237.526076] CR2: 000032eed3b5a000 CR3: 00000007c33bc002 CR4: 00000000003606e0
> [ 237.526079] Call Trace:
> [ 237.526120] find_first_clear_extent_bit+0x13d/0x150 [btrfs]
So you are hitting the ASSERT(prev) in find_first_clear_extent_bit. The
good news is that this is just in-memory state, used to optimize fstrim
so that only not-yet-trimmed regions are trimmed. This state is cleared
on unmount, so if you unmount/remount you shouldn't hit it.
But then again, the ASSERT is there to catch an impossible case. To help
me debug this, could you provide the output of:
btrfs fi show /home
And then the output of btrfs inspect-internal dump-tree -t3 /dev/XXX,
where XXX is the block device that contains the filesystem mounted on /home.
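The search described above can be sketched as a small user-space model. This is illustrative only (Python, with invented names; the real code is a C rbtree walk in fs/btrfs/extent_io.c), but it shows how a lookup that runs past every recorded range leaves prev unset, which is the assertion that fired in the trace:

```python
# Toy model of find_first_clear_extent_bit(): given sorted, non-overlapping
# (lo, hi) ranges that carry the "trimmed" bit, find the first clear
# (bit-not-set) region at or after `start`. The kernel walks an rbtree of
# extent states; this list walk mimics only the prev-tracking logic.

def find_first_clear(ranges, start):
    prev = None
    for lo, hi in ranges:
        if hi < start:            # range lies entirely before `start`
            prev = (lo, hi)
            continue
        if lo > start:            # found a gap before this range
            gap_lo = prev[1] + 1 if prev else 0
            return (max(gap_lo, start), lo - 1)
        prev = (lo, hi)           # range covers `start`; skip past it
        start = hi + 1
    # Ran past the last range: everything from `start` onward is clear.
    # The buggy kernel path assumed a previous state must exist here --
    # an empty tree, or a search entirely beyond the recorded states,
    # breaks that assumption.
    assert prev is not None, "assertion failed: prev"
    return (start, float("inf"))

# A gap between two trimmed ranges is found normally:
print(find_first_clear([(0, 10), (20, 30)], 5))   # (11, 19)
# But an empty state tree trips the assert, as in the dmesg trace:
# find_first_clear([], 0)  ->  AssertionError: assertion failed: prev
```

Under this model the "impossible" case is simply a range query over a region for which no extent state was ever recorded.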
* Re: fstrim segmentation fault and btrfs crash on vanilla 5.4.14
2020-01-26 16:54 ` Nikolay Borisov
@ 2020-01-26 17:10 ` Raviu
[not found] ` <1DC5LVxi3doXrpiHkvBd4EwjgymRJuf8Znu4UCC-Ut0mOy9f-QYOyvQT3hf-QJX3Hk8hm_UhBGk_3rLcGYs_b6NdpNHDuJU-qog6PFxjEDE=@protonmail.com>
0 siblings, 1 reply; 7+ messages in thread
From: Raviu @ 2020-01-26 17:10 UTC (permalink / raw)
To: Nikolay Borisov; +Cc: linux-btrfs
Here is the output of several `btrfs filesystem` subcommands:
`btrfs filesystem show /home`
Label: none uuid: 9c32ec4c-e918-4a79-92cf-c85faf3724e4
Total devices 2 FS bytes used 404.99GiB
devid 1 size 687.36GiB used 425.10GiB path /dev/mapper/cr_nvme0n1p4
devid 2 size 100.00GiB used 1.03GiB path /dev/mapper/cr_nvme0n1p1
`btrfs filesystem df /home`
Data, single: total=420.01GiB, used=403.48GiB
System, RAID1: total=32.00MiB, used=48.00KiB
System, DUP: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=48.47MiB
Metadata, DUP: total=2.00GiB, used=1.46GiB
GlobalReserve, single: total=477.95MiB, used=0.00B
`btrfs filesystem usage /home`
Overall:
Device size: 787.36GiB
Device allocated: 426.13GiB
Device unallocated: 361.23GiB
Device missing: 0.00B
Used: 406.50GiB
Free (estimated): 377.75GiB (min: 197.14GiB)
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 477.95MiB (used: 0.00B)
Data,single: Size:420.01GiB, Used:403.48GiB (96.07%)
/dev/mapper/cr_nvme0n1p4 420.01GiB
Metadata,RAID1: Size:1.00GiB, Used:48.47MiB (4.73%)
/dev/mapper/cr_nvme0n1p4 1.00GiB
/dev/mapper/cr_nvme0n1p1 1.00GiB
Metadata,DUP: Size:2.00GiB, Used:1.46GiB (73.12%)
/dev/mapper/cr_nvme0n1p4 4.00GiB
System,RAID1: Size:32.00MiB, Used:48.00KiB (0.15%)
/dev/mapper/cr_nvme0n1p4 32.00MiB
/dev/mapper/cr_nvme0n1p1 32.00MiB
System,DUP: Size:32.00MiB, Used:16.00KiB (0.05%)
/dev/mapper/cr_nvme0n1p4 64.00MiB
Unallocated:
/dev/mapper/cr_nvme0n1p4 262.26GiB
/dev/mapper/cr_nvme0n1p1 98.97GiB
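One note on reading this output: profiles with two copies (DUP, RAID1) consume twice their logical size in raw device space, which is why Metadata,DUP shows Size:2.00GiB but 4.00GiB on the device line. The raw total can be checked with a few lines of arithmetic (values transcribed from the output above; this is not a btrfs tool, just a sanity check):

```python
# Logical chunk sizes (GiB) and copy counts from `btrfs filesystem usage`.
chunks = [
    ("Data,single",    420.01,    1),  # single: one copy
    ("Metadata,RAID1",   1.00,    2),  # RAID1: one copy per device
    ("Metadata,DUP",     2.00,    2),  # DUP: two copies on one device
    ("System,RAID1",  32 / 1024,  2),  # 32 MiB expressed in GiB
    ("System,DUP",    32 / 1024,  2),
]

raw = sum(size * copies for _, size, copies in chunks)
print(f"raw allocated: {raw:.2f} GiB")  # matches "Device allocated: 426.13GiB"
```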
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Sunday, January 26, 2020 6:54 PM, Nikolay Borisov <nborisov@suse.com> wrote:
> On 26.01.20 at 18:09, Raviu wrote:
>
> > Here is dmesg output:
> > [ 237.525947] assertion failed: prev, in ../fs/btrfs/extent_io.c:1595
> > [ 237.525984] ------------[ cut here ]------------
> > [ 237.525985] kernel BUG at ../fs/btrfs/ctree.h:3117!
> > [ 237.525992] invalid opcode: 0000 [#1] SMP PTI
> > [ 237.525998] CPU: 4 PID: 4423 Comm: fstrim Tainted: G U OE 5.4.14-8-vanilla #1
> > [ 237.526001] Hardware name: ASUSTeK COMPUTER INC.
> > [ 237.526044] RIP: 0010:assfail.constprop.58+0x18/0x1a [btrfs]
> > [ 237.526048] Code: 0b 0f 1f 44 00 00 48 8b 3d 15 9e 07 00 e9 70 20 ce e2 89 f1 48 c7 c2 ae 27 77 c0 48 89 fe 48 c7 c7 20 87 77 c0 e8 56 c5 ba e2 <0f> 0b 0f 1f 44 00 00 e8 9c 1b bc e2 48 8b 3d 7d 9f 07 00 e8 40 20
> > [ 237.526053] RSP: 0018:ffffae2cc2befc20 EFLAGS: 00010282
> > [ 237.526056] RAX: 0000000000000037 RBX: 0000000000000021 RCX: 0000000000000000
> > [ 237.526059] RDX: 0000000000000000 RSI: ffff94221eb19a18 RDI: ffff94221eb19a18
> > [ 237.526062] RBP: ffff942216ce6e00 R08: 0000000000000403 R09: 0000000000000001
> > [ 237.526064] R10: ffffae2cc2befc38 R11: 0000000000000001 R12: ffffae2cc2befca0
> > [ 237.526067] R13: ffff942216ce6e24 R14: ffffae2cc2befc98 R15: 0000000000100000
> > [ 237.526071] FS: 00007fa1b8087fc0(0000) GS:ffff94221eb00000(0000) knlGS:0000000000000000
> > [ 237.526074] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [ 237.526076] CR2: 000032eed3b5a000 CR3: 00000007c33bc002 CR4: 00000000003606e0
> > [ 237.526079] Call Trace:
> > [ 237.526120] find_first_clear_extent_bit+0x13d/0x150 [btrfs]
>
> So you are hitting the ASSERT(prev) in find_first_clear_extent_bit. The
> good news is that this is just in-memory state, used to optimize fstrim
> so that only not-yet-trimmed regions are trimmed. This state is cleared
> on unmount, so if you unmount/remount you shouldn't hit it.
>
> But then again, the ASSERT is there to catch an impossible case. To help
> me debug this, could you provide the output of:
>
> btrfs fi show /home
>
> And then the output of btrfs inspect-internal dump-tree -t3 /dev/XXX,
>
> where XXX is the block device that contains the filesystem mounted on /home.
* Re: fstrim segmentation fault and btrfs crash on vanilla 5.4.14
[not found] ` <1DC5LVxi3doXrpiHkvBd4EwjgymRJuf8Znu4UCC-Ut0mOy9f-QYOyvQT3hf-QJX3Hk8hm_UhBGk_3rLcGYs_b6NdpNHDuJU-qog6PFxjEDE=@protonmail.com>
@ 2020-01-27 7:53 ` Raviu
2020-01-27 8:03 ` Nikolay Borisov
0 siblings, 1 reply; 7+ messages in thread
From: Raviu @ 2020-01-27 7:53 UTC (permalink / raw)
To: Nikolay Borisov; +Cc: linux-btrfs
Follow-up:
It seems I've solved my problem. Maybe this can also help you find the root cause and reproduce it somehow.
My buggy btrfs was initially on a single partition. I later freed another partition and added it with `btrfs device add`, since the freed partition was to the left of the original one, so a resize was not possible.
Originally I had metadata as DUP and data as single.
Even after adding the new device it remained like that for a few days. Yesterday I noticed it reported both DUP and RAID1 metadata, which was weird; it did some sort of metadata balance on its own when it got the new partition. I hadn't run any balance command myself, since both partitions are on the same disk, so I thought RAID1 was useless.
So I took a backup, then ran `btrfs balance start -mconvert=raid1 /home/`. Only RAID1 is reported now by `btrfs filesystem df` and `btrfs filesystem usage`. I then ran fstrim and it worked fine.
* Re: fstrim segmentation fault and btrfs crash on vanilla 5.4.14
2020-01-27 7:53 ` Raviu
@ 2020-01-27 8:03 ` Nikolay Borisov
2020-01-27 8:11 ` Raviu
0 siblings, 1 reply; 7+ messages in thread
From: Nikolay Borisov @ 2020-01-27 8:03 UTC (permalink / raw)
To: Raviu; +Cc: linux-btrfs
On 27.01.20 at 9:53, Raviu wrote:
> Follow-up:
> It seems I've solved my problem. Maybe this can also help you find the root cause and reproduce it somehow.
> My buggy btrfs was initially on a single partition. I later freed another partition and added it with `btrfs device add`, since the freed partition was to the left of the original one, so a resize was not possible.
> Originally I had metadata as DUP and data as single.
> Even after adding the new device it remained like that for a few days. Yesterday I noticed it reported both DUP and RAID1 metadata, which was weird; it did some sort of metadata balance on its own when it got the new partition. I hadn't run any balance command myself, since both partitions are on the same disk, so I thought RAID1 was useless.
> So I took a backup, then ran `btrfs balance start -mconvert=raid1 /home/`. Only RAID1 is reported now by `btrfs filesystem df` and `btrfs filesystem usage`. I then ran fstrim and it worked fine.
>
>
Yes, this is very helpful. So if I understood you correctly: you
initially had everything on a single disk, then you added a second disk
but hadn't run balance, and that's when it was crashing; then you ran
balance and now it's not crashing? If that's the case, I know what the
problem is and will send a fix.
* Re: fstrim segmentation fault and btrfs crash on vanilla 5.4.14
2020-01-27 8:03 ` Nikolay Borisov
@ 2020-01-27 8:11 ` Raviu
0 siblings, 0 replies; 7+ messages in thread
From: Raviu @ 2020-01-27 8:11 UTC (permalink / raw)
To: Nikolay Borisov; +Cc: linux-btrfs
> On 27.01.20 at 9:53, Raviu wrote:
>
> > Follow-up:
> > It seems I've solved my problem. Maybe this can also help you find the root cause and reproduce it somehow.
> > My buggy btrfs was initially on a single partition. I later freed another partition and added it with `btrfs device add`, since the freed partition was to the left of the original one, so a resize was not possible.
> > Originally I had metadata as DUP and data as single.
> > Even after adding the new device it remained like that for a few days. Yesterday I noticed it reported both DUP and RAID1 metadata, which was weird; it did some sort of metadata balance on its own when it got the new partition. I hadn't run any balance command myself, since both partitions are on the same disk, so I thought RAID1 was useless.
> > So I took a backup, then ran `btrfs balance start -mconvert=raid1 /home/`. Only RAID1 is reported now by `btrfs filesystem df` and `btrfs filesystem usage`. I then ran fstrim and it worked fine.
>
> Yes, this is very helpful. So if I understood you correctly: you
> initially had everything on a single disk, then you added a second disk
> but hadn't run balance, and that's when it was crashing; then you ran
> balance and now it's not crashing? If that's the case, I know what the
> problem is and will send a fix.
Yes, exactly.