* mounting failed any file on my filesystem
@ 2016-12-29 19:05 Jan Koester
  2016-12-29 22:31 ` Duncan
  0 siblings, 1 reply; 5+ messages in thread
From: Jan Koester @ 2016-12-29 19:05 UTC (permalink / raw)
  To: linux-btrfs

Hi,

I have a problem with my filesystem. My system hung and I had to do a hard 
reset, and afterwards the filesystem was corrupted. I have already tried to 
repair it without success, as you can see in the log below. It seems that a 
single corrupted block brings the whole filesystem down.

Does anybody have an idea what happened to my filesystem?

dmesg output when opening a file:
[29450.404327] WARNING: CPU: 5 PID: 16161 at /build/linux-lIgGMF/linux-4.8.11/
fs/btrfs/extent-tree.c:6945 __btrfs_free_extent.isra.71+0x8e2/0xd60 [btrfs]
[29450.404331] Modules linked in: snd_usb_audio snd_usbmidi_lib snd_rawmidi 
snd_seq_device nfnetlink_queue nfnetlink_log nfnetlink cfg80211 bnep 
ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter pci_stub 
vboxpci(OE) vboxnetadp(OE) vboxnetflt(OE) vboxdrv(OE) bluetooth rfkill 
binfmt_misc ext4 crc16 jbd2 fscrypto ecb mbcache btrfs xor raid6_pq kvm_amd 
kvm amdkfd irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel radeon 
snd_hda_codec_realtek snd_hda_codec_generic sr_mod cdrom pcspkr serio_raw 
snd_hda_codec_hdmi fam15h_power k10temp evdev snd_hda_intel ttm snd_hda_codec 
drm_kms_helper snd_hda_core snd_hwdep snd_pcm drm snd_timer snd soundcore 
i2c_algo_bit sg sp5100_tco nuvoton_cir shpchp rc_core acpi_cpufreq tpm_tis 
tpm_tis_core tpm button cuse fuse parport_pc ppdev lp parport ip_tables
[29450.404512]  x_tables autofs4 xfs libcrc32c crc32c_generic ata_generic 
hid_generic usbhid hid sd_mod uas usb_storage ohci_pci crc32c_intel 
aesni_intel aes_x86_64 glue_helper lrw gf128mul ablk_helper cryptd psmouse 
e1000e xhci_pci r8169 xhci_hcd ptp ahci mii pps_core pata_atiixp libahci 
ohci_hcd libata ehci_pci ehci_hcd usbcore scsi_mod i2c_piix4 usb_common fjes
[29450.404543] CPU: 5 PID: 16161 Comm: kworker/u12:3 Tainted: G        W  OE   
4.8.0-2-amd64 #1 Debian 4.8.11-1
[29450.404544] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./
970 Extreme3, BIOS P1.80 07/31/2013
[29450.404579] Workqueue: btrfs-extent-refs btrfs_extent_refs_helper [btrfs]
[29450.404581]  0000000000000286 000000000bc03c37 ffffffff93d269f5 
0000000000000000
[29450.404586]  0000000000000000 ffffffff93a7c16e 00000000fffffffe 
000000dd8f8a8000
[29450.404590]  0000000000000000 ffff91c65d625000 ffff91c4eac5c000 
ffff91c4e7c09690
[29450.404594] Call Trace:
[29450.404599]  [<ffffffff93d269f5>] ? dump_stack+0x5c/0x77
[29450.404603]  [<ffffffff93a7c16e>] ? __warn+0xbe/0xe0
[29450.404630]  [<ffffffffc0c262a2>] ? __btrfs_free_extent.isra.71+0x8e2/0xd60 
[btrfs]
[29450.404660]  [<ffffffffc0c20801>] ? block_group_cache_tree_search+0x21/0xd0 
[btrfs]
[29450.404690]  [<ffffffffc0c256d3>] ? update_block_group.isra.70+0x133/0x420 
[btrfs]
[29450.404699]  [<ffffffff93b8a9ff>] ? __set_page_dirty_nobuffers+0xef/0x140
[29450.404736]  [<ffffffffc0c95ab9>] ? btrfs_merge_delayed_refs+0x69/0x580 
[btrfs]
[29450.404767]  [<ffffffffc0c2a8ac>] ? __btrfs_run_delayed_refs+0xadc/0x1240 
[btrfs]
[29450.404801]  [<ffffffffc0c2deee>] ? btrfs_run_delayed_refs+0x8e/0x2a0 
[btrfs]
[29450.404834]  [<ffffffffc0c2e189>] ? delayed_ref_async_start+0x89/0xa0 
[btrfs]
[29450.404871]  [<ffffffffc0c788b1>] ? btrfs_scrubparity_helper+0xd1/0x2d0 
[btrfs]
[29450.404879]  [<ffffffff93a94e00>] ? process_one_work+0x160/0x410
[29450.404886]  [<ffffffff93a950fd>] ? worker_thread+0x4d/0x480
[29450.404892]  [<ffffffff93a950b0>] ? process_one_work+0x410/0x410
[29450.404899]  [<ffffffff93a9aecd>] ? kthread+0xcd/0xf0
[29450.404906]  [<ffffffff93fefcaf>] ? ret_from_fork+0x1f/0x40
[29450.404913]  [<ffffffff93a9ae00>] ? kthread_create_on_node+0x190/0x190
[29450.404919] ---[ end trace 9627fcfceb44da0b ]---
[29450.404926] BTRFS info (device sdd): leaf 950359678976 total ptrs 67 free 
space 442
[29450.404934]  item 0 key (503649468416 192 1107296256) itemoff 3971 itemsize 
24
[29450.404940]          block group used 36864
[29450.404947]  item 1 key (503649538048 169 0) itemoff 3938 itemsize 33
[29450.404952]          extent refs 1 gen 861177 flags 2
[29450.404959]          tree block backref root 2
[29450.404965]  item 2 key (503649550336 169 1) itemoff 3905 itemsize 33
[29450.404971]          extent refs 1 gen 861177 flags 2
[29450.404977]          tree block backref root 2
[29450.404984]  item 3 key (503649566720 169 0) itemoff 3872 itemsize 33
[29450.404991]          extent refs 1 gen 861177 flags 2
[29450.404996]          tree block backref root 2
[29450.405001]  item 4 key (503649570816 169 0) itemoff 3839 itemsize 33
[29450.405007]          extent refs 1 gen 861177 flags 2
[29450.405012]          tree block backref root 2
[29450.405019]  item 5 key (503649607680 169 0) itemoff 3806 itemsize 33
[29450.405025]          extent refs 1 gen 861177 flags 2
[29450.405028]          tree block backref root 2
[29450.405030]  item 6 key (503649628160 169 0) itemoff 3773 itemsize 33
[29450.405031]          extent refs 1 gen 861177 flags 2
[29450.405032]          tree block backref root 2
[29450.405034]  item 7 key (503649636352 169 0) itemoff 3740 itemsize 33
[29450.405036]          extent refs 1 gen 861177 flags 2
[29450.405037]          tree block backref root 2
[29450.405039]  item 8 key (503649669120 169 1) itemoff 3707 itemsize 33
[29450.405040]          extent refs 1 gen 861177 flags 2
[29450.405041]          tree block backref root 1
[29450.405043]  item 9 key (503649705984 169 0) itemoff 3674 itemsize 33
[29450.405045]          extent refs 1 gen 861177 flags 2
[29450.405046]          tree block backref root 1
[29450.405048]  item 10 key (950359621632 169 4) itemoff 3641 itemsize 33
[29450.405049]          extent refs 1 gen 861178 flags 2
[29450.405050]          tree block backref root 5
[29450.405053]  item 11 key (950359621632 192 1107296256) itemoff 3617 
itemsize 24
[29450.405054]          block group used 0
[29450.405056]  item 12 key (950359625728 169 3) itemoff 3584 itemsize 33
[29450.405057]          extent refs 1 gen 861178 flags 2
[29450.405058]          tree block backref root 5
[29450.405060]  item 13 key (950359629824 169 2) itemoff 3551 itemsize 33
[29450.405062]          extent refs 1 gen 861178 flags 2
[29450.405063]          tree block backref root 5
[29450.405065]  item 14 key (950359633920 169 1) itemoff 3518 itemsize 33
[29450.405067]          extent refs 1 gen 861178 flags 2
[29450.405067]          tree block backref root 5
[29450.405070]  item 15 key (950359638016 169 0) itemoff 3485 itemsize 33
[29450.405071]          extent refs 1 gen 861178 flags 2
[29450.405072]          tree block backref root 5
[29450.405074]  item 16 key (950359642112 169 1) itemoff 3452 itemsize 33
[29450.405076]          extent refs 1 gen 861178 flags 2
[29450.405077]          tree block backref root 5
[29450.405079]  item 17 key (950359646208 169 0) itemoff 3419 itemsize 33
[29450.405080]          extent refs 1 gen 861178 flags 2
[29450.405081]          tree block backref root 5
[29450.405083]  item 18 key (950359650304 169 3) itemoff 3386 itemsize 33
[29450.405085]          extent refs 1 gen 861178 flags 2
[29450.405086]          tree block backref root 5
[29450.405088]  item 19 key (950359654400 169 2) itemoff 3353 itemsize 33
[29450.405089]          extent refs 1 gen 861178 flags 2
[29450.405090]          tree block backref root 5
[29450.405093]  item 20 key (950359658496 169 1) itemoff 3320 itemsize 33
[29450.405094]          extent refs 1 gen 861178 flags 2
[29450.405095]          tree block backref root 5
[29450.405097]  item 21 key (950359662592 169 0) itemoff 3287 itemsize 33
[29450.405099]          extent refs 1 gen 861178 flags 2
[29450.405100]          tree block backref root 5
[29450.405102]  item 22 key (950359666688 169 1) itemoff 3254 itemsize 33
[29450.405103]          extent refs 1 gen 861178 flags 2
[29450.405104]          tree block backref root 5
[29450.405106]  item 23 key (950359670784 169 1) itemoff 3221 itemsize 33
[29450.405108]          extent refs 1 gen 861178 flags 2
[29450.405109]          tree block backref root 2
[29450.405111]  item 24 key (950359674880 169 0) itemoff 3188 itemsize 33
[29450.405112]          extent refs 1 gen 861178 flags 2
[29450.405113]          tree block backref root 5
[29450.405115]  item 25 key (950359678976 169 0) itemoff 3155 itemsize 33
[29450.405117]          extent refs 1 gen 861178 flags 2
[29450.405118]          tree block backref root 2
[29450.405120]  item 26 key (950359683072 169 1) itemoff 3122 itemsize 33
[29450.405122]          extent refs 1 gen 861178 flags 2
[29450.405123]          tree block backref root 5
[29450.405125]  item 27 key (950359687168 169 0) itemoff 3089 itemsize 33
[29450.405126]          extent refs 1 gen 861178 flags 2
[29450.405127]          tree block backref root 5
[29450.405129]  item 28 key (950359691264 169 1) itemoff 3056 itemsize 33
[29450.405131]          extent refs 1 gen 861178 flags 2
[29450.405132]          tree block backref root 5
[29450.405134]  item 29 key (950359695360 169 0) itemoff 3023 itemsize 33
[29450.405135]          extent refs 1 gen 861178 flags 2
[29450.405136]          tree block backref root 5
[29450.405138]  item 30 key (950359699456 169 2) itemoff 2990 itemsize 33
[29450.405140]          extent refs 1 gen 861178 flags 2
[29450.405141]          tree block backref root 5
[29450.405143]  item 31 key (950359703552 169 1) itemoff 2957 itemsize 33
[29450.405144]          extent refs 1 gen 861178 flags 2
[29450.405145]          tree block backref root 5
[29450.405148]  item 32 key (951466917888 192 1107296256) itemoff 2933 
itemsize 24
[29450.405149]          block group used 0
[29450.405151]  item 33 key (952574214144 192 100663296) itemoff 2909 itemsize 
24
[29450.405152]          block group used 0
[29450.405154]  item 34 key (952674877440 192 3221225472) itemoff 2885 
itemsize 24
[29450.405155]          block group used 0
[29450.405157]  item 35 key (955896102912 192 3221225472) itemoff 2861 
itemsize 24
[29450.405158]          block group used 0
[29450.405160]  item 36 key (959117328384 192 3221225472) itemoff 2837 
itemsize 24
[29450.405161]          block group used 0
[29450.405163]  item 37 key (962338553856 192 3221225472) itemoff 2813 
itemsize 24
[29450.405164]          block group used 0
[29450.405166]  item 38 key (965559779328 192 3221225472) itemoff 2789 
itemsize 24
[29450.405167]          block group used 0
[29450.405169]  item 39 key (968781004800 192 3221225472) itemoff 2765 
itemsize 24
[29450.405171]          block group used 0
[29450.405173]  item 40 key (972002230272 192 3221225472) itemoff 2741 
itemsize 24
[29450.405174]          block group used 0
[29450.405176]  item 41 key (975223455744 192 3221225472) itemoff 2717 
itemsize 24
[29450.405177]          block group used 0
[29450.405179]  item 42 key (978444681216 192 3221225472) itemoff 2693 
itemsize 24
[29450.405180]          block group used 0
[29450.405182]  item 43 key (981665906688 192 3221225472) itemoff 2669 
itemsize 24
[29450.405183]          block group used 0
[29450.405185]  item 44 key (984887132160 192 3221225472) itemoff 2645 
itemsize 24
[29450.405186]          block group used 0
[29450.405188]  item 45 key (988108357632 192 3221225472) itemoff 2621 
itemsize 24
[29450.405189]          block group used 0
[29450.405191]  item 46 key (991329583104 192 3221225472) itemoff 2597 
itemsize 24
[29450.405192]          block group used 0
[29450.405194]  item 47 key (994550808576 192 3221225472) itemoff 2573 
itemsize 24
[29450.405195]          block group used 0
[29450.405197]  item 48 key (997772034048 192 3221225472) itemoff 2549 
itemsize 24
[29450.405198]          block group used 0
[29450.405200]  item 49 key (1000993259520 192 3221225472) itemoff 2525 
itemsize 24
[29450.405201]          block group used 0
[29450.405204]  item 50 key (1004214484992 192 3221225472) itemoff 2501 
itemsize 24
[29450.405205]          block group used 0
[29450.405207]  item 51 key (1007435710464 192 3221225472) itemoff 2477 
itemsize 24
[29450.405208]          block group used 0
[29450.405210]  item 52 key (1010656935936 192 3221225472) itemoff 2453 
itemsize 24
[29450.405211]          block group used 0
[29450.405213]  item 53 key (1013878161408 192 3221225472) itemoff 2429 
itemsize 24
[29450.405214]          block group used 0
[29450.405216]  item 54 key (1017099386880 192 3221225472) itemoff 2405 
itemsize 24
[29450.405217]          block group used 0
[29450.405219]  item 55 key (1020320612352 192 3221225472) itemoff 2381 
itemsize 24
[29450.405220]          block group used 0
[29450.405222]  item 56 key (1023541837824 192 3221225472) itemoff 2357 
itemsize 24
[29450.405223]          block group used 0
[29450.405225]  item 57 key (1026763063296 192 3221225472) itemoff 2333 
itemsize 24
[29450.405228]          block group used 0
[29450.405232]  item 58 key (1029984288768 192 3221225472) itemoff 2309 
itemsize 24
[29450.405237]          block group used 0
[29450.405241]  item 59 key (1033205514240 192 3221225472) itemoff 2285 
itemsize 24
[29450.405244]          block group used 0
[29450.405250]  item 60 key (1036426739712 192 3221225472) itemoff 2261 
itemsize 24
[29450.405255]          block group used 0
[29450.405262]  item 61 key (1039647965184 192 3221225472) itemoff 2237 
itemsize 24
[29450.405266]          block group used 0
[29450.405272]  item 62 key (1042869190656 192 3221225472) itemoff 2213 
itemsize 24
[29450.405279]          block group used 0
[29450.405288]  item 63 key (1046090416128 192 3221225472) itemoff 2189 
itemsize 24
[29450.405293]          block group used 0
[29450.405297]  item 64 key (1049311641600 192 3221225472) itemoff 2165 
itemsize 24
[29450.405304]          block group used 0
[29450.405310]  item 65 key (1052532867072 192 3221225472) itemoff 2141 
itemsize 24
[29450.405317]          block group used 0
[29450.405323]  item 66 key (1055754092544 192 3221225472) itemoff 2117 
itemsize 24
[29450.405329]          block group used 0
[29450.405335] BTRFS error (device sdd): unable to find ref byte nr 
951595991040 parent 0 root 5  owner 1 offset 0
[29450.405350] BTRFS: error (device sdd) in __btrfs_free_extent:6951: errno=-2 
No such entry
[29450.405373] BTRFS: error (device sdd) in btrfs_run_delayed_refs:2960: 
errno=-2 No such entry
[29450.406872] pending csums is 4096


btrfs output:
root@dibsi:/home/jan# btrfs check /dev/disk/by-uuid/
73d4dc77-6ff3-412f-9b0a-0d11458faf32 
Checking filesystem on /dev/disk/by-uuid/73d4dc77-6ff3-412f-9b0a-0d11458faf32
UUID: 73d4dc77-6ff3-412f-9b0a-0d11458faf32
checking extents
parent transid verify failed on 2280458502144 wanted 861168 found 860380
parent transid verify failed on 2280458502144 wanted 861168 found 860380
checksum verify failed on 2280458502144 found FC3DF84D wanted 2164EB93
checksum verify failed on 2280458502144 found FC3DF84D wanted 2164EB93
bytenr mismatch, want=2280458502144, have=15938383240448
parent transid verify failed on 2280458502144 wanted 861168 found 860380
parent transid verify failed on 2280458502144 wanted 861168 found 860380
checksum verify failed on 2280458502144 found FC3DF84D wanted 2164EB93
checksum verify failed on 2280458502144 found FC3DF84D wanted 2164EB93
bytenr mismatch, want=2280458502144, have=15938383240448
parent transid verify failed on 2280458506240 wanted 861168 found 860380
parent transid verify failed on 2280458506240 wanted 861168 found 860380
checksum verify failed on 2280458506240 found E76FAE4B wanted 968BCF7A
checksum verify failed on 2280458506240 found E76FAE4B wanted 968BCF7A
bytenr mismatch, want=2280458506240, have=15938383228160
parent transid verify failed on 2280458510336 wanted 861168 found 860380
parent transid verify failed on 2280458510336 wanted 861168 found 860380
checksum verify failed on 2280458510336 found ABA824E6 wanted FD072FC5
checksum verify failed on 2280458510336 found ABA824E6 wanted FD072FC5
bytenr mismatch, want=2280458510336, have=15938383199488
parent transid verify failed on 2280458620928 wanted 861168 found 860380
parent transid verify failed on 2280458620928 wanted 861168 found 860380
checksum verify failed on 2280458620928 found 4C43E476 wanted F3660326
checksum verify failed on 2280458620928 found 4C43E476 wanted F3660326
bytenr mismatch, want=2280458620928, have=15938383802368
parent transid verify failed on 2280458625024 wanted 861168 found 860380
parent transid verify failed on 2280458625024 wanted 861168 found 860380
checksum verify failed on 2280458625024 found AD8EE78F wanted 093A4586
checksum verify failed on 2280458625024 found AD8EE78F wanted 093A4586
bytenr mismatch, want=2280458625024, have=15938383831040
parent transid verify failed on 2280458629120 wanted 861168 found 860380
parent transid verify failed on 2280458629120 wanted 861168 found 860380
checksum verify failed on 2280458629120 found 67FC5731 wanted 3F68DF37
checksum verify failed on 2280458629120 found 67FC5731 wanted 3F68DF37
bytenr mismatch, want=2280458629120, have=15938383810560
parent transid verify failed on 2280458633216 wanted 861168 found 860380
parent transid verify failed on 2280458633216 wanted 861168 found 860380
checksum verify failed on 2280458633216 found D3E6B383 wanted 372BEB16
checksum verify failed on 2280458633216 found D3E6B383 wanted 372BEB16
bytenr mismatch, want=2280458633216, have=15938383830272
parent transid verify failed on 2280458637312 wanted 861168 found 860380
parent transid verify failed on 2280458637312 wanted 861168 found 860380
checksum verify failed on 2280458637312 found AD56590B wanted 27275E35
checksum verify failed on 2280458637312 found AD56590B wanted 27275E35
bytenr mismatch, want=2280458637312, have=15938383817984
parent transid verify failed on 2280458645504 wanted 861168 found 860380
parent transid verify failed on 2280458645504 wanted 861168 found 860380
checksum verify failed on 2280458645504 found 609E8D90 wanted 62B88736
checksum verify failed on 2280458645504 found 609E8D90 wanted 62B88736
bytenr mismatch, want=2280458645504, have=15938383793408
parent transid verify failed on 2280465395712 wanted 861168 found 860380
parent transid verify failed on 2280465395712 wanted 861168 found 860380
checksum verify failed on 2280465395712 found 1EE17660 wanted 5A593448
checksum verify failed on 2280465395712 found 1EE17660 wanted 5A593448
bytenr mismatch, want=2280465395712, have=15938501561600
parent transid verify failed on 2280415236096 wanted 861168 found 860380
parent transid verify failed on 2280415236096 wanted 861168 found 860380
checksum verify failed on 2280415236096 found F04B13C4 wanted CD1F2721
checksum verify failed on 2280415236096 found F04B13C4 wanted CD1F2721
bytenr mismatch, want=2280415236096, have=15938149183488
parent transid verify failed on 2280259969024 wanted 861166 found 860373
parent transid verify failed on 2280259969024 wanted 861166 found 860373
checksum verify failed on 2280259969024 found B80076DB wanted 09C8FA6F
checksum verify failed on 2280259969024 found B80076DB wanted 09C8FA6F
bytenr mismatch, want=2280259969024, have=15937221663232
parent transid verify failed on 2280409526272 wanted 861168 found 860376
parent transid verify failed on 2280409526272 wanted 861168 found 860376
checksum verify failed on 2280409526272 found 4357A5F3 wanted 969A158E
checksum verify failed on 2280409526272 found 4357A5F3 wanted 969A158E
bytenr mismatch, want=2280409526272, have=15938088724992
parent transid verify failed on 2280428298240 wanted 861168 found 860376
parent transid verify failed on 2280428298240 wanted 861168 found 860376
checksum verify failed on 2280428298240 found B8F51F6A wanted A2F455BD
checksum verify failed on 2280428298240 found B8F51F6A wanted A2F455BD
bytenr mismatch, want=2280428298240, have=15938466364672
parent transid verify failed on 2280446873600 wanted 861168 found 860380
parent transid verify failed on 2280446873600 wanted 861168 found 860380
checksum verify failed on 2280446873600 found DDD2FE66 wanted F89EED66
checksum verify failed on 2280446873600 found DDD2FE66 wanted F89EED66
bytenr mismatch, want=2280446873600, have=15938382596608
parent transid verify failed on 2280272723968 wanted 861166 found 860370
parent transid verify failed on 2280272723968 wanted 861166 found 860370
checksum verify failed on 2280272723968 found E4F93095 wanted B089A4B7
checksum verify failed on 2280272723968 found E4F93095 wanted B089A4B7
bytenr mismatch, want=2280272723968, have=15937216620288
parent transid verify failed on 2280345731072 wanted 861168 found 860377
parent transid verify failed on 2280345731072 wanted 861168 found 860377
checksum verify failed on 2280345731072 found 316EB1A6 wanted 1249429F
checksum verify failed on 2280345731072 found 316EB1A6 wanted 1249429F
bytenr mismatch, want=2280345731072, have=15937407592192
parent transid verify failed on 2280347447296 wanted 861167 found 860377
parent transid verify failed on 2280347447296 wanted 861167 found 860377
checksum verify failed on 2280347447296 found 6D4A46D8 wanted F2C503E1
checksum verify failed on 2280347447296 found 6D4A46D8 wanted F2C503E1
bytenr mismatch, want=2280347447296, have=15937414706944
parent transid verify failed on 2280388960256 wanted 861168 found 860380
parent transid verify failed on 2280388960256 wanted 861168 found 860380
checksum verify failed on 2280388960256 found 3B3067A0 wanted 46EE075B
checksum verify failed on 2280388960256 found 3B3067A0 wanted 46EE075B
bytenr mismatch, want=2280388960256, have=15938310299648
parent transid verify failed on 2280389693440 wanted 861168 found 860380
parent transid verify failed on 2280389693440 wanted 861168 found 860380
checksum verify failed on 2280389693440 found FABC36B0 wanted E26236B9
checksum verify failed on 2280389693440 found FABC36B0 wanted E26236B9
bytenr mismatch, want=2280389693440, have=15938306026752
parent transid verify failed on 2280157011968 wanted 861165 found 860361
parent transid verify failed on 2280157011968 wanted 861165 found 860361
checksum verify failed on 2280157011968 found 40E10FA2 wanted FF1F5A65
checksum verify failed on 2280157011968 found 40E10FA2 wanted FF1F5A65
bytenr mismatch, want=2280157011968, have=15936811604480
parent transid verify failed on 2280157970432 wanted 861165 found 860361
parent transid verify failed on 2280157970432 wanted 861165 found 860361
checksum verify failed on 2280157970432 found 331A4EF6 wanted 6D271A13
checksum verify failed on 2280157970432 found 331A4EF6 wanted 6D271A13
bytenr mismatch, want=2280157970432, have=15936577447680
parent transid verify failed on 2280427163648 wanted 861168 found 860380
parent transid verify failed on 2280427163648 wanted 861168 found 860380
checksum verify failed on 2280427163648 found 1C8FE3B6 wanted 95818BD4
checksum verify failed on 2280427163648 found 1C8FE3B6 wanted 95818BD4
bytenr mismatch, want=2280427163648, have=15938458906624
parent transid verify failed on 2280445718528 wanted 861168 found 860380
parent transid verify failed on 2280445718528 wanted 861168 found 860380
checksum verify failed on 2280445718528 found 67E355D6 wanted B49E524F
checksum verify failed on 2280445718528 found 67E355D6 wanted B49E524F
bytenr mismatch, want=2280445718528, have=15938385121280
parent transid verify failed on 503649468416 wanted 2486 found 861173
Ignoring transid failure
parent transid verify failed on 503649472512 wanted 2486 found 861177
parent transid verify failed on 503649472512 wanted 2486 found 861177
checksum verify failed on 503649472512 found 094AA6E5 wanted 76FAADD7
checksum verify failed on 503649472512 found 094AA6E5 wanted 76FAADD7
bytenr mismatch, want=503649472512, have=372927852544
parent transid verify failed on 503649476608 wanted 818691 found 861177
parent transid verify failed on 503649476608 wanted 818691 found 861177
checksum verify failed on 503649476608 found 6CF07DA7 wanted 1EC02B4E
checksum verify failed on 503649476608 found 6CF07DA7 wanted 1EC02B4E
bytenr mismatch, want=503649476608, have=372927881216
parent transid verify failed on 503649480704 wanted 2486 found 861177
parent transid verify failed on 503649480704 wanted 2486 found 861177
checksum verify failed on 503649480704 found 74888BDC wanted 5396C8AC
bytenr mismatch, want=503649480704, have=503649415168
parent transid verify failed on 503649484800 wanted 141269 found 861172
parent transid verify failed on 503649484800 wanted 141269 found 861172
checksum verify failed on 503649484800 found 519C3303 wanted 1B4232A5
checksum verify failed on 503649484800 found 519C3303 wanted 1B4232A5
bytenr mismatch, want=503649484800, have=372927880448
parent transid verify failed on 503649488896 wanted 2486 found 861176
parent transid verify failed on 503649488896 wanted 2486 found 861176
checksum verify failed on 503649488896 found 4619DDAF wanted A0B78F95
bytenr mismatch, want=503649488896, have=503649423360
parent transid verify failed on 503649492992 wanted 2486 found 861176
parent transid verify failed on 503649492992 wanted 2486 found 861176
checksum verify failed on 503649492992 found EBC18A42 wanted F05DA8AA
bytenr mismatch, want=503649492992, have=503649427456
parent transid verify failed on 503649497088 wanted 2486 found 861176
parent transid verify failed on 503649497088 wanted 2486 found 861176
checksum verify failed on 503649497088 found B046E5A0 wanted 9867B21F
checksum verify failed on 503649497088 found B046E5A0 wanted 9867B21F
bytenr mismatch, want=503649497088, have=372927843584
parent transid verify failed on 503649501184 wanted 2486 found 861176
parent transid verify failed on 503649501184 wanted 2486 found 861176
checksum verify failed on 503649501184 found 717EBF4C wanted A15FDF58
parent transid verify failed on 503649501184 wanted 2486 found 861176
Ignoring transid failure
leaf parent key incorrect 503649501184
bad block 503649501184
ERROR: errors found in extent allocation tree or chunk allocation
parent transid verify failed on 2280458502144 wanted 861168 found 860380
parent transid verify failed on 2280458502144 wanted 861168 found 860380
checksum verify failed on 2280458502144 found FC3DF84D wanted 2164EB93
checksum verify failed on 2280458502144 found FC3DF84D wanted 2164EB93
bytenr mismatch, want=2280458502144, have=15938383240448


* Re: mounting failed any file on my filesystem
  2016-12-29 19:05 mounting failed any file on my filesystem Jan Koester
@ 2016-12-29 22:31 ` Duncan
  2016-12-30 12:17   ` Jan Koester
  0 siblings, 1 reply; 5+ messages in thread
From: Duncan @ 2016-12-29 22:31 UTC (permalink / raw)
  To: linux-btrfs

Jan Koester posted on Thu, 29 Dec 2016 20:05:35 +0100 as excerpted:

> Hi,
> 
> I have a problem with my filesystem. My system hung and I had to do a
> hard reset, and afterwards the filesystem was corrupted. I have already
> tried to repair it without success, as you can see in the log below. It
> seems that a single corrupted block brings the whole filesystem down.
> 
> Does anybody have an idea what happened to my filesystem?
> 
> dmesg output when opening a file:
> [29450.404327] WARNING: CPU: 5 PID: 16161 at
> /build/linux-lIgGMF/linux-4.8.11/ fs/btrfs/extent-tree.c:6945
> __btrfs_free_extent.isra.71+0x8e2/0xd60 [btrfs]

First a disclaimer.  I'm a btrfs user and list regular, not a dev.  As 
such I don't really read call traces much beyond checking the kernel 
version, and don't do code.  It's likely that you will get a more 
authoritative reply from someone who does, and it should take precedence, 
but in the mean time, I can try to deal with the preliminaries.

Kernel 4.8.11, good.  But you run btrfs check below, and we don't have 
the version of your btrfs-progs userspace.  Please report that too.
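
(If it's handy, both are one-liners; nothing beyond btrfs-progs itself is 
needed:

  uname -r          # running kernel
  btrfs --version   # btrfs-progs userspace version

Just paste the output of both.)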

> btrfs output:
> root@dibsi:/home/jan# btrfs check /dev/disk/by-uuid/
> 73d4dc77-6ff3-412f-9b0a-0d11458faf32

Note that btrfs check is read-only by default.  It will report what it 
thinks are errors, but won't attempt to fix them unless you add various 
options (such as --repair) to tell it to do so.  This is by design and is 
very important, as attempting to repair problems that it doesn't properly 
understand could make the problems worse instead of better.  So even tho 
the above command will only report what it sees as problems, not attempt 
to fix them, you did the right thing by running check without --repair 
first, and posting the results here for an expert to look at and tell you 
whether to try --repair, or what else to try instead.
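
(To spell out the difference on the command line, using the same by-uuid 
path you already used, the read-only check you ran and the repair mode to 
hold off on are simply:

  btrfs check /dev/disk/by-uuid/73d4dc77-6ff3-412f-9b0a-0d11458faf32
  btrfs check --repair /dev/disk/by-uuid/73d4dc77-6ff3-412f-9b0a-0d11458faf32

Again, don't run the --repair form until someone who can read the check 
output says so.)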

> Checking filesystem on
> /dev/disk/by-uuid/73d4dc77-6ff3-412f-9b0a-0d11458faf32
> UUID: 73d4dc77-6ff3-412f-9b0a-0d11458faf32
> checking extents
> parent transid verify failed on 2280458502144 wanted 861168
> found 860380
> parent transid verify failed on 2280458502144 wanted 861168
> found 860380
> checksum verify failed on 2280458502144 found FC3DF84D
> wanted 2164EB93
> checksum verify failed on 2280458502144 found FC3DF84D
> wanted 2164EB93
> bytenr mismatch, want=2280458502144, have=15938383240448
[...]

Some other information that we normally ask for includes the output from 
a few other btrfs commands.

It's unclear from your report whether the filesystem will mount at all.  
The subject says mounting failed, but it then mentions any file on the 
filesystem, which seems to imply that you could mount, but that accessing 
any file after mounting crashed the system with the trace you posted.

If you can't mount the filesystem, at least try to post the output from...

btrfs filesystem show

If you can mount the filesystem, then the much more detailed...

btrfs filesystem usage

... if your btrfs-progs is new enough, or...

btrfs filesystem df

... if btrfs-progs is too old to have the usage command.
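
(Roughly, and assuming the filesystem is, or would be, mounted at /mnt, 
adjust the path to wherever you actually mount it:

  btrfs filesystem show
  btrfs filesystem usage /mnt
  btrfs filesystem df /mnt

show works without a mount; usage and df need the mount point.)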

Also, if it's not clear from the output of the commands above (usage by 
itself, or show plus df, should answer most of the below, but show alone 
only provides some of the information), tell us a bit more about the 
filesystem in question:

Single device (like traditional filesystems) or multiple device?  If 
multiple device, what raid levels, if you know them, or did you just go 
with the defaults?  If single device, again, defaults, or did you specify 
single or dup, particularly for metadata?

Also, how big was the filesystem and how close to full?  And was it on 
ssd, spinning rust, or on top of something virtual (like a VM image 
existing as a file on the host, or lvm, or mdraid, etc)?


Meanwhile, if you can mount, the first thing I'd try is btrfs scrub 
(unless you were running btrfs raid56 mode, which makes things far more 
complex, as it's not stable yet and isn't recommended except for testing 
with data you can afford to lose).  Often, a scrub can fix much of the 
damage from a crash if you were running raid1 mode (multi-device metadata 
default), raid10, or dup (single-device metadata default, except on ssd), 
as those keep a second checksummed copy, often still correct, that scrub 
can use to fix the bad copy.  It will detect, but be unable to fix, 
damage in single mode (default for data) or raid0 mode, as those don't 
have a second copy available to fix the first.

Because the default for single-device btrfs is dup metadata and single 
data, in that case the scrub should fix most or all of the metadata, 
allowing you to access small files (roughly anything under a couple KiB, 
which btrfs stores inline in the metadata) and larger files that weren't 
themselves damaged, but you may still have damage in some files of any 
significant size.

But scrub can only run if you can mount the filesystem.  If you can't, 
then you have to try other things in order to get it mountable first.  
Many of these other things tend to be much more complex and risky, so if 
you can mount at all, try scrub first and see how much it helps.  Here 
I'm dual-device raid1 for nearly all my btrfs, and (assuming I can mount 
the affected filesystem, which I usually can) I now run scrub first thing 
after a crash, as a preventative measure, even without knowing whether 
the filesystem was damaged.
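
(For reference, and again assuming a mount point of /mnt, a scrub is just:

  btrfs scrub start /mnt
  btrfs scrub status /mnt

start kicks it off in the background; status lets you watch progress and 
see the error counts when it finishes.)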

If the filesystem won't mount, then the recommendation is /likely/ to be 
trying the usebackuproot mount option (which replaced the older recovery 
mount option, but you're using a new enough kernel for usebackuproot), 
which will try some older tree roots if the newest one is damaged.  You 
may have to use that option with readonly, which of course will prevent 
running scrub or the like while mounted, but may help you get access to 
the data at least to freshen up your backups.  However, usebackuproot 
will by definition sacrifice the last seconds of writes before the crash, 
and while I'd probably try this option on my own system without asking, 
I'm not comfortable recommending it to others, so I'd suggest waiting for 
one of the higher experts to confirm, before trying it yourself.
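
(For completeness, and with /dev/sdd and /mnt only as stand-ins for your 
own device and mount point, that would look something like:

  mount -o ro,usebackuproot /dev/sdd /mnt

The ro is there because a read-write mount on damaged trees is exactly 
the sort of thing that can make matters worse.)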

Beyond usebackuproot, you get into more risky attempts to repair that may 
instead do further damage if they don't work.  This is where btrfs check 
--repair lives, along with some other check options, btrfs rescue, etc.  
Unless specifically told otherwise by an expert after they look at the 
filesystem info, these are risky enough that if at all possible, you want 
to freshen your backups before you try them.

That's where btrfs restore comes in, as it lets you attempt to restore 
files from an unmountable filesystem without actually writing to that 
filesystem, and thus without risking further damage in the process.  Of 
course that means you have to have some place to put the files it's 
going to restore.  In simple mode you just run btrfs restore with 
commandline parameters telling it what device to restore from and where 
to put the restored files (plus some options telling it whether to try 
restoring metadata like file ownership, permissions, dates, etc), and it 
just works.
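
(As a rough sketch only, with /dev/sdd again standing in for one of your 
devices and /path/to/spare/space for somewhere with room for the 
recovered files:

  btrfs restore -m -i /dev/sdd /path/to/spare/space

-m tries to restore ownership/permissions/timestamps and -i carries on 
past errors; see the btrfs-restore manpage for the rest, including a 
dry-run option.)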

However, should btrfs restore's simple mode fail, there's more complex 
advanced modes to try, still without risking further damage to the 
filesystem in question, but that gets complex enough it needs its own 
post... if you come to that.  There's a page on the wiki with some 
instructions, but they may not be current and it's a complex enough 
operation that most people need help beyond what's on the wiki (and in 
the btrfs-restore manpage), anyway.  But here's the link so you can take 
a look at what the general operation looks like:

https://btrfs.wiki.kernel.org/index.php/Restore

Meanwhile, it's a bit late now, but in general, btrfs is considered still 
in heavy development, stabilizing but not yet fully stable and mature.  
Any sysadmin worth the label will tell you that data you don't keep 
backups of is data you have defined as not worth the time, trouble and 
resources to back up, basically throw-away data, because it's /not/ worth 
backing up or by definition you'd /have/ those backups; that holds even 
for normal stable and mature filesystems.  With btrfs still stabilizing, 
backups are even /more/ strongly recommended, as is keeping them current 
within the window of data you're willing to lose if you lose the primary 
copy, and keeping those backups practically usable (not over a slow net 
link that'll take over a week to download in order to restore, for 
instance, one real case that was posted).  If you're doing that, then 
losing a filesystem isn't going to be a big stress, and you can afford to 
skip the really complex and risky stuff (unless you're simply doing it to 
learn how) and just restore from backup, as that will be simpler.  If 
not, then you should really reexamine whether btrfs is the right 
filesystem choice for you, because it /isn't/ yet fully stable and 
mature, and chances are you'd be better off with a more stable and mature 
filesystem where not having updated, at-hand backups is less of a risk 
(altho as I said, any sysadmin worth the name will tell you that not 
having backups literally defines the data as throw-away value, because in 
the real world, "things happen", and there are too many of those things 
possible to behave otherwise).

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: mounting failed any file on my filesystem
  2016-12-29 22:31 ` Duncan
@ 2016-12-30 12:17   ` Jan Koester
  2016-12-31  5:05     ` Duncan
  0 siblings, 1 reply; 5+ messages in thread
From: Jan Koester @ 2016-12-30 12:17 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

On Thursday, 29 December 2016 22:31:29 CET Duncan wrote:
> Jan Koester posted on Thu, 29 Dec 2016 20:05:35 +0100 as excerpted:
> > Hi,
> > 
> > I have a problem with my filesystem. My system hung and I had to do a
> > hard reset, and afterwards the filesystem was corrupted. I have already
> > tried to repair it without success, as you can see in the log below. It
> > seems that a single corrupted block brings the whole filesystem down.
> > 
> > Does anybody have an idea what happened to my filesystem?
> > 
> > dmesg output when opening a file:
> > [29450.404327] WARNING: CPU: 5 PID: 16161 at
> > /build/linux-lIgGMF/linux-4.8.11/ fs/btrfs/extent-tree.c:6945
> > __btrfs_free_extent.isra.71+0x8e2/0xd60 [btrfs]
> 
> First a disclaimer.  I'm a btrfs user and list regular, not a dev.  As
> such I don't really read call traces much beyond checking the kernel
> version, and don't do code.  It's likely that you will get a more
> authoritative reply from someone who does, and it should take precedence,
> but in the mean time, I can try to deal with the preliminaries.
> 
> Kernel 4.8.11, good.  But you run btrfs check below, and we don't have
> the version of your btrfs-progs userspace.  Please report that too.
> 
> > btrfs output:
> > root@dibsi:/home/jan# btrfs check /dev/disk/by-uuid/
> > 73d4dc77-6ff3-412f-9b0a-0d11458faf32
> 
> Note that btrfs check is read-only by default.  It will report what it
> thinks are errors, but won't attempt to fix them unless you add various
> options (such as --repair) to tell it to do so.  This is by design and is
> very important, as attempting to repair problems that it doesn't properly
> understand could make the problems worse instead of better.  So even tho
> the above command will only report what it sees as problems, not attempt
> to fix them, you did the right thing by running check without --repair
> first, and posting the results here for an expert to look at and tell you
> whether to try --repair, or what else to try instead.
> 
> > Checking filesystem on
> > /dev/disk/by-uuid/73d4dc77-6ff3-412f-9b0a-0d11458faf32
> > UUID: 73d4dc77-6ff3-412f-9b0a-0d11458faf32
> > checking extents
> > parent transid verify failed on 2280458502144 wanted 861168
> > found 860380
> > parent transid verify failed on 2280458502144 wanted 861168
> > found 860380
> > checksum verify failed on 2280458502144 found FC3DF84D
> > wanted 2164EB93
> > checksum verify failed on 2280458502144 found FC3DF84D
> > wanted 2164EB93
> > bytenr mismatch, want=2280458502144, have=15938383240448
> 
> [...]
> 
> Some other information that we normally ask for includes the output from
> a few other btrfs commands.
> 
> It's unclear from your report whether the filesystem will mount at all.
> The subject says mounting failed, but it then mentions any file on the
> filesystem, which seems to imply that you could mount, but that accessing
> any file after mounting crashed the system with the trace you posted.
> 
> If you can't mount the filesystem, at least try to post the output from...
> 
> btrfs filesystem show
> 
> If you can mount the filesystem, then the much more detailed...
> 
> btrfs filesystem usage
> 
> ... if your btrfs-progs is new enough, or...
> 
> btrfs filesystem df
> 
> ... if btrfs-progs is too old to have the usage command.
> 
> Also, if it's not clear from the output of the commands above (usage by
> itself, or show plus df, should answer most of the below, but show alone
> only provides some of the information), tell us a bit more about the
> filesystem in question:
> 
> Single device (like traditional filesystems) or multiple device?  If
> multiple device, what raid levels, if you know them, or did you just go
> with the defaults?  If single device, again, defaults, or did you specify
> single or dup, particularly for metadata?
> 
> Also, how big was the filesystem and how close to full?  And was it on
> ssd, spinning rust, or on top of something virtual (like a VM image
> existing as a file on the host, or lvm, or mdraid, etc)?
> 
> 
> Meanwhile, if you can mount, the first thing I'd try is btrfs scrub
> (unless you were running btrfs raid56 mode, which makes things far more
> complex, as it's not stable yet and isn't recommended except for testing
> with data you can afford to lose).  Often, a scrub can fix much of the
> damage from a crash if you were running raid1 mode (multi-device metadata
> default), raid10, or dup (single-device metadata default, except on ssd),
> as those keep a second checksummed copy, often still correct, that scrub
> can use to fix the bad copy.  It will detect, but be unable to fix,
> damage in single mode (default for data) or raid0 mode, as those don't
> have a second copy available to fix the first.
> 
> Because the default for single-device btrfs is dup metadata and single
> data, in that case the scrub should fix most or all of the metadata,
> allowing you to access small files (roughly anything under a couple KiB,
> which btrfs stores inline in the metadata) and larger files that weren't
> themselves damaged, but you may still have damage in some files of any
> significant size.
> 
> But scrub can only run if you can mount the filesystem.  If you can't,
> then you have to try other things in order to get it mountable first.
> Many of these other things tend to be much more complex and risky, so if
> you can mount at all, try scrub first and see how much it helps.  Here
> I'm dual-device raid1 for nearly all my btrfs, and (assuming I can mount
> the affected filesystem, which I usually can) I now run scrub first thing
> after a crash, as a preventative measure, even without knowing whether
> the filesystem was damaged.
> 
> If the filesystem won't mount, then the recommendation is /likely/ to be
> trying the usebackuproot mount option (which replaced the older recovery
> mount option, but you're using a new enough kernel for usebackuproot),
> which will try some older tree roots if the newest one is damaged.  You
> may have to use that option with readonly, which of course will prevent
> running scrub or the like while mounted, but may help you get access to
> the data at least to freshen up your backups.  However, usebackuproot
> will by definition sacrifice the last seconds of writes before the crash,
> and while I'd probably try this option on my own system without asking,
> I'm not comfortable recommending it to others, so I'd suggest waiting for
> one of the higher experts to confirm, before trying it yourself.
> 
> Beyond usebackuproot, you get into more risky attempts to repair that may
> instead do further damage if they don't work.  This is where btrfs check
> --repair lives, along with some other check options, btrfs rescue, etc.
> Unless specifically told otherwise by an expert after they look at the
> filesystem info, these are risky enough that if at all possible, you want
> to freshen your backups before you try them.
> 
> That's where btrfs restore comes in, as it lets you attempt to restore
> files from an unmountable filesystem without actually writing to that
> filesystem, and thus without risking further damage in the process.  Of
> course that means you have to have some place to put the files it's
> going to restore.  In simple mode you just run btrfs restore with
> commandline parameters telling it what device to restore from and where
> to put the restored files (plus some options telling it whether to try
> restoring metadata like file ownership, permissions, dates, etc), and it
> just works.
> 
> However, should btrfs restore's simple mode fail, there's more complex
> advanced modes to try, still without risking further damage to the
> filesystem in question, but that gets complex enough it needs its own
> post... if you come to that.  There's a page on the wiki with some
> instructions, but they may not be current and it's a complex enough
> operation that most people need help beyond what's on the wiki (and in
> the btrfs-restore manpage), anyway.  But here's the link so you can take
> a look at what the general operation looks like:
> 
> https://btrfs.wiki.kernel.org/index.php/Restore
> 
> Meanwhile, it's a bit late now, but in general, btrfs is considered still
> in heavy development, stabilizing but not yet fully stable and mature.
> Any sysadmin worth the label will tell you that data you don't keep
> backups of is data you have defined as not worth the time, trouble and
> resources to back up, basically throw-away data, because it's /not/ worth
> backing up or by definition you'd /have/ those backups; that holds even
> for normal stable and mature filesystems.  With btrfs still stabilizing,
> backups are even /more/ strongly recommended, as is keeping them current
> within the window of data you're willing to lose if you lose the primary
> copy, and keeping those backups practically usable (not over a slow net
> link that'll take over a week to download in order to restore, for
> instance, one real case that was posted).  If you're doing that, then
> losing a filesystem isn't going to be a big stress, and you can afford to
> skip the really complex and risky stuff (unless you're simply doing it to
> learn how) and just restore from backup, as that will be simpler.  If
> not, then you should really reexamine whether btrfs is the right
> filesystem choice for you, because it /isn't/ yet fully stable and
> mature, and chances are you'd be better off with a more stable and mature
> filesystem where not having updated, at-hand backups is less of a risk
> (altho as I said, any sysadmin worth the name will tell you that not
> having backups literally defines the data as throw-away value, because in
> the real world, "things happen", and there are too many of those things
> possible to behave otherwise).

Hi,

I'm using kernel 4.8.0-2 and btrfs-progs 4.9.

I can mount the filesystem now, but it crashes when I try to access a file on 
this filesystem.

btrfs fi show
Label: none  uuid: 73d4dc77-6ff3-412f-9b0a-0d11458faf32
        Total devices 5 FS bytes used 1.17TiB
        devid    1 size 931.51GiB used 420.78GiB path /dev/sdd
        devid    2 size 931.51GiB used 420.78GiB path /dev/sdf
        devid    3 size 931.51GiB used 420.78GiB path /dev/sde
        devid    4 size 931.51GiB used 420.78GiB path /dev/sda
        devid    5 size 931.51GiB used 420.78GiB path /dev/sdc

btrfs filesystem usage

sudo btrfs filesystem usage /mnt
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
WARNING: RAID56 detected, not implemented
Overall:
    Device size:                   4.55TiB
    Device allocated:                0.00B
    Device unallocated:            4.55TiB
    Device missing:                  0.00B
    Used:                            0.00B
    Free (estimated):                0.00B      (min: 8.00EiB)
    Data ratio:                       0.00
    Metadata ratio:                   0.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID6: Size:1.22TiB, Used:0.00B
   /dev/sda      418.00GiB
   /dev/sdc      418.00GiB
   /dev/sdd      418.00GiB
   /dev/sde      418.00GiB
   /dev/sdf      418.00GiB

Metadata,RAID6: Size:8.25GiB, Used:44.00KiB
   /dev/sda        2.75GiB
   /dev/sdc        2.75GiB
   /dev/sdd        2.75GiB
   /dev/sde        2.75GiB
   /dev/sdf        2.75GiB

System,RAID6: Size:96.00MiB, Used:0.00B
   /dev/sda       32.00MiB
   /dev/sdc       32.00MiB
   /dev/sdd       32.00MiB
   /dev/sde       32.00MiB
   /dev/sdf       32.00MiB

Unallocated:
   /dev/sda      510.73GiB
   /dev/sdc      510.73GiB
   /dev/sdd      510.73GiB
   /dev/sde      510.73GiB
   /dev/sdf      510.73GiB


sudo btrfs filesystem df /mnt
Data, RAID6: total=1.22TiB, used=0.00B
System, RAID6: total=96.00MiB, used=0.00B
Metadata, RAID6: total=8.25GiB, used=80.00KiB
GlobalReserve, single: total=512.00MiB, used=8.00KiB




* Re: mounting failed any file on my filesystem
  2016-12-30 12:17   ` Jan Koester
@ 2016-12-31  5:05     ` Duncan
  2017-01-01 17:24       ` Jan Koester
  0 siblings, 1 reply; 5+ messages in thread
From: Duncan @ 2016-12-31  5:05 UTC (permalink / raw)
  To: linux-btrfs

Jan Koester posted on Fri, 30 Dec 2016 13:17:37 +0100 as excerpted:

> sudo btrfs filesystem df /mnt
> Data, RAID6: total=1.22TiB, used=0.00B
> System, RAID6: total=96.00MiB, used=0.00B
> Metadata, RAID6: total=8.25GiB, used=80.00KiB
> GlobalReserve, single: total=512.00MiB, used=8.00KiB

Expanding on what I already mentioned in passing (hoping it wasn't the 
case), raid56 mode (so including your raid6) remains quite unstable, with 
known problems that make it unsuitable for the sort of purposes people 
normally run parity-raid for.  So it's actively negatively recommended 
unless you're running it for the specific purpose of trying to help the 
devs work out the problems with it, using only throw-away-value test data 
in case those problems eat it, which unfortunately they have a 
significantly real chance of doing, with raid56 at this point.

So you need to get off of it ASAP, and hope that any data that wasn't 
already throw-away value doesn't end up being thrown away anyway in the 
process.
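
(On a healthy filesystem, getting off raid56 is normally done with a 
profile-converting balance, something like:

  btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt

Whether a balance is even safe to attempt on a filesystem in this state 
is exactly the kind of thing to get a dev or higher-level expert to 
confirm first, so treat that purely as what the end state should look 
like, not as something to run right now.)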

Unfortunately, as I said in the earlier post, I'm just a user, tho a list 
regular, myself, not a dev.  And I've been staying well away from raid56 
/because/ of these problems (as well as because it didn't fit my use-case 
in the first place), so other than noting the severity of the issues, 
I've not really been paying attention to the various threads with people 
trying to fix the problems and save at least some of their raid56 stored 
data.

So if you want to try to save the data, you'll need help from a dev or 
higher-level expert, or failing that, you'll at least need to examine the 
last 2-3 months' worth of list threads, find the raid56 ones with methods 
to try to save what can be saved, and possibly patch at least some of the 
problems in order not to make things worse while you're doing so.

But it's going to require some reasonable technical know-how to try to do 
that, as well as the time and hassle, so honestly, unless that data's 
/really/ worth it, it may be better to simply cut and run, doing a fresh 
mkfs and being done with btrfs raid56 for now, without spending more time 
on it, only to find you can't save much anyway.  Tho if it's worth it to 
you, you may be able to save much of it, but you could spend a month's 
man-hours doing it too and possibly still come up empty.  Plus be 
careful, because stuff like scrub that would normally help, can make the 
problem much much worse in the case of raid56 ATM.  Yes, the problems 
with it ATM *are* that bad.  Unfortunately.  There's actually talk of 
scrapping the code (almost?) entirely and starting over again, as there's 
a real question as to whether it can even be properly fixed, tho I'm not 
sure it will come to that.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



* Re: mounting failed any file on my filesystem
  2016-12-31  5:05     ` Duncan
@ 2017-01-01 17:24       ` Jan Koester
  0 siblings, 0 replies; 5+ messages in thread
From: Jan Koester @ 2017-01-01 17:24 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

On Saturday, 31 December 2016 05:05:03 CET Duncan wrote:
> Jan Koester posted on Fri, 30 Dec 2016 13:17:37 +0100 as excerpted:
> > sudo btrfs filesystem df /mnt
> > Data, RAID6: total=1.22TiB, used=0.00B
> > System, RAID6: total=96.00MiB, used=0.00B
> > Metadata, RAID6: total=8.25GiB, used=80.00KiB
> > GlobalReserve, single: total=512.00MiB, used=8.00KiB
> 
> Expanding on what I already mentioned in passing (hoping it wasn't the
> case), raid56 mode (so including your raid6) remains quite unstable, with
> known problems that make it unsuitable for the sort of purposes people
> normally run parity-raid for.  So it's actively negatively recommended
> unless you're running it for the specific purpose of trying to help the
> devs work out the problems with it, using only throw-away-value test data
> in case those problems eat it, which unfortunately they have a
> significantly real chance of doing, with raid56 at this point.
> 
> So you need to get off of it ASAP, and hope that any data that wasn't
> already throw-away value doesn't end up being thrown away anyway in the
> process.
> 
> Unfortunately, as I said in the earlier post, I'm just a user, tho a list
> regular, myself, not a dev.  And I've been staying well away from raid56
> /because/ of these problems (as well as because it didn't fit my use-case
> in the first place), so other than noting the severity of the issues,
> I've not really been paying attention to the various threads with people
> trying to fix the problems and save at least some of their raid56 stored
> data.
> 
> So if you want to try to save the data, you'll need help from a dev or
> higher-level expert, or failing that, you'll at least need to examine the
> last 2-3 months' worth of list threads, find the raid56 ones with methods
> to try to save what can be saved, and possibly patch at least some of the
> problems in order not to make things worse while you're doing so.
> 
> But it's going to require some reasonable technical know-how to try to do
> that, as well as the time and hassle, so honestly, unless that data's
> /really/ worth it, it may be better to simply cut and run, doing a fresh
> mkfs and being done with btrfs raid56 for now, without spending more time
> on it, only to find you can't save much anyway.  Tho if it's worth it to
> you, you may be able to save much of it, but you could spend a month's
> man-hours doing it too and possibly still come up empty.  Plus be
> careful, because stuff like scrub that would normally help, can make the
> problem much much worse in the case of raid56 ATM.  Yes, the problems
> with it ATM *are* that bad.  Unfortunately.  There's actually talk of
> scrapping the code (almost?) entirely and starting over again, as there's
> a real question as to whether it can even be properly fixed, tho I'm not
> sure it will come to that.

I have a backup of this data and I want to help the devs fix this bug, 
because I have enough hard drives. I can also give remote access to show 
what happened with the filesystem, or do other things that could help to 
fix this bug. I see that a lot is happening in the raid56 code at the 
moment.




Thread overview: 5+ messages
2016-12-29 19:05 mounting failed any file on my filesystem Jan Koester
2016-12-29 22:31 ` Duncan
2016-12-30 12:17   ` Jan Koester
2016-12-31  5:05     ` Duncan
2017-01-01 17:24       ` Jan Koester
