* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
@ 2015-11-19 13:03 Adam Błaszczykowski
  0 siblings, 0 replies; 15+ messages in thread
From: Adam Błaszczykowski @ 2015-11-19 13:03 UTC (permalink / raw)
  To: xfs



Hello,
I have upgraded the kernel from version 3.4 to 3.10 and encountered the same
problem. The system is working, but I periodically get the following errors
in the kernel logs:

[20367.539731] XFS (dm-4): Internal error xfs_attr3_leaf_read_verify at line 256 of file fs/xfs/xfs_attr_leaf.c.  Caller 0xffffffff81263bcd
[20367.539731]
[20367.539732] CPU: 1 PID: 2240 Comm: kworker/1:1H Tainted: G           O 3.10.93 #15
[20367.539733] Hardware name: Intel Corporation S2600IP/S2600IP, BIOS SE5C600.86B.02.03.0003.041920141333 04/19/2014
[20367.539736] Workqueue: xfslogd xfs_buf_iodone_work
[20367.539738]  ffffffff816bf9df 000000000000007e ffffffff81265a42 ffffffff81263bcd
[20367.539739]  ffff000000000100 ffff8802c8e40e40 ffff88085257f080 ffff88085adf5000
[20367.539740]  ffff88087fc38f00 0000000000000000 ffffffff8127e95f ffffffff81263bcd
[20367.539741] Call Trace:
[20367.539744]  [<ffffffff816bf9df>] ? dump_stack+0xd/0x17
[20367.539746]  [<ffffffff81265a42>] ? xfs_corruption_error+0x62/0x90
[20367.539748]  [<ffffffff81263bcd>] ? xfs_buf_iodone_work+0x8d/0xb0
[20367.539750]  [<ffffffff8127e95f>] ? xfs_attr3_leaf_read_verify+0x6f/0x100
[20367.539751]  [<ffffffff81263bcd>] ? xfs_buf_iodone_work+0x8d/0xb0
[20367.539753]  [<ffffffff81263bcd>] ? xfs_buf_iodone_work+0x8d/0xb0
[20367.539754]  [<ffffffff8105dd01>] ? process_one_work+0x141/0x3b0
[20367.539757]  [<ffffffff8105b79e>] ? pwq_activate_delayed_work+0x2e/0x50
[20367.539758]  [<ffffffff8105ec42>] ? worker_thread+0x112/0x370
[20367.539759]  [<ffffffff8105eb30>] ? manage_workers.isra.28+0x290/0x290
[20367.539761]  [<ffffffff810647e3>] ? kthread+0xb3/0xc0
[20367.539763]  [<ffffffff81012685>] ? sched_clock+0x5/0x10
[20367.539764]  [<ffffffff81060000>] ? __alloc_workqueue_key+0x210/0x510
[20367.539766]  [<ffffffff81064730>] ? kthread_freezable_should_stop+0x60/0x60
[20367.539768]  [<ffffffff816cab18>] ? ret_from_fork+0x58/0x90
[20367.539770]  [<ffffffff81064730>] ? kthread_freezable_should_stop+0x60/0x60
[20367.539770] XFS (dm-4): Corruption detected. Unmount and run xfs_repair
[20367.539777] XFS (dm-4): metadata I/O error: block 0x19a4775b18 ("xfs_trans_read_buf_map") error 117 numblks 8


As in other cases from this thread, xfs_repair hasn't found any problems.
Here is the xfs_db dump of block 0x19a4775b18:

xfs_db> convert daddr 0x19a4775b18 fsb
0x3348eeb96 (13766683542)

xfs_db> fsb 0x3348eeb96
xfs_db> p
000: 00000000 00000000 fbee0000 00000000 10000000 00200fe0 00000000 00000000
020: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
... (zeros only)
fe0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
xfs_db> type attr
xfs_db> p
hdr.info.forw = 0
hdr.info.back = 0
hdr.info.magic = 0xfbee
hdr.count = 0
hdr.usedbytes = 0
hdr.firstused = 4096
hdr.holes = 0
hdr.freemap[0-2] = [base,size] 0:[32,4064] 1:[0,0] 2:[0,0]
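
For reference, the fields printed above correspond to the first 32 bytes of the raw dump. Below is a rough sketch of the non-CRC attribute leaf header layout (field names as in fs/xfs/xfs_attr_leaf.h; the offsets are my reading of the dump, not a verbatim copy of the 3.10 headers):

/*
 * Sketch of the v2 (non-CRC) attribute leaf block header as it appears on
 * disk (all fields big-endian).  Offsets refer to the raw dump above.
 */
#include <stdint.h>

struct xfs_da_blkinfo {
	uint32_t forw;           /* 0x00: forward sibling link (0 here) */
	uint32_t back;           /* 0x04: backward sibling link (0 here) */
	uint16_t magic;          /* 0x08: 0xfbee = XFS_ATTR_LEAF_MAGIC */
	uint16_t pad;            /* 0x0a */
};

struct xfs_attr_leaf_map {
	uint16_t base;           /* offset of a free region in the block */
	uint16_t size;           /* length of that free region */
};

struct xfs_attr_leaf_hdr {
	struct xfs_da_blkinfo info;          /* 0x00 */
	uint16_t count;                      /* 0x0c: entries in use (0 here) */
	uint16_t usedbytes;                  /* 0x0e: name/value bytes (0 here) */
	uint16_t firstused;                  /* 0x10: 0x1000 = 4096 = block size */
	uint8_t  holes;                      /* 0x12 */
	uint8_t  pad1;                       /* 0x13 */
	struct xfs_attr_leaf_map freemap[3]; /* 0x14: 0:[32,4064] 1:[0,0] 2:[0,0] */
};

So the block carries a valid attribute-leaf magic (0xfbee) but zero entries; it looks like a structurally intact yet completely empty leaf block, and this is the block the read verifier rejects.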


After converting the address to an inode number and printing it with
xfs_db, I get the following output:

xfs_db> convert daddr 0x19a4775b18 inode
0x3348eeb960 (220266936672)
xfs_db> inode 0x3348eeb960
xfs_db> p
core.magic = 0
core.mode = 0
core.version = 0
core.format = 0 (dev)
core.uid = 4226678784
core.gid = 0
core.flushiter = 0
core.atime.sec = Thu Jan  1 01:00:00 1970
core.atime.nsec = 000000000
core.mtime.sec = Thu Jan  1 01:00:00 1970
core.mtime.nsec = 000000000
core.ctime.sec = Thu Jan  1 01:00:00 1970
core.ctime.nsec = 000000000
core.size = 0
core.nblocks = 0
core.extsize = 0
core.nextents = 0
core.naextents = 0
core.forkoff = 0
core.aformat = 0 (dev)
core.dmevmask = 0
core.dmstate = 0
core.newrtbm = 0
core.prealloc = 0
core.realtime = 0
core.immutable = 0
core.append = 0
core.sync = 0
core.noatime = 0
core.nodump = 0
core.rtinherit = 0
core.projinherit = 0
core.nosymlinks = 0
core.extsz = 0
core.extszinherit = 0
core.nodefrag = 0
core.filestream = 0
core.gen = 0
next_unlinked = 0
u.dev = 0

This is strange, because there is no extended-attributes record here.
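
One more cross-check on the conversions above: the inode number that xfs_db prints is simply the filesystem block number shifted left by log2(inodes per block). A minimal sketch, assuming 4 KB blocks and 256-byte inodes (16 inodes per block; no xfs_info output is shown for this filesystem, so that ratio is an assumption):

/* Minimal self-check of the xfs_db conversions above: with 16 inodes per
 * block (assumed: 4 KB blocks / 256-byte inodes), the inode number of the
 * first inode in an fs block is fsbno << 4.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t fsbno = 0x3348eeb96ULL;  /* "convert daddr 0x19a4775b18 fsb"   */
	uint64_t ino   = 0x3348eeb960ULL; /* "convert daddr 0x19a4775b18 inode" */
	int inopblog   = 4;               /* log2(16 inodes per block), assumed */

	printf("fsbno << %d = 0x%llx, expected 0x%llx\n", inopblog,
	       (unsigned long long)(fsbno << inopblog),
	       (unsigned long long)ino);
	return 0;
}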

Do you know what can cause such a problem?
Is it fixed in a newer kernel version?

Thank you in advance!
Best regards

Adam Blaszczykowski


* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-07-09  1:15                   ` Dave Chinner
@ 2015-11-13  6:39                     ` Arkadiusz Bubała
  0 siblings, 0 replies; 15+ messages in thread
From: Arkadiusz Bubała @ 2015-11-13  6:39 UTC (permalink / raw)
  To: xfs

Hello,
I encountered the same issue on Linux kernel 3.10. Here's the information
you previously asked about; I hope it helps find the root cause of this
problem.

The ls command invoked on lost+found:

#ls -la
ls: 246185005270: Operation not permitted
total 4
drwxr-xr-x 2 root   root         33 Nov  7 00:51 .
drwxr-xr-x 5 root   root         61 Sep 21 21:07 ..
-rwxrwxrwx 1 user user  0 Oct 27 18:22 246185005270

Running ls generates the following errors in dmesg:

[486147.297493] ffff88063af38000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00  ................
[486147.318762] ffff88063af38010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00  ..... ..........
[486147.339328] ffff88063af38020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[486147.359597] ffff88063af38030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[486147.359599] XFS (dm-16): Internal error xfs_attr3_leaf_read_verify at line 256 of file fs/xfs/xfs_attr_leaf.c.  Caller 0xffffffff81263bcd
[486147.359599]
[486147.359601] CPU: 8 PID: 14315 Comm: kworker/8:1H Tainted: G           O 3.10.80 #45
[486147.359602] Hardware name: Supermicro X9DRi-LN4+/X9DR3-LN4+/X9DRi-LN4+/X9DR3-LN4+, BIOS 3.2 03/04/2015
[486147.359607] Workqueue: xfslogd xfs_buf_iodone_work
[486147.359611]  ffffffff816bf9df 000000000000007e ffffffff81265a42 ffffffff81263bcd
[486147.359612]  ffff000000000100 ffff8800518b4fc0 ffff88000de01200 ffff88085adf5000
[486147.359613]  ffff88087fd18f00 0000000000000000 ffffffff8127e95f ffffffff81263bcd
[486147.359614] Call Trace:
[486147.359617]  [<ffffffff816bf9df>] ? dump_stack+0xd/0x17
[486147.359620]  [<ffffffff81265a42>] ? xfs_corruption_error+0x62/0x90
[486147.359621]  [<ffffffff81263bcd>] ? xfs_buf_iodone_work+0x8d/0xb0
[486147.359624]  [<ffffffff8127e95f>] ? xfs_attr3_leaf_read_verify+0x6f/0x100
[486147.359625]  [<ffffffff81263bcd>] ? xfs_buf_iodone_work+0x8d/0xb0
[486147.359627]  [<ffffffff81263bcd>] ? xfs_buf_iodone_work+0x8d/0xb0
[486147.359630]  [<ffffffff8105dd01>] ? process_one_work+0x141/0x3b0
[486147.359633]  [<ffffffff8105b79e>] ? pwq_activate_delayed_work+0x2e/0x50
[486147.359636]  [<ffffffff8105ec42>] ? worker_thread+0x112/0x370
[486147.359638]  [<ffffffff8105eb30>] ? manage_workers.isra.28+0x290/0x290
[486147.359641]  [<ffffffff810647e3>] ? kthread+0xb3/0xc0
[486147.359644]  [<ffffffff81012685>] ? sched_clock+0x5/0x10
[486147.359646]  [<ffffffff81060000>] ? __alloc_workqueue_key+0x210/0x510
[486147.359648]  [<ffffffff81064730>] ? kthread_freezable_should_stop+0x60/0x60
[486147.359652]  [<ffffffff816cab18>] ? ret_from_fork+0x58/0x90
[486147.359654]  [<ffffffff81064730>] ? kthread_freezable_should_stop+0x60/0x60
[486147.359655] XFS (dm-16): Corruption detected. Unmount and run xfs_repair
[486147.359664] XFS (dm-16): metadata I/O error: block 0x1ca8ebe310 ("xfs_trans_read_buf_map") error 117 numblks 8


# xfs_db -r /dev/mapper/vg+vg01-lv+n+lv0100

xfs_db> inode 246185005270
xfs_db> p
core.magic = 0x494e
core.mode = 0100777
core.version = 2
core.format = 2 (extents)
core.nlinkv2 = 1
core.onlink = 0
core.projid_lo = 0
core.projid_hi = 0
core.uid = 106
core.gid = 103
core.flushiter = 4
core.atime.sec = Tue Oct 27 18:22:48 2015
core.atime.nsec = 000000000
core.mtime.sec = Tue Oct 27 18:22:48 2015
core.mtime.nsec = 000000000
core.ctime.sec = Sun Nov  1 19:05:21 2015
core.ctime.nsec = 337962467
core.size = 0
core.nblocks = 1
core.extsize = 0
core.nextents = 0
core.naextents = 1
core.forkoff = 9
core.aformat = 2 (extents)
core.dmevmask = 0
core.dmstate = 0
core.newrtbm = 0
core.prealloc = 0
core.realtime = 0
core.immutable = 0
core.append = 0
core.sync = 0
core.noatime = 0
core.nodump = 0
core.rtinherit = 0
core.projinherit = 0
core.nosymlinks = 0
core.extsz = 0
core.extszinherit = 0
core.nodefrag = 0
core.filestream = 0
core.gen = 2298109809
next_unlinked = null
u = (empty)
a.bmx[0] = [startoff,startblock,blockcount,extentflag] 0:[0,15386639515,1,0]
xfs_db> ablock 0
xfs_db> p
hdr.info.forw = 0
hdr.info.back = 0
hdr.info.magic = 0xfbee
hdr.count = 0
hdr.usedbytes = 0
hdr.firstused = 4096
hdr.holes = 0
hdr.freemap[0-2] = [base,size] 0:[32,4064] 1:[0,0] 2:[0,0]

xfs_db> type text
xfs_db> p
000:  00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00 ................
010:  10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00 ................
020:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
030:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
... (zeros only)
ff0:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................

# xfs_db -r -c "convert daddr 0x1ca8ebe310 fsb" /dev/mapper/lv0100
0x3951d7c9b (15386639515)

# xfs_db -r -c "fsb 0x3951d7c9b" -c p -c "type attr" -c p /dev/mapper/lv0100
000: 00000000 00000000 fbee0000 00000000 10000000 00200fe0 00000000 00000000
020: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
... (zeros only)
fe0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
hdr.info.forw = 0
hdr.info.back = 0
hdr.info.magic = 0xfbee
hdr.count = 0
hdr.usedbytes = 0
hdr.firstused = 4096
hdr.holes = 0
hdr.freemap[0-2] = [base,size] 0:[32,4064] 1:[0,0] 2:[0,0]


-- 
Best regards
Arkadiusz Bubała
Open-E Poland Sp. z o.o.
www.open-e.com


* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-07-08 10:28                 ` Rasmus Borup Hansen
@ 2015-07-09  1:15                   ` Dave Chinner
  2015-11-13  6:39                     ` Arkadiusz Bubała
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Chinner @ 2015-07-09  1:15 UTC (permalink / raw)
  To: Rasmus Borup Hansen; +Cc: xfs

On Wed, Jul 08, 2015 at 12:28:19PM +0200, Rasmus Borup Hansen
wrote:
> > On 07 Jul 2015, at 02:19, Dave Chinner <david@fromorbit.com>
> > wrote:
> > 
> > On Mon, Jul 06, 2015 at 01:08:52PM +0200, Rasmus Borup Hansen
> > wrote:
> >> I've made a metadump and I'm running another xfs_repair, but
> >> given that the first metadump is 132 GB, will you still be
> >> interested in looking at the dumps?
> > 
> > That's significantly larger than my monthly download quota.  How
> > big is it once you compress it?
> 
> One metadump is 25 GB when compressed with xz -9. The server the
> files currently reside on is not very fast, so I've only
> compressed one of them so far.
> 
> I used the strings command on the metadump files and discovered
> that they contain fragments of files that we really don't want to
> leave our IT systems. However, if you think it's worth the effort,
> I could set up a virtual machine with the metadump files and give
> you access with your SSH public key. But then you'll have to tell
> me which tools you'll need for investigating the files.

<sigh>

I only want to look at one inode with xfs_db....

I don't think ssh access is going to be very useful. To isolate the
actual cause of the verifier error I usually run an instrumented
kernel, and to verify that I've fixed the problem I'll need to run
custom kernels and/or xfsprogs binaries. So I really need a local fs
image to do this.

I also like to run a kernel I trust (i.e. one I've built myself) on
a machine I trust not to have storage or hardware issues when doing
diagnostics on broken filesystem images.

Of course, nobody can learn about how to find problems like this
when the triage is hidden away in private....

> Output from ls when listing "lost+found":
> 
> $ ls -laF /backup/lost+found/
> ls: /backup/lost+found/11539619467: Structure needs cleaning

This is the inode I need to look at. Can you grep for this inode in
the xfs_repair output and post it? (grab a couple of lines of
context around each match, too...)

> total 4
> drwxr-xr-x 2 root root     32 Jun 30 07:43 ./
> drwxr-xr-x 5 root root     74 Jul  2 12:55 ../
> -rw-rw-rw- 1 tsj  intomics  0 Jun 23 16:11 11539619467

It's a zero length file, so I'll be interested to know what
attributes it has on it. Can you check that the inode number of
that file is 11539619467 (ls -i will tell you that) and then post
the output of this command:

# xfs_db -r -c "inode 11539619467" -c p /dev/mapper/backup01-data

Then I can give you the commands for walking and dumping the
full attribute tree.

> [503166.562498] XFS (dm-0): Corruption detected. Unmount and run xfs_repair
> [503166.589297] XFS (dm-0): metadata I/O error: block 0x157e84da0 ("xfs_trans_read_buf_map") error 117 numblks 8

Also, post the output of:

# xfs_db -r -c "convert daddr 0x157e84da0 fsb"  /dev/mapper/backup01-data
<fsb>
# xfs_db -r -c "fsb <fsb>" -c p -c "type attr" -c p /dev/mapper/backup01-data

Which will give me the contents of the bad block, both in raw format
and as processed by the attribute leaf format parser in xfs_db.
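
(Aside for anyone following along: the fsb value that the convert command prints is not simply the daddr divided by the sectors-per-block ratio; the allocation group number is packed into its high bits. A rough sketch of the conversion, with placeholder geometry values that have to come from the real superblock:)

/*
 * Rough sketch of what xfs_db's "convert daddr ... fsb" does.  The
 * geometry values are placeholders; the real ones come from the
 * superblock (xfs_db "sb 0" / xfs_info).
 */
#include <stdint.h>

struct geom {
	unsigned blocklog;   /* log2(block size), e.g. 12 for 4 KiB blocks */
	uint64_t agblocks;   /* fs blocks per allocation group (sb_agblocks) */
	unsigned agblklog;   /* log2(agblocks), rounded up (sb_agblklog) */
};

static uint64_t daddr_to_fsb(const struct geom *g, uint64_t daddr)
{
	uint64_t fsbt  = daddr >> (g->blocklog - 9); /* 512-byte sectors -> fs blocks */
	uint64_t agno  = fsbt / g->agblocks;         /* allocation group number */
	uint64_t agbno = fsbt % g->agblocks;         /* block offset within the AG */

	/* the AG number lives in the high bits of the packed fsb xfs_db prints */
	return (agno << g->agblklog) | agbno;
}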

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-07-07  0:19               ` Dave Chinner
@ 2015-07-08 10:28                 ` Rasmus Borup Hansen
  2015-07-09  1:15                   ` Dave Chinner
  0 siblings, 1 reply; 15+ messages in thread
From: Rasmus Borup Hansen @ 2015-07-08 10:28 UTC (permalink / raw)
  To: xfs



> On 07 Jul 2015, at 02:19, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Mon, Jul 06, 2015 at 01:08:52PM +0200, Rasmus Borup Hansen wrote:
>> I've made a metadump and I'm running another xfs_repair, but given
>> that the first metadump is 132 GB, will you still be interested in
>> looking at the dumps?
> 
> That's significantly larger than my monthly download quota.  How big
> is it once you compress it?

One metadump is 25 GB when compressed with xz -9. The server the files currently reside on is not very fast, so I've only compressed one of them so far.

I used the strings command on the metadump files and discovered that they contain fragments of files that we really don't want to leave our IT systems. However, if you think it's worth the effort, I could set up a virtual machine with the metadump files and give you access with your SSH public key. But then you'll have to tell me which tools you'll need for investigating the files.

> Also, because of the size of the metadump, I'll need some context
> about the hardware it is running on. Can you please also provide
> the information in:
> 
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
> 
> so I have a better idea of the environment the problem is showing up in.

$ uname -a
Linux mammuthus 3.13.0-55-generic #94-Ubuntu SMP Thu Jun 18 00:27:10 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

$ ./xfs_repair -V
xfs_repair version 3.2.3

$ cat /proc/cpuinfo | grep processor | wc -l
2

$ cat /proc/meminfo
MemTotal:       10228560 kB
MemFree:          300612 kB
Buffers:          111996 kB
Cached:          3569448 kB
SwapCached:         7836 kB
Active:          1915848 kB
Inactive:        2510244 kB
Active(anon):     358064 kB
Inactive(anon):   386784 kB
Active(file):    1557784 kB
Inactive(file):  2123460 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       2088956 kB
SwapFree:        2042964 kB
Dirty:              3972 kB
Writeback:             0 kB
AnonPages:        739808 kB
Mapped:            18340 kB
Shmem:                72 kB
Slab:            4672980 kB
SReclaimable:    3927540 kB
SUnreclaim:       745440 kB
KernelStack:        2328 kB
PageTables:         8840 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     7203236 kB
Committed_AS:    1152936 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      309048 kB
VmallocChunk:   34359412736 kB
HardwareCorrupted:     0 kB
AnonHugePages:      8192 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       62052 kB
DirectMap2M:    10414080 kB

$ cat /proc/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=5103100k,nr_inodes=1275775,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=1022856k,mode=755 0 0
/dev/mapper/mammuthus-root / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
none /sys/fs/cgroup tmpfs rw,relatime,size=4k,mode=755 0 0
none /sys/fs/fuse/connections fusectl rw,relatime 0 0
none /sys/kernel/debug debugfs rw,relatime 0 0
none /sys/kernel/security securityfs rw,relatime 0 0
none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0
none /run/user tmpfs rw,nosuid,nodev,noexec,relatime,size=102400k,mode=755 0 0
none /sys/fs/pstore pstore rw,relatime 0 0
/dev/sdd1 /boot ext2 rw,relatime 0 0
systemd /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,name=systemd 0 0
/dev/mapper/backup02-limited_backup /limited_backup xfs rw,noatime,attr2,inode64,logbsize=256k,noquota 0 0
/dev/mapper/backup02-timemachine /timemachine_backup ext4 rw,relatime,errors=remount-ro,data=ordered,jqfmt=vfsv0,usrjquota=.aquota.user,grpjquota=.aquota.group 0 0
/dev/mapper/backup01-data /backup xfs rw,noatime,attr2,inode64,logbsize=256k,noquota 0 0

This is what /proc/mounts looks like currently. When the error first occurred there was no /dev/mapper/backup02 volume group and /dev/mapper/backup01-data (which has the file system that behaves strangely) was mounted with user and project quota. Of course, the file system was not mounted when running xfs_repair.

$ cat /proc/partitions
major minor  #blocks  name

   8       16 19529728000 sdb
   8       17 19529726959 sdb1
   8        0 39064698880 sda
   8        1 39064697839 sda1
 252        0 39064694784 dm-0
   8       32  976224256 sdc
   8       33  976223215 sdc1
   8       48  155713536 sdd
   8       49     248832 sdd1
   8       50          1 sdd2
   8       53  155461632 sdd5
 252        1  153370624 dm-1
 252        2    2088960 dm-2
 252        3   33554432 dm-3
 252        4  942665728 dm-4
 252        5 5368709120 dm-5
 252        6 14161014784 dm-6

I'm using hardware RAID level 6 with a Dell PERC H800 controller and an MD1200 disk enclosure with 12 4 TB disks configured as a single "virtual disk":

$ /opt/dell/srvadmin/bin/omreport storage vdisk controller=0
List of Virtual Disks on Controller PERC H800 Adapter (Slot 1)

Controller PERC H800 Adapter (Slot 1)
ID                                : 0
Status                            : Ok
Name                              : mammuthus01
State                             : Ready
Hot Spare Policy violated         : Not Assigned
Encrypted                         : No
Layout                            : RAID-6
Size                              : 37,255.00 GB (40002251653120 bytes)
T10 Protection Information Status : No
Associated Fluid Cache State      : Not Applicable
Device Name                       : /dev/sda
Bus Protocol                      : SAS
Media                             : HDD
Read Policy                       : Adaptive Read Ahead
Write Policy                      : Write Back
Cache Policy                      : Not Applicable
Stripe Element Size               : 64 KB
Disk Cache Policy                 : Disabled

This "virtual disk" is the only member of the volume group "backup01":

$ sudo pvscan
  PV /dev/sdd5   VG mammuthus   lvm2 [148.26 GiB / 0    free]
  PV /dev/sdc1   VG extra       lvm2 [931.00 GiB / 0    free]
  PV /dev/sdb1   VG backup02    lvm2 [18.19 TiB / 0    free]
  PV /dev/sda1   VG backup01    lvm2 [36.38 TiB / 0    free]
  Total: 4 [55.62 TiB] / in use: 4 [55.62 TiB] / in no VG: 0 [0   ]

This volume group has a single logical volume:

$ sudo lvscan
  ACTIVE            '/dev/mammuthus/root' [146.27 GiB] inherit
  ACTIVE            '/dev/mammuthus/swap_1' [1.99 GiB] inherit
  ACTIVE            '/dev/extra/swap' [32.00 GiB] inherit
  ACTIVE            '/dev/extra/files' [899.00 GiB] inherit
  ACTIVE            '/dev/backup02/timemachine' [5.00 TiB] inherit
  ACTIVE            '/dev/backup02/limited_backup' [13.19 TiB] inherit
  ACTIVE            '/dev/backup01/data' [36.38 TiB] inherit

The drives are twelve 4 TB 7.2K RPM Near-Line SAS 3.5" hot-plug drives:

$ /opt/dell/srvadmin/bin/omreport storage pdisk controller=0 connector=0
List of Physical Disks on Connector 0 of Controller PERC H800 Adapter (Slot 1)

Controller PERC H800 Adapter (Slot 1)
ID                              : 0:0:0
Status                          : Ok
Name                            : Physical Disk 0:0:0
State                           : Online
Power Status                    : Spun Up
Bus Protocol                    : SAS
Media                           : HDD
Part of Cache Pool              : Not Applicable
Remaining Rated Write Endurance : Not Applicable
Failure Predicted               : No
Revision                        : GS0D
Driver Version                  : Not Applicable
Model Number                    : Not Applicable
T10 PI Capable                  : No
Certified                       : Yes
Encryption Capable              : No
Encrypted                       : Not Applicable
Progress                        : Not Applicable
Mirror Set ID                   : Not Applicable
Capacity                        : 3,725.50 GB (4000225165312 bytes)
Used RAID Disk Space            : 3,725.50 GB (4000225165312 bytes)
Available RAID Disk Space       : 0.00 GB (0 bytes)
Hot Spare                       : No
Vendor ID                       : DELL(tm)
Product ID                      : ST4000NM0023
Serial No.                      : Z1Z4BNJ3
Part Number                     : TH0529FG2123345Q022HA02
Negotiated Speed                : 6.00 Gbps
Capable Speed                   : 6.00 Gbps
PCIe Maximum Link Width         : Not Applicable
PCIe Negotiated Link Width      : Not Applicable
Sector Size                     : 512B
Device Write Cache              : Not Applicable
Manufacture Day                 : 04
Manufacture Week                : 21
Manufacture Year                : 2014
SAS Address                     : 5000C50058C0F211

(Only output for the first drive is shown; the others are similar.)

The individual drives don't use write caches, but the storage controller has 512 MB cache with battery backup operating in write-back mode.

$ xfs_info /backup/
meta-data=/dev/mapper/backup01-data isize=256    agcount=37, agsize=268435455 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=9766173696, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Output from ls when listing "lost+found":

$ ls -laF /backup/lost+found/
ls: /backup/lost+found/11539619467: Structure needs cleaning
total 4
drwxr-xr-x 2 root root     32 Jun 30 07:43 ./
drwxr-xr-x 5 root root     74 Jul  2 12:55 ../
-rw-rw-rw- 1 tsj  intomics  0 Jun 23 16:11 11539619467

Relevant output from dmesg (the errors are generated by the ls command above):

[444852.252110] XFS (dm-0): Mounting Filesystem
[444854.630181] XFS (dm-0): Ending clean mount
[503166.397439] ffff880114063000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00  ................
[503166.425056] ffff880114063010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00  ..... ..........
[503166.453484] ffff880114063020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[503166.480812] ffff880114063030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[503166.508386] XFS (dm-0): Internal error xfs_attr3_leaf_read_verify at line 246 of file /build/buildd/linux-3.13.0/fs/xfs/xfs_attr_leaf.c.  Caller 0xffffffffa00d4885
[503166.562354] CPU: 1 PID: 3342 Comm: kworker/1:1H Not tainted 3.13.0-55-generic #94-Ubuntu
[503166.562356] Hardware name: Dell Inc. PowerEdge R310/05XKKK, BIOS 1.8.2 08/17/2011
[503166.562394] Workqueue: xfslogd xfs_buf_iodone_work [xfs]
[503166.562398]  0000000000000001 ffff8802af92bd68 ffffffff81723294 ffff88004d1b2000
[503166.562402]  ffff8802af92bd80 ffffffffa00d76fb ffffffffa00d4885 ffff8802af92bdb8
[503166.562403]  ffffffffa00d7755 000000f600200400 ffff88001b216a00 ffff88004d1b2000
[503166.562406] Call Trace:
[503166.562415]  [<ffffffff81723294>] dump_stack+0x45/0x56
[503166.562426]  [<ffffffffa00d76fb>] xfs_error_report+0x3b/0x40 [xfs]
[503166.562436]  [<ffffffffa00d4885>] ? xfs_buf_iodone_work+0x85/0xf0 [xfs]
[503166.562446]  [<ffffffffa00d7755>] xfs_corruption_error+0x55/0x80 [xfs]
[503166.562459]  [<ffffffffa00f4bdd>] xfs_attr3_leaf_read_verify+0x6d/0xf0 [xfs]
[503166.562469]  [<ffffffffa00d4885>] ? xfs_buf_iodone_work+0x85/0xf0 [xfs]
[503166.562479]  [<ffffffffa00d4885>] xfs_buf_iodone_work+0x85/0xf0 [xfs]
[503166.562483]  [<ffffffff81083b22>] process_one_work+0x182/0x450
[503166.562485]  [<ffffffff81084911>] worker_thread+0x121/0x410
[503166.562487]  [<ffffffff810847f0>] ? rescuer_thread+0x430/0x430
[503166.562489]  [<ffffffff8108b702>] kthread+0xd2/0xf0
[503166.562491]  [<ffffffff8108b630>] ? kthread_create_on_node+0x1c0/0x1c0
[503166.562494]  [<ffffffff81733ca8>] ret_from_fork+0x58/0x90
[503166.562496]  [<ffffffff8108b630>] ? kthread_create_on_node+0x1c0/0x1c0
[503166.562498] XFS (dm-0): Corruption detected. Unmount and run xfs_repair
[503166.589297] XFS (dm-0): metadata I/O error: block 0x157e84da0 ("xfs_trans_read_buf_map") error 117 numblks 8

The error occurs even though the file system is not doing anything else.

Intomics is a contract research organization specialized in deriving core biological insight from large scale data. We help our clients in the pharmaceutical industry develop tomorrow's medicines better, faster, and cheaper through optimized use of biomedical data.
-----------------------------------------------------------------
Hansen, Rasmus Borup              Intomics - from data to biology
System Administrator              Diplomvej 377
Scientific Programmer             DK-2800 Kgs. Lyngby
                                  Denmark
E: rbh@intomics.com               W: http://www.intomics.com/
P: +45 5167 7972                  P: +45 8880 7979



* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-07-06 11:08             ` Rasmus Borup Hansen
@ 2015-07-07  0:19               ` Dave Chinner
  2015-07-08 10:28                 ` Rasmus Borup Hansen
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Chinner @ 2015-07-07  0:19 UTC (permalink / raw)
  To: Rasmus Borup Hansen; +Cc: xfs

On Mon, Jul 06, 2015 at 01:08:52PM +0200, Rasmus Borup Hansen wrote:
> I've made a metadump and I'm running another xfs_repair, but given
> that the first metadump is 132 GB, will you still be interested in
> looking at the dumps?

That's significantly larger than my monthly download quota.  How big
is it once you compress it?

Also, because of the size of the metadump, I'll need some context
about the hardware it is running on. Can you please also provide
the information in:

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

so I have a better idea of the environment the problem is showing up in.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-07-03 23:55           ` Dave Chinner
@ 2015-07-06 11:08             ` Rasmus Borup Hansen
  2015-07-07  0:19               ` Dave Chinner
  0 siblings, 1 reply; 15+ messages in thread
From: Rasmus Borup Hansen @ 2015-07-06 11:08 UTC (permalink / raw)
  To: xfs



I've made a metadump and I'm running another xfs_repair, but given that the first metadump is 132 GB, will you still be interested in looking at the dumps?

Best,

Rasmus

Intomics is a contract research organization specialized in deriving core biological insight from large scale data. We help our clients in the pharmaceutical industry develop tomorrow's medicines better, faster, and cheaper through optimized use of biomedical data.
-----------------------------------------------------------------
Hansen, Rasmus Borup              Intomics - from data to biology
System Administrator              Diplomvej 377
Scientific Programmer             DK-2800 Kgs. Lyngby
                                  Denmark
E: rbh@intomics.com               W: http://www.intomics.com/
P: +45 5167 7972                  P: +45 8880 7979

> On 04 Jul 2015, at 01:55, Dave Chinner <david@fromorbit.com> wrote:
> 
> On Fri, Jul 03, 2015 at 08:27:28AM +0200, Rasmus Borup Hansen wrote:
>> Thank you for the suggestion. I compiled a new xfs_repair, but I got similar results from running it.
> 
> Please take a metadump of the filesystem before and after repair
> so that we can find out what is actually going wrong. Feel free to
> send me a private pointer to the metadump images...
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs



* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-07-03  6:27         ` Rasmus Borup Hansen
  2015-07-03 15:24           ` Emmanuel Florac
@ 2015-07-03 23:55           ` Dave Chinner
  2015-07-06 11:08             ` Rasmus Borup Hansen
  1 sibling, 1 reply; 15+ messages in thread
From: Dave Chinner @ 2015-07-03 23:55 UTC (permalink / raw)
  To: Rasmus Borup Hansen; +Cc: xfs

On Fri, Jul 03, 2015 at 08:27:28AM +0200, Rasmus Borup Hansen wrote:
> Thank you for the suggestion. I compiled a new xfs_repair, but I got similar results from running it.

Please take a metadump of the filesystem before and after repair
so that we can find out what is actually going wrong. Feel free to
send me a private pointer to the metadump images...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-07-03  6:27         ` Rasmus Borup Hansen
@ 2015-07-03 15:24           ` Emmanuel Florac
  2015-07-03 23:55           ` Dave Chinner
  1 sibling, 0 replies; 15+ messages in thread
From: Emmanuel Florac @ 2015-07-03 15:24 UTC (permalink / raw)
  To: Rasmus Borup Hansen; +Cc: xfs

On Fri, 3 Jul 2015 08:27:28 +0200,
Rasmus Borup Hansen <rbh@intomics.com> wrote:

> Thank you for the suggestion. I compiled a new xfs_repair, but I got
> similar results from running it.
> 

Get the inode number of the faulty file, umount the volume, open it
with xfs_db and remove the file through it:

http://xfs.org/index.php/XFS_FAQ#Q:_How_to_get_around_a_bad_inode_repair_is_unable_to_clean_up

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-07-02  9:26       ` Emmanuel Florac
@ 2015-07-03  6:27         ` Rasmus Borup Hansen
  2015-07-03 15:24           ` Emmanuel Florac
  2015-07-03 23:55           ` Dave Chinner
  0 siblings, 2 replies; 15+ messages in thread
From: Rasmus Borup Hansen @ 2015-07-03  6:27 UTC (permalink / raw)
  To: xfs



Thank you for the suggestion. I compiled a new xfs_repair, but I got similar results from running it.

Best,

Rasmus

Intomics is a contract research organization specialized in deriving core biological insight from large scale data. We help our clients in the pharmaceutical industry develop tomorrow's medicines better, faster, and cheaper through optimized use of biomedical data.
-----------------------------------------------------------------
Hansen, Rasmus Borup              Intomics - from data to biology
System Administrator              Diplomvej 377
Scientific Programmer             DK-2800 Kgs. Lyngby
                                  Denmark
E: rbh@intomics.com               W: http://www.intomics.com/
P: +45 5167 7972                  P: +45 8880 7979

> On 02 Jul 2015, at 11:26, Emmanuel Florac <eflorac@intellique.com> wrote:
> 
> On Thu, 2 Jul 2015 09:58:19 +0200,
> Rasmus Borup Hansen <rbh@intomics.com> wrote:
> 
>> The file then turns up in lost+found and when I "ls" it I get the
>> same errors again. I've tried deleting it from lost+found, but then
> xfs_repair finds it again with exactly the same output as shown above
>> and puts it back.
>> 
>> Apart from that, everything apparently works fine.
>> 
>> Is there a way to permanently get rid of the file in lost+found? Its
>> size is apparently 0 bytes.
> 
> You should try the latest xfs_repair (3.2.3 IIRC) instead of the stock
> Ubuntu version (3.1.x probably).
> 
> In case you don't know how to compile it, I've just uploaded it there:
> 
> http://update.intellique.com/pub/xfs_repair-3.2.3.gz
> 
> md5sum :
> 756a28228c7e657ce8626d27850f6261  xfs_repair-3.2.3.gz
> 
> Beware of binaries provided by strangers; however, this one should be
> fine :) Gunzip it before use...
> 
> -- 
> ------------------------------------------------------------------------
> Emmanuel Florac     |   Direction technique
>                    |   Intellique
>                    |	<eflorac@intellique.com>
>                    |   +33 1 78 94 84 02
> ------------------------------------------------------------------------
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs


[-- Attachment #1.2: Type: text/html, Size: 5919 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-07-02  7:58     ` Rasmus Borup Hansen
@ 2015-07-02  9:26       ` Emmanuel Florac
  2015-07-03  6:27         ` Rasmus Borup Hansen
  0 siblings, 1 reply; 15+ messages in thread
From: Emmanuel Florac @ 2015-07-02  9:26 UTC (permalink / raw)
  To: Rasmus Borup Hansen; +Cc: xfs

On Thu, 2 Jul 2015 09:58:19 +0200,
Rasmus Borup Hansen <rbh@intomics.com> wrote:

> The file then turns up in lost+found and when I "ls" it I get the
> same errors again. I've tried deleting it from lost+found, but then
> xfs_repair finds it again with exactly the same output as shown above
> and puts it back.
> 
> Apart from that, everything apparently works fine.
> 
> Is there a way to permanently get rid of the file in lost+found? Its
> size is apparently 0 bytes.

You should try the latest xfs_repair (3.2.3 IIRC) instead of the stock
Ubuntu version (3.1.x probably).

In case you don't know how to compile it, I've just uploaded it there:

http://update.intellique.com/pub/xfs_repair-3.2.3.gz

md5sum :
756a28228c7e657ce8626d27850f6261  xfs_repair-3.2.3.gz

Beware of binaries provided by strangers; however, this one should be
fine :) Gunzip it before use...
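
For example (using the URL and checksum above; the device in the last line
is just a placeholder):

  wget http://update.intellique.com/pub/xfs_repair-3.2.3.gz
  md5sum xfs_repair-3.2.3.gz        # should print 756a28228c7e657ce8626d27850f6261
  gunzip xfs_repair-3.2.3.gz
  chmod +x xfs_repair-3.2.3
  ./xfs_repair-3.2.3 -V             # confirm it reports 3.2.3
  ./xfs_repair-3.2.3 -n /dev/dm-0   # dry run first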

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-06-26  6:14   ` Rasmus Borup Hansen
@ 2015-07-02  7:58     ` Rasmus Borup Hansen
  2015-07-02  9:26       ` Emmanuel Florac
  0 siblings, 1 reply; 15+ messages in thread
From: Rasmus Borup Hansen @ 2015-07-02  7:58 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1: Type: text/plain, Size: 8083 bytes --]

When I tried mounting the file system after running xfs_repair, a quota check was started. However, it did not finish after a few hours like the quota check that was started when mounting the file system read-only, so after a few days of waiting (with next to no disk activity) I restarted the server and mounted the file system without quotas. Soon afterwards I got the following error (note that it's a different line number in xfs_attr_leaf.c than the one I initially saw):

[  327.670974] ffff8802a273d000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00  ................
[  327.678213] ffff8802a273d010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00  ..... ..........
[  327.685878] ffff8802a273d020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[  327.693950] ffff8802a273d030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[  327.701567] XFS (dm-0): Internal error xfs_attr3_leaf_read_verify at line 246 of file /build/buildd/linux-3.13.0/fs/xfs/xfs_attr_leaf.c.  Caller 0xffffffffa00cb885
[  327.718829] CPU: 1 PID: 2805 Comm: kworker/1:1H Not tainted 3.13.0-55-generic #94-Ubuntu
[  327.718830] Hardware name: Dell Inc. PowerEdge R310/05XKKK, BIOS 1.8.2 08/17/2011
[  327.718867] Workqueue: xfslogd xfs_buf_iodone_work [xfs]
[  327.718869]  0000000000000001 ffff8800b82bfd68 ffffffff81723294 ffff8800368dd800
[  327.718875]  ffff8800b82bfd80 ffffffffa00ce6fb ffffffffa00cb885 ffff8800b82bfdb8
[  327.718877]  ffffffffa00ce755 000000f600203100 ffff8802a140ad00 ffff8800368dd800
[  327.718879] Call Trace:
[  327.718887]  [<ffffffff81723294>] dump_stack+0x45/0x56
[  327.718901]  [<ffffffffa00ce6fb>] xfs_error_report+0x3b/0x40 [xfs]
[  327.718911]  [<ffffffffa00cb885>] ? xfs_buf_iodone_work+0x85/0xf0 [xfs]
[  327.718921]  [<ffffffffa00ce755>] xfs_corruption_error+0x55/0x80 [xfs]
[  327.718935]  [<ffffffffa00ebbdd>] xfs_attr3_leaf_read_verify+0x6d/0xf0 [xfs]
[  327.718945]  [<ffffffffa00cb885>] ? xfs_buf_iodone_work+0x85/0xf0 [xfs]
[  327.718954]  [<ffffffffa00cb885>] xfs_buf_iodone_work+0x85/0xf0 [xfs]
[  327.718958]  [<ffffffff81083b22>] process_one_work+0x182/0x450
[  327.718961]  [<ffffffff81084911>] worker_thread+0x121/0x410
[  327.718963]  [<ffffffff810847f0>] ? rescuer_thread+0x430/0x430
[  327.718965]  [<ffffffff8108b702>] kthread+0xd2/0xf0
[  327.718967]  [<ffffffff8108b630>] ? kthread_create_on_node+0x1c0/0x1c0
[  327.718970]  [<ffffffff81733ca8>] ret_from_fork+0x58/0x90
[  327.718972]  [<ffffffff8108b630>] ? kthread_create_on_node+0x1c0/0x1c0
[  327.718973] XFS (dm-0): Corruption detected. Unmount and run xfs_repair
[  327.729003] XFS (dm-0): metadata I/O error: block 0x157e84da0 ("xfs_trans_read_buf_map") error 117 numblks 8

I started another xfs_repair, which did not report any errors. After mounting the file system again (still without quotas) I discovered that ls would print "Structure needs cleaning" whenever it listed a certain file (and the kernel would output error messages like those above). This was a file I didn't need, so I tried deleting it and running yet another xfs_repair:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
agi unlinked bucket 11 is 2949684875 in ag 2 (inode=11539619467)
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
doubling cache size to 591232
        - agno = 1
        - agno = 2
        - agno = 3
...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 11539619467, moving to lost+found
Phase 7 - verify and correct link counts...
done

The file then turns up in lost+found and when I "ls" it I get the same errors again. I've tried deleting it from lost+found, but then xfs_repair finds it again with exactly the same output as shown above and puts it back.

Apart from that, everything apparently works fine.

Is there a way to permanently get rid of the file in lost+found? Its size is apparently 0 bytes.

Best,

Rasmus


Intomics is a contract research organization specialized in deriving core biological insight from large scale data. We help our clients in the pharmaceutical industry develop tomorrow's medicines better, faster, and cheaper through optimized use of biomedical data.
-----------------------------------------------------------------
Hansen, Rasmus Borup              Intomics - from data to biology
System Administrator              Diplomvej 377
Scientific Programmer             DK-2800 Kgs. Lyngby
                                  Denmark
E: rbh@intomics.com               W: http://www.intomics.com/
P: +45 5167 7972                  P: +45 8880 7979

> On 26 Jun 2015, at 08:14, Rasmus Borup Hansen <rbh@intomics.com> wrote:
> 
> I tried mounting the file system read-only (which triggered a quota check – does this make sense when the file system is read-only?) and then I scanned the file system for the files corresponding to the spuriously flagged inodes to see if I could find a pattern. This took quite a while, and I didn't find any patterns, except that all the files in directories with project quotas were affected (but there were also other files). I'm now running xfs_repair without -n (and I had to mount and unmount the file system before it would start). I'll report back when it has finished.
> 
> Best,
> 
> Rasmus
> 
> Intomics is a contract research organization specialized in deriving core biological insight from large scale data. We help our clients in the pharmaceutical industry develop tomorrow's medicines better, faster, and cheaper through optimized use of biomedical data.
> -----------------------------------------------------------------
> Hansen, Rasmus Borup              Intomics - from data to biology
> System Administrator              Diplomvej 377
> Scientific Programmer             DK-2800 Kgs. Lyngby
>                                   Denmark
> E: rbh@intomics.com               W: http://www.intomics.com/
> P: +45 5167 7972                  P: +45 8880 7979
> 
>> On 25 Jun 2015, at 18:41, Emmanuel Florac <eflorac@intellique.com> wrote:
>> 
>> On Wed, 24 Jun 2015 09:39:45 +0200,
>> Rasmus Borup Hansen <rbh@intomics.com> wrote:
>> 
>>> Only the first 20 lines are included. There are currently 250000+
>>> more lines with "directory flags set on non-directory inode" and the
>>> check is still running (the mostly small files take up around 30 TB,
>>> so it'll probably take a while).
>>> 
>>> I recently enabled user and project quota and updated from 3.13.0-53.
>>> The file system has been heavily used for the last month or so.
>>> 
>>> Does anyone have any thoughts on this? I'm tempted to stop using
>>> quotas when the file system (hopefully) works again, as it's my
>>> impression that project quotas are not widely used.
>> 
>> Did you first try remounting and then unmounting the volume to clear the
>> log? That could make the errors xfs_repair reports go away.
>> 
>> -- 
>> ------------------------------------------------------------------------
>> Emmanuel Florac     |   Direction technique
>>                    |   Intellique
>>                    |	<eflorac@intellique.com>
>>                    |   +33 1 78 94 84 02
>> ------------------------------------------------------------------------
>> 
>> _______________________________________________
>> xfs mailing list
>> xfs@oss.sgi.com
>> http://oss.sgi.com/mailman/listinfo/xfs
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs


[-- Attachment #1.2: Type: text/html, Size: 17554 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-06-24  7:39 Rasmus Borup Hansen
  2015-06-25 16:41 ` Emmanuel Florac
@ 2015-06-29 21:50 ` Dave Chinner
  1 sibling, 0 replies; 15+ messages in thread
From: Dave Chinner @ 2015-06-29 21:50 UTC (permalink / raw)
  To: Rasmus Borup Hansen; +Cc: xfs

On Wed, Jun 24, 2015 at 09:39:45AM +0200, Rasmus Borup Hansen wrote:
> Hi! Yesterday I got the following error messages from the kernel (Ubuntu trusty, 3.13.0-55):
> 
> [601201.817664] ffff88016e03e000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00  ................
> [601201.818224] ffff88016e03e010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00  ..... ..........
> [601201.818827] ffff88016e03e020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> [601201.819429] ffff88016e03e030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> [601201.820013] XFS (dm-0): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/buildd/linux-3.13.0/fs/xfs/xfs_attr_leaf.c.  Caller 0xffffffffa00996f0

Probably fixed by commit c88547a ("xfs: fix directory hash ordering
bug"); that bug also affected attributes in leaf format.

> I'm currently running xfs_repair -n and so far I've seen the following output:
> 
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
>         - scan filesystem freespace and inode maps...
>         - found root inode chunk
> Phase 3 - for each AG...
>         - scan (but don't clear) agi unlinked lists...
>         - process known inodes and perform inode discovery...
>         - agno = 0
> doubling cache size to 591200
> directory flags set on non-directory inode 206624
> directory flags set on non-directory inode 206625
> directory flags set on non-directory inode 206626
> directory flags set on non-directory inode 206627
> directory flags set on non-directory inode 206628
> directory flags set on non-directory inode 206629
> directory flags set on non-directory inode 206630
> directory flags set on non-directory inode 206631
> directory flags set on non-directory inode 206632
> directory flags set on non-directory inode 206633
> directory flags set on non-directory inode 206634
> 
> Only the first 20 lines are included. There are currently 250000+
> more lines with "directory flags set on non-directory inode" and
> the check is still running (the mostly small files take up around
> 30 TB, so it'll probably take a while).

Harmless, but repair will fix it anyway.

Kernel is fixed by commit 9336e3a ("xfs: project id inheritance is a
directory only flag").
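
If you have a kernel git tree handy, a quick way to check whether a given
kernel already carries these fixes (just a sketch, run from the top of the
tree):

  git log --oneline -1 c88547a    # xfs: fix directory hash ordering bug
  git log --oneline -1 9336e3a    # xfs: project id inheritance is a directory only flag
  git tag --contains c88547a      # release tags that already include the fix
  git tag --contains 9336e3a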

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-06-25 16:41 ` Emmanuel Florac
@ 2015-06-26  6:14   ` Rasmus Borup Hansen
  2015-07-02  7:58     ` Rasmus Borup Hansen
  0 siblings, 1 reply; 15+ messages in thread
From: Rasmus Borup Hansen @ 2015-06-26  6:14 UTC (permalink / raw)
  To: Emmanuel Florac; +Cc: xfs


[-- Attachment #1.1: Type: text/plain, Size: 2619 bytes --]

I tried mounting the file system read-only (which triggered a quota check – does this make sense when the file system is read-only?) and then I scanned the file system for the files corresponding to the spuriously flagged inodes to see if I could find a pattern. This took quite a while, and I didn't find any patterns, except that all the files in directories with project quotas were affected (but there were also other files). I'm now running xfs_repair without -n (and I had to mount and unmount the file system before it would start). I'll report back when it has finished.

Best,

Rasmus

Intomics is a contract research organization specialized in deriving core biological insight from large scale data. We help our clients in the pharmaceutical industry develop tomorrow's medicines better, faster, and cheaper through optimized use of biomedical data.
-----------------------------------------------------------------
Hansen, Rasmus Borup              Intomics - from data to biology
System Administrator              Diplomvej 377
Scientific Programmer             DK-2800 Kgs. Lyngby
                                  Denmark
E: rbh@intomics.com               W: http://www.intomics.com/
P: +45 5167 7972                  P: +45 8880 7979

> On 25 Jun 2015, at 18:41, Emmanuel Florac <eflorac@intellique.com> wrote:
> 
> On Wed, 24 Jun 2015 09:39:45 +0200,
> Rasmus Borup Hansen <rbh@intomics.com> wrote:
> 
>> Only the first 20 lines are included. There are currently 250000+
>> more lines with "directory flags set on non-directory inode" and the
>> check is still running (the mostly small files take up around 30 TB,
>> so it'll probably take a while).
>> 
>> I recently enabled user and project quota and updated from 3.13.0-53.
>> The file system has been heavily used for the last month or so.
>> 
>> Does anyone have any thoughts on this? I'm tempted to stop using
>> quotas when the file system (hopefully) works again, as it's my
>> impression that project quotas are not widely used.
> 
> Did you first try remounting and then unmounting the volume to clear the
> log? That could make the errors xfs_repair reports go away.
> 
> -- 
> ------------------------------------------------------------------------
> Emmanuel Florac     |   Direction technique
>                    |   Intellique
>                    |	<eflorac@intellique.com>
>                    |   +33 1 78 94 84 02
> ------------------------------------------------------------------------
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs


[-- Attachment #1.2: Type: text/html, Size: 6196 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
  2015-06-24  7:39 Rasmus Borup Hansen
@ 2015-06-25 16:41 ` Emmanuel Florac
  2015-06-26  6:14   ` Rasmus Borup Hansen
  2015-06-29 21:50 ` Dave Chinner
  1 sibling, 1 reply; 15+ messages in thread
From: Emmanuel Florac @ 2015-06-25 16:41 UTC (permalink / raw)
  To: Rasmus Borup Hansen; +Cc: xfs

On Wed, 24 Jun 2015 09:39:45 +0200,
Rasmus Borup Hansen <rbh@intomics.com> wrote:

> Only the first 20 lines are included. There are currently 250000+
> more lines with "directory flags set on non-directory inode" and the
> check is still running (the mostly small files take up around 30 TB,
> so it'll probably take a while).
> 
> I recently enabled user and project quota and updated from 3.13.0-53.
> The file system has been heavily used for the last month or so.
> 
> Does anyone have any thoughts on this? I'm tempted to stop using
> quotas when the file system (hopefully) works again, as it's my
> impression that project quotas are not widely used.

Did you first try remounting and then unmounting the volume to clear the
log? That could make the errors xfs_repair reports go away.
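
Something along these lines (device and mount point are placeholders):

  mount /dev/dm-0 /mnt      # mounting replays the log
  umount /mnt               # a clean unmount leaves the log empty
  xfs_repair -n /dev/dm-0   # then redo the read-only check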

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors
@ 2015-06-24  7:39 Rasmus Borup Hansen
  2015-06-25 16:41 ` Emmanuel Florac
  2015-06-29 21:50 ` Dave Chinner
  0 siblings, 2 replies; 15+ messages in thread
From: Rasmus Borup Hansen @ 2015-06-24  7:39 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1: Type: text/plain, Size: 8109 bytes --]

Hi! Yesterday I got the following error messages from the kernel (Ubuntu trusty, 3.13.0-55):

[601201.817664] ffff88016e03e000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00  ................
[601201.818224] ffff88016e03e010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00  ..... ..........
[601201.818827] ffff88016e03e020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[601201.819429] ffff88016e03e030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[601201.820013] XFS (dm-0): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/buildd/linux-3.13.0/fs/xfs/xfs_attr_leaf.c.  Caller 0xffffffffa00996f0
[601201.820910] CPU: 1 PID: 421 Comm: xfsaild/dm-0 Not tainted 3.13.0-55-generic #92-Ubuntu
[601201.820913] Hardware name: Dell Inc. PowerEdge R310/05XKKK, BIOS 1.8.2 08/17/2011
[601201.820914]  0000000000000001 ffff880035cc3bd0 ffffffff81723294 ffff8802ac71d800
[601201.820918]  ffff880035cc3be8 ffffffffa009d6fb ffffffffa00996f0 ffff880035cc3c20
[601201.820919]  ffffffffa009d755 000000d800205500 ffff88001aa2a400 ffff880050acc2b8
[601201.820922] Call Trace:
[601201.820930]  [<ffffffff81723294>] dump_stack+0x45/0x56
[601201.820964]  [<ffffffffa009d6fb>] xfs_error_report+0x3b/0x40 [xfs]
[601201.820974]  [<ffffffffa00996f0>] ? _xfs_buf_ioapply+0x70/0x3a0 [xfs]
[601201.820984]  [<ffffffffa009d755>] xfs_corruption_error+0x55/0x80 [xfs]
[601201.820997]  [<ffffffffa00bab50>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
[601201.821007]  [<ffffffffa00996f0>] ? _xfs_buf_ioapply+0x70/0x3a0 [xfs]
[601201.821016]  [<ffffffffa009b3d5>] ? xfs_bdstrat_cb+0x55/0xb0 [xfs]
[601201.821026]  [<ffffffffa00996f0>] _xfs_buf_ioapply+0x70/0x3a0 [xfs]
[601201.821030]  [<ffffffff8109abc0>] ? wake_up_state+0x20/0x20
[601201.821040]  [<ffffffffa009b3d5>] ? xfs_bdstrat_cb+0x55/0xb0 [xfs]
[601201.821050]  [<ffffffffa009b336>] xfs_buf_iorequest+0x46/0x90 [xfs]
[601201.821060]  [<ffffffffa009b3d5>] xfs_bdstrat_cb+0x55/0xb0 [xfs]
[601201.821070]  [<ffffffffa009b56b>] __xfs_buf_delwri_submit+0x13b/0x210 [xfs]
[601201.821081]  [<ffffffffa009c000>] ? xfs_buf_delwri_submit_nowait+0x20/0x30 [xfs]
[601201.821100]  [<ffffffffa00faaa0>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[601201.821110]  [<ffffffffa009c000>] xfs_buf_delwri_submit_nowait+0x20/0x30 [xfs]
[601201.821127]  [<ffffffffa00facd7>] xfsaild+0x237/0x5c0 [xfs]
[601201.821145]  [<ffffffffa00faaa0>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[601201.821148]  [<ffffffff8108b702>] kthread+0xd2/0xf0
[601201.821150]  [<ffffffff8108b630>] ? kthread_create_on_node+0x1c0/0x1c0
[601201.821153]  [<ffffffff81733ca8>] ret_from_fork+0x58/0x90
[601201.821155]  [<ffffffff8108b630>] ? kthread_create_on_node+0x1c0/0x1c0
[601201.821158] XFS (dm-0): Corruption detected. Unmount and run xfs_repair
[601201.821581] XFS (dm-0): xfs_do_force_shutdown(0x8) called from line 1320 of file /build/buildd/linux-3.13.0/fs/xfs/xfs_buf.c.  Return address = 0xffffffffa009971c
[601201.832780] XFS (dm-0): Corruption of in-memory data detected.  Shutting down filesystem
[601201.833292] XFS (dm-0): Please umount the filesystem and rectify the problem(s)
[601201.843375] ffff880238b00000: 58 44 32 42 01 00 0e c0 00 00 00 00 00 00 00 00  XD2B............
[601201.843924] ffff880238b00010: 00 00 00 20 3d a7 0c 7b 01 2e 8d 9e 01 88 00 10  ... =..{........
[601201.844472] ffff880238b00020: 00 00 00 19 21 a9 2d 2f 02 2e 2e 9e 01 88 00 20  ....!.-/.......
[601201.845031] ffff880238b00030: 00 00 00 07 39 2b 68 20 18 4d 4f 42 31 41 5f 48  ....9+h .MOB1A_H
[601201.845585] XFS (dm-0): Internal error xfs_da3_node_read_verify at line 240 of file /build/buildd/linux-3.13.0/fs/xfs/xfs_da_btree.c.  Caller 0xffffffffa009a885
[601201.858885] CPU: 1 PID: 2806 Comm: kworker/1:1H Not tainted 3.13.0-55-generic #92-Ubuntu
[601201.858889] Hardware name: Dell Inc. PowerEdge R310/05XKKK, BIOS 1.8.2 08/17/2011
[601201.858938] Workqueue: xfslogd xfs_buf_iodone_work [xfs]
[601201.858941]  0000000000000001 ffff8802af9cbd68 ffffffff81723294 ffff8802ac71d800
[601201.858944]  ffff8802af9cbd80 ffffffffa009d6fb ffffffffa009a885 ffff8802af9cbdb8
[601201.858945]  ffffffffa009d755 000000f0810a0465 ffff880031b35b00 ffff880238b00000
[601201.858948] Call Trace:
[601201.858958]  [<ffffffff81723294>] dump_stack+0x45/0x56
[601201.858969]  [<ffffffffa009d6fb>] xfs_error_report+0x3b/0x40 [xfs]
[601201.858979]  [<ffffffffa009a885>] ? xfs_buf_iodone_work+0x85/0xf0 [xfs]
[601201.858989]  [<ffffffffa009d755>] xfs_corruption_error+0x55/0x80 [xfs]
[601201.859005]  [<ffffffffa00d146c>] xfs_da3_node_read_verify+0x8c/0x180 [xfs]
[601201.859014]  [<ffffffffa009a885>] ? xfs_buf_iodone_work+0x85/0xf0 [xfs]
[601201.859024]  [<ffffffffa009a885>] xfs_buf_iodone_work+0x85/0xf0 [xfs]
[601201.859028]  [<ffffffff81083b22>] process_one_work+0x182/0x450
[601201.859030]  [<ffffffff81084911>] worker_thread+0x121/0x410
[601201.859032]  [<ffffffff810847f0>] ? rescuer_thread+0x430/0x430
[601201.859035]  [<ffffffff8108b702>] kthread+0xd2/0xf0
[601201.859037]  [<ffffffff8108b630>] ? kthread_create_on_node+0x1c0/0x1c0
[601201.859040]  [<ffffffff81733ca8>] ret_from_fork+0x58/0x90
[601201.859042]  [<ffffffff8108b630>] ? kthread_create_on_node+0x1c0/0x1c0
[601201.859044] XFS (dm-0): Corruption detected. Unmount and run xfs_repair
[601201.872288] XFS (dm-0): metadata I/O error: block 0x101ed393c0 ("xfs_trans_read_buf_map") error 117 numblks 8
[601201.942054] XFS (dm-0): xfs_imap_to_bp: xfs_trans_read_buf() returned error 5.
[601209.420268] XFS (dm-0): xfs_log_force: error 5 returned.
[601239.529710] XFS (dm-0): xfs_log_force: error 5 returned.
[601269.639150] XFS (dm-0): xfs_log_force: error 5 returned.
[601299.748593] XFS (dm-0): xfs_log_force: error 5 returned.
[601329.858034] XFS (dm-0): xfs_log_force: error 5 returned.

I'm currently running xfs_repair -n and so far I've seen the following output:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
doubling cache size to 591200
directory flags set on non-directory inode 206624
directory flags set on non-directory inode 206625
directory flags set on non-directory inode 206626
directory flags set on non-directory inode 206627
directory flags set on non-directory inode 206628
directory flags set on non-directory inode 206629
directory flags set on non-directory inode 206630
directory flags set on non-directory inode 206631
directory flags set on non-directory inode 206632
directory flags set on non-directory inode 206633
directory flags set on non-directory inode 206634

Only the first 20 lines are included. There are currently 250000+ more lines with "directory flags set on non-directory inode" and the check is still running (the mostly small files take up around 30 TB, so it'll probably take a while).

I recently enabled user and project quota and updated from 3.13.0-53. The file system has been heavily used for the last month or so.
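
(For reference, and only as a sketch since the exact options used here are
not shown: user and project quota on XFS are typically enabled with mount
options along the lines of

  mount -o uquota,pquota /dev/dm-0 /data

with project IDs assigned separately via xfs_quota.)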

Does anyone have any thoughts on this? I'm tempted to stop using quotas when the file system (hopefully) works again, as it's my impression that project quotas are not widely used.

Best,

Rasmus

Intomics is a contract research organization specialized in deriving core biological insight from large scale data. We help our clients in the pharmaceutical industry develop tomorrow's medicines better, faster, and cheaper through optimized use of biomedical data.
-----------------------------------------------------------------
Hansen, Rasmus Borup              Intomics - from data to biology
System Administrator              Diplomvej 377
Scientific Programmer             DK-2800 Kgs. Lyngby
                                  Denmark
E: rbh@intomics.com               W: http://www.intomics.com/
P: +45 5167 7972                  P: +45 8880 7979


[-- Attachment #1.2: Type: text/html, Size: 15759 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


end of thread

Thread overview: 15+ messages
2015-11-19 13:03 "Internal error xfs_attr3_leaf_write_verify at line 216", "directory flags set on non-directory inode" and other errors Adam Błaszczykowski
  -- strict thread matches above, loose matches on Subject: below --
2015-06-24  7:39 Rasmus Borup Hansen
2015-06-25 16:41 ` Emmanuel Florac
2015-06-26  6:14   ` Rasmus Borup Hansen
2015-07-02  7:58     ` Rasmus Borup Hansen
2015-07-02  9:26       ` Emmanuel Florac
2015-07-03  6:27         ` Rasmus Borup Hansen
2015-07-03 15:24           ` Emmanuel Florac
2015-07-03 23:55           ` Dave Chinner
2015-07-06 11:08             ` Rasmus Borup Hansen
2015-07-07  0:19               ` Dave Chinner
2015-07-08 10:28                 ` Rasmus Borup Hansen
2015-07-09  1:15                   ` Dave Chinner
2015-11-13  6:39                     ` Arkadiusz Bubała
2015-06-29 21:50 ` Dave Chinner
