* [Bug 215506] New: Internal error !ino_ok at line 200 of file fs/xfs/libxfs/xfs_dir2.c.  Caller xfs_dir_ino_validate+0x5d/0xd0 [xfs]
@ 2022-01-19  3:08 bugzilla-daemon
  2022-01-19  6:22 ` Dave Chinner
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: bugzilla-daemon @ 2022-01-19  3:08 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=215506

            Bug ID: 215506
           Summary: Internal error !ino_ok at line 200 of file
                    fs/xfs/libxfs/xfs_dir2.c.  Caller
                    xfs_dir_ino_validate+0x5d/0xd0 [xfs]
           Product: File System
           Version: 2.5
    Kernel Version: 5.15.4
          Hardware: All
                OS: Linux
              Tree: Mainline
            Status: NEW
          Severity: normal
          Priority: P1
         Component: XFS
          Assignee: filesystem_xfs@kernel-bugs.kernel.org
          Reporter: yanming@tju.edu.cn
        Regression: No

Created attachment 300288
  --> https://bugzilla.kernel.org/attachment.cgi?id=300288&action=edit
tmp.c

I have encountered a bug in the XFS file system.

I created a disk image and modified some properties. After that I mounted the
image, ran some commands related to file operations, and the bug occurred.

The file operations are in the attached "tmp.c" file, and the modified image
"tmp.img" can be found at
https://drive.google.com/file/d/1SujibjuGYcBA-jjZ5FtR-rSi7koVt0_d/view?usp=sharing.
You can reproduce the bug by running the following commands:

gcc -o tmp tmp.c
losetup /dev/loop7 tmp.img
mount -o
"attr2,discard,grpid,filestreams,noikeep,inode32,largeio,logbufs=5,noalign,nouuid,noquota,loop"
-t xfs /dev/loop7 /root/mnt
./tmp

The kernel message is shown below:

6,2489,54115223218,-;loop7: detected capacity change from 0 to 131072
6,2490,54115436921,-;loop8: detected capacity change from 0 to 131072
5,2491,54115497587,-;XFS (loop8): Mounting V5 Filesystem
1,2492,54115636142,-;XFS (loop8): Internal error !ino_ok at line 200 of file
fs/xfs/libxfs/xfs_dir2.c.  Caller xfs_dir_ino_validate+0x5d/0xd0 [xfs]
4,2493,54115637100,-;CPU: 0 PID: 17928 Comm: mount Tainted: G        W    L   
5.15.4 #3
4,2494,54115637493,-;Hardware name: LENOVO 20J6A00NHH/20J6A00NHH, BIOS R0FET24W
(1.04 ) 12/21/2016
4,2495,54115637742,-;Call Trace:
4,2496,54115637857,-; <TASK>
4,2497,54115638019,-; dump_stack_lvl+0xea/0x130
4,2498,54115638586,-; dump_stack+0x1c/0x25
4,2499,54115639082,-; xfs_error_report+0xd3/0xe0 [xfs]
4,2500,54115639769,-; ? xfs_dir_ino_validate+0x5d/0xd0 [xfs]
4,2501,54115640025,-; ? xfs_dir_ino_validate+0x5d/0xd0 [xfs]
4,2502,54115640025,-; xfs_corruption_error+0xab/0x120 [xfs]
4,2503,54115640025,-; ? write_comp_data+0x37/0xc0
4,2504,54115640025,-; xfs_dir_ino_validate+0xa2/0xd0 [xfs]
4,2505,54115640025,-; ? xfs_dir_ino_validate+0x5d/0xd0 [xfs]
4,2506,54115640025,-; xfs_dir2_sf_verify+0x5d2/0xb50 [xfs]
4,2507,54115640025,-; xfs_ifork_verify_local_data+0xd6/0x180 [xfs]
4,2508,54115640025,-; ? __sanitizer_cov_trace_pc+0x31/0x80
4,2509,54115640025,-; xfs_iformat_data_fork+0x3ff/0x4c0 [xfs]
4,2510,54115640025,-; xfs_inode_from_disk+0xb5a/0x1460 [xfs]
4,2511,54115640025,-; xfs_iget+0x1281/0x2850 [xfs]
4,2512,54115640025,-; ? _raw_write_lock_bh+0x130/0x130
4,2513,54115640025,-; ? xfs_verify_icount+0x31a/0x3f0 [xfs]
4,2514,54115640025,-; ? write_comp_data+0x37/0xc0
4,2515,54115640025,-; ? write_comp_data+0x37/0xc0
4,2516,54115640025,-; ? xfs_perag_get+0x260/0x260 [xfs]
4,2517,54115640025,-; ? xfs_inode_free+0xe0/0xe0 [xfs]
4,2518,54115640025,-; ? xfs_mountfs+0x1227/0x1ff0 [xfs]
4,2519,54115640025,-; ? xfs_blockgc_start+0x76/0x490 [xfs]
4,2520,54115640025,-; ? write_comp_data+0x37/0xc0
4,2521,54115640025,-; xfs_mountfs+0x12f5/0x1ff0 [xfs]
4,2522,54115640025,-; ? xfs_mount_reset_sbqflags+0x1a0/0x1a0 [xfs]
4,2523,54115640025,-; ? __sanitizer_cov_trace_pc+0x31/0x80
4,2524,54115640025,-; ? xfs_mru_cache_create+0x4d2/0x690 [xfs]
4,2525,54115640025,-; ? xfs_filestream_get_ag+0x90/0x90 [xfs]
4,2526,54115640025,-; ? write_comp_data+0x37/0xc0
4,2527,54115640025,-; xfs_fs_fill_super+0x1198/0x2030 [xfs]
4,2528,54115640025,-; get_tree_bdev+0x494/0x850
4,2529,54115640025,-; ? xfs_fs_parse_param+0x1920/0x1920 [xfs]
4,2530,54115640025,-; xfs_fs_get_tree+0x2a/0x40 [xfs]
4,2531,54115640025,-; vfs_get_tree+0x9a/0x380
4,2532,54115640025,-; path_mount+0x7e3/0x24c0
4,2533,54115640025,-; ? __kasan_slab_free+0x147/0x1f0
4,2534,54115640025,-; ? finish_automount+0x860/0x860
4,2535,54115640025,-; ? __sanitizer_cov_trace_pc+0x31/0x80
4,2536,54115640025,-; ? putname+0x165/0x1e0
4,2537,54115640025,-; ? write_comp_data+0x37/0xc0
4,2538,54115640025,-; do_mount+0x11b/0x140
4,2539,54115640025,-; ? path_mount+0x24c0/0x24c0
4,2540,54115640025,-; ? write_comp_data+0x37/0xc0
4,2541,54115640025,-; ? __sanitizer_cov_trace_pc+0x31/0x80
4,2542,54115640025,-; ? write_comp_data+0x37/0xc0
4,2543,54115640025,-; __x64_sys_mount+0x1c3/0x2c0
4,2544,54115640025,-; do_syscall_64+0x3b/0xc0
4,2545,54115640025,-; entry_SYSCALL_64_after_hwframe+0x44/0xae
4,2546,54115640025,-;RIP: 0033:0x7fa63cbb0dde
4,2547,54115640025,-;Code: 48 8b 0d b5 80 0c 00 f7 d8 64 89 01 48 83 c8 ff c3
66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 49 89 ca b8 a5 00 00 00 0f 05 <48>
3d 01 f0 ff ff 73 01 c3 48 8b 0d 82 80 0c 00 f7 d8 64 89 01 48
4,2548,54115640025,-;RSP: 002b:00007ffcd394f958 EFLAGS: 00000246 ORIG_RAX:
00000000000000a5
4,2549,54115640025,-;RAX: ffffffffffffffda RBX: 00007fa63ccdf204 RCX:
00007fa63cbb0dde
4,2550,54115640025,-;RDX: 000056155b8a6d10 RSI: 000056155b8a6d90 RDI:
000056155b8af870
4,2551,54115640025,-;RBP: 000056155b8a6b00 R08: 0000000000000000 R09:
000056155b8af980
4,2552,54115640025,-;R10: 0000000000000000 R11: 0000000000000246 R12:
0000000000000000
4,2553,54115640025,-;R13: 000056155b8af870 R14: 000056155b8a6d10 R15:
000056155b8a6b00
4,2554,54115640025,-; </TASK>
1,2555,54115662742,-;XFS (loop8): Corruption detected. Unmount and run
xfs_repair
4,2556,54115663126,-;XFS (loop8): Invalid inode number 0x2000000
1,2557,54115663448,-;XFS (loop8): Metadata corruption detected at
xfs_dir2_sf_verify+0x906/0xb50 [xfs], inode 0x60 data fork
1,2558,54115664625,-;XFS (loop8): Unmount and run xfs_repair
1,2559,54115665007,-;XFS (loop8): First 17 bytes of corrupted metadata buffer:
1,2560,54115665553,-;00000000: 01 00 00 00 00 60 03 00 60 66 6f 6f 02 00 00 00 
.....`..`foo....
1,2561,54115666121,-;00000010: 63                                              
c
4,2562,54115666649,-;XFS (loop8): Failed to read root inode 0x60, error 117

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.


* Re: [Bug 215506] New: Internal error !ino_ok at line 200 of file fs/xfs/libxfs/xfs_dir2.c.  Caller xfs_dir_ino_validate+0x5d/0xd0 [xfs]
  2022-01-19  3:08 [Bug 215506] New: Internal error !ino_ok at line 200 of file fs/xfs/libxfs/xfs_dir2.c. Caller xfs_dir_ino_validate+0x5d/0xd0 [xfs] bugzilla-daemon
@ 2022-01-19  6:22 ` Dave Chinner
  2022-01-19  6:22 ` [Bug 215506] " bugzilla-daemon
  2022-01-19  7:25 ` bugzilla-daemon
  2 siblings, 0 replies; 4+ messages in thread
From: Dave Chinner @ 2022-01-19  6:22 UTC (permalink / raw)
  To: bugzilla-daemon; +Cc: linux-xfs

On Wed, Jan 19, 2022 at 03:08:33AM +0000, bugzilla-daemon@bugzilla.kernel.org wrote:
> I have encountered a bug in the XFS file system.

I don't think so.

> I created a disk image and modified some properties.

Let's call it what it is: you -corrupted- the disk image.

> After that I mount the image

According to the trace, the mount failed because XFS detected one
of the many corruptions you created in the filesystem.

> and run some commands related to file operations, and the bug occurred.

The filesystem mount was aborted due to the detected corruption, so
I don't actually believe that this corruption was produced by the
test program....

> The kernel message is shown below:
> 
> loop7: detected capacity change from 0 to 131072
> loop8: detected capacity change from 0 to 131072
> XFS (loop8): Mounting V5 Filesystem
> XFS (loop8): Internal error !ino_ok at line 200 of file
> fs/xfs/libxfs/xfs_dir2.c.  Caller xfs_dir_ino_validate+0x5d/0xd0 [xfs]

XFS found an invalid inode number in a directory entry.
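In simplified terms, the check that fired validates that a directory
entry's inode number falls inside the filesystem's valid inode space.
A minimal Python sketch of that idea (hypothetical names; not the
actual kernel code, which also checks the AG number and per-AG inode
bounds):

```python
def dir_ino_validate(ino: int, max_ino: int) -> bool:
    """Simplified stand-in for the kernel's xfs_dir_ino_validate():
    reject zero, the "no inode" sentinel, and anything beyond the
    highest inode number the filesystem geometry allows."""
    NULLFSINO = (1 << 64) - 1  # XFS sentinel for "no inode"
    return ino != 0 and ino != NULLFSINO and ino <= max_ino

# The entry found in the root directory below fails this check:
print(dir_ino_validate(0x2000000, 0x20000))  # → False
```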

> Call Trace:
>  <TASK>
>  dump_stack_lvl+0xea/0x130
>  dump_stack+0x1c/0x25
>  xfs_error_report+0xd3/0xe0 [xfs]
>  xfs_corruption_error+0xab/0x120 [xfs]
>  xfs_dir_ino_validate+0xa2/0xd0 [xfs]
>  xfs_dir2_sf_verify+0x5d2/0xb50 [xfs]
>  xfs_ifork_verify_local_data+0xd6/0x180 [xfs]
>  xfs_iformat_data_fork+0x3ff/0x4c0 [xfs]
>  xfs_inode_from_disk+0xb5a/0x1460 [xfs]
>  xfs_iget+0x1281/0x2850 [xfs]
>  xfs_mountfs+0x12f5/0x1ff0 [xfs]
>  xfs_fs_fill_super+0x1198/0x2030 [xfs]
>  get_tree_bdev+0x494/0x850
>  xfs_fs_get_tree+0x2a/0x40 [xfs]
>  vfs_get_tree+0x9a/0x380
>  path_mount+0x7e3/0x24c0
>  do_mount+0x11b/0x140

This happened whilst reading and validating the root inode during
mount. The error messages emitted tell us that this isn't a kernel
bug but on-disk corruption that was detected:

> XFS (loop8): Corruption detected. Unmount and run xfs_repair
> XFS (loop8): Invalid inode number 0x2000000
> XFS (loop8): Metadata corruption detected at xfs_dir2_sf_verify+0x906/0xb50 [xfs], inode 0x60 data fork
> XFS (loop8): Unmount and run xfs_repair
> XFS (loop8): First 17 bytes of corrupted metadata buffer:
> 00000000: 01 00 00 00 00 60 03 00 60 66 6f 6f 02 00 00 00  .....`..`foo....
> 00000010: 63                                               c
> XFS (loop8): Failed to read root inode 0x60, error 117

Using xfs_db, the root inode contains:

u3.sfdir2.hdr.count = 1
u3.sfdir2.hdr.i8count = 0
u3.sfdir2.hdr.parent.i4 = 96
u3.sfdir2.list[0].namelen = 3
u3.sfdir2.list[0].offset = 0x60
u3.sfdir2.list[0].name = "foo"
u3.sfdir2.list[0].inumber.i4 = 33554432
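These values can be cross-checked by hand against the 17 bytes of the
corrupted buffer dumped in the kernel log. A sketch of the decode,
assuming the i4 (32-bit inode number) shortform layout without ftype
bytes, which is evidently how xfs_db interpreted it:

```python
import struct

# First 17 bytes of the corrupted metadata buffer, from the kernel log
buf = bytes.fromhex("010000000060030060666f6f0200000063")

count, i8count = buf[0], buf[1]                  # sfdir2 header: entry counts
parent = struct.unpack(">I", buf[2:6])[0]        # parent inode, i4, big-endian
namelen = buf[6]                                 # first list entry: name length
offset = struct.unpack(">H", buf[7:9])[0]        # entry offset in directory
name = buf[9:9 + namelen].decode()               # entry name
inumber, = struct.unpack(">I", buf[12:16])       # entry inode number, i4

print(count, i8count, parent, hex(offset), name, hex(inumber))
# → 1 0 96 0x60 foo 0x2000000
```

This reproduces the xfs_db output exactly; the trailing 0x63 byte
('c') lies past the end of the decoded entry.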

So the inode number for foo is, indeed, 0x2000000 (33554432). Given
that this is a 64MB filesystem, the largest inode number that is
valid is around 0x20000 (131072). Hence the mount failed immediately,
as the root inode has clearly been corrupted.
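That bound can be sanity-checked with back-of-the-envelope
arithmetic; the block and inode sizes below are assumed mkfs
defaults, not values read from the image:

```python
fs_size = 64 * 2**20                         # 64MB filesystem image
block_size = 4096                            # assumed default block size
inode_size = 512                             # assumed V5 default inode size

blocks = fs_size // block_size               # 16384 filesystem blocks
inodes_per_block = block_size // inode_size  # 8 inodes per block
max_ino = blocks * inodes_per_block          # rough upper bound on inode numbers

print(hex(max_ino))  # → 0x20000
```

The corrupt entry's inode number, 0x2000000, is 256 times that upper
bound.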

Looking at the AGI btree for AG 0, I see that inode 99 is also
allocated, which is where that entry should point. That inode
contains one entry:

u3.sfdir2.hdr.count = 1
u3.sfdir2.hdr.i8count = 0
u3.sfdir2.hdr.parent.i4 = 96
u3.sfdir2.list[0].namelen = 3
u3.sfdir2.list[0].offset = 0x160
u3.sfdir2.list[0].name = "bar"
u3.sfdir2.list[0].inumber.i4 = 33554560

That entry points to inode 0x2000080, which is also beyond EOFS.

So that's two corrupt inode numbers out of two shortform directory
entries, both with a very similar corruption.

Then looking at your reproducer, it assumes that the directory
structure /foo/bar/ already exists before the test program is run.
I can also see from the LSN values in various metadata that the
filesystem was mounted and foo/bar was created. e.g. those two
inodes have an LSN of 0x100000002 (cycle 1, block 2), and so would
have been the very first modifications made after the filesystem was
created.
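The cycle/block split quoted above follows directly from the 64-bit
LSN encoding (high 32 bits are the log cycle, low 32 bits the block
within that cycle):

```python
lsn = 0x100000002         # LSN stamped into the foo/bar inodes
cycle = lsn >> 32         # log cycle number
block = lsn & 0xFFFFFFFF  # block offset within that cycle
print(cycle, block)       # → 1 2
```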

I can see other inodes that are marked allocated in AGI 1, and I
found that inode 0x8060 should have been the /foo/bar/ directory
inode. I can see traces of it in the literal area when I change
the inode back to local format:

u3.sfdir2.hdr.count = 7
u3.sfdir2.hdr.i8count = 0
u3.sfdir2.hdr.parent.i4 = 99
u3.sfdir2.list[0].namelen = 255
u3.sfdir2.list[0].offset = 0xffff
u3.sfdir2.list[0].name = "\177az\001\000\000\200a\003\000phln\001\000\000\200b\005\000\200xattr\001\000\000\200c\003\000\230acl\001
....

as its parent directory points back to inode 99, which I identified
above. But it's been corrupted - not only has the format been
changed, the first few bytes of the shortform entry have been
overwritten, so the directory entry data is corrupt, too.

IOWs, it's pretty clear that this filesystem image was corrupted
-after- the test binary was run to create all these files.

So I thought "let's run repair just to see how many structures have
been maliciously corrupted":

# xfs_repair -n tmp.img
Phase 1 - find and verify superblock...
bad primary superblock - inconsistent inode alignment value !!!
attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
would write modified primary superblock
Primary superblock would have been modified.
Cannot proceed further in no_modify mode.
Exiting now.
#

Yeah, that didn't get far. There's significant modification to all
the superblocks - the inode alignment and all the feature fields (f2,
bad f2, ro feat, incompat feat) have been zeroed, too. So there are
serious inconsistencies between what the filesystem says the on-disk
format is and what is actually on disk.

Other superblocks fail simple checks, too:

xfs_db> sb 1
Superblock has unknown read-only compatible features (0xff000000) enabled.
Attempted to mount read-only compatible filesystem read-write.
Filesystem can only be safely mounted read only.
xfs_db> sb 2
SB sanity check failed
Metadata corruption detected at 0x561fdae339f5, xfs_sb block 0x10000/0x200

and so on. There's random metadata field corruption all over the
place, and it has clearly been done by a third party, as there are
corruptions in metadata that the kernel clearly never writes, yet
the CRCs for those objects are correct.

IOWs, you've maliciously modified random fields in the filesystem to
try to induce a failure, but the kernel has detected those malicious
corruptions and correctly aborted mounting the filesystem.  I'd say
the kernel code is working exactly as intended at this point in time.

Please close this bug, and for next time, please learn the
difference between XFS reporting on-disk corruption and an actual
kernel bug.

-Dave.
-- 
Dave Chinner
david@fromorbit.com



* [Bug 215506] Internal error !ino_ok at line 200 of file fs/xfs/libxfs/xfs_dir2.c.  Caller xfs_dir_ino_validate+0x5d/0xd0 [xfs]
  2022-01-19  3:08 [Bug 215506] New: Internal error !ino_ok at line 200 of file fs/xfs/libxfs/xfs_dir2.c. Caller xfs_dir_ino_validate+0x5d/0xd0 [xfs] bugzilla-daemon
  2022-01-19  6:22 ` Dave Chinner
  2022-01-19  6:22 ` [Bug 215506] " bugzilla-daemon
@ 2022-01-19  7:25 ` bugzilla-daemon
  2 siblings, 0 replies; 4+ messages in thread
From: bugzilla-daemon @ 2022-01-19  7:25 UTC (permalink / raw)
  To: linux-xfs

https://bugzilla.kernel.org/show_bug.cgi?id=215506

bughunter (yanming@tju.edu.cn) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |RESOLVED
         Resolution|---                         |ANSWERED

--- Comment #2 from bughunter (yanming@tju.edu.cn) ---
Thank you for the reply! I will learn from what you have suggested.

