From: "Libor Klepáč" <libor.klepac@bcom.cz>
To: Eric Sandeen <sandeen@sandeen.net>
Cc: Eric Sandeen <sandeen@redhat.com>, linux-xfs <linux-xfs@vger.kernel.org>
Subject: Re: [PATCH] xfs_repair: junk leaf attribute if count == 0
Date: Tue, 14 Mar 2017 08:15:20 +0000	[thread overview]
Message-ID: <1702693.ETbqtf3ybS@libor-nb> (raw)
In-Reply-To: <2c8d1ee3-c795-aefa-4797-8b7970ea8266@sandeen.net>

Hello,
I got this during the night with error_level = 11; there was no forced shutdown.
(kernel 4.8.15)
Mar 14 02:36:29 vps2 kernel: [54799.061956] XFS (dm-2): Metadata corruption detected at xfs_attr3_leaf_read_verify+0x5a/0x100 [xfs], xfs_attr3_leaf block 0x24e70268
Mar 14 02:36:29 vps2 kernel: [54799.063194] XFS (dm-2): Unmount and run xfs_repair
Mar 14 02:36:29 vps2 kernel: [54799.063786] XFS (dm-2): First 64 bytes of corrupted metadata buffer:
Mar 14 02:36:29 vps2 kernel: [54799.064377] ffff933db1988000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00  ................
Mar 14 02:36:29 vps2 kernel: [54799.064972] ffff933db1988010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00  ..... ..........
Mar 14 02:36:29 vps2 kernel: [54799.065569] ffff933db1988020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 14 02:36:29 vps2 kernel: [54799.066141] ffff933db1988030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Mar 14 02:36:29 vps2 kernel: [54799.066683] CPU: 1 PID: 22609 Comm: kworker/1:30 Tainted: G            E   4.8.0-0.bpo.2-amd64 #1 Debian 4.8.15-2~bpo8+2
Mar 14 02:36:29 vps2 kernel: [54799.066684] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 04/14/2014
Mar 14 02:36:29 vps2 kernel: [54799.066710] Workqueue: xfs-buf/dm-2 xfs_buf_ioend_work [xfs]
Mar 14 02:36:29 vps2 kernel: [54799.066712]  0000000000000286 00000000940e3248 ffffffff92322b05 ffff933daff0e300
Mar 14 02:36:29 vps2 kernel: [54799.066714]  ffff934016e70480 ffffffffc073895a ffff9341bfc98700 00000000940e3248
Mar 14 02:36:29 vps2 kernel: [54799.066715]  ffff933daff0e300 ffff934016e70480 0000000000000000 ffffffffc0779a34
Mar 14 02:36:29 vps2 kernel: [54799.066716] Call Trace:
Mar 14 02:36:29 vps2 kernel: [54799.066736]  [<ffffffff92322b05>] ? dump_stack+0x5c/0x77
Mar 14 02:36:29 vps2 kernel: [54799.066751]  [<ffffffffc073895a>] ? xfs_attr3_leaf_read_verify+0x5a/0x100 [xfs]
Mar 14 02:36:29 vps2 kernel: [54799.066768]  [<ffffffffc0779a34>] ? xfs_buf_ioend+0x54/0x1d0 [xfs]
Mar 14 02:36:29 vps2 kernel: [54799.066776]  [<ffffffff9209000b>] ? process_one_work+0x14b/0x410
Mar 14 02:36:29 vps2 kernel: [54799.066777]  [<ffffffff92090ac5>] ? worker_thread+0x65/0x4a0
Mar 14 02:36:29 vps2 kernel: [54799.066778]  [<ffffffff92090a60>] ? rescuer_thread+0x340/0x340
Mar 14 02:36:29 vps2 kernel: [54799.066780]  [<ffffffff92095d5f>] ? kthread+0xdf/0x100
Mar 14 02:36:29 vps2 kernel: [54799.066805]  [<ffffffff9202478b>] ? __switch_to+0x2bb/0x710
Mar 14 02:36:29 vps2 kernel: [54799.066809]  [<ffffffff925ecf2f>] ? ret_from_fork+0x1f/0x40
Mar 14 02:36:29 vps2 kernel: [54799.066810]  [<ffffffff92095c80>] ? kthread_park+0x50/0x50
Mar 14 02:36:29 vps2 kernel: [54799.067562] XFS (dm-2): metadata I/O error: block 0x24e70268 ("xfs_trans_read_buf_map") error 117 numblks 8
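
(A generic aside, not part of the original report: the error_level = 11 setting above is the same knob Eric suggests further down; it is also exposed via sysctl as fs.xfs.error_level and can be persisted across reboots, roughly like this. The sysctl.d file name below is just an example.)

# echo 11 > /proc/sys/fs/xfs/error_level
  (runtime setting, as suggested by Eric below)
# sysctl -w fs.xfs.error_level=11
  (equivalent sysctl form)
# echo 'fs.xfs.error_level = 11' > /etc/sysctl.d/90-xfs-debug.conf
  (re-applied automatically on the next boot)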

This one is from just now, when I tried to find inode 2152616264, which corresponds to block 0x24e70268
(kernel 4.9.13)
Mar 14 09:01:31 vps2 kernel: [10644.526839] XFS (dm-2): Metadata corruption detected at xfs_attr3_leaf_read_verify+0x5a/0x100 [xfs], xfs_attr3_leaf block 0x24e70268                                                                                                                   
Mar 14 09:01:31 vps2 kernel: [10644.526896] XFS (dm-2): Unmount and run xfs_repair                                                                                                                                                                                                     
Mar 14 09:01:31 vps2 kernel: [10644.526917] XFS (dm-2): First 64 bytes of corrupted metadata buffer:                                                                                                                                                                                   
Mar 14 09:01:31 vps2 kernel: [10644.526944] ffff9f0b5b5d2000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00  ................                                                                                                                                                        
Mar 14 09:01:31 vps2 kernel: [10644.526980] ffff9f0b5b5d2010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00  ..... ..........                                                                                                                                                        
Mar 14 09:01:31 vps2 kernel: [10644.527015] ffff9f0b5b5d2020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................                                                                                                                                                        
Mar 14 09:01:31 vps2 kernel: [10644.527049] ffff9f0b5b5d2030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................                                                                                                                                                        
Mar 14 09:01:31 vps2 kernel: [10644.527086] CPU: 2 PID: 30689 Comm: kworker/2:3 Tainted: G            E   4.9.0-0.bpo.2-amd64 #1 Debian 4.9.13-1~bpo8+1                                                                                                                                
Mar 14 09:01:31 vps2 kernel: [10644.527087] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 04/14/2014
Mar 14 09:01:31 vps2 kernel: [10644.527119] Workqueue: xfs-buf/dm-2 xfs_buf_ioend_work [xfs]
Mar 14 09:01:31 vps2 kernel: [10644.527121]  0000000000000000 ffffffff9cf29cd5 ffff9f0b91105b00 ffff9f0b6acd2900
Mar 14 09:01:31 vps2 kernel: [10644.527123]  ffffffffc0749aca ffff9f0e7fd18700 000000006076a0c0 ffff9f0b91105b00
Mar 14 09:01:31 vps2 kernel: [10644.527124]  ffff9f0b6acd2900 0000000000000000 ffffffffc0794414 ffff9f0b91105ba8
Mar 14 09:01:31 vps2 kernel: [10644.527126] Call Trace:
Mar 14 09:01:31 vps2 kernel: [10644.527132]  [<ffffffff9cf29cd5>] ? dump_stack+0x5c/0x77
Mar 14 09:01:31 vps2 kernel: [10644.527155]  [<ffffffffc0749aca>] ? xfs_attr3_leaf_read_verify+0x5a/0x100 [xfs]
Mar 14 09:01:31 vps2 kernel: [10644.527184]  [<ffffffffc0794414>] ? xfs_buf_ioend+0x54/0x1d0 [xfs]
Mar 14 09:01:31 vps2 kernel: [10644.527187]  [<ffffffff9cc9171b>] ? process_one_work+0x14b/0x410
Mar 14 09:01:31 vps2 kernel: [10644.527189]  [<ffffffff9cc921d5>] ? worker_thread+0x65/0x4a0
Mar 14 09:01:31 vps2 kernel: [10644.527190]  [<ffffffff9cc92170>] ? rescuer_thread+0x340/0x340
Mar 14 09:01:31 vps2 kernel: [10644.527191]  [<ffffffff9cc92170>] ? rescuer_thread+0x340/0x340
Mar 14 09:01:31 vps2 kernel: [10644.527194]  [<ffffffff9cc03b81>] ? do_syscall_64+0x81/0x190
Mar 14 09:01:31 vps2 kernel: [10644.527197]  [<ffffffff9cc7c730>] ? SyS_exit_group+0x10/0x10
Mar 14 09:01:31 vps2 kernel: [10644.527198]  [<ffffffff9cc974c0>] ? kthread+0xe0/0x100
Mar 14 09:01:31 vps2 kernel: [10644.527200]  [<ffffffff9cc2476b>] ? __switch_to+0x2bb/0x700
Mar 14 09:01:31 vps2 kernel: [10644.527202]  [<ffffffff9cc973e0>] ? kthread_park+0x60/0x60
Mar 14 09:01:31 vps2 kernel: [10644.527205]  [<ffffffff9d1fb675>] ? ret_from_fork+0x25/0x30
Mar 14 09:01:31 vps2 kernel: [10644.527213] XFS (dm-2): metadata I/O error: block 0x24e70268 ("xfs_trans_read_buf_map") error 117 numblks 8
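
(For reference, a generic sketch of how a daddr such as 0x24e70268 can be mapped back to the owning inode and file name using standard xfs_db/find commands; these are not necessarily the exact steps taken here. Ideally run with the filesystem unmounted or against a snapshot.)

# xfs_db -r /dev/mapper/vgDisk2-lvData
xfs_db> blockget -n
  (builds block-usage and name information; may take a while and use a lot of memory on a large filesystem)
xfs_db> convert daddr 0x24e70268 fsblock
  (prints the filesystem block number, call it N)
xfs_db> fsblock N
xfs_db> blockuse -n
  (reports the owning inode, and the path if blockget -n was used)

With the inode number in hand, the path can also be located on the mounted filesystem:
# find /path/to/mountpoint -inum 2152616264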


---------------------------------------------------------
Here are the logs from my colleague. He wasn't able to run the repair because the device was busy, probably something forgotten and left running (a container with private mounts?).

# xfs_repair -n /dev/mapper/vgDisk-lvData
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
sb_fdblocks 2245059, counted 2253251
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
bad attribute count 0 in attr block 0, inode 2152616264
problem with attribute contents in inode 2152616264
would clear attr fork
bad nblocks 1 for inode 2152616264, would reset to 0
bad anextents 1 for inode 2152616264, would reset to 0
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
 
# xfs_db -r /dev/mapper/vgDisk2-lvData
xfs_db> inode 2152616264
xfs_db> print
core.magic = 0x494e
core.mode = 0100664
core.version = 2
core.format = 2 (extents)
core.nlinkv2 = 1
core.onlink = 0
core.projid_lo = 0
core.projid_hi = 0
core.uid = 20045
core.gid = 20045
core.flushiter = 0
core.atime.sec = Mon Mar 13 11:16:31 2017
core.atime.nsec = 815645008
core.mtime.sec = Mon Mar 13 11:16:31 2017
core.mtime.nsec = 815645008
core.ctime.sec = Mon Mar 13 11:16:31 2017
core.ctime.nsec = 815645008
core.size = 0
core.nblocks = 1
core.extsize = 0
core.nextents = 0
core.naextents = 1
core.forkoff = 15
core.aformat = 2 (extents)
core.dmevmask = 0
core.dmstate = 0
core.newrtbm = 0
core.prealloc = 0
core.realtime = 0
core.immutable = 0
core.append = 0
core.sync = 0
core.noatime = 0
core.nodump = 0
core.rtinherit = 0
core.projinherit = 0
core.nosymlinks = 0
core.extsz = 0
core.extszinherit = 0
core.nodefrag = 0
core.filestream = 0
core.gen = 3279391285
next_unlinked = null
u = (empty)
a.bmx[0] = [startoff,startblock,blockcount,extentflag]
0:[0,134538317,1,0]
 
# xfs_repair /dev/mapper/vgDisk2-lvData
xfs_repair: cannot open /dev/mapper/vgDisk2-lvData: Device or resource busy
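
(Another generic sketch, not from the original report: a few ways to see what is still holding the device open before retrying xfs_repair.)

# findmnt /dev/mapper/vgDisk2-lvData
  (still mounted somewhere in the current mount namespace?)
# fuser -vm /dev/mapper/vgDisk2-lvData
  (processes with files open on that filesystem)
# dmsetup info -c vgDisk2-lvData
  (the Open column shows the device-mapper open count)
# grep vgDisk2-lvData /proc/*/mounts
  (catches mounts that only exist in another mount namespace, e.g. a container with private mounts)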



Libor



On Monday, 13 March 2017 at 9:14:53 CET, Eric Sandeen wrote:
> On 3/13/17 8:48 AM, Libor Klepáč wrote:
> > Hello,
> > problem with this host again, after running uninterrupted since the last email/repair on kernel 4.8.15 (so since 31 January).
> > 
> > Today, metadata corruption occured again.
> > Mar 13 11:16:31 vps2 kernel: [3563991.623260] XFS (dm-2): Metadata corruption detected at xfs_attr3_leaf_write_verify+0xe8/0x100 [xfs], xfs_attr3_leaf block 0x24e70268
> > Mar 13 11:16:31 vps2 kernel: [3563991.624321] XFS (dm-2): Unmount and run xfs_repair
> 
> Ok, interesting that you hit this when writing an attr.
> 
> Can you turn the logging level way up:
> # echo 11 > /proc/sys/fs/xfs/error_level
> 
> and then things like the forced shutdown and the metadata corruption messages will give you a backtrace, which might be useful (noisy, but useful)...
> 
> -Eric
> 
> > Mar 13 11:16:31 vps2 kernel: [3563991.624696] XFS (dm-2): First 64 bytes of corrupted metadata buffer:
> > Mar 13 11:16:31 vps2 kernel: [3563991.625085] ffff994543410000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00  ................
> > Mar 13 11:16:31 vps2 kernel: [3563991.625511] ffff994543410010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00  ..... ..........
> > Mar 13 11:16:31 vps2 kernel: [3563991.625983] ffff994543410020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> > Mar 13 11:16:31 vps2 kernel: [3563991.626398] ffff994543410030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> > Mar 13 11:16:31 vps2 kernel: [3563991.626829] XFS (dm-2): xfs_do_force_shutdown(0x8) called from line 1322 of file /build/linux-aPrr8L/linux-4.8.15/fs/xfs/xfs_buf.c.  Return address = 0xffffffffc08295c4
> > Mar 13 11:16:31 vps2 kernel: [3563991.627210] XFS (dm-2): xfs_imap_to_bp: xfs_trans_read_buf() returned error -5.
> > Mar 13 11:16:31 vps2 kernel: [3563991.627212] XFS (dm-2): Corruption of in-memory data detected.  Shutting down filesystem
> > Mar 13 11:16:31 vps2 kernel: [3563991.627215] XFS (dm-2): Please umount the filesystem and rectify the problem(s)
> > Mar 13 11:16:31 vps2 kernel: [3563991.628752] XFS (dm-2): xfs_do_force_shutdown(0x8) called from line 3420 of file /build/linux-aPrr8L/linux-4.8.15/fs/xfs/xfs_inode.c.  Return address = 0xffffffffc083fc1e
> > Mar 13 11:16:48 vps2 kernel: [3564008.557340] XFS (dm-2): xfs_log_force: error -5 returned.
> > 
> > After a reboot, it sometimes logs:
> > Mar 13 12:51:10 vps2 kernel: [ 5283.025665] XFS (dm-2): Metadata corruption detected at xfs_attr3_leaf_read_verify+0x5a/0x100 [xfs], xfs_attr3_leaf block 0x24e70268
> > Mar 13 12:51:10 vps2 kernel: [ 5283.026879] XFS (dm-2): Unmount and run xfs_repair
> > Mar 13 12:51:10 vps2 kernel: [ 5283.027471] XFS (dm-2): First 64 bytes of corrupted metadata buffer:
> > Mar 13 12:51:10 vps2 kernel: [ 5283.028074] ffff933f16f8c000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00  ................
> > Mar 13 12:51:10 vps2 kernel: [ 5283.028669] ffff933f16f8c010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00  ..... ..........
> > Mar 13 12:51:10 vps2 kernel: [ 5283.029240] ffff933f16f8c020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> > Mar 13 12:51:10 vps2 kernel: [ 5283.029814] ffff933f16f8c030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> > Mar 13 12:51:10 vps2 kernel: [ 5283.030428] XFS (dm-2): metadata I/O error: block 0x24e70268 ("xfs_trans_read_buf_map") error 117 numblks 8
> > Mar 13 12:51:10 vps2 kernel: [ 5283.036222] XFS (dm-2): Metadata corruption detected at xfs_attr3_leaf_read_verify+0x5a/0x100 [xfs], xfs_attr3_leaf block 0x24e70268
> > Mar 13 12:51:10 vps2 kernel: [ 5283.037443] XFS (dm-2): Unmount and run xfs_repair
> > Mar 13 12:51:10 vps2 kernel: [ 5283.038049] XFS (dm-2): First 64 bytes of corrupted metadata buffer:
> > Mar 13 12:51:10 vps2 kernel: [ 5283.038644] ffff933f16f8c000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00  ................
> > Mar 13 12:51:10 vps2 kernel: [ 5283.039257] ffff933f16f8c010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00  ..... ..........
> > Mar 13 12:51:10 vps2 kernel: [ 5283.039838] ffff933f16f8c020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> > Mar 13 12:51:10 vps2 kernel: [ 5283.040397] ffff933f16f8c030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> > Mar 13 12:51:10 vps2 kernel: [ 5283.041482] XFS (dm-2): metadata I/O error: block 0x24e70268 ("xfs_trans_read_buf_map") error 117 numblks 8
> > 
> > I have installed kernel 4.9.13 from backports and xfsprogs 4.10.0.
> > My colleague will reboot the machine and do the repair.
> > 
> > It seems to be the same pattern again. Do you have any clue where it comes from? How can we prevent it from happening?
> > 
> > Thanks,
> > Libor
> > 
> > On Tuesday, 31 January 2017 at 9:03:02 CET, Libor Klepáč wrote:
> >>
> >> Hello,
> >> sorry for the late reply. It hadn't crashed since then, and I forgot about it and moved on to other tasks.
> >>
> >> Yesterday it crashed on one of the machines (running 4.8.11)
> >> -------------------------
> >> Jan 30 07:18:13 vps2 kernel: [5881831.379547] XFS (dm-2): Metadata corruption detected at xfs_attr3_leaf_read_verify+0x5a/0x100 [xfs], xfs_attr3_leaf block 0x12f63f40
> >> Jan 30 07:18:13 vps2 kernel: [5881831.381721] XFS (dm-2): Unmount and run xfs_repair
> >> Jan 30 07:18:13 vps2 kernel: [5881831.382750] XFS (dm-2): First 64 bytes of corrupted metadata buffer:
> >> Jan 30 07:18:13 vps2 kernel: [5881831.387810] XFS (dm-2): metadata I/O error: block 0x12f63f40 ("xfs_trans_read_buf_map") error 117 numblks 8
> >> Jan 30 07:26:02 vps2 kernel: [5882300.524528] XFS (dm-2): Metadata corruption detected at xfs_attr3_leaf_read_verify+0x5a/0x100 [xfs], xfs_attr3_leaf block 0x12645ef8
> >> Jan 30 07:26:02 vps2 kernel: [5882300.525993] XFS (dm-2): Unmount and run xfs_repair
> >> Jan 30 07:26:02 vps2 kernel: [5882300.526539] XFS (dm-2): First 64 bytes of corrupted metadata buffer:
> >> Jan 30 07:26:02 vps2 kernel: [5882300.529224] XFS (dm-2): metadata I/O error: block 0x12645ef8 ("xfs_trans_read_buf_map") error 117 numblks 8
> >> Jan 30 10:00:27 vps2 kernel: [5891564.682483] XFS (dm-2): Metadata corruption detected at xfs_attr3_leaf_write_verify+0xe8/0x100 [xfs], xfs_attr3_leaf block 0x127b5578
> >> Jan 30 10:00:27 vps2 kernel: [5891564.683962] XFS (dm-2): Unmount and run xfs_repair
> >> Jan 30 10:00:27 vps2 kernel: [5891564.684536] XFS (dm-2): First 64 bytes of corrupted metadata buffer:
> >> Jan 30 10:00:27 vps2 kernel: [5891564.687223] XFS (dm-2): xfs_do_force_shutdown(0x8) called from line 1250 of file /build/linux-lVEVrl/linux-4.7.8/fs/xfs/xfs_buf.c.  Return address = 0xffffffffc06747f2
> >> Jan 30 10:00:27 vps2 kernel: [5891564.687230] XFS (dm-2): Corruption of in-memory data detected.  Shutting down filesystem
> >> Jan 30 10:00:27 vps2 kernel: [5891564.687778] XFS (dm-2): Please umount the filesystem and rectify the problem(s)
> >>
> >> and later
> >> Jan 30 21:10:31 vps2 kernel: [39747.917831] XFS (dm-2): Metadata corruption detected at xfs_attr3_leaf_read_verify+0x5a/0x100 [xfs], xfs_attr3_leaf block 0x24c17ba8
> >> Jan 30 21:10:31 vps2 kernel: [39747.918130] XFS (dm-2): metadata I/O error: block 0x24c17ba8 ("xfs_trans_read_buf_map") error 117 numblks 8
> >> -------------------------
> >>
> >> I had the repair scheduled for today, and all of these blocks were repaired using xfsprogs 4.9.0.
> >> The kernel is now 4.8.15.
> >>
> >> -------------------------
> >> Phase 1 - find and verify superblock...
> >> Phase 2 - using internal log
> >>         - zero log...
> >>         - scan filesystem freespace and inode maps...
> >>         - found root inode chunk
> >> Phase 3 - for each AG...
> >>         - scan and clear agi unlinked lists...
> >>         - process known inodes and perform inode discovery...
> >>         - agno = 0
> >>         - agno = 1
> >> Metadata corruption detected at xfs_attr3_leaf block 0x12645ef8/0x1000
> >> bad attribute count 0 in attr block 0, inode 1074268922
> >> problem with attribute contents in inode 1074268922
> >> clearing inode 1074268922 attributes
> >> correcting nblocks for inode 1074268922, was 1 - counted 0
> >> Metadata corruption detected at xfs_attr3_leaf block 0x127b5578/0x1000
> >> bad attribute count 0 in attr block 0, inode 1077334032
> >> problem with attribute contents in inode 1077334032
> >> clearing inode 1077334032 attributes
> >> correcting nblocks for inode 1077334032, was 1 - counted 0
> >> Metadata corruption detected at xfs_attr3_leaf block 0x12f63f40/0x1000
> >> bad attribute count 0 in attr block 0, inode 1093437859
> >> problem with attribute contents in inode 1093437859
> >> clearing inode 1093437859 attributes
> >> correcting nblocks for inode 1093437859, was 1 - counted 0
> >>         - agno = 2
> >> Metadata corruption detected at xfs_attr3_leaf block 0x24c17ba8/0x1000
> >> bad attribute count 0 in attr block 0, inode 2147673775
> >> problem with attribute contents in inode 2147673775
> >> clearing inode 2147673775 attributes
> >> correcting nblocks for inode 2147673775, was 1 - counted 0
> >>         - process newly discovered inodes...
> >> Phase 4 - check for duplicate blocks...
> >>         - setting up duplicate extent list...
> >>         - check for inodes claiming duplicate blocks...
> >>         - agno = 0
> >>         - agno = 1
> >> bad attribute format 1 in inode 1074268922, resetting value
> >> bad attribute format 1 in inode 1077334032, resetting value
> >> bad attribute format 1 in inode 1093437859, resetting value
> >>         - agno = 2
> >> bad attribute format 1 in inode 2147673775, resetting value
> >> Phase 5 - rebuild AG headers and trees...
> >>         - reset superblock...
> >> Phase 6 - check inode connectivity...
> >>         - resetting contents of realtime bitmap and summary inodes
> >>         - traversing filesystem ...
> >>         - traversal finished ...
> >>         - moving disconnected inodes to lost+found ...
> >> Phase 7 - verify and correct link counts...
> >> done
> >> -------------------------
> >>
> >> Thank you very much for the patch, it has done its work.
> >>
> >> Libor
> >>
> >>
> > 
> > 
> 


