* xfs_repair segfaults with ag_stride option
@ 2012-02-01 13:36 Tom Crane
  2012-02-02 12:42 ` Christoph Hellwig
From: Tom Crane @ 2012-02-01 13:36 UTC (permalink / raw)
  To: xfs; +Cc: T.Crane

Dear XFS Support,
    I am attempting to use xfs_repair to fix a damaged FS but always get
a segfault if and only if -o ag_stride is specified. I have tried
ag_stride=2, 8, 16 & 32.  The FS is approx 60T. I can't find reports of
this particular problem in the mailing list archive.  Further details are:

xfs_repair version 3.1.7, recently downloaded via git repository.
uname -a
Linux store3 2.6.18-274.17.1.el5 #1 SMP Wed Jan 11 11:10:32 CET 2012 
x86_64 x86_64 x86_64 GNU/Linux


Running with -P and/or -m 9000 did not help.  The host has 10GB memory.
I built xfs_repair with './configure CFLAGS="-g -O2" && make'.  Here is
the log from a gdb session.  Is there any other information or tests that
I can supply?

Please help.
Many thanks
Tom Crane


> [root@store3 tcrane]# gdb xfsprogs/repair/xfs_repair
> GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-37.el5_7.1)
> Copyright (C) 2009 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later 
> <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "x86_64-redhat-linux-gnu".
> For bug reporting instructions, please see:
> <http://www.gnu.org/software/gdb/bugs/>...
> Reading symbols from /data/tcrane/xfsprogs/repair/xfs_repair...done.
> (gdb) set arg -n -m 9000 -o ag_stride=2 /dev/mapper/vg0-lvol0
> (gdb) ru
> Starting program: /data/tcrane/xfsprogs/repair/xfs_repair -n -m 9000 
> -o ag_stride=2 /dev/mapper/vg0-lvol0
> warning: no loadable sections found in added symbol-file 
> system-supplied DSO at 0x2aaaaaaab000
> [Thread debugging using libthread_db enabled]
> Phase 1 - find and verify superblock...
> [New Thread 0x40a00940 (LWP 12803)]
>         - reporting progress in intervals of 15 minutes
> Phase 2 - using internal log
>         - scan filesystem freespace and inode maps...
> [New Thread 0x41401940 (LWP 12804)]
> [New Thread 0x41e02940 (LWP 12805)]
> [New Thread 0x42803940 (LWP 12806)]
> [New Thread 0x43204940 (LWP 12807)]
> [New Thread 0x43c05940 (LWP 12808)]
> [New Thread 0x44606940 (LWP 12809)]
> [New Thread 0x45007940 (LWP 12810)]
> [New Thread 0x45a08940 (LWP 12811)]
> [New Thread 0x46409940 (LWP 12812)]
> [New Thread 0x46e0a940 (LWP 12813)]
> [New Thread 0x4780b940 (LWP 12814)]
> [New Thread 0x4820c940 (LWP 12815)]
> [New Thread 0x48c0d940 (LWP 12816)]
> [New Thread 0x4960e940 (LWP 12817)]
> [New Thread 0x4a00f940 (LWP 12818)]
> [New Thread 0x4aa10940 (LWP 12819)]
> [New Thread 0x4b411940 (LWP 12820)]
> [New Thread 0x4be12940 (LWP 12821)]
> [New Thread 0x4c813940 (LWP 12822)]
> [New Thread 0x4d214940 (LWP 12823)]
> [New Thread 0x4dc15940 (LWP 12824)]
> [New Thread 0x4e616940 (LWP 12825)]
> [New Thread 0x4f017940 (LWP 12826)]
> [New Thread 0x4fa18940 (LWP 12827)]
> [New Thread 0x50419940 (LWP 12828)]
> [New Thread 0x50e1a940 (LWP 12829)]
> [New Thread 0x5181b940 (LWP 12830)]
> [New Thread 0x5221c940 (LWP 12831)]
> [New Thread 0x52c1d940 (LWP 12832)]
> [New Thread 0x5361e940 (LWP 12833)]
> [New Thread 0x5401f940 (LWP 12834)]
> [New Thread 0x54a20940 (LWP 12835)]
> [Thread 0x4820c940 (LWP 12815) exited]
> [Thread 0x4f017940 (LWP 12826) exited]
> [Thread 0x5401f940 (LWP 12834) exited]
> [Thread 0x54a20940 (LWP 12835) exited]
> [Thread 0x48c0d940 (LWP 12816) exited]
> [Thread 0x46409940 (LWP 12812) exited]
> [Thread 0x4780b940 (LWP 12814) exited]
> [Thread 0x46e0a940 (LWP 12813) exited]
> [Thread 0x44606940 (LWP 12809) exited]
> [Thread 0x5361e940 (LWP 12833) exited]
> [Thread 0x50e1a940 (LWP 12829) exited]
> [Thread 0x45a08940 (LWP 12811) exited]
> [Thread 0x52c1d940 (LWP 12832) exited]
> [Thread 0x4c813940 (LWP 12822) exited]
> [Thread 0x41401940 (LWP 12804) exited]
> [Thread 0x5221c940 (LWP 12831) exited]
> [Thread 0x4fa18940 (LWP 12827) exited]
> [Thread 0x4be12940 (LWP 12821) exited]
> [Thread 0x4a00f940 (LWP 12818) exited]
> [Thread 0x43204940 (LWP 12807) exited]
> [Thread 0x5181b940 (LWP 12830) exited]
> [Thread 0x4b411940 (LWP 12820) exited]
> [Thread 0x4e616940 (LWP 12825) exited]
> [Thread 0x41e02940 (LWP 12805) exited]
> [Thread 0x4dc15940 (LWP 12824) exited]
> [Thread 0x50419940 (LWP 12828) exited]
> [Thread 0x42803940 (LWP 12806) exited]
> [Thread 0x4d214940 (LWP 12823) exited]
> [Thread 0x4aa10940 (LWP 12819) exited]
> [Thread 0x43c05940 (LWP 12808) exited]
> [Thread 0x45007940 (LWP 12810) exited]
> [Thread 0x4960e940 (LWP 12817) exited]
>         - 12:58:56: scanning filesystem freespace - 59 of 59 
> allocation groups done
>         - found root inode chunk
> Phase 3 - for each AG...
>         - scan (but don't clear) agi unlinked lists...
>         - 12:58:56: scanning agi unlinked lists - 59 of 59 allocation 
> groups done
>         - process known inodes and perform inode discovery...
> [New Thread 0x54a20940 (LWP 12837)]
> [New Thread 0x5401f940 (LWP 12838)]
> [New Thread 0x41401940 (LWP 12840)]
> [New Thread 0x5361e940 (LWP 12839)]
> [New Thread 0x41e02940 (LWP 12841)]
> [New Thread 0x42803940 (LWP 12842)]
> [New Thread 0x43c05940 (LWP 12844)]
> [New Thread 0x43204940 (LWP 12843)]
> [New Thread 0x44606940 (LWP 12845)]
> [New Thread 0x46409940 (LWP 12849)]
> [New Thread 0x46e0a940 (LWP 12850)]
> [New Thread 0x45a08940 (LWP 12848)]
> [New Thread 0x45007940 (LWP 12847)]
> [New Thread 0x4780b940 (LWP 12851)]
> [New Thread 0x48c0d940 (LWP 12853)]
> [New Thread 0x4820c940 (LWP 12852)]
> [New Thread 0x4a00f940 (LWP 12856)]
> [New Thread 0x4960e940 (LWP 12855)]
>         - agno = 0
> [New Thread 0x4aa10940 (LWP 12858)]
> [New Thread 0x4b411940 (LWP 12857)]
> [New Thread 0x4be12940 (LWP 12859)]
> [New Thread 0x4c813940 (LWP 12861)]
> [New Thread 0x4d214940 (LWP 12860)]
> [New Thread 0x4dc15940 (LWP 12862)]
>         - agno = 4
> [New Thread 0x4f017940 (LWP 12864)]
> [New Thread 0x4e616940 (LWP 12863)]
>         - agno = 2
> [Thread 0x4be12940 (LWP 12859) exited]
> [New Thread 0x4fa18940 (LWP 12866)]
> [Thread 0x43c05940 (LWP 12844) exited]
> [New Thread 0x50419940 (LWP 12867)]
> [Thread 0x46409940 (LWP 12849) exited]
> [Thread 0x4820c940 (LWP 12852) exited]
> [New Thread 0x43c05940 (LWP 12869)]
> [New Thread 0x46409940 (LWP 12868)]
> [Thread 0x4c813940 (LWP 12861) exited]
> [Thread 0x4a00f940 (LWP 12856) exited]
> [New Thread 0x5181b940 (LWP 12871)]
> [New Thread 0x50e1a940 (LWP 12870)]
> [New Thread 0x52c1d940 (LWP 12873)]
> [Thread 0x46e0a940 (LWP 12850) exited]
> [New Thread 0x5221c940 (LWP 12872)]
>         - agno = 6
> [Thread 0x4dc15940 (LWP 12862) exited]
> [Thread 0x5221c940 (LWP 12872) exited]
> [New Thread 0x55421940 (LWP 12875)]
> [Thread 0x4b411940 (LWP 12857) exited]
> [Thread 0x50419940 (LWP 12867) exited]
> [New Thread 0x4be12940 (LWP 12876)]
> [Thread 0x4960e940 (LWP 12855) exited]
> [New Thread 0x50419940 (LWP 12878)]
> [New Thread 0x5221c940 (LWP 12877)]
> [New Thread 0x4c813940 (LWP 12879)]
>         - agno = 10
> [Thread 0x50419940 (LWP 12878) exited]
> [New Thread 0x55e22940 (LWP 12883)]
> [New Thread 0x4dc15940 (LWP 12882)]
> [Thread 0x42803940 (LWP 12842) exited]
> [New Thread 0x56823940 (LWP 12884)]
> [New Thread 0x4960e940 (LWP 12881)]
> [Thread 0x52c1d940 (LWP 12873) exited]
> [Thread 0x43c05940 (LWP 12869) exited]
> [New Thread 0x42803940 (LWP 12886)]
> [New Thread 0x43c05940 (LWP 12885)]
> [New Thread 0x57224940 (LWP 12887)]
>         - agno = 8
> [Thread 0x50e1a940 (LWP 12870) exited]
> [Thread 0x44606940 (LWP 12845) exited]
> [New Thread 0x50419940 (LWP 12888)]
> [New Thread 0x52c1d940 (LWP 12889)]
> [Thread 0x55e22940 (LWP 12883) exited]
> [New Thread 0x50e1a940 (LWP 12891)]
> [New Thread 0x44606940 (LWP 12890)]
> [Thread 0x5221c940 (LWP 12877) exited]
> [Thread 0x4f017940 (LWP 12864) exited]
> [New Thread 0x57c25940 (LWP 12893)]
>         - agno = 3
> [Thread 0x50e1a940 (LWP 12891) exited]
> [Thread 0x4fa18940 (LWP 12866) exited]
> [New Thread 0x4f017940 (LWP 12894)]
> [New Thread 0x58626940 (LWP 12895)]
> [Thread 0x52c1d940 (LWP 12889) exited]
> [New Thread 0x4fa18940 (LWP 12896)]
> [New Thread 0x59027940 (LWP 12897)]
> [New Thread 0x59a28940 (LWP 12898)]
> [Thread 0x4dc15940 (LWP 12882) exited]
> [New Thread 0x5a429940 (LWP 12899)]
> [New Thread 0x4dc15940 (LWP 12900)]
> [Thread 0x48c0d940 (LWP 12853) exited]
> [New Thread 0x5ae2a940 (LWP 12901)]
>         - agno = 5
> [Thread 0x55421940 (LWP 12875) exited]
> [New Thread 0x55e22940 (LWP 12902)]
> [New Thread 0x5221c940 (LWP 12903)]
> [Thread 0x5ae2a940 (LWP 12901) exited]
> [Thread 0x59a28940 (LWP 12898) exited]
> [Thread 0x43c05940 (LWP 12885) exited]
> [Thread 0x4f017940 (LWP 12894) exited]
> [New Thread 0x5ae2a940 (LWP 12904)]
> [Thread 0x56823940 (LWP 12884) exited]
>         - agno = 11
>         - agno = 12
> [Thread 0x4c813940 (LWP 12879) exited]
> [New Thread 0x59a28940 (LWP 12932)]
> [Thread 0x5a429940 (LWP 12899) exited]
> [Thread 0x5221c940 (LWP 12903) exited]
> [New Thread 0x4c813940 (LWP 12933)]
> [Thread 0x5ae2a940 (LWP 12904) exited]
> [Thread 0x59027940 (LWP 12897) exited]
> [Thread 0x4fa18940 (LWP 12896) exited]
> [Thread 0x58626940 (LWP 12895) exited]
> [New Thread 0x50e1a940 (LWP 12952)]
> [Thread 0x50419940 (LWP 12888) exited]
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x54a20940 (LWP 12837)]
> 0x000000380ac7b29f in memset () from /lib64/libc.so.6
> (gdb) bt
> #0  0x000000380ac7b29f in memset () from /lib64/libc.so.6
> #1  0x0000000000403998 in process_leaf_attr_block (mp=0x7fffffffe560, 
> leaf=0x2aab2bfb4400, da_bno=0, ino=1718, blkmap=0x2aab280787d0, 
> last_hashval=0,
>     current_hashval=0x54a1fd44, repair=0x54a1fdc4) at attr_repair.c:522
> #2  0x000000000040494b in process_longform_attr (mp=0x7fffffffe560, 
> ino=1718, dip=0x18e4e00, blkmap=0x2aab280787d0, repair=0x54a1fdc4)
>     at attr_repair.c:900
> #3  0x000000000040d971 in process_inode_attr_fork (mp=0x7fffffffe560, 
> agno=0, ino=1718, dino=0x18e4e00, type=5, dirty=0x54a1ffe0, 
> atotblocks=0x54a1fe70,
>     anextents=0x54a1fe60, check_dups=0, extra_attr_check=1, 
> retval=0x54a1fe80) at dinode.c:2301
> #4  0x000000000040f368 in process_dinode_int (mp=0x7fffffffe560, 
> dino=0x18e4e00, agno=0, ino=1718, was_free=0, dirty=0x54a1ffe0, 
> used=0x54a1ffe4,
>     verify_mode=0, uncertain=0, ino_discovery=1, check_dups=0, 
> extra_attr_check=1, isa_dir=0x54a1ffdc, parent=0x54a1ffd0) at 
> dinode.c:2764
> #5  0x000000000040fd0e in process_dinode (mp=0x0, dino=0x0, agno=0, 
> ino=1024, was_free=8192, dirty=0x2000, used=0x54a1ffe4, ino_discovery=1,
>     check_dups=0, extra_attr_check=1, isa_dir=0x54a1ffdc, 
> parent=0x54a1ffd0) at dinode.c:2898
> #6  0x0000000000409361 in process_inode_chunk (mp=0x7fffffffe560, 
> agno=0, num_inos=<value optimized out>, first_irec=0x2aab283c9ef0, 
> ino_discovery=1,
>     check_dups=0, extra_attr_check=1, bogus=0x54a20064) at 
> dino_chunks.c:779
> #7  0x0000000000409a6c in process_aginodes (mp=0x7fffffffe560, 
> pf_args=0x68ea10, agno=0, ino_discovery=1, check_dups=0, 
> extra_attr_check=1)
>     at dino_chunks.c:1018
> #8  0x000000000041c8df in process_ag_func (wq=0x68fb50, agno=0, 
> arg=0x68ea10) at phase3.c:154
> #9  0x000000000042f86d in worker_thread (arg=<value optimized out>) at 
> threads.c:46
> #10 0x000000380b40673d in start_thread () from /lib64/libpthread.so.0
> #11 0x000000380acd44bd in clone () from /lib64/libc.so.6
> (gdb) list
> 522             * doesn't get flushed out if no_modify is set
> 523             */
> 524            mp->m_sb.sb_rsumino = first_prealloc_ino + 2;
> 525        }
> 526   
> 527    }
> 528   
> 529    int
> 530    main(int argc, char **argv)
> 531    {
> (gdb) q


* Re: xfs_repair segfaults with ag_stride option
  2012-02-01 13:36 xfs_repair segfaults with ag_stride option Tom Crane
@ 2012-02-02 12:42 ` Christoph Hellwig
  2012-02-06  0:50   ` Tom Crane
From: Christoph Hellwig @ 2012-02-02 12:42 UTC (permalink / raw)
  To: Tom Crane; +Cc: xfs

[-- Attachment #1: Type: text/plain, Size: 644 bytes --]

Hi Tom,

On Wed, Feb 01, 2012 at 01:36:12PM +0000, Tom Crane wrote:
> Dear XFS Support,
>    I am attempting to use xfs_repair to fix a damaged FS but always
> get a segfault if and only if -o ag_stride is specified. I have
> tried ag_stride=2,8,16 & 32.  The FS is approx 60T. I can't find
> reports of this particular problem on the mailing list archive.
> Further details are;
> 
> xfs_repair version 3.1.7, recently downloaded via git repository.
> uname -a
> Linux store3 2.6.18-274.17.1.el5 #1 SMP Wed Jan 11 11:10:32 CET 2012
> x86_64 x86_64 x86_64 GNU/Linux

Thanks for the detailed bug report.

Can you please try the attached patch?


[-- Attachment #2: repair-fix-dirbuf --]
[-- Type: text/plain, Size: 10065 bytes --]

From: Christoph Hellwig <hch@lst.de>
Subject: repair: fix incorrect use of thread local data in dir and attr code

The attribute and dirv1 code use pthread thread-local data incorrectly in
a few places, which will make them fail in horrible ways when using the
ag_stride option.

Replace the use of thread-local data with simple local allocations, given
that there is no need to micro-optimize these allocations as much as
e.g. the extent map.  The added benefit is that we have to allocate less
memory, and can free it quickly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
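
A minimal standalone sketch of the allocation pattern the patch switches
to (not part of the patch itself): each caller allocates its own freemap,
sized at one bit per byte of the filesystem block, and frees it once the
leaf block has been processed, instead of reusing a fixed-size per-thread
buffer fetched through pthread_getspecific().  da_freemap_t, NBBY and
alloc_da_freemap() mirror the patch, simplified here to take a plain block
size rather than the xfs_mount structure; process_leaf_block() and the
4096-byte block size are illustrative only.

#include <stdlib.h>

#define NBBY	8			/* bits per byte, as in the patch */

typedef unsigned char	da_freemap_t;	/* 1 == used, 0 == free */

/* freespace map for one directory/attr leaf block, one bit per byte */
static da_freemap_t *
alloc_da_freemap(size_t blocksize)
{
	return calloc(1, blocksize / NBBY);
}

static int
process_leaf_block(size_t blocksize)
{
	da_freemap_t	*freemap = alloc_da_freemap(blocksize);

	if (!freemap)
		return 1;
	/* ... mark used byte ranges and check for overlapping entries ... */
	free(freemap);
	return 0;
}

int
main(void)
{
	return process_leaf_block(4096);	/* e.g. a 4k block */
}

Because each buffer lives only for the duration of one call, it no longer
matters which worker thread runs it or whether that thread ever had
thread-specific storage set up, which is the class of failure the commit
message describes under -o ag_stride.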

Index: xfsprogs-dev/repair/attr_repair.c
===================================================================
--- xfsprogs-dev.orig/repair/attr_repair.c	2012-02-02 09:25:50.000000000 +0000
+++ xfsprogs-dev/repair/attr_repair.c	2012-02-02 11:14:06.000000000 +0000
@@ -363,12 +363,6 @@ rmtval_get(xfs_mount_t *mp, xfs_ino_t in
 	return (clearit);
 }
 
-/*
- * freespace map for directory and attribute leaf blocks (1 bit per byte)
- * 1 == used, 0 == free
- */
-size_t ts_attr_freemap_size = sizeof(da_freemap_t) * DA_BMAP_SIZE;
-
 /* The block is read in. The magic number and forward / backward
  * links are checked by the caller process_leaf_attr.
  * If any problems occur the routine returns with non-zero. In
@@ -503,7 +497,7 @@ process_leaf_attr_block(
 {
 	xfs_attr_leaf_entry_t *entry;
 	int  i, start, stop, clearit, usedbs, firstb, thissize;
-	da_freemap_t *attr_freemap = ts_attr_freemap();
+	da_freemap_t *attr_freemap;
 
 	clearit = usedbs = 0;
 	*repair = 0;
@@ -519,7 +513,7 @@ process_leaf_attr_block(
 		return (1);
 	}
 
-	init_da_freemap(attr_freemap);
+	attr_freemap = alloc_da_freemap(mp);
 	(void) set_da_freemap(mp, attr_freemap, 0, stop);
 
 	/* go thru each entry checking for problems */
@@ -636,6 +630,8 @@ process_leaf_attr_block(
 		* we can add it then.
 		*/
 	}
+
+	free(attr_freemap);
 	return (clearit);  /* and repair */
 }
 
Index: xfsprogs-dev/repair/dir.c
===================================================================
--- xfsprogs-dev.orig/repair/dir.c	2012-02-02 09:25:50.000000000 +0000
+++ xfsprogs-dev/repair/dir.c	2012-02-02 11:17:20.000000000 +0000
@@ -495,23 +495,19 @@ process_shortform_dir(
 }
 
 /*
- * freespace map for directory leaf blocks (1 bit per byte)
- * 1 == used, 0 == free
+ * Allocate a freespace map for directory or attr leaf blocks (1 bit per byte)
+ * 1 == used, 0 == free.
  */
-size_t ts_dir_freemap_size = sizeof(da_freemap_t) * DA_BMAP_SIZE;
-
-void
-init_da_freemap(da_freemap_t *dir_freemap)
+da_freemap_t *
+alloc_da_freemap(struct xfs_mount *mp)
 {
-	memset(dir_freemap, 0, sizeof(da_freemap_t) * DA_BMAP_SIZE);
+	return calloc(1, mp->m_sb.sb_blocksize / NBBY);
 }
 
 /*
- * sets directory freemap, returns 1 if there is a conflict
- * returns 0 if everything's good.  the range [start, stop) is set.
- * right now, we just use the static array since only one directory
- * block will be processed at once even though the interface allows
- * you to pass in arbitrary da_freemap_t array's.
+ * Set the range [start, stop) in the directory freemap.
+ *
+ * Returns 1 if there is a conflict or 0 if everything's good.
  *
  * Within a char, the lowest bit of the char represents the byte with
  * the smallest address
@@ -728,28 +724,6 @@ _("- derived hole (base %d, size %d) in
 	return(res);
 }
 
-#if 0
-void
-test(xfs_mount_t *mp)
-{
-	int i = 0;
-	da_hole_map_t	holemap;
-
-	init_da_freemap(dir_freemap);
-	memset(&holemap, 0, sizeof(da_hole_map_t));
-
-	set_da_freemap(mp, dir_freemap, 0, 50);
-	set_da_freemap(mp, dir_freemap, 100, 126);
-	set_da_freemap(mp, dir_freemap, 126, 129);
-	set_da_freemap(mp, dir_freemap, 130, 131);
-	set_da_freemap(mp, dir_freemap, 150, 160);
-	process_da_freemap(mp, dir_freemap, &holemap);
-
-	return;
-}
-#endif
-
-
 /*
  * walk tree from root to the left-most leaf block reading in
  * blocks and setting up cursor.  passes back file block number of the
@@ -1366,8 +1340,6 @@ verify_da_path(xfs_mount_t	*mp,
 	return(0);
 }
 
-size_t ts_dirbuf_size = 64*1024;
-
 /*
  * called by both node dir and leaf dir processing routines
  * validates all contents *but* the sibling pointers (forw/back)
@@ -1441,7 +1413,7 @@ process_leaf_dir_block(
 	char				fname[MAXNAMELEN + 1];
 	da_hole_map_t			holemap;
 	da_hole_map_t			bholemap;
-	unsigned char			*dir_freemap = ts_dir_freemap();
+	da_freemap_t			*dir_freemap;
 
 #ifdef XR_DIR_TRACE
 	fprintf(stderr, "\tprocess_leaf_dir_block - ino %" PRIu64 "\n", ino);
@@ -1450,7 +1422,7 @@ process_leaf_dir_block(
 	/*
 	 * clear static dir block freespace bitmap
 	 */
-	init_da_freemap(dir_freemap);
+	dir_freemap = alloc_da_freemap(mp);
 
 	*buf_dirty = 0;
 	first_used = mp->m_sb.sb_blocksize;
@@ -1462,7 +1434,8 @@ process_leaf_dir_block(
 		do_warn(
 _("directory block header conflicts with used space in directory inode %" PRIu64 "\n"),
 			ino);
-		return(1);
+		res = 1;
+		goto out;
 	}
 
 	/*
@@ -1778,8 +1751,8 @@ _("entry references free inode %" PRIu64
 			do_warn(
 _("bad size, entry #%d in dir inode %" PRIu64 ", block %u -- entry overflows block\n"),
 				i, ino, da_bno);
-
-			return(1);
+			res = 1;
+			goto out;
 		}
 
 		start = (__psint_t)&leaf->entries[i] - (__psint_t)leaf;;
@@ -1789,7 +1762,8 @@ _("bad size, entry #%d in dir inode %" P
 			do_warn(
 _("dir entry slot %d in block %u conflicts with used space in dir inode %" PRIu64 "\n"),
 				i, da_bno, ino);
-			return(1);
+			res = 1;
+			goto out;
 		}
 
 		/*
@@ -2183,7 +2157,7 @@ _("- existing hole info for block %d, di
 			_("- compacting block %u in dir inode %" PRIu64 "\n"),
 					da_bno, ino);
 
-			new_leaf = (xfs_dir_leafblock_t *) ts_dirbuf();
+			new_leaf = malloc(mp->m_sb.sb_blocksize);
 
 			/*
 			 * copy leaf block header
@@ -2223,6 +2197,7 @@ _("- existing hole info for block %d, di
 					do_warn(
 	_("not enough space in block %u of dir inode %" PRIu64 " for all entries\n"),
 						da_bno, ino);
+					free(new_leaf);
 					break;
 				}
 
@@ -2284,6 +2259,7 @@ _("- existing hole info for block %d, di
 			 * final step, copy block back
 			 */
 			memmove(leaf, new_leaf, mp->m_sb.sb_blocksize);
+			free(new_leaf);
 
 			*buf_dirty = 1;
 		} else  {
@@ -2302,10 +2278,13 @@ _("- existing hole info for block %d, di
 		junk_zerolen_dir_leaf_entries(mp, leaf, ino, buf_dirty);
 	}
 #endif
+
+out:
+	free(dir_freemap);
 #ifdef XR_DIR_TRACE
 	fprintf(stderr, "process_leaf_dir_block returns %d\n", res);
 #endif
-	return((res > 0) ? 1 : 0);
+	return res > 0 ? 1 : 0;
 }
 
 /*
Index: xfsprogs-dev/repair/dir.h
===================================================================
--- xfsprogs-dev.orig/repair/dir.h	2012-02-02 09:28:58.000000000 +0000
+++ xfsprogs-dev/repair/dir.h	2012-02-02 11:09:41.000000000 +0000
@@ -21,9 +21,6 @@
 
 struct blkmap;
 
-/* 1 bit per byte, max XFS blocksize == 64K bits / NBBY */
-#define DA_BMAP_SIZE		8192
-
 typedef unsigned char	da_freemap_t;
 
 /*
@@ -81,9 +78,9 @@ get_first_dblock_fsbno(
 	xfs_ino_t	ino,
 	xfs_dinode_t	*dino);
 
-void
-init_da_freemap(
-	da_freemap_t *dir_freemap);
+da_freemap_t *
+alloc_da_freemap(
+	xfs_mount_t	*mp);
 
 int
 namecheck(
Index: xfsprogs-dev/repair/globals.h
===================================================================
--- xfsprogs-dev.orig/repair/globals.h	2012-02-02 09:33:29.000000000 +0000
+++ xfsprogs-dev/repair/globals.h	2012-02-02 09:34:49.000000000 +0000
@@ -185,10 +185,6 @@ EXTERN xfs_extlen_t	sb_inoalignmt;
 EXTERN __uint32_t	sb_unit;
 EXTERN __uint32_t	sb_width;
 
-extern size_t 		ts_dirbuf_size;
-extern size_t 		ts_dir_freemap_size;
-extern size_t 		ts_attr_freemap_size;
-
 EXTERN pthread_mutex_t	*ag_locks;
 
 EXTERN int 		report_interval;
Index: xfsprogs-dev/repair/init.c
===================================================================
--- xfsprogs-dev.orig/repair/init.c	2012-02-02 09:25:50.000000000 +0000
+++ xfsprogs-dev/repair/init.c	2012-02-02 09:37:02.000000000 +0000
@@ -29,67 +29,16 @@
 #include "prefetch.h"
 #include <sys/resource.h>
 
-/* TODO: dirbuf/freemap key usage is completely b0rked - only used for dirv1 */
-static pthread_key_t dirbuf_key;
-static pthread_key_t dir_freemap_key;
-static pthread_key_t attr_freemap_key;
-
 extern pthread_key_t dblkmap_key;
 extern pthread_key_t ablkmap_key;
 
 static void
-ts_alloc(pthread_key_t key, unsigned n, size_t size)
-{
-	void *voidp;
-	voidp = calloc(n, size);
-	if (voidp == NULL) {
-		do_error(_("ts_alloc: cannot allocate thread specific storage\n"));
-		/* NO RETURN */
-		return;
-	}
-	pthread_setspecific(key,  voidp);
-}
-
-static void
 ts_create(void)
 {
-	/* create thread specific keys */
-	pthread_key_create(&dirbuf_key, NULL);
-	pthread_key_create(&dir_freemap_key, NULL);
-	pthread_key_create(&attr_freemap_key, NULL);
-
 	pthread_key_create(&dblkmap_key, NULL);
 	pthread_key_create(&ablkmap_key, NULL);
 }
 
-void
-ts_init(void)
-{
-
-	/* allocate thread specific storage */
-	ts_alloc(dirbuf_key, 1, ts_dirbuf_size);
-	ts_alloc(dir_freemap_key, 1, ts_dir_freemap_size);
-	ts_alloc(attr_freemap_key, 1, ts_attr_freemap_size);
-}
-
-void *
-ts_dirbuf(void)
-{
-	return pthread_getspecific(dirbuf_key);
-}
-
-void *
-ts_dir_freemap(void)
-{
-	return pthread_getspecific(dir_freemap_key);
-}
-
-void *
-ts_attr_freemap(void)
-{
-	return pthread_getspecific(attr_freemap_key);
-}
-
 static void
 increase_rlimit(void)
 {
@@ -156,7 +105,6 @@ xfs_init(libxfs_init_t *args)
 		do_error(_("couldn't initialize XFS library\n"));
 
 	ts_create();
-	ts_init();
 	increase_rlimit();
 	pftrace_init();
 }
Index: xfsprogs-dev/repair/protos.h
===================================================================
--- xfsprogs-dev.orig/repair/protos.h	2012-02-02 09:33:29.000000000 +0000
+++ xfsprogs-dev/repair/protos.h	2012-02-02 09:36:42.000000000 +0000
@@ -41,9 +41,5 @@ char	*alloc_ag_buf(int size);
 void	print_inode_list(xfs_agnumber_t i);
 char *	err_string(int err_code);
 
-extern void *ts_attr_freemap(void);
-extern void *ts_dir_freemap(void);
-extern void *ts_dirbuf(void);
-extern void ts_init(void);
 extern void thread_init(void);
 


* Re: xfs_repair segfaults with ag_stride option
  2012-02-02 12:42 ` Christoph Hellwig
@ 2012-02-06  0:50   ` Tom Crane
  2012-02-06  5:58     ` Eric Sandeen
  2012-02-06 14:04     ` Christoph Hellwig
From: Tom Crane @ 2012-02-06  0:50 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Crane T, xfs

[-- Attachment #1: Type: text/plain, Size: 1589 bytes --]

Hi Christoph,
    Many thanks for the quick response and the patch.  It was a big 
help.  I was able to repair our 60TB FS in about 30 hours. I have a 
couple of questions:

(1) The steps in the progress report seem a little strange.  See the 
attachment. Is this expected?

(2) This may be a little out of band, but I have heard second-hand
reports from another sysadmin that the xfs tools which come with SLC5
(our current Linux distro) should not be relied upon and that SLC6
should be used.  Our 60TB FS is significantly fragmented (~40%) and I
would very much like to run xfs_fsr on it.  Given that I have built the
latest xfsprogs, is there any reason I should be afraid of running
xfs_fsr on this FS under SLC5?  Unfortunately I don't have ~60TB of
spare storage space elsewhere to back up the FS before defragging.
What would you advise?

Many thanks
Tom.

Christoph Hellwig wrote:
> Hi Tom,
>
> On Wed, Feb 01, 2012 at 01:36:12PM +0000, Tom Crane wrote:
>   
>> Dear XFS Support,
>>    I am attempting to use xfs_repair to fix a damaged FS but always
>> get a segfault if and only if -o ag_stride is specified. I have
>> tried ag_stride=2,8,16 & 32.  The FS is approx 60T. I can't find
>> reports of this particular problem on the mailing list archive.
>> Further details are;
>>
>> xfs_repair version 3.1.7, recently downloaded via git repository.
>> uname -a
>> Linux store3 2.6.18-274.17.1.el5 #1 SMP Wed Jan 11 11:10:32 CET 2012
>> x86_64 x86_64 x86_64 GNU/Linux
>>     
>
> Thanks for the detailed bug report.
>
> Can you please try the attached patch?
>
>   

[-- Attachment #2: xfs_repair4.tmp --]
[-- Type: text/plain, Size: 37141 bytes --]

Thu Feb  2 16:46:23 GMT 2012: Starting xfs_repair job with patched xfs_repair
./xfs_repair -V
xfs_repair version 3.1.7
./xfs_repair -m 9000 -o ag_stride=32 /dev/mapper/vg0-lvol0
Phase 1 - find and verify superblock...
        - reporting progress in intervals of 15 minutes
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - 16:46:41: scanning filesystem freespace - 59 of 59 allocation groups done
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - 16:46:41: scanning agi unlinked lists - 59 of 59 allocation groups done
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 32
        - agno = 33
        - agno = 34
        - agno = 35
        - agno = 36
        - agno = 37
        - agno = 38
        - agno = 39
        - agno = 40
        - agno = 41
        - agno = 42
        - agno = 43
        - agno = 44
        - agno = 45
        - agno = 46
        - agno = 47
        - agno = 48
        - agno = 49
        - agno = 50
        - agno = 51
        - agno = 52
        - agno = 53
        - agno = 54
        - agno = 55
        - agno = 56
        - agno = 57
        - agno = 58
        - 17:01:24: process known inodes and inode discovery - 2048512 of 19546240 inodes done
	- 17:01:24: Phase 3: elapsed time 14 minutes, 43 seconds - processed 139196 inodes per minute
	- 17:01:24: Phase 3: 10% done - estimated remaining time 2 hours, 5 minutes, 42 seconds
        - 17:16:24: process known inodes and inode discovery - 2060224 of 19546240 inodes done
	- 17:16:24: Phase 3: elapsed time 29 minutes, 43 seconds - processed 69328 inodes per minute
	- 17:16:24: Phase 3: 10% done - estimated remaining time 4 hours, 12 minutes, 13 seconds
        - 17:31:24: process known inodes and inode discovery - 2060224 of 19546240 inodes done
	- 17:31:24: Phase 3: elapsed time 44 minutes, 43 seconds - processed 46072 inodes per minute
	- 17:31:24: Phase 3: 10% done - estimated remaining time 6 hours, 19 minutes, 31 seconds
        - 17:46:24: process known inodes and inode discovery - 2074112 of 19546240 inodes done
	- 17:46:24: Phase 3: elapsed time 59 minutes, 43 seconds - processed 34732 inodes per minute
	- 17:46:24: Phase 3: 10% done - estimated remaining time 8 hours, 23 minutes, 2 seconds
        - 18:01:24: process known inodes and inode discovery - 2074368 of 19546240 inodes done
	- 18:01:24: Phase 3: elapsed time 1 hour, 14 minutes, 43 seconds - processed 27763 inodes per minute
	- 18:01:24: Phase 3: 10% done - estimated remaining time 10 hours, 29 minutes, 19 seconds
        - 18:16:24: process known inodes and inode discovery - 2295360 of 19546240 inodes done
	- 18:16:24: Phase 3: elapsed time 1 hour, 29 minutes, 43 seconds - processed 25584 inodes per minute
	- 18:16:24: Phase 3: 11% done - estimated remaining time 11 hours, 14 minutes, 16 seconds
        - 18:31:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 18:31:24: Phase 3: elapsed time 1 hour, 44 minutes, 43 seconds - processed 41082 inodes per minute
	- 18:31:24: Phase 3: 22% done - estimated remaining time 6 hours, 11 minutes, 3 seconds
        - 18:46:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 18:46:24: Phase 3: elapsed time 1 hour, 59 minutes, 43 seconds - processed 35934 inodes per minute
	- 18:46:24: Phase 3: 22% done - estimated remaining time 7 hours, 4 minutes, 13 seconds
        - 19:01:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 19:01:24: Phase 3: elapsed time 2 hours, 14 minutes, 43 seconds - processed 31933 inodes per minute
	- 19:01:24: Phase 3: 22% done - estimated remaining time 7 hours, 57 minutes, 22 seconds
        - 19:16:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 19:16:24: Phase 3: elapsed time 2 hours, 29 minutes, 43 seconds - processed 28734 inodes per minute
	- 19:16:24: Phase 3: 22% done - estimated remaining time 8 hours, 50 minutes, 31 seconds
        - 19:31:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 19:31:24: Phase 3: elapsed time 2 hours, 44 minutes, 43 seconds - processed 26117 inodes per minute
	- 19:31:24: Phase 3: 22% done - estimated remaining time 9 hours, 43 minutes, 40 seconds
        - 19:46:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 19:46:24: Phase 3: elapsed time 2 hours, 59 minutes, 43 seconds - processed 23937 inodes per minute
	- 19:46:24: Phase 3: 22% done - estimated remaining time 10 hours, 36 minutes, 49 seconds
        - 20:01:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 20:01:24: Phase 3: elapsed time 3 hours, 14 minutes, 43 seconds - processed 22093 inodes per minute
	- 20:01:24: Phase 3: 22% done - estimated remaining time 11 hours, 29 minutes, 58 seconds
        - 20:16:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 20:16:24: Phase 3: elapsed time 3 hours, 29 minutes, 43 seconds - processed 20513 inodes per minute
	- 20:16:24: Phase 3: 22% done - estimated remaining time 12 hours, 23 minutes, 7 seconds
        - 20:31:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 20:31:24: Phase 3: elapsed time 3 hours, 44 minutes, 43 seconds - processed 19144 inodes per minute
	- 20:31:24: Phase 3: 22% done - estimated remaining time 13 hours, 16 minutes, 17 seconds
        - 20:46:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 20:46:24: Phase 3: elapsed time 3 hours, 59 minutes, 43 seconds - processed 17946 inodes per minute
	- 20:46:24: Phase 3: 22% done - estimated remaining time 14 hours, 9 minutes, 26 seconds
        - 21:01:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 21:01:24: Phase 3: elapsed time 4 hours, 14 minutes, 43 seconds - processed 16889 inodes per minute
	- 21:01:24: Phase 3: 22% done - estimated remaining time 15 hours, 2 minutes, 35 seconds
        - 21:16:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 21:16:24: Phase 3: elapsed time 4 hours, 29 minutes, 43 seconds - processed 15950 inodes per minute
	- 21:16:24: Phase 3: 22% done - estimated remaining time 15 hours, 55 minutes, 44 seconds
        - 21:31:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 21:31:24: Phase 3: elapsed time 4 hours, 44 minutes, 43 seconds - processed 15109 inodes per minute
	- 21:31:24: Phase 3: 22% done - estimated remaining time 16 hours, 48 minutes, 53 seconds
        - 21:46:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 21:46:24: Phase 3: elapsed time 4 hours, 59 minutes, 43 seconds - processed 14353 inodes per minute
	- 21:46:24: Phase 3: 22% done - estimated remaining time 17 hours, 42 minutes, 2 seconds
        - 22:01:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 22:01:24: Phase 3: elapsed time 5 hours, 14 minutes, 43 seconds - processed 13669 inodes per minute
	- 22:01:24: Phase 3: 22% done - estimated remaining time 18 hours, 35 minutes, 12 seconds
        - 22:16:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 22:16:24: Phase 3: elapsed time 5 hours, 29 minutes, 43 seconds - processed 13047 inodes per minute
	- 22:16:24: Phase 3: 22% done - estimated remaining time 19 hours, 28 minutes, 21 seconds
        - 22:31:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 22:31:24: Phase 3: elapsed time 5 hours, 44 minutes, 43 seconds - processed 12479 inodes per minute
	- 22:31:24: Phase 3: 22% done - estimated remaining time 20 hours, 21 minutes, 30 seconds
        - 22:46:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 22:46:24: Phase 3: elapsed time 5 hours, 59 minutes, 43 seconds - processed 11959 inodes per minute
	- 22:46:24: Phase 3: 22% done - estimated remaining time 21 hours, 14 minutes, 39 seconds
        - 23:01:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 23:01:24: Phase 3: elapsed time 6 hours, 14 minutes, 43 seconds - processed 11480 inodes per minute
	- 23:01:24: Phase 3: 22% done - estimated remaining time 22 hours, 7 minutes, 48 seconds
        - 23:16:24: process known inodes and inode discovery - 4302016 of 19546240 inodes done
	- 23:16:24: Phase 3: elapsed time 6 hours, 29 minutes, 43 seconds - processed 11038 inodes per minute
	- 23:16:24: Phase 3: 22% done - estimated remaining time 23 hours, 57 seconds
        - 23:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 23:31:24: Phase 3: elapsed time 6 hours, 44 minutes, 43 seconds - processed 46135 inodes per minute
	- 23:31:24: Phase 3: 95% done - estimated remaining time 18 minutes, 57 seconds
        - 23:46:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 23:46:24: Phase 3: elapsed time 6 hours, 59 minutes, 43 seconds - processed 44486 inodes per minute
	- 23:46:24: Phase 3: 95% done - estimated remaining time 19 minutes, 39 seconds
        - 00:01:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 00:01:24: Phase 3: elapsed time 7 hours, 14 minutes, 43 seconds - processed 42951 inodes per minute
	- 00:01:24: Phase 3: 95% done - estimated remaining time 20 minutes, 21 seconds
        - 00:16:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 00:16:24: Phase 3: elapsed time 7 hours, 29 minutes, 43 seconds - processed 41518 inodes per minute
	- 00:16:24: Phase 3: 95% done - estimated remaining time 21 minutes, 3 seconds
        - 00:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 00:31:24: Phase 3: elapsed time 7 hours, 44 minutes, 43 seconds - processed 40178 inodes per minute
	- 00:31:24: Phase 3: 95% done - estimated remaining time 21 minutes, 45 seconds
        - 00:46:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 00:46:24: Phase 3: elapsed time 7 hours, 59 minutes, 43 seconds - processed 38922 inodes per minute
	- 00:46:24: Phase 3: 95% done - estimated remaining time 22 minutes, 28 seconds
        - 01:01:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 01:01:24: Phase 3: elapsed time 8 hours, 14 minutes, 43 seconds - processed 37742 inodes per minute
	- 01:01:24: Phase 3: 95% done - estimated remaining time 23 minutes, 10 seconds
        - 01:16:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 01:16:24: Phase 3: elapsed time 8 hours, 29 minutes, 43 seconds - processed 36631 inodes per minute
	- 01:16:24: Phase 3: 95% done - estimated remaining time 23 minutes, 52 seconds
        - 01:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 01:31:24: Phase 3: elapsed time 8 hours, 44 minutes, 43 seconds - processed 35584 inodes per minute
	- 01:31:24: Phase 3: 95% done - estimated remaining time 24 minutes, 34 seconds
        - 01:46:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 01:46:24: Phase 3: elapsed time 8 hours, 59 minutes, 43 seconds - processed 34595 inodes per minute
	- 01:46:24: Phase 3: 95% done - estimated remaining time 25 minutes, 16 seconds
        - 02:01:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 02:01:24: Phase 3: elapsed time 9 hours, 14 minutes, 43 seconds - processed 33659 inodes per minute
	- 02:01:24: Phase 3: 95% done - estimated remaining time 25 minutes, 58 seconds
        - 02:16:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 02:16:24: Phase 3: elapsed time 9 hours, 29 minutes, 43 seconds - processed 32773 inodes per minute
	- 02:16:24: Phase 3: 95% done - estimated remaining time 26 minutes, 40 seconds
        - 02:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 02:31:24: Phase 3: elapsed time 9 hours, 44 minutes, 43 seconds - processed 31932 inodes per minute
	- 02:31:24: Phase 3: 95% done - estimated remaining time 27 minutes, 23 seconds
        - 02:46:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 02:46:24: Phase 3: elapsed time 9 hours, 59 minutes, 43 seconds - processed 31134 inodes per minute
	- 02:46:24: Phase 3: 95% done - estimated remaining time 28 minutes, 5 seconds
        - 03:01:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 03:01:24: Phase 3: elapsed time 10 hours, 14 minutes, 43 seconds - processed 30374 inodes per minute
	- 03:01:24: Phase 3: 95% done - estimated remaining time 28 minutes, 47 seconds
        - 03:16:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 03:16:24: Phase 3: elapsed time 10 hours, 29 minutes, 43 seconds - processed 29651 inodes per minute
	- 03:16:24: Phase 3: 95% done - estimated remaining time 29 minutes, 29 seconds
        - 03:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 03:31:24: Phase 3: elapsed time 10 hours, 44 minutes, 43 seconds - processed 28961 inodes per minute
	- 03:31:24: Phase 3: 95% done - estimated remaining time 30 minutes, 11 seconds
        - 03:46:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 03:46:24: Phase 3: elapsed time 10 hours, 59 minutes, 43 seconds - processed 28302 inodes per minute
	- 03:46:24: Phase 3: 95% done - estimated remaining time 30 minutes, 53 seconds
        - 04:01:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 04:01:24: Phase 3: elapsed time 11 hours, 14 minutes, 43 seconds - processed 27673 inodes per minute
	- 04:01:24: Phase 3: 95% done - estimated remaining time 31 minutes, 36 seconds
        - 04:16:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 04:16:24: Phase 3: elapsed time 11 hours, 29 minutes, 43 seconds - processed 27071 inodes per minute
	- 04:16:24: Phase 3: 95% done - estimated remaining time 32 minutes, 18 seconds
        - 04:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 04:31:24: Phase 3: elapsed time 11 hours, 44 minutes, 43 seconds - processed 26495 inodes per minute
	- 04:31:24: Phase 3: 95% done - estimated remaining time 33 minutes
        - 04:46:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 04:46:24: Phase 3: elapsed time 11 hours, 59 minutes, 43 seconds - processed 25943 inodes per minute
	- 04:46:24: Phase 3: 95% done - estimated remaining time 33 minutes, 42 seconds
        - 05:01:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 05:01:24: Phase 3: elapsed time 12 hours, 14 minutes, 43 seconds - processed 25413 inodes per minute
	- 05:01:24: Phase 3: 95% done - estimated remaining time 34 minutes, 24 seconds
        - 05:16:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 05:16:24: Phase 3: elapsed time 12 hours, 29 minutes, 43 seconds - processed 24905 inodes per minute
	- 05:16:24: Phase 3: 95% done - estimated remaining time 35 minutes, 6 seconds
        - 05:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 05:31:24: Phase 3: elapsed time 12 hours, 44 minutes, 43 seconds - processed 24416 inodes per minute
	- 05:31:24: Phase 3: 95% done - estimated remaining time 35 minutes, 48 seconds
        - 05:46:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 05:46:24: Phase 3: elapsed time 12 hours, 59 minutes, 43 seconds - processed 23946 inodes per minute
	- 05:46:24: Phase 3: 95% done - estimated remaining time 36 minutes, 31 seconds
        - 06:01:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 06:01:24: Phase 3: elapsed time 13 hours, 14 minutes, 43 seconds - processed 23494 inodes per minute
	- 06:01:24: Phase 3: 95% done - estimated remaining time 37 minutes, 13 seconds
        - 06:16:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 06:16:24: Phase 3: elapsed time 13 hours, 29 minutes, 43 seconds - processed 23059 inodes per minute
	- 06:16:24: Phase 3: 95% done - estimated remaining time 37 minutes, 55 seconds
        - 06:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 06:31:24: Phase 3: elapsed time 13 hours, 44 minutes, 43 seconds - processed 22640 inodes per minute
	- 06:31:24: Phase 3: 95% done - estimated remaining time 38 minutes, 37 seconds
        - 06:46:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 06:46:24: Phase 3: elapsed time 13 hours, 59 minutes, 43 seconds - processed 22235 inodes per minute
	- 06:46:24: Phase 3: 95% done - estimated remaining time 39 minutes, 19 seconds
        - 07:01:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 07:01:24: Phase 3: elapsed time 14 hours, 14 minutes, 43 seconds - processed 21845 inodes per minute
	- 07:01:24: Phase 3: 95% done - estimated remaining time 40 minutes, 1 second
        - 07:16:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 07:16:24: Phase 3: elapsed time 14 hours, 29 minutes, 43 seconds - processed 21468 inodes per minute
	- 07:16:24: Phase 3: 95% done - estimated remaining time 40 minutes, 44 seconds
        - 07:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 07:31:24: Phase 3: elapsed time 14 hours, 44 minutes, 43 seconds - processed 21104 inodes per minute
	- 07:31:24: Phase 3: 95% done - estimated remaining time 41 minutes, 26 seconds
        - 07:46:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 07:46:24: Phase 3: elapsed time 14 hours, 59 minutes, 43 seconds - processed 20752 inodes per minute
	- 07:46:24: Phase 3: 95% done - estimated remaining time 42 minutes, 8 seconds
        - 08:01:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 08:01:24: Phase 3: elapsed time 15 hours, 14 minutes, 43 seconds - processed 20412 inodes per minute
	- 08:01:24: Phase 3: 95% done - estimated remaining time 42 minutes, 50 seconds
        - 08:16:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 08:16:24: Phase 3: elapsed time 15 hours, 29 minutes, 43 seconds - processed 20083 inodes per minute
	- 08:16:24: Phase 3: 95% done - estimated remaining time 43 minutes, 32 seconds
        - 08:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 08:31:24: Phase 3: elapsed time 15 hours, 44 minutes, 43 seconds - processed 19764 inodes per minute
	- 08:31:24: Phase 3: 95% done - estimated remaining time 44 minutes, 14 seconds
        - 08:46:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 08:46:24: Phase 3: elapsed time 15 hours, 59 minutes, 43 seconds - processed 19455 inodes per minute
	- 08:46:24: Phase 3: 95% done - estimated remaining time 44 minutes, 56 seconds
        - 09:01:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 09:01:24: Phase 3: elapsed time 16 hours, 14 minutes, 43 seconds - processed 19156 inodes per minute
	- 09:01:24: Phase 3: 95% done - estimated remaining time 45 minutes, 39 seconds
        - 09:16:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 09:16:24: Phase 3: elapsed time 16 hours, 29 minutes, 43 seconds - processed 18865 inodes per minute
	- 09:16:24: Phase 3: 95% done - estimated remaining time 46 minutes, 21 seconds
        - 09:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 09:31:24: Phase 3: elapsed time 16 hours, 44 minutes, 43 seconds - processed 18584 inodes per minute
	- 09:31:24: Phase 3: 95% done - estimated remaining time 47 minutes, 3 seconds
        - 09:46:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 09:46:24: Phase 3: elapsed time 16 hours, 59 minutes, 43 seconds - processed 18310 inodes per minute
	- 09:46:24: Phase 3: 95% done - estimated remaining time 47 minutes, 45 seconds
        - 10:01:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 10:01:24: Phase 3: elapsed time 17 hours, 14 minutes, 43 seconds - processed 18045 inodes per minute
	- 10:01:24: Phase 3: 95% done - estimated remaining time 48 minutes, 27 seconds
        - 10:16:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 10:16:24: Phase 3: elapsed time 17 hours, 29 minutes, 43 seconds - processed 17787 inodes per minute
	- 10:16:24: Phase 3: 95% done - estimated remaining time 49 minutes, 9 seconds
        - 10:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 10:31:24: Phase 3: elapsed time 17 hours, 44 minutes, 43 seconds - processed 17536 inodes per minute
	- 10:31:24: Phase 3: 95% done - estimated remaining time 49 minutes, 51 seconds
        - 10:46:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 10:46:24: Phase 3: elapsed time 17 hours, 59 minutes, 43 seconds - processed 17293 inodes per minute
	- 10:46:24: Phase 3: 95% done - estimated remaining time 50 minutes, 34 seconds
        - 11:01:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 11:01:24: Phase 3: elapsed time 18 hours, 14 minutes, 43 seconds - processed 17056 inodes per minute
	- 11:01:24: Phase 3: 95% done - estimated remaining time 51 minutes, 16 seconds
        - 11:16:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 11:16:24: Phase 3: elapsed time 18 hours, 29 minutes, 43 seconds - processed 16825 inodes per minute
	- 11:16:24: Phase 3: 95% done - estimated remaining time 51 minutes, 58 seconds
        - 11:31:24: process known inodes and inode discovery - 18671744 of 19546240 inodes done
	- 11:31:24: Phase 3: elapsed time 18 hours, 44 minutes, 43 seconds - processed 16601 inodes per minute
	- 11:31:24: Phase 3: 95% done - estimated remaining time 52 minutes, 40 seconds
        - 11:46:24: process known inodes and inode discovery - 19513152 of 19546240 inodes done
	- 11:46:24: Phase 3: elapsed time 18 hours, 59 minutes, 43 seconds - processed 17121 inodes per minute
	- 11:46:24: Phase 3: 99% done - estimated remaining time 1 minute, 55 seconds
        - 12:01:24: process known inodes and inode discovery - 19513152 of 19546240 inodes done
	- 12:01:24: Phase 3: elapsed time 19 hours, 14 minutes, 43 seconds - processed 16898 inodes per minute
	- 12:01:24: Phase 3: 99% done - estimated remaining time 1 minute, 57 seconds
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
        - agno = 22
        - agno = 23
        - agno = 24
        - agno = 25
        - agno = 26
        - agno = 27
        - agno = 28
        - agno = 29
        - agno = 30
        - agno = 31
        - 12:03:04: process known inodes and inode discovery - 19546240 of 19546240 inodes done
        - process newly discovered inodes...
        - 12:03:04: process newly discovered inodes - 59 of 59 allocation groups done
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - 12:03:04: setting up duplicate extent list - 59 of 59 allocation groups done
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 32
        - agno = 33
        - agno = 34
        - agno = 35
        - agno = 36
        - agno = 37
        - agno = 38
        - agno = 39
        - agno = 40
        - agno = 41
        - agno = 42
        - agno = 43
        - agno = 44
        - agno = 45
        - agno = 46
        - agno = 47
        - agno = 48
        - agno = 49
        - agno = 50
        - agno = 51
        - agno = 52
        - agno = 53
        - agno = 54
        - agno = 55
        - agno = 56
        - agno = 57
        - agno = 58
        - 12:16:24: check for inodes claiming duplicate blocks - 2060224 of 19546240 inodes done
	- 12:16:24: Phase 4: elapsed time 13 minutes, 20 seconds - processed 154516 inodes per minute
	- 12:16:24: Phase 4: 10% done - estimated remaining time 1 hour, 53 minutes, 9 seconds
        - 12:31:24: check for inodes claiming duplicate blocks - 2074112 of 19546240 inodes done
	- 12:31:24: Phase 4: elapsed time 28 minutes, 20 seconds - processed 73203 inodes per minute
	- 12:31:24: Phase 4: 10% done - estimated remaining time 3 hours, 58 minutes, 40 seconds
        - 12:46:24: check for inodes claiming duplicate blocks - 2074304 of 19546240 inodes done
	- 12:46:24: Phase 4: elapsed time 43 minutes, 20 seconds - processed 47868 inodes per minute
	- 12:46:24: Phase 4: 10% done - estimated remaining time 6 hours, 4 minutes, 59 seconds
        - 13:01:24: check for inodes claiming duplicate blocks - 2295680 of 19546240 inodes done
	- 13:01:24: Phase 4: elapsed time 58 minutes, 20 seconds - processed 39354 inodes per minute
	- 13:01:24: Phase 4: 11% done - estimated remaining time 7 hours, 18 minutes, 20 seconds
        - 13:16:24: check for inodes claiming duplicate blocks - 4302016 of 19546240 inodes done
	- 13:16:24: Phase 4: elapsed time 1 hour, 13 minutes, 20 seconds - processed 58663 inodes per minute
	- 13:16:24: Phase 4: 22% done - estimated remaining time 4 hours, 19 minutes, 51 seconds
        - 13:31:24: check for inodes claiming duplicate blocks - 4302016 of 19546240 inodes done
	- 13:31:24: Phase 4: elapsed time 1 hour, 28 minutes, 20 seconds - processed 48702 inodes per minute
	- 13:31:24: Phase 4: 22% done - estimated remaining time 5 hours, 13 minutes
        - 13:46:24: check for inodes claiming duplicate blocks - 4302016 of 19546240 inodes done
	- 13:46:24: Phase 4: elapsed time 1 hour, 43 minutes, 20 seconds - processed 41632 inodes per minute
	- 13:46:24: Phase 4: 22% done - estimated remaining time 6 hours, 6 minutes, 9 seconds
        - 14:01:24: check for inodes claiming duplicate blocks - 4302016 of 19546240 inodes done
	- 14:01:24: Phase 4: elapsed time 1 hour, 58 minutes, 20 seconds - processed 36355 inodes per minute
	- 14:01:24: Phase 4: 22% done - estimated remaining time 6 hours, 59 minutes, 18 seconds
        - 14:16:24: check for inodes claiming duplicate blocks - 4302016 of 19546240 inodes done
	- 14:16:24: Phase 4: elapsed time 2 hours, 13 minutes, 20 seconds - processed 32265 inodes per minute
	- 14:16:24: Phase 4: 22% done - estimated remaining time 7 hours, 52 minutes, 28 seconds
        - 14:31:24: check for inodes claiming duplicate blocks - 4302016 of 19546240 inodes done
	- 14:31:24: Phase 4: elapsed time 2 hours, 28 minutes, 20 seconds - processed 29002 inodes per minute
	- 14:31:24: Phase 4: 22% done - estimated remaining time 8 hours, 45 minutes, 37 seconds
        - 14:46:24: check for inodes claiming duplicate blocks - 4302016 of 19546240 inodes done
	- 14:46:24: Phase 4: elapsed time 2 hours, 43 minutes, 20 seconds - processed 26338 inodes per minute
	- 14:46:24: Phase 4: 22% done - estimated remaining time 9 hours, 38 minutes, 46 seconds
        - 15:01:24: check for inodes claiming duplicate blocks - 4302016 of 19546240 inodes done
	- 15:01:24: Phase 4: elapsed time 2 hours, 58 minutes, 20 seconds - processed 24123 inodes per minute
	- 15:01:24: Phase 4: 22% done - estimated remaining time 10 hours, 31 minutes, 55 seconds
        - 15:16:24: check for inodes claiming duplicate blocks - 4302016 of 19546240 inodes done
	- 15:16:24: Phase 4: elapsed time 3 hours, 13 minutes, 20 seconds - processed 22251 inodes per minute
	- 15:16:24: Phase 4: 22% done - estimated remaining time 11 hours, 25 minutes, 4 seconds
        - 15:31:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 15:31:24: Phase 4: elapsed time 3 hours, 28 minutes, 20 seconds - processed 89624 inodes per minute
	- 15:31:24: Phase 4: 95% done - estimated remaining time 9 minutes, 45 seconds
        - 15:46:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 15:46:24: Phase 4: elapsed time 3 hours, 43 minutes, 20 seconds - processed 83604 inodes per minute
	- 15:46:24: Phase 4: 95% done - estimated remaining time 10 minutes, 27 seconds
        - 16:01:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 16:01:24: Phase 4: elapsed time 3 hours, 58 minutes, 20 seconds - processed 78342 inodes per minute
	- 16:01:24: Phase 4: 95% done - estimated remaining time 11 minutes, 9 seconds
        - 16:16:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 16:16:24: Phase 4: elapsed time 4 hours, 13 minutes, 20 seconds - processed 73704 inodes per minute
	- 16:16:24: Phase 4: 95% done - estimated remaining time 11 minutes, 51 seconds
        - 16:31:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 16:31:24: Phase 4: elapsed time 4 hours, 28 minutes, 20 seconds - processed 69584 inodes per minute
	- 16:31:24: Phase 4: 95% done - estimated remaining time 12 minutes, 34 seconds
        - 16:46:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 16:46:24: Phase 4: elapsed time 4 hours, 43 minutes, 20 seconds - processed 65900 inodes per minute
	- 16:46:24: Phase 4: 95% done - estimated remaining time 13 minutes, 16 seconds
        - 17:01:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 17:01:24: Phase 4: elapsed time 4 hours, 58 minutes, 20 seconds - processed 62586 inodes per minute
	- 17:01:24: Phase 4: 95% done - estimated remaining time 13 minutes, 58 seconds
        - 17:16:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 17:16:24: Phase 4: elapsed time 5 hours, 13 minutes, 20 seconds - processed 59590 inodes per minute
	- 17:16:24: Phase 4: 95% done - estimated remaining time 14 minutes, 40 seconds
        - 17:31:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 17:31:24: Phase 4: elapsed time 5 hours, 28 minutes, 20 seconds - processed 56868 inodes per minute
	- 17:31:24: Phase 4: 95% done - estimated remaining time 15 minutes, 22 seconds
        - 17:46:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 17:46:24: Phase 4: elapsed time 5 hours, 43 minutes, 20 seconds - processed 54383 inodes per minute
	- 17:46:24: Phase 4: 95% done - estimated remaining time 16 minutes, 4 seconds
        - 18:01:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 18:01:24: Phase 4: elapsed time 5 hours, 58 minutes, 20 seconds - processed 52107 inodes per minute
	- 18:01:24: Phase 4: 95% done - estimated remaining time 16 minutes, 46 seconds
        - 18:16:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 18:16:24: Phase 4: elapsed time 6 hours, 13 minutes, 20 seconds - processed 50013 inodes per minute
	- 18:16:24: Phase 4: 95% done - estimated remaining time 17 minutes, 29 seconds
        - 18:31:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 18:31:24: Phase 4: elapsed time 6 hours, 28 minutes, 20 seconds - processed 48081 inodes per minute
	- 18:31:24: Phase 4: 95% done - estimated remaining time 18 minutes, 11 seconds
        - 18:46:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 18:46:24: Phase 4: elapsed time 6 hours, 43 minutes, 20 seconds - processed 46293 inodes per minute
	- 18:46:24: Phase 4: 95% done - estimated remaining time 18 minutes, 53 seconds
        - 19:01:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 19:01:24: Phase 4: elapsed time 6 hours, 58 minutes, 20 seconds - processed 44633 inodes per minute
	- 19:01:24: Phase 4: 95% done - estimated remaining time 19 minutes, 35 seconds
        - 19:16:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 19:16:24: Phase 4: elapsed time 7 hours, 13 minutes, 20 seconds - processed 43088 inodes per minute
	- 19:16:24: Phase 4: 95% done - estimated remaining time 20 minutes, 17 seconds
        - 19:31:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 19:31:24: Phase 4: elapsed time 7 hours, 28 minutes, 20 seconds - processed 41647 inodes per minute
	- 19:31:24: Phase 4: 95% done - estimated remaining time 20 minutes, 59 seconds
        - 19:46:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 19:46:24: Phase 4: elapsed time 7 hours, 43 minutes, 20 seconds - processed 40298 inodes per minute
	- 19:46:24: Phase 4: 95% done - estimated remaining time 21 minutes, 42 seconds
        - 20:01:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 20:01:24: Phase 4: elapsed time 7 hours, 58 minutes, 20 seconds - processed 39035 inodes per minute
	- 20:01:24: Phase 4: 95% done - estimated remaining time 22 minutes, 24 seconds
        - 20:16:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 20:16:24: Phase 4: elapsed time 8 hours, 13 minutes, 20 seconds - processed 37848 inodes per minute
	- 20:16:24: Phase 4: 95% done - estimated remaining time 23 minutes, 6 seconds
        - 20:31:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 20:31:24: Phase 4: elapsed time 8 hours, 28 minutes, 20 seconds - processed 36731 inodes per minute
	- 20:31:24: Phase 4: 95% done - estimated remaining time 23 minutes, 48 seconds
        - 20:46:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 20:46:24: Phase 4: elapsed time 8 hours, 43 minutes, 20 seconds - processed 35678 inodes per minute
	- 20:46:24: Phase 4: 95% done - estimated remaining time 24 minutes, 30 seconds
        - 21:01:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 21:01:24: Phase 4: elapsed time 8 hours, 58 minutes, 20 seconds - processed 34684 inodes per minute
	- 21:01:24: Phase 4: 95% done - estimated remaining time 25 minutes, 12 seconds
        - 21:16:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 21:16:24: Phase 4: elapsed time 9 hours, 13 minutes, 20 seconds - processed 33744 inodes per minute
	- 21:16:24: Phase 4: 95% done - estimated remaining time 25 minutes, 54 seconds
        - 21:31:24: check for inodes claiming duplicate blocks - 18671744 of 19546240 inodes done
	- 21:31:24: Phase 4: elapsed time 9 hours, 28 minutes, 20 seconds - processed 32853 inodes per minute
	- 21:31:24: Phase 4: 95% done - estimated remaining time 26 minutes, 37 seconds
        - 21:46:24: check for inodes claiming duplicate blocks - 19427904 of 19546240 inodes done
	- 21:46:24: Phase 4: elapsed time 9 hours, 43 minutes, 20 seconds - processed 33304 inodes per minute
	- 21:46:24: Phase 4: 99% done - estimated remaining time 3 minutes, 33 seconds
        - 22:01:24: check for inodes claiming duplicate blocks - 19513152 of 19546240 inodes done
	- 22:01:24: Phase 4: elapsed time 9 hours, 58 minutes, 20 seconds - processed 32612 inodes per minute
	- 22:01:24: Phase 4: 99% done - estimated remaining time 1 minute
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
        - agno = 22
        - agno = 23
        - agno = 24
        - agno = 25
        - agno = 26
        - agno = 27
        - agno = 28
        - agno = 29
        - agno = 30
        - agno = 31
        - 22:03:25: check for inodes claiming duplicate blocks - 19546240 of 19546240 inodes done
Phase 5 - rebuild AG headers and trees...
        - 22:03:27: rebuild AG headers and trees - 59 of 59 allocation groups done
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
Fri Feb  3 22:04:30 GMT 2012: Finished xfs_repair job

* Re: xfs_repair segfaults with ag_stride option
  2012-02-06  0:50   ` Tom Crane
@ 2012-02-06  5:58     ` Eric Sandeen
  2012-02-06 11:19       ` Tom Crane
  2012-02-06 14:04     ` Christoph Hellwig
  1 sibling, 1 reply; 10+ messages in thread
From: Eric Sandeen @ 2012-02-06  5:58 UTC (permalink / raw)
  To: Tom Crane; +Cc: Christoph Hellwig, xfs

On 2/5/12 6:50 PM, Tom Crane wrote:
> Hi Christoph,
> Many thanks for the quick response and the patch. It was a big help.
> I was able to repair our 60TB FS in about 30 hours. I have a couple
> of questions;
> 
> (1) The steps in the progress report seem a little strange. See the
> attachment. Is this expected?
> 
> (2) This may be a little out of band but I have heard second hand
> reports from another sysadmin that the xfs tools which come with SLC5
> (our current Linux distro) should not be relied upon and that SLC6
> should be used. Our 60TB FS is significantly fragmented (~40%) and I
> would very much like to run xfs_fsr on it. Given that I have built
> the latest xfsprogs, is there any reason I should be afraid of
> running xfs_fsr, on the FS which comes with SLC5? Unfortunately I
> don't have ~60TB spare storage space elsewhere to backup the FS
> before defragging. What would you advise?> 
> Many thanks

Newer tools are fine to use on older filesystems; there should be no
issue there.

Running fsr can cause an awful lot of I/O and a lot of file reorganization
(meaning files will get moved to new locations on disk, etc.).

How bad is it, really?  How did you arrive at the 40% number?  Unless
you see perf problems which you know you can attribute to fragmentation,
I might not worry about it.

You can also check the fragmentation of individual files with the
xfs_bmap tool.

-Eric

> Tom.
> 
> Christoph Hellwig wrote:
>> Hi Tom,
>>
>> On Wed, Feb 01, 2012 at 01:36:12PM +0000, Tom Crane wrote:
>>  
>>> Dear XFS Support,
>>>    I am attempting to use xfs_repair to fix a damaged FS but always
>>> get a segfault if and only if -o ag_stride is specified. I have
>>> tried ag_stride=2,8,16 & 32.  The FS is approx 60T. I can't find
>>> reports of this particular problem on the mailing list archive.
>>> Further details are;
>>>
>>> xfs_repair version 3.1.7, recently downloaded via git repository.
>>> uname -a
>>> Linux store3 2.6.18-274.17.1.el5 #1 SMP Wed Jan 11 11:10:32 CET 2012
>>> x86_64 x86_64 x86_64 GNU/Linux
>>>     
>>
>> Thanks for the detailed bug report.
>>
>> Can you please try the attached patch?
>>

* Re: xfs_repair segfaults with ag_stride option
  2012-02-06  5:58     ` Eric Sandeen
@ 2012-02-06 11:19       ` Tom Crane
  2012-02-06 13:21         ` Eric Sandeen
  0 siblings, 1 reply; 10+ messages in thread
From: Tom Crane @ 2012-02-06 11:19 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: Christoph Hellwig, T.Crane, xfs

Eric Sandeen wrote:
> On 2/5/12 6:50 PM, Tom Crane wrote:
>   
>> Hi Christoph,
>> Many thanks for the quick response and the patch. It was a big help.
>> I was able to repair our 60TB FS in about 30 hours. I have a couple
>> of questions;
>>
>> (1) The steps in the progress report seem a little strange. See the
>> attachment. Is this expected?
>>
>> (2) This may be a little out of band but I have heard second hand
>> reports from another sysadmin that the xfs tools which come with SLC5
>> (our current Linux distro) should not be relied upon and that SLC6
>> should be used. Our 60TB FS is significantly fragmented (~40%) and I
>> would very much like to run xfs_fsr on it. Given that I have built
>> the latest xfsprogs, is there any reason I should be afraid of
>> running xfs_fsr, on the FS which comes with SLC5? Unfortunately I
>> don't have ~60TB spare storage space elsewhere to backup the FS
>> before defragging. What would you advise?> 
>> Many thanks
>>     
>
> Newer tools are fine to use on older filesystems, there should be no
>   

Good!

> issue there.
>
> running fsr can cause an awful lot of IO, and a lot of file reorganization.
> (meaning, they will get moved to new locations on disk, etc).
>
> How bad is it, really?  How did you arrive at the 40% number?  Unless
>   

xfs_db -c frag -r <block device>

Some users on our compute farm with large, I/O-heavy jobs find that those
jobs take longer on this FS than on some of our other scratch arrays hosted
on other machines.  We also typically find many nfsd tasks in an
uninterruptible wait state (sync_page), waiting for data to be copied in
from the FS.


> you see perf problems which you know you can attribute to fragmentation,
> I might not worry about it.
>
> You can also check the fragmentation of individual files with the
> xfs_bmap tool.
>
> -Eric
>   

Thanks for your advice.
Cheers
Tom.

>   
>> Tom.
>>
>> Christoph Hellwig wrote:
>>     
>>> Hi Tom,
>>>
>>> On Wed, Feb 01, 2012 at 01:36:12PM +0000, Tom Crane wrote:
>>>  
>>>       
>>>> Dear XFS Support,
>>>>    I am attempting to use xfs_repair to fix a damaged FS but always
>>>> get a segfault if and only if -o ag_stride is specified. I have
>>>> tried ag_stride=2,8,16 & 32.  The FS is approx 60T. I can't find
>>>> reports of this particular problem on the mailing list archive.
>>>> Further details are;
>>>>
>>>> xfs_repair version 3.1.7, recently downloaded via git repository.
>>>> uname -a
>>>> Linux store3 2.6.18-274.17.1.el5 #1 SMP Wed Jan 11 11:10:32 CET 2012
>>>> x86_64 x86_64 x86_64 GNU/Linux
>>>>     
>>>>         
>>> Thanks for the detailed bug report.
>>>
>>> Can you please try the attached patch?
>>>

* Re: xfs_repair segfaults with ag_stride option
  2012-02-06 11:19       ` Tom Crane
@ 2012-02-06 13:21         ` Eric Sandeen
  2012-02-07 17:41           ` Tom Crane
  0 siblings, 1 reply; 10+ messages in thread
From: Eric Sandeen @ 2012-02-06 13:21 UTC (permalink / raw)
  To: Tom Crane; +Cc: Christoph Hellwig, xfs

On 2/6/12 5:19 AM, Tom Crane wrote:
> Eric Sandeen wrote:

...

>> Newer tools are fine to use on older filesystems, there should be no
>>   
> 
> Good!
> 
>> issue there.
>>
>> running fsr can cause an awful lot of IO, and a lot of file reorganization.
>> (meaning, they will get moved to new locations on disk, etc).
>>
>> How bad is it, really?  How did you arrive at the 40% number?  Unless
>>   
> 
> xfs_db -c frag -r <block device>

which does:

                answer = (double)(extcount_actual - extcount_ideal) * 100.0 /
                         (double)extcount_actual;

If you work it out, if every file was split into only 2 extents, you'd have
"50%" - and really, that's not bad.  40% is even less bad.

> Some users on our compute farm with large jobs (lots of I/O) find they take longer than with some of our other scratch arrays hosted on other machines.  We also typically find many nfsd tasks in an uninterruptible wait state (sync_page), waiting for data to be copied in from the FS.

So fragmentation may not be the problem... 

-Eric

>> you see perf problems which you know you can attribute to fragmentation,
>> I might not worry about it.
>>
>> You can also check the fragmentation of individual files with the
>> xfs_bmap tool.
>>
>> -Eric
>>   
> 
> Thanks for your advice.
> Cheers
> Tom.
> 
>>  
>>> Tom.
>>>
>>> Christoph Hellwig wrote:
>>>    
>>>> Hi Tom,
>>>>
>>>> On Wed, Feb 01, 2012 at 01:36:12PM +0000, Tom Crane wrote:
>>>>  
>>>>      
>>>>> Dear XFS Support,
>>>>>    I am attempting to use xfs_repair to fix a damaged FS but always
>>>>> get a segfault if and only if -o ag_stride is specified. I have
>>>>> tried ag_stride=2,8,16 & 32.  The FS is approx 60T. I can't find
>>>>> reports of this particular problem on the mailing list archive.
>>>>> Further details are;
>>>>>
>>>>> xfs_repair version 3.1.7, recently downloaded via git repository.
>>>>> uname -a
>>>>> Linux store3 2.6.18-274.17.1.el5 #1 SMP Wed Jan 11 11:10:32 CET 2012
>>>>> x86_64 x86_64 x86_64 GNU/Linux
>>>>>             
>>>> Thanks for the detailed bug report.
>>>>
>>>> Can you please try the attached patch?
>>>>

* Re: xfs_repair segfaults with ag_stride option
  2012-02-06  0:50   ` Tom Crane
  2012-02-06  5:58     ` Eric Sandeen
@ 2012-02-06 14:04     ` Christoph Hellwig
  1 sibling, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2012-02-06 14:04 UTC (permalink / raw)
  To: Tom Crane; +Cc: Christoph Hellwig, xfs

On Mon, Feb 06, 2012 at 12:50:59AM +0000, Tom Crane wrote:
> Hi Christoph,
>    Many thanks for the quick response and the patch.  It was a big
> help.  I was able to repair our 60TB FS in about 30 hours. I have a
> couple of questions;
> 
> (1) The steps in the progress report seem a little strange.  See the
> attachment. Is this expected?

Do you mean the out-of-order agno progress reports?  That's an artefact
of the ag_stride option, which parallelizes processing of different AGs,
and it is expected.
AGs, and expected.  It's not very nice but I don't have a smart idea
how to do much better either.
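
Very roughly (a simplified sketch, not the actual xfs_repair scheduler),
ag_stride=N gives each worker a group of N consecutive AGs and the groups
run concurrently, so the per-AG messages come out in completion order
rather than agno order:

        /* Simplified illustration only -- not the real xfs_repair code. */
        #include <stdio.h>

        int main(void)
        {
                int ag_count = 8, ag_stride = 2;
                int start, ag;

                for (start = 0; start < ag_count; start += ag_stride) {
                        printf("worker %d handles AGs:", start / ag_stride);
                        for (ag = start; ag < start + ag_stride && ag < ag_count; ag++)
                                printf(" %d", ag);
                        printf("\n");
                }
                return 0;
        }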


* Re: xfs_repair segfaults with ag_stride option
  2012-02-06 13:21         ` Eric Sandeen
@ 2012-02-07 17:41           ` Tom Crane
  2012-02-07 18:00             ` Eric Sandeen
  2012-02-08  9:00             ` Dave Chinner
  0 siblings, 2 replies; 10+ messages in thread
From: Tom Crane @ 2012-02-07 17:41 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: Christoph Hellwig, T.Crane >> Crane T, xfs

Eric Sandeen wrote:
> On 2/6/12 5:19 AM, Tom Crane wrote:
>   
>> Eric Sandeen wrote:
>>     
>
> ...
>
>   
>>> Newer tools are fine to use on older filesystems, there should be no
>>>   
>>>       
>> Good!
>>
>>     
>>> issue there.
>>>
>>> running fsr can cause an awful lot of IO, and a lot of file reorganization.
>>> (meaning, they will get moved to new locations on disk, etc).
>>>
>>> How bad is it, really?  How did you arrive at the 40% number?  Unless
>>>   
>>>       
>> xfs_db -c frag -r <block device>
>>     
>
> which does:
>
>                 answer = (double)(extcount_actual - extcount_ideal) * 100.0 /
>                          (double)extcount_actual;
>
> If you work it out, if every file was split into only 2 extents, you'd have
> "50%" - and really, that's not bad.  40% is even less bad.
>   

Here is a list of some of the more fragmented files, produced using:

xfs_db -r /dev/mapper/vg0-lvol0 -c "frag -v" | head -1000000 | sort -k4,4 -g | tail -100

> inode 1323681 actual 12496 ideal 2
> inode 1324463 actual 12633 ideal 2
> inode 1333841 actual 12709 ideal 2
> inode 1336378 actual 12816 ideal 2
> inode 1321872 actual 12845 ideal 2
> inode 1326336 actual 13023 ideal 2
> inode 1334204 actual 13079 ideal 2
> inode 1318894 actual 13151 ideal 2
> inode 1339200 actual 13179 ideal 2
> inode 1106019 actual 13264 ideal 2
> inode 1330156 actual 13357 ideal 2
> inode 1325766 actual 13482 ideal 2
> inode 1322262 actual 13537 ideal 2
> inode 1321605 actual 13572 ideal 2
> inode 1333068 actual 13897 ideal 2
> inode 1325224 actual 14060 ideal 2
> inode 48166 actual 14167 ideal 2
> inode 1319965 actual 14187 ideal 2
> inode 1334519 actual 14212 ideal 2
> inode 1327312 actual 14264 ideal 2
> inode 1322761 actual 14724 ideal 2
> inode 425483 actual 14761 ideal 2
> inode 1337466 actual 15024 ideal 2
> inode 1324853 actual 15039 ideal 2
> inode 1327964 actual 15047 ideal 2
> inode 1334036 actual 15508 ideal 2
> inode 1329861 actual 15589 ideal 2
> inode 1324306 actual 15665 ideal 2
> inode 1338957 actual 15830 ideal 2
> inode 1322943 actual 16385 ideal 2
> inode 1321074 actual 16624 ideal 2
> inode 1323162 actual 16724 ideal 2
> inode 1318543 actual 16734 ideal 2
> inode 1340193 actual 16756 ideal 2
> inode 1334354 actual 16948 ideal 2
> inode 1324121 actual 17057 ideal 2
> inode 1326106 actual 17318 ideal 2
> inode 1325527 actual 17425 ideal 2
> inode 1332902 actual 17477 ideal 2
> inode 1330358 actual 18775 ideal 2
> inode 1338161 actual 18858 ideal 2
> inode 1320625 actual 20579 ideal 2
> inode 1335016 actual 22701 ideal 2
> inode 753185 actual 33483 ideal 2
> inode 64515 actual 37764 ideal 2
> inode 76068 actual 41394 ideal 2
> inode 76069 actual 65898 ideal 2

The following summary for some of the larger, more fragmented files was
produced by parsing/summarising the output of bmap -l (a rough sketch of
that summarising step follows the listing):

> (nos-extents size-of-smallest-extent size-of-largest-extent size-of-average-extent)
> 20996 8 38232 370.678986473614
> 21831 8 1527168 555.59158994091
> 22700 8 407160 371.346607929515
> 26075 8 1170120 544.218753595398
> 27632 16 480976 311.79473074696
> 29312 8 184376 348.09115720524
> 29474 8 1632 8.06758499016082
> 33482 16 421008 292.340959321426
> 34953 8 457848 371.310044917461
> 37763 8 82184 377.083812197124
> 37826 8 970624 314.246497118384
> 39892 16 508936 345.970921488018
> 41393 8 214496 443.351291281134
> 47877 8 1047728 325.400004177371
> 50562 8 677576 328.302994343578
> 53743 8 672896 364.316841263048
> 54378 16 764280 360.091801831623
> 59071 8 910816 332.138748285961
> 62666 8 337808 312.538601474484
> 65897 16 775832 287.113040047347
> 84946 8 1457120 496.702563981824
> 117798 8 161576 53.8408461943327
> 119904 8 39048 168.37943688284
> 131330 8 65424 68.948267722531
> 174379 8 1187616 112.254113167297
> 254070 8 1418960 303.413201086315
> 313029 8 280064 62.6561756259005
> 365547 8 76864 53.5732368204362
> 1790382 8 1758176 359.880034540115
> 2912436 8 1004848 373.771190851919
 How bad does this look?
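
For reference, the summarising step was just a small ad-hoc filter over the
extent list.  A rough stand-in (not the exact script I used; it assumes
extent lines ending in "<count> blocks", as xfs_bmap -l prints them, and it
skips holes) would be:

        /* Rough stand-in: read xfs_bmap -l output on stdin and print the
         * extent count, smallest, largest and average extent size, in the
         * same units the bmap listing uses. */
        #include <ctype.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
                char line[512];
                long count = 0, min = 0, max = 0, total = 0, blocks;

                while (fgets(line, sizeof(line), stdin)) {
                        char *p = strstr(line, " blocks");

                        if (!p || strstr(line, "hole"))
                                continue;       /* path, header or hole line */
                        while (p > line && p[-1] == ' ')
                                p--;            /* skip spaces before "blocks" */
                        while (p > line && isdigit((unsigned char)p[-1]))
                                p--;            /* back up to start of the count */
                        if (sscanf(p, "%ld", &blocks) != 1)
                                continue;
                        if (count == 0 || blocks < min)
                                min = blocks;
                        if (blocks > max)
                                max = blocks;
                        total += blocks;
                        count++;
                }
                if (count)
                        printf("%ld %ld %ld %g\n",
                               count, min, max, (double)total / count);
                return 0;
        }

(Compile it and pipe the xfs_bmap -l output for a single file into it.)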

Cheers
Tom.


>> Some users on our compute farm with large jobs (lots of I/O) find they take longer than with some of our other scratch arrays hosted on other machines.  We also typically find many nfsd tasks in an uninterruptible wait state (sync_page), waiting for data to be copied in from the FS.
>>     
>
> So fragmentation may not be the problem... 
>
> -Eric
>
>   
>>> you see perf problems which you know you can attribute to fragmentation,
>>> I might not worry about it.
>>>
>>> You can also check the fragmentation of individual files with the
>>> xfs_bmap tool.
>>>
>>> -Eric
>>>   
>>>       
>> Thanks for your advice.
>> Cheers
>> Tom.
>>
>>     
>>>  
>>>       
>>>> Tom.
>>>>
>>>> Christoph Hellwig wrote:
>>>>    
>>>>         
>>>>> Hi Tom,
>>>>>
>>>>> On Wed, Feb 01, 2012 at 01:36:12PM +0000, Tom Crane wrote:
>>>>>  
>>>>>      
>>>>>           
>>>>>> Dear XFS Support,
>>>>>>    I am attempting to use xfs_repair to fix a damaged FS but always
>>>>>> get a segfault if and only if -o ag_stride is specified. I have
>>>>>> tried ag_stride=2,8,16 & 32.  The FS is approx 60T. I can't find
>>>>>> reports of this particular problem on the mailing list archive.
>>>>>> Further details are;
>>>>>>
>>>>>> xfs_repair version 3.1.7, recently downloaded via git repository.
>>>>>> uname -a
>>>>>> Linux store3 2.6.18-274.17.1.el5 #1 SMP Wed Jan 11 11:10:32 CET 2012
>>>>>> x86_64 x86_64 x86_64 GNU/Linux
>>>>>>             
>>>>>>             
>>>>> Thanks for the detailed bug report.
>>>>>
>>>>> Can you please try the attached patch?
>>>>>

* Re: xfs_repair segfaults with ag_stride option
  2012-02-07 17:41           ` Tom Crane
@ 2012-02-07 18:00             ` Eric Sandeen
  2012-02-08  9:00             ` Dave Chinner
  1 sibling, 0 replies; 10+ messages in thread
From: Eric Sandeen @ 2012-02-07 18:00 UTC (permalink / raw)
  To: Tom Crane; +Cc: Christoph Hellwig, xfs

On 2/7/12 11:41 AM, Tom Crane wrote:
> Eric Sandeen wrote:
>> On 2/6/12 5:19 AM, Tom Crane wrote:
>>  
>>> Eric Sandeen wrote:
>>>     
>>
>> ...
>>
>>  
>>>> Newer tools are fine to use on older filesystems, there should be no
>>>>         
>>> Good!
>>>
>>>    
>>>> issue there.
>>>>
>>>> running fsr can cause an awful lot of IO, and a lot of file reorganization.
>>>> (meaning, they will get moved to new locations on disk, etc).
>>>>
>>>> How bad is it, really?  How did you arrive at the 40% number?  Unless
>>>>         
>>> xfs_db -c frag -r <block device>
>>>     
>>
>> which does:
>>
>>                 answer = (double)(extcount_actual - extcount_ideal) * 100.0 /
>>                          (double)extcount_actual;
>>
>> If you work it out, if every file was split into only 2 extents, you'd have
>> "50%" - and really, that's not bad.  40% is even less bad.
>>   
> 
> Here is a list of some of the more fragmented files, produced using,
> xfs_db -r /dev/mapper/vg0-lvol0 -c "frag -v" | head -1000000 | sort -k4,4 -g | tail -100
> 
>> inode 1323681 actual 12496 ideal 2

Ok, so that's a fair number of extents, although I don't know how big the file is.

I think "frag" takes sparseness into account, so sparseness doesn't account
for the high extent counts (i.e. frag on a sparse file w/ 5 filled-in regions
yields "actual 5, ideal 5").

> The following for some of the larger, more fragmented files was produced by parsing/summarising the output of bmap -l
> 
>> (nos-extents size-of-smallest-extent size-of-largest-extent size-of-average-extent)
>> 20996 8 38232 370.678986473614

So about a 3G file in 20996 extents.  Not great (unless it's sparse?)
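
Back of the envelope, assuming those extent sizes are in xfs_bmap's default
512-byte units (an assumption on my part; the units aren't stated above):

        /* Quick size estimate for the 20996-extent file above, assuming the
         * sizes are in 512-byte units. */
        #include <stdio.h>

        int main(void)
        {
                double extents = 20996, avg_blocks = 370.678986473614;
                double bytes = extents * avg_blocks * 512.0;

                printf("%.2f GB (%.2f GiB)\n", bytes / 1e9,
                       bytes / (1024.0 * 1024.0 * 1024.0));
                /* prints about 3.98 GB (3.71 GiB) */
                return 0;
        }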

> How bad does this look?

Ok... not great?  :)  If the files really are scattered around the disk, that
might impact how quickly you can read them after all.

How are the files created?  You might want to try to fix it up on that end
as well.

-Eric

> Cheers
> Tom.
> 
> 
>>> Some users on our compute farm with large jobs (lots of I/O) find they take longer than with some of our other scratch arrays hosted on other machines.  We also typically find many nfsd tasks in an uninterruptible wait state (sync_page), waiting for data to be copied in from the FS.
>>>     
>>
>> So fragmentation may not be the problem...
>> -Eric
>>
>>  
>>>> you see perf problems which you know you can attribute to fragmentation,
>>>> I might not worry about it.
>>>>
>>>> You can also check the fragmentation of individual files with the
>>>> xfs_bmap tool.
>>>>
>>>> -Eric
>>>>         
>>> Thanks for your advice.
>>> Cheers
>>> Tom.
>>>
>>>    
>>>>  
>>>>      
>>>>> Tom.
>>>>>
>>>>> Christoph Hellwig wrote:
>>>>>           
>>>>>> Hi Tom,
>>>>>>
>>>>>> On Wed, Feb 01, 2012 at 01:36:12PM +0000, Tom Crane wrote:
>>>>>>  
>>>>>>               
>>>>>>> Dear XFS Support,
>>>>>>>    I am attempting to use xfs_repair to fix a damaged FS but always
>>>>>>> get a segfault if and only if -o ag_stride is specified. I have
>>>>>>> tried ag_stride=2,8,16 & 32.  The FS is approx 60T. I can't find
>>>>>>> reports of this particular problem on the mailing list archive.
>>>>>>> Further details are;
>>>>>>>
>>>>>>> xfs_repair version 3.1.7, recently downloaded via git repository.
>>>>>>> uname -a
>>>>>>> Linux store3 2.6.18-274.17.1.el5 #1 SMP Wed Jan 11 11:10:32 CET 2012
>>>>>>> x86_64 x86_64 x86_64 GNU/Linux
>>>>>>>                         
>>>>>> Thanks for the detailed bug report.
>>>>>>
>>>>>> Can you please try the attached patch?
>>>>>>

* Re: xfs_repair segfaults with ag_stride option
  2012-02-07 17:41           ` Tom Crane
  2012-02-07 18:00             ` Eric Sandeen
@ 2012-02-08  9:00             ` Dave Chinner
  1 sibling, 0 replies; 10+ messages in thread
From: Dave Chinner @ 2012-02-08  9:00 UTC (permalink / raw)
  To: Tom Crane; +Cc: Christoph Hellwig, Eric Sandeen, xfs

On Tue, Feb 07, 2012 at 05:41:10PM +0000, Tom Crane wrote:
> Eric Sandeen wrote:
> >On 2/6/12 5:19 AM, Tom Crane wrote:
> >>Eric Sandeen wrote:
> >
> >...
> >
> >>>Newer tools are fine to use on older filesystems, there should be no
> >>Good!
> >>
> >>>issue there.
> >>>
> >>>running fsr can cause an awful lot of IO, and a lot of file reorganization.
> >>>(meaning, they will get moved to new locations on disk, etc).
> >>>
> >>>How bad is it, really?  How did you arrive at the 40% number?  Unless
> >>xfs_db -c frag -r <block device>
> >
> >which does:
> >
> >                answer = (double)(extcount_actual - extcount_ideal) * 100.0 /
> >                         (double)extcount_actual;
> >
> >If you work it out, if every file was split into only 2 extents, you'd have
> >"50%" - and really, that's not bad.  40% is even less bad.
> 
> Here is a list of some of the more fragmented files, produced using,
> xfs_db -r /dev/mapper/vg0-lvol0 -c "frag -v" | head -1000000 | sort
> -k4,4 -g | tail -100
> 
> >inode 1323681 actual 12496 ideal 2
> >inode 1324463 actual 12633 ideal 2
.....
> >inode 1320625 actual 20579 ideal 2
> >inode 1335016 actual 22701 ideal 2
> >inode 753185 actual 33483 ideal 2
> >inode 64515 actual 37764 ideal 2
> >inode 76068 actual 41394 ideal 2
> >inode 76069 actual 65898 ideal 2

Ok, so that looks like you have a fragmentation problem here. What
is the workload that is generating these files?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


end of thread, other threads:[~2012-02-08  9:00 UTC | newest]

Thread overview: 10+ messages
2012-02-01 13:36 xfs_repair segfaults with ag_stride option Tom Crane
2012-02-02 12:42 ` Christoph Hellwig
2012-02-06  0:50   ` Tom Crane
2012-02-06  5:58     ` Eric Sandeen
2012-02-06 11:19       ` Tom Crane
2012-02-06 13:21         ` Eric Sandeen
2012-02-07 17:41           ` Tom Crane
2012-02-07 18:00             ` Eric Sandeen
2012-02-08  9:00             ` Dave Chinner
2012-02-06 14:04     ` Christoph Hellwig
