* fragmentation question
@ 2010-09-08  0:45 Brady Chang
  2010-09-08  1:16 ` Brady Chang
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Brady Chang @ 2010-09-08  0:45 UTC (permalink / raw)
  To: xfs



Hello all,
I have an issue with fragmentation on a particular device; thanks for any advice.

-Brady

I have a Dell R510 with 12 disks, configured as 2x RAID 5 (6 disks each).
RAID group 1:
48 GB carved out for the OS, mounted as /
remaining 2.7 TB for XFS, mounted as /data1
RAID group 2:
48 GB for swap
remaining 2.7 TB for XFS, mounted as /data2

The strange thing is that /data1 never gets fragmented, whereas /data2 gets badly fragmented.
I believe increasing allocsize would help, but I'm not sure how to explain why /data2 (/dev/sdd) always gets fragmented and /data1 (/dev/sdb) does not.

It's a data warehouse application; the I/O is balanced between /data1 and /data2.
Output of xfs_db:
[root@sdw4 data1]# xfs_db -c frag -r /dev/sdb
actual 14353, ideal 13702, fragmentation factor 4.54%
[root@sdw4 data1]# xfs_db -c frag -r /dev/sdd
actual 408674, ideal 13719, fragmentation factor 96.64%
df output:
/dev/sdb              2.7T  967G  1.8T  36% /data1
/dev/sdd              2.7T  1.1T  1.7T  39% /data2

The /etc/fstab entries:
LABEL=/data1        /data1     xfs     allocsize=1048576,logbufs=8,noatime,nodiratime 0 0
LABEL=/data2        /data2     xfs     allocsize=1048576,logbufs=8,noatime,nodiratime 0 0
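
If I do end up raising allocsize, I assume the change would look roughly like this (64m is just an illustrative value, not something I have tested):
LABEL=/data2        /data2     xfs     allocsize=64m,logbufs=8,noatime,nodiratime 0 0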


RAID config output:
[root@sdw4 data1]# omreport storage vdisk
List of Virtual Disks in the System

Controller PERC H700 Integrated (Slot 4)
ID                        : 0
Status                    : Ok
Name                      : boot
State                     : Ready
Hot Spare Policy violated : Not Assigned
Virtual Disk Bad Blocks   : No
Secured                   : Not Applicable
Progress                  : Not Applicable
Layout                    : RAID-5
Size                      : 48.99 GB (52602470400 bytes)
Device Name               : /dev/sda
Bus Protocol              : SAS
Media                     : HDD
Read Policy               : No Read Ahead
Write Policy              : Force Write Back
Cache Policy              : Not Applicable
Stripe Element Size       : 128 KB
Disk Cache Policy         : Disabled

ID                        : 1
Status                    : Ok
Name                      : data1
State                     : Ready
Hot Spare Policy violated : Not Assigned
Virtual Disk Bad Blocks   : No
Secured                   : Not Applicable
Progress                  : Not Applicable
Layout                    : RAID-5
Size                      : 2,742.89 GB (2945150484480 bytes)
Device Name               : /dev/sdb
Bus Protocol              : SAS
Media                     : HDD
Read Policy               : No Read Ahead
Write Policy              : Force Write Back
Cache Policy              : Not Applicable
Stripe Element Size       : 128 KB
Disk Cache Policy         : Disabled

ID                        : 2
Status                    : Ok
Name                      : swap
State                     : Ready
Hot Spare Policy violated : Not Assigned
Virtual Disk Bad Blocks   : No
Secured                   : Not Applicable
Progress                  : Not Applicable
Layout                    : RAID-5
Size                      : 48.99 GB (52602470400 bytes)
Device Name               : /dev/sdc
Bus Protocol              : SAS
Media                     : HDD
Read Policy               : No Read Ahead
Write Policy              : Force Write Back
Cache Policy              : Not Applicable
Stripe Element Size       : 128 KB
Disk Cache Policy         : Disabled

ID                        : 3
Status                    : Ok
Name                      : data2
State                     : Ready
Hot Spare Policy violated : Not Assigned
Virtual Disk Bad Blocks   : No
Secured                   : Not Applicable
Progress                  : Not Applicable
Layout                    : RAID-5
Size                      : 2,742.89 GB (2945150484480 bytes)
Device Name               : /dev/sdd
Bus Protocol              : SAS
Media                     : HDD
Read Policy               : No Read Ahead
Write Policy              : Force Write Back
Cache Policy              : Not Applicable
Stripe Element Size       : 128 KB
Disk Cache Policy         : Disabled


* Re: fragmentation question
  2010-09-08  0:45 fragmentation question Brady Chang
@ 2010-09-08  1:16 ` Brady Chang
  2010-09-08  7:10 ` Emmanuel Florac
  2010-09-09 14:12 ` Eric Sandeen
  2 siblings, 0 replies; 10+ messages in thread
From: Brady Chang @ 2010-09-08  1:16 UTC (permalink / raw)
  To: xfs



By the way, the OS is RHEL 5.5, kernel 2.6.18-194.11.1.el5.

Thanks in advance.
-Brady




* Re: fragmentation question
  2010-09-08  0:45 fragmentation question Brady Chang
  2010-09-08  1:16 ` Brady Chang
@ 2010-09-08  7:10 ` Emmanuel Florac
  2010-09-09 14:12 ` Eric Sandeen
  2 siblings, 0 replies; 10+ messages in thread
From: Emmanuel Florac @ 2010-09-08  7:10 UTC (permalink / raw)
  To: Brady Chang; +Cc: xfs

On Tue, 7 Sep 2010 17:45:41 -0700, you wrote:

> I have an issue with fragmentation on a particular device
> thanks for any advice.

I'd try monitoring I/O with "iostat -mx 4" for a while. I suppose most
write activity goes to data1.
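
Something like this, limited to the two data devices, should show whether
the writes really are even (a rough sketch; adjust the interval and device
names as needed):

iostat -mx sdb sdd 4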

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: fragmentation question
  2010-09-08  0:45 fragmentation question Brady Chang
  2010-09-08  1:16 ` Brady Chang
  2010-09-08  7:10 ` Emmanuel Florac
@ 2010-09-09 14:12 ` Eric Sandeen
  2010-09-09 21:59   ` Brady Chang
  2010-09-09 23:44   ` Brady Chang
  2 siblings, 2 replies; 10+ messages in thread
From: Eric Sandeen @ 2010-09-09 14:12 UTC (permalink / raw)
  To: Brady Chang; +Cc: xfs

Brady Chang wrote:
> Hello All,
> I have an issue with fragmentation on a particular device
> thanks for any advice.
> 
> -Brady
> 
> I have a Dell r510 with 12 disks
> 2xraid 5 (6 disks each)
> raid group1:
> 48 GB   carved out for os mounted as /
> remaining space  2.7 TB for xfs mounted as /data1
> raid group2:
> 48 GB  for swap
> remaining space 2.7 TB for xfs mounted as /data2
> 
> The strange thing is that /data1 never gets fragmented where as /data2
> is badly fragmented.
> I believe increase allocsize would help, but not sure how to explain why
> /data2(/dev/sdd) always gets fragmented and not /data1(/dev/sdb)
> 
> It's a data warehouse application.  the I/O is balanced between /data1
> and /data2:
> output of xfs_db
> [root@sdw4 data1]# xfs_db -c frag -r /dev/sdb
> actual 14353, ideal 13702, fragmentation factor 4.54%
> [root@sdw4 data1]# xfs_db -c frag -r /dev/sdd
> actual 408674, ideal 13719, fragmentation factor 96.64%

So each file on /data2 has ~30 extents on average (actual/ideal).

> df output
> /dev/sdb              2.7T  967G  1.8T  36% /data1
> /dev/sdd              2.7T  1.1T  1.7T  39% /data2

1.1T/408674 extents is ~3M per extent, not so good.

How many files are on each fs?
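
A quick way to count, if you don't have the number handy (a sketch that
only counts regular files):

find /data1 -xdev -type f | wc -l
find /data2 -xdev -type f | wc -l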

> LABEL=/data1        /data1     xfs
>     allocsize=1048576,logbufs=8,noatime,nodiratime 0 0
> LABEL=/data2        /data2     xfs
>     allocsize=1048576,logbufs=8,noatime,nodiratime 0 0

Everything but the first option is default, BTW.

Is xfs_info output on the 2 filesystems the same?

Otherwise Emmanuel's idea is a good one: maybe it's not
as balanced as you think it is, or maybe the two filesystems have
aged differently and have different amounts of free space
(see the freesp command in xfs_db).
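
For example, something along these lines should give a free space summary
for each fs (a sketch; see the xfs_db(8) man page for the exact options):

xfs_db -r -c 'freesp -s' /dev/sdb
xfs_db -r -c 'freesp -s' /dev/sdd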

> By the way, the os is RHEL 5.5 kernel 2.6.18-194.11.1.el5

Was Red Hat support not helpful?

-Eric


* Re: fragmentation question
  2010-09-09 14:12 ` Eric Sandeen
@ 2010-09-09 21:59   ` Brady Chang
  2010-09-09 22:06     ` Eric Sandeen
  2010-09-10  3:23     ` Stan Hoeppner
  2010-09-09 23:44   ` Brady Chang
  1 sibling, 2 replies; 10+ messages in thread
From: Brady Chang @ 2010-09-09 21:59 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs



Thanks, guys, for the feedback.
iostat shows balanced I/O between the two filesystems.
Testing with RHEL 5.4, no issues there; it seems to be happening on RHEL 5.5 only.
I do not have the latest 5.5 kernel, so I'm going to upgrade to it and rerun the test.




* Re: fragmentation question
  2010-09-09 21:59   ` Brady Chang
@ 2010-09-09 22:06     ` Eric Sandeen
  2010-09-09 23:41       ` Brady Chang
  2010-09-10  3:23     ` Stan Hoeppner
  1 sibling, 1 reply; 10+ messages in thread
From: Eric Sandeen @ 2010-09-09 22:06 UTC (permalink / raw)
  To: Brady Chang; +Cc: xfs

On 09/09/2010 04:59 PM, Brady Chang wrote:
> thanks guys for the feedback.
> iostat shows balanced io between two filesystems.
> testing with RHEL 54, no issues there. seems to be happening on RHEL 55
> only.

There were no XFS changes between 5.4 and 5.5 that should be relevant,
only a single bugfix in 5.5, related to fallocate error returns.

Just FWIW.

-Eric


* Re: fragmentation question
  2010-09-09 22:06     ` Eric Sandeen
@ 2010-09-09 23:41       ` Brady Chang
  0 siblings, 0 replies; 10+ messages in thread
From: Brady Chang @ 2010-09-09 23:41 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs



Thanks.
After a couple of runs of TPC-H on RHEL 5.4, /dev/sdd is heavily fragmented,
so back to the original problem: it always happens on /dev/sdd.


[root@sdw9 data1]# xfs_db -c frag -r /dev/sdb
actual 1773, ideal 1731, fragmentation factor 2.37%

[root@sdw9 data1]# xfs_db -c frag -r /dev/sdd
actual 43384, ideal 1726, fragmentation factor 96.02%

[root@sdw9 data1]# xfs_info /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=32, agsize=22469715 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=719030880, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@sdw9 data1]# xfs_info /dev/sdd
meta-data=/dev/sdd               isize=256    agcount=32, agsize=22469715 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=719030880, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0


67 files on both /dev/sdb and /dev/sdd.



* Re: fragmentation question
  2010-09-09 14:12 ` Eric Sandeen
  2010-09-09 21:59   ` Brady Chang
@ 2010-09-09 23:44   ` Brady Chang
  1 sibling, 0 replies; 10+ messages in thread
From: Brady Chang @ 2010-09-09 23:44 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs



The xfs_info output after the TPC-H runs is the same for both filesystems:

[root@sdw9 data1]# xfs_info /dev/sdd
meta-data=/dev/sdd               isize=256    agcount=32, agsize=22469715 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=719030880, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@sdw9 data1]# xfs_info /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=32, agsize=22469715 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=719030880, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@sdw9 data1]# xfs_db -c frag -r /dev/sdb
actual 1799, ideal 1748, fragmentation factor 2.83%
[root@sdw9 data1]# xfs_db -c frag -r /dev/sdd
actual 54324, ideal 1749, fragmentation factor 96.78%



* Re: fragmentation question
  2010-09-09 21:59   ` Brady Chang
  2010-09-09 22:06     ` Eric Sandeen
@ 2010-09-10  3:23     ` Stan Hoeppner
  2010-09-10  4:57       ` Stan Hoeppner
  1 sibling, 1 reply; 10+ messages in thread
From: Stan Hoeppner @ 2010-09-10  3:23 UTC (permalink / raw)
  To: xfs

Brady Chang put forth on 9/9/2010 4:59 PM:
> thanks guys for the feedback.
> iostat shows balanced io between two filesystems.

Can we please see the "iostat -x" output for the duration of the TPC-H
run _only_?
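
Something along these lines, started right before the run and stopped right
after, would capture it (a rough sketch; the log file name is arbitrary):

iostat -x sdb sdd 10 > /tmp/iostat-tpch.log &
# ... run the TPC-H benchmark ...
kill %1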

-- 
Stan


* Re: fragmentation question
  2010-09-10  3:23     ` Stan Hoeppner
@ 2010-09-10  4:57       ` Stan Hoeppner
  0 siblings, 0 replies; 10+ messages in thread
From: Stan Hoeppner @ 2010-09-10  4:57 UTC (permalink / raw)
  To: xfs

Stan Hoeppner put forth on 9/9/2010 10:23 PM:
> Brady Chang put forth on 9/9/2010 4:59 PM:
>> thanks guys for the feedback.
>> iostat shows balanced io between two filesystems.
> 
> Can we please see the "iostat -x" output for the duration of the TPC-H
> run _only_?

What db engine are you using?  Oracle, DB2, MySQL, or PostgreSQL?

Exactly how are you instructing it to split files between /data1 and /data2?

Are you instructing your db engine to split your transaction logs and db
files equally across both filesystems?

On which filesystem are you locating your database engine scratch space
if any?

Can you run xfs_bmap on the files on each filesystem to determine which
are the most fragmented?  Doing this may/should produce the smoking gun.
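
One way to spot the worst offenders (a sketch; each file's xfs_bmap output
is one header line plus one line per extent):

find /data2 -type f | while read -r f; do
    printf '%8d %s\n' "$(( $(xfs_bmap "$f" | wc -l) - 1 ))" "$f"
done | sort -rn | head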

At this point, I'm guessing you have transaction logs and db scratch
space allocated to /data2 which is causing the heavy fragmentation.

It would be instructive if you dropped /data2 out of the picture
entirely and ran the TPC-H benchy using only /data1.  I'm sure you'll
see the fragmentation on /data1 in this case.

It seems clear your fragmentation issue is a database management issue,
not an XFS issue.  Your answers to my questions should tell us which.

-- 
Stan


