* Production comparison between 2.4.27 and 2.6.8.1
@ 2004-08-21 17:25 Massimo Cetra
  2004-08-22  1:33 ` Nick Piggin
  0 siblings, 1 reply; 9+ messages in thread
From: Massimo Cetra @ 2004-08-21 17:25 UTC (permalink / raw)
  To: linux-kernel


Hi everybody.

#***********************************************************

Environment:
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 6
model           : 10
model name      : AMD Athlon(tm) XP 3000+
stepping        : 0
cpu MHz         : 2091.477

#***********************************************************
             total       used       free     shared    buffers     cached
Mem:       1030844     258500     772344          0      36924     167092
-/+ buffers/cache:      54484     976360
Swap:      2056304          0    2056304

#***********************************************************
# lspci
0000:00:00.0 Host bridge: nVidia Corporation nForce2 AGP (different version?) (rev c1)
0000:00:00.1 RAM memory: nVidia Corporation nForce2 Memory Controller 1 (rev c1)
0000:00:00.2 RAM memory: nVidia Corporation nForce2 Memory Controller 4 (rev c1)
0000:00:00.3 RAM memory: nVidia Corporation nForce2 Memory Controller 3 (rev c1)
0000:00:00.4 RAM memory: nVidia Corporation nForce2 Memory Controller 2 (rev c1)
0000:00:00.5 RAM memory: nVidia Corporation nForce2 Memory Controller 5 (rev c1)
0000:00:01.0 ISA bridge: nVidia Corporation nForce2 ISA Bridge (rev a4)
0000:00:01.1 SMBus: nVidia Corporation nForce2 SMBus (MCP) (rev a2)
0000:00:02.0 USB Controller: nVidia Corporation nForce2 USB Controller (rev a4)
0000:00:02.1 USB Controller: nVidia Corporation nForce2 USB Controller (rev a4)
0000:00:02.2 USB Controller: nVidia Corporation nForce2 USB Controller (rev a4)
0000:00:04.0 Ethernet controller: nVidia Corporation nForce2 Ethernet Controller (rev a1)
0000:00:08.0 PCI bridge: nVidia Corporation nForce2 External PCI Bridge (rev a3)
0000:00:09.0 IDE interface: nVidia Corporation nForce2 IDE (rev a2)
0000:00:1e.0 PCI bridge: nVidia Corporation nForce2 AGP (rev c1)
0000:01:04.0 Ethernet controller: Marvell Technology Group Ltd. Yukon Gigabit Ethernet 10/100/1000Base-T Adapter (rev 13)
0000:01:0b.0 RAID bus controller: Silicon Image, Inc. (formerly CMD Technology Inc) SiI 3112 [SATALink/SATARaid] Serial ATA Controller (rev 02)
0000:03:00.0 VGA compatible controller: nVidia Corporation NV11 [GeForce2 MX/MX 400] (rev b2)

#***********************************************************

The distro is Debian Woody, with all the packages needed to run 2.6
backported.

I used PostgreSQL 7.4.3 to run some tests on a server which will go
into production shortly.

The test was really simple:

dropdb mydb
createdb mydb
time psql -U blus mydb <schema.sql
time psql -U blus mydb <data.sql

#***********************************************************

I tried both vanilla 2.6.8.1 and 2.4.27 with the -lck patches applied,
and ran the same test under each kernel.

Tests were run:
- on a raid1 partition across 2 Serial ATA disks (ext3, software RAID)
- on a non-RAID partition on /dev/sda0 (xfs)

(only the postgres data was moved from the RAID ext3 partition to XFS)


My purpose was merely to measure the time difference between the two
kernels when performing this task.

Case 1a is 2.4.27-lck1 with raid1 ext3
Case 1b is 2.4.27-lck1 without raid on xfs
Case 2a is 2.6.8.1 with raid1 ext3
Case 2b is 2.6.8.1 without raid on xfs
Results were:

A) creating the schema (which involves creating tables and indexes)
1a:
  real    0m1.312s
  user    0m0.030s
  sys     0m0.008s
1b:
  real    0m0.508s
  user    0m0.024s
  sys     0m0.012s
2a:
  real    0m0.941s
  user    0m0.025s
  sys     0m0.010s
2b:
  real    0m0.560s
  user    0m0.024s
  sys     0m0.005s

B) importing the data (which involves both writing data to disk and
recalculating indexes)
1a:
  real    4m12.757s
  user    0m3.376s
  sys     0m1.700s
1b:
  real    1m0.467s
  user    0m3.290s
  sys     0m1.646s
2a:
  real    2m42.861s
  user    0m3.590s
  sys     0m1.523s
2b:
  real    1m30.746s
  user    0m3.255s
  sys     0m1.501s

#**********************************************
hdparm (v5.5) shows:
# hdparm -Tt /dev/sda

2.4.27:
/dev/sda:
 Timing buffer-cache reads:   2188 MB in  2.00 seconds = 1094.00 MB/sec
 Timing buffered disk reads:  164 MB in  3.02 seconds =  54.34 MB/sec

2.6.8.1:
/dev/sda:
 Timing buffer-cache reads:   2176 MB in  2.00 seconds = 1087.08 MB/sec
 Timing buffered disk reads:  136 MB in  3.04 seconds =  44.77 MB/sec

#**********************************************
This is my first experience with the 2.6 kernel branch; I am trying to
figure out whether it performs well enough to switch everything to it
in production, so my ideas may be wrong...

The RAID results may be skewed by the overhead of md resync (and RAID
is probably better on 2.6). However, hdparm suggests that libata
performs better on 2.4, and the XFS tests show that 2.4 outperforms
2.6; in my opinion the difference is not down to libata alone.

What is your opinion?
What can I try to improve performance?


Regards,

Ing. Massimo Cetra
------------------------------------
Navynet S.r.l
Surfing the network
Via di peretola, 93 - 50145 Firenze 
Tel. 055317634 



* Re: Production comparison between 2.4.27 and 2.6.8.1
  2004-08-21 17:25 Production comparison between 2.4.27 and 2.6.8.1 Massimo Cetra
@ 2004-08-22  1:33 ` Nick Piggin
  2004-08-22 15:43   ` Massimo Cetra
                     ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Nick Piggin @ 2004-08-22  1:33 UTC (permalink / raw)
  To: Massimo Cetra; +Cc: linux-kernel

Massimo Cetra wrote:
> [hardware details and benchmark numbers snipped]
>
> This is my first experience with the 2.6 kernel branch; I am trying
> to figure out whether it performs well enough to switch everything to
> it in production, so my ideas may be wrong...
>
> The RAID results may be skewed by the overhead of md resync (and RAID
> is probably better on 2.6). However, hdparm suggests that libata
> performs better on 2.4, and the XFS tests show that 2.4 outperforms
> 2.6; in my opinion the difference is not down to libata alone.
>
> What is your opinion?
> What can I try to improve performance?
>

I wouldn't worry too much about hdparm measurements. If you want to
test the streaming throughput of the disk, run dd if=big-file of=/dev/null
or a large write+sync.
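
For example (the file path and sizes here are placeholders; bs=1M just
keeps dd's own overhead low):

# streaming read of an existing large file
time dd if=/path/to/big-file of=/dev/null bs=1M

# streaming write of 1 GB followed by a sync, timed together
time sh -c "dd if=/dev/zero of=/tmp/bigfile bs=1M count=1024; sync"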

Regarding your worse non-RAID XFS database results, try booting 2.6
with elevator=deadline and test again. If it is still worse, are you
using command queueing (TCQ) on your disks?
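
With GRUB, for example, the parameter goes on the kernel line (the
image and root device below are just placeholders; with LILO use
append="elevator=deadline"):

# /boot/grub/menu.lst
kernel /vmlinuz-2.6.8.1 root=/dev/sda1 ro elevator=deadline

# after rebooting, check which IO scheduler was selected:
dmesg | grep -i scheduler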


* RE: Production comparison between 2.4.27 and 2.6.8.1
  2004-08-22  1:33 ` Nick Piggin
@ 2004-08-22 15:43   ` Massimo Cetra
  2004-08-22 16:54   ` Massimo Cetra
  2004-08-23 11:46   ` Massimo Cetra
  2 siblings, 0 replies; 9+ messages in thread
From: Massimo Cetra @ 2004-08-22 15:43 UTC (permalink / raw)
  To: 'Nick Piggin'; +Cc: linux-kernel

Nick Piggin wrote:

> I wouldn't worry too much about hdparm measurements. If you 
> want to test the streaming throughput of the disk, run dd 
> if=big-file of=/dev/null or a large write+sync.

Created a big file:
 -rw-r--r--    1 root     root     1073740800 Aug 22 17:22 /testfile
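
(Such a file can be created with, e.g.:

dd if=/dev/zero of=/testfile bs=1M count=1024 && sync

which puts about 1 GB of real data on disk before the read test.)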

time dd if=/testfile of=/dev/null gives:
On 2.6.8.1 ext3 raid
  real    0m11.493s
  user    0m0.657s
  sys     0m2.796s
On 2.6.8.1 xfs:
  real    0m18.214s
  user    0m0.697s
  sys     0m3.778s

The 2.6.8.1 tests were done with elevator=deadline.

On 2.4.27 ext3 raid:
  real    0m20.513s
  user    0m0.704s
  sys     0m2.626s

On 2.4.27 xfs:
  real    0m28.414s
  user    0m0.686s
  sys     0m3.320s

So it seems that disk read access is better on the 2.6 tree.


> Regarding your worse non-RAID XFS database results, try 
> booting 2.6 with elevator=deadline and test again. 

These are the results obtained with deadline:

filippo:~# dmesg |grep deadline
Using deadline io scheduler

A) [schema]
2b) 2.6.8.1 and xfs
  real    0m0.551s
  user    0m0.027s
  sys     0m0.012s

B) [Importing data]
2b) 2.6.8.1 and xfs
  real    1m1.474s
  user    0m3.281s
  sys     0m1.505s

It seems performance does not improve.

I have tried other tests. With an ext2 filesystem the results are:

A)
1c) 2.4.27 and ext2 (no raid)
  real    0m0.625s
  user    0m0.028s
  sys     0m0.018s
2c) 2.6.8.1 and ext2 (no raid)
  real    0m1.667s
  user    0m0.026s
  sys     0m0.010s
B)
1c) 2.4.27 and ext2 (no raid)
  real    1m28.542s
  user    0m3.232s
  sys     0m1.384s
2c) 2.6.8.1 and ext2
  real    1m30.200s
  user    0m3.304s
  sys     0m1.461s

Still, even with ext2, 2.4.27 performs much better with postgres (and
likely with other databases).

I have no idea how to improve this.

> If it is still worse, are you using command queueing (TCQ) on your
> disks?

How can I check?


 Massimo Cetra




* RE: Production comparison between 2.4.27 and 2.6.8.1
  2004-08-22  1:33 ` Nick Piggin
  2004-08-22 15:43   ` Massimo Cetra
@ 2004-08-22 16:54   ` Massimo Cetra
  2004-08-23 11:46   ` Massimo Cetra
  2 siblings, 0 replies; 9+ messages in thread
From: Massimo Cetra @ 2004-08-22 16:54 UTC (permalink / raw)
  To: 'Nick Piggin'; +Cc: linux-kernel

Nick Piggin wrote:
> I wouldn't worry too much about hdparm measurements. If you 
> want to test the streaming throughput of the disk, run dd 
> if=big-file of=/dev/null or a large write+sync.
> 
> Regarding your worse non-RAID XFS database results, try booting 2.6
> with elevator=deadline and test again. If it is still worse, are you
> using command queueing (TCQ) on your disks?

I did another test. This time I created a 256 MB ramdisk, formatted it
as ext3, and mounted it as the data partition.
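
Roughly like this (ramdisk_size on the kernel command line is in KB,
so 262144 gives 256 MB; the mount point below is only an example):

# boot with ramdisk_size=262144, then:
mke2fs -j /dev/ram0                      # -j adds the ext3 journal
mount /dev/ram0 /var/lib/postgres/data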

Results are the following:
2.6.8.1:
A)
real    0m0.437s
user    0m0.036s
sys     0m0.013s

B)
real    0m37.307s
user    0m3.212s
sys     0m1.287s


2.4.27:
A)
real    0m0.437s
user    0m0.024s
sys     0m0.010s

B)
real    0m38.180s
user    0m2.950s
sys     0m1.602s


In this case the results are comparable. So where does the difference
come from? 2.6 performs better reading from disk, and taking PCI, SATA
and the disks themselves out of the test makes 2.4 and 2.6 perform the
same way.

Hope this helps.

Massimo Cetra






* RE: Production comparison between 2.4.27 and 2.6.8.1
  2004-08-22  1:33 ` Nick Piggin
  2004-08-22 15:43   ` Massimo Cetra
  2004-08-22 16:54   ` Massimo Cetra
@ 2004-08-23 11:46   ` Massimo Cetra
  2004-08-24  2:05     ` Nick Piggin
  2 siblings, 1 reply; 9+ messages in thread
From: Massimo Cetra @ 2004-08-23 11:46 UTC (permalink / raw)
  To: 'Nick Piggin'; +Cc: linux-kernel

Nick Piggin wrote:
> > [benchmark summary and questions snipped]
> 
> I wouldn't worry too much about hdparm measurements. If you 
> want to test the streaming throughput of the disk, run dd 
> if=big-file of=/dev/null or a large write+sync.
> 
> Regarding your worse non-RAID XFS database results, try booting 2.6
> with elevator=deadline and test again. If it is still worse, are you
> using command queueing (TCQ) on your disks?


I also tried 2.6.8.1-mm and 2.6.8.1-ck: no performance improvement.

From Documentation/block/as-iosched.txt I read:

#--------------------------------------
Attention! Database servers, especially those using "TCQ" disks should
investigate performance with the 'deadline' IO scheduler. Any system
with high disk performance requirements should do so, in fact.

If you see unusual performance characteristics of your disk systems, or
you see big performance regressions versus the deadline scheduler,
please email me. Database users don't bother unless you're willing to
test a lot of patches from me ;) its a known issue.
#--------------------------------------

So it is apparently a known issue that 2.6 performance with databases
and heavy disk access suffers. I don't believe the 2.6.x tree performs
as well as 2.4.x(-lck) on server tasks.

Is this issue being analyzed?
Can we hope for an improvement sometime?
Or will I have to use 2.4 to get good performance?

Max






* Re: Production comparison between 2.4.27 and 2.6.8.1
  2004-08-23 11:46   ` Massimo Cetra
@ 2004-08-24  2:05     ` Nick Piggin
  2004-08-24 14:15       ` Massimo Cetra
  0 siblings, 1 reply; 9+ messages in thread
From: Nick Piggin @ 2004-08-24  2:05 UTC (permalink / raw)
  To: Massimo Cetra; +Cc: linux-kernel

Massimo Cetra wrote:
> [benchmark discussion and as-iosched.txt quote snipped]
>
> So it is apparently a known issue that 2.6 performance with databases
> and heavy disk access suffers. I don't believe the 2.6.x tree performs
> as well as 2.4.x(-lck) on server tasks.
>
> Is this issue being analyzed?
> Can we hope for an improvement sometime?
> Or will I have to use 2.4 to get good performance?
>

You booted with elevator=deadline and things still didn't improve
though, correct? If so, then the problem should be found and fixed.


* RE: Production comparison between 2.4.27 and 2.6.8.1
  2004-08-24  2:05     ` Nick Piggin
@ 2004-08-24 14:15       ` Massimo Cetra
  2004-08-25  2:28         ` Nick Piggin
  0 siblings, 1 reply; 9+ messages in thread
From: Massimo Cetra @ 2004-08-24 14:15 UTC (permalink / raw)
  To: 'Nick Piggin'; +Cc: linux-kernel

Nick Piggin wrote:

> > Is this issue being analyzed?
> > Can we hope for an improvement sometime?
> > Or will I have to use 2.4 to get good performance?
> > 
> 
> You booted with elevator=deadline and things still didn't 
> improve though, correct? If so, then the problem should be 
> found and fixed.

Yes, that's correct.
Thanks. I'll try the next kernel versions.
I don't think 2.6.9-rc1 includes anything regarding this issue.


Max



* Re: Production comparison between 2.4.27 and 2.6.8.1
  2004-08-24 14:15       ` Massimo Cetra
@ 2004-08-25  2:28         ` Nick Piggin
  0 siblings, 0 replies; 9+ messages in thread
From: Nick Piggin @ 2004-08-25  2:28 UTC (permalink / raw)
  To: Massimo Cetra; +Cc: linux-kernel

Massimo Cetra wrote:
> Nick Piggin wrote:
> 
> 
>>>Is this issue being analyzed?
>>>Can we hope for an improvement sometime?
>>>Or will I have to use 2.4 to get good performance?
>>>
>>
>>You booted with elevator=deadline and things still didn't 
>>improve though, correct? If so, then the problem should be 
>>found and fixed.
> 
> 
> Yes, that's correct.
> Thanks. I'll try the next kernel versions.
> I don't think 2.6.9-rc1 includes anything regarding this issue.
> 

OK, can you try testing different values of
/sys/block/???/queue/read_ahead_kb

and

/sys/block/???/queue/nr_requests

Replace '???' with the device name, and set it for every disk involved.


First, try setting read_ahead_kb to 0, then 256.
If those values don't change anything, set nr_requests to 1024.
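
For example, for /dev/sda (repeat for each disk, re-running the test
after every change):

echo 0    > /sys/block/sda/queue/read_ahead_kb   # readahead off
echo 256  > /sys/block/sda/queue/read_ahead_kb   # then a larger value
echo 1024 > /sys/block/sda/queue/nr_requests     # deeper request queue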



* Re: Production comparison between 2.4.27 and 2.6.8.1
@ 2004-08-25  7:23 rwhron
  0 siblings, 0 replies; 9+ messages in thread
From: rwhron @ 2004-08-25  7:23 UTC (permalink / raw)
  To: mcetra; +Cc: linux-kernel

> What can I try to improve performance?

In benchmarks I've done, XFS was helped significantly by the mkfs/mount
options in the XFS FAQ (look for the dbench question):

http://oss.sgi.com/projects/xfs/faq.html

mkfs -t xfs -l size=32768b -f /dev/device
mount -t xfs -o logbufs=8,logbsize=32768 /dev/device /mountpoint
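
Larger and more numerous log buffers mainly help metadata-heavy
workloads (like the schema creation step above). To confirm the mount
options took effect, something like:

mount | grep xfs

should list logbufs=8,logbsize=32768 for the filesystem.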

-- 
Randy Hron
http://home.earthlink.net/~rwhron/kernel/bigbox.html


