linux-kernel.vger.kernel.org archive mirror
* Re: [resent PATCH] Re: very slow parallel read performance
  2001-08-28  1:08 [resent PATCH] Re: very slow parallel read performance Dieter Nützel
@ 2001-08-28  0:05 ` Marcelo Tosatti
  2001-08-28  1:54   ` Daniel Phillips
  2001-08-28  5:01 ` Mike Galbraith
  2001-08-28 18:18 ` Andrew Morton
  2 siblings, 1 reply; 8+ messages in thread
From: Marcelo Tosatti @ 2001-08-28  0:05 UTC (permalink / raw)
  To: Dieter Nützel; +Cc: Linux Kernel List, Daniel Phillips, ReiserFS List



On Tue, 28 Aug 2001, Dieter Nützel wrote:

> [-]
> > In the real-world case we observed the readahead was actually being
> > throttled by the ftp clients.  IO request throttling on the file read
> > side would not have prevented cache from overfilling.  Once the cache
> > filled up, readahead pages started being dropped and reread, cutting
> > the server throughput by a factor of 2 or so.  On the other hand,
> > performance with no readahead was even worse.
> [-]
> 
> Would you like some numbers?

Note that increasing the readahead size on the -ac and stock trees will affect
the system differently, since their VMs use different drop-behind logic.

Could you please try the same tests with the stock tree? (2.4.10-pre and
2.4.9)




* Re: [resent PATCH] Re: very slow parallel read performance
@ 2001-08-28  1:08 Dieter Nützel
  2001-08-28  0:05 ` Marcelo Tosatti
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Dieter Nützel @ 2001-08-28  1:08 UTC (permalink / raw)
  To: Linux Kernel List; +Cc: Daniel Phillips, ReiserFS List

[-]
> In the real-world case we observed the readahead was actually being
> throttled by the ftp clients.  IO request throttling on the file read
> side would not have prevented cache from overfilling.  Once the cache
> filled up, readahead pages started being dropped and reread, cutting
> the server throughput by a factor of 2 or so.  On the other hand,
> performance with no readahead was even worse.
[-]

Would you like some numbers?

I've generated some max-readahead numbers (dbench-1.1 32 clients) with 
2.4.8-ac11,  2.4.8-ac12 (+ memory.c fix) and 2.4.8-ac12 (+ memory.c fix + low 
latency)

system:
Athlon I 550
MSI MS-6167 Rev 1.0B, AMD Irongate C4 (without bypass)
640 MB PC100-2-2-2 SDRAM
AHA-2940UW
IBM U160 DDYS 18 GB, 10,000 rpm (in UW mode)
all filesystems ReiserFS 3.6.25

* readahead does not show dramatic differences
* killall -STOP kupdated DOES

Yes, I know it is dangerous to stop kupdated, but my disk has been thrashing 
heavily (seeking like mad) since 2.4.7-ac4. killall -STOP kupdated makes it 
smooth and fast again.

Could it be that kupdated and kreiserfsd do concurrent (double) work?
The number of context switches is more than twice as high with kupdated as 
without it. Without kupdated the system feels very smooth and snappy.
The low-latency patch pushes this even further.

Regards,
	Dieter


2.4.8-ac11

max-readahead 511
Throughput 22.4059 MB/sec (NB=28.0073 MB/sec  224.059 MBit/sec)
24.780u 78.010s 3:09.54 54.2%   0+0k 0+0io 911pf+0w

max-readahead 31 (default)
Throughput 19.7815 MB/sec (NB=24.7269 MB/sec  197.815 MBit/sec)
24.690u 73.630s 3:34.55 45.8%   0+0k 0+0io 911pf+0w

max-readahead 15
Throughput 20.5266 MB/sec (NB=25.6583 MB/sec  205.266 MBit/sec)
25.430u 77.090s 3:26.79 49.5%   0+0k 0+0io 911pf+0w

max-readahead 7
Throughput 19.7186 MB/sec (NB=24.6483 MB/sec  197.186 MBit/sec)
24.950u 77.820s 3:35.23 47.7%   0+0k 0+0io 911pf+0w

max-readahead 3
Throughput 21.1795 MB/sec (NB=26.4743 MB/sec  211.795 MBit/sec)
26.020u 79.290s 3:20.45 52.5%   0+0k 0+0io 911pf+0w

max-readahead 0
Throughput 19.3769 MB/sec (NB=24.2211 MB/sec  193.769 MBit/sec)
25.070u 77.550s 3:39.00 46.8%   0+0k 0+0io 911pf+0w

killall -STOP kupdated
retry with the 2 best cases

max-readahead 3
Throughput 34.6985 MB/sec (NB=43.3731 MB/sec  346.985 MBit/sec)
24.230u 81.930s 2:02.75 86.4%   0+0k 0+0io 911pf+0w

max-readahead 511 (it is repeatable, see below)
Throughput 32.3584 MB/sec (NB=40.448 MB/sec  323.584 MBit/sec)
24.190u 86.130s 2:11.55 83.8%   0+0k 0+0io 911pf+0w

Throughput 33.28 MB/sec (NB=41.6 MB/sec  332.8 MBit/sec)
25.220u 84.260s 2:07.93 85.5%   0+0k 0+0io 911pf+0w

After (heavy) work:
Throughput 25.3106 MB/sec (NB=31.6383 MB/sec  253.106 MBit/sec)
25.370u 84.420s 2:47.91 65.3%   0+0k 0+0io 911pf+0w

After reboot:
Throughput 31.2373 MB/sec (NB=39.0466 MB/sec  312.373 MBit/sec)
25.500u 83.810s 2:16.26 80.2%   0+0k 0+0io 911pf+0w

After reboot:
Throughput 30.0666 MB/sec (NB=37.5833 MB/sec  300.666 MBit/sec)
25.020u 83.770s 2:21.50 76.8%   0+0k 0+0io 911pf+0w



2.4.8-ac12

max-readahead 31 (default)
Throughput 19.4526 MB/sec (NB=24.3157 MB/sec  194.526 MBit/sec)
24.840u 79.490s 3:38.16 47.8%   0+0k 0+0io 911pf+0w

max-readahead 511
Throughput 21.5307 MB/sec (NB=26.9134 MB/sec  215.307 MBit/sec)
25.000u 77.520s 3:17.20 51.9%   0+0k 0+0io 911pf+0w

killall -STOP kupdated

max-readahead 31 (default)
Throughput 28.5728 MB/sec (NB=35.7159 MB/sec  285.728 MBit/sec)
24.750u 88.250s 2:28.85 75.9%   0+0k 0+0io 911pf+0w

max-readahead 511
Throughput 29.5127 MB/sec (NB=36.8908 MB/sec  295.127 MBit/sec)
25.610u 86.730s 2:24.14 77.9%   0+0k 0+0io 911pf+0w



2.4.8-ac12 + The Right memory.c fix

max-readahead 31 (default)
Throughput 22.0905 MB/sec (NB=27.6131 MB/sec  220.905 MBit/sec)
25.340u 77.700s 3:12.24 53.5%   0+0k 0+0io 911pf+0w

killall -STOP kupdated

max-readahead 31 (default)
Throughput 29.2189 MB/sec (NB=36.5236 MB/sec  292.189 MBit/sec)
25.750u 82.090s 2:25.57 74.0%   0+0k 0+0io 911pf+0w



2.4.8-ac12 + The Right memory.c fix + low latency patch

max-readahead 31 (default)
Throughput 20.3505 MB/sec (NB=25.4381 MB/sec  203.505 MBit/sec)
25.430u 75.250s 3:28.58 48.2%   0+0k 0+0io 911pf+0w

killall -STOP kupdated

max-readahead 31 (default)
Throughput 29.25 MB/sec (NB=36.5625 MB/sec  292.5 MBit/sec)
24.600u 86.370s 2:25.42 76.3%   0+0k 0+0io 911pf+0w

max-readahead 511
Throughput 30.0372 MB/sec (NB=37.5465 MB/sec  300.372 MBit/sec)
25.590u 75.910s 2:21.64 71.6%   0+0k 0+0io 911pf+0w
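
For reference, a minimal shell sketch of the procedure behind runs like the
above (dbench and vmstat are invoked in the standard way; stopping kupdated is
the same trick described in the text and carries the same risk of dirty
buffers going unflushed):

  killall -STOP kupdated          # park kupdated for the "without kupdated" runs
  vmstat 1 > vmstat.log & vm=$!   # the cs column shows the context-switch rate
  time dbench 32                  # dbench-1.1, 32 clients
  kill $vm                        # stop the vmstat logger
  killall -CONT kupdated          # resume kupdated afterwards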


* Re: [resent PATCH] Re: very slow parallel read performance
  2001-08-28  0:05 ` Marcelo Tosatti
@ 2001-08-28  1:54   ` Daniel Phillips
  0 siblings, 0 replies; 8+ messages in thread
From: Daniel Phillips @ 2001-08-28  1:54 UTC (permalink / raw)
  To: Marcelo Tosatti, Dieter Nützel; +Cc: Linux Kernel List, ReiserFS List

On August 28, 2001 02:05 am, Marcelo Tosatti wrote:
> On Tue, 28 Aug 2001, Dieter Nützel wrote:
> 
> > [-]
> > > In the real-world case we observed the readahead was actually being
> > > throttled by the ftp clients.  IO request throttling on the file read
> > > side would not have prevented cache from overfilling.  Once the cache
> > > filled up, readahead pages started being dropped and reread, cutting
> > > the server throughput by a factor of 2 or so.  On the other hand,
> > > performance with no readahead was even worse.
> > [-]
> > 
> > Would you like some numbers?
> 
> Note that increasing the readahead size on the -ac and stock trees will affect
> the system differently, since their VMs use different drop-behind logic.
> 
> Could you please try the same tests with the stock tree? (2.4.10-pre and
> 2.4.9)

He'll need the proc max-readahead patch posted by Craig I. Hagan on Sunday 
under the subject "Re: [resent PATCH] Re: very slow parallel read 
performance".
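
Assuming Craig's patch exposes the knob as /proc/sys/vm/max-readahead (the
exact path is an assumption that depends on the patch), switching values
between runs would look like:

  cat /proc/sys/vm/max-readahead           # current value, 31 by default
  echo 511 > /proc/sys/vm/max-readahead    # one of the settings tested above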

There are two other big variables here: Reiserfs and dbench.  Personally, I 
question the value of doing this testing on dbench; it's too erratic.

--
Daniel


* Re: [resent PATCH] Re: very slow parallel read performance
  2001-08-28  1:08 [resent PATCH] Re: very slow parallel read performance Dieter Nützel
  2001-08-28  0:05 ` Marcelo Tosatti
@ 2001-08-28  5:01 ` Mike Galbraith
  2001-08-28  8:46   ` [reiserfs-list] " Hans Reiser
  2001-08-28 18:18 ` Andrew Morton
  2 siblings, 1 reply; 8+ messages in thread
From: Mike Galbraith @ 2001-08-28  5:01 UTC (permalink / raw)
  To: Dieter Nützel; +Cc: Linux Kernel List, Daniel Phillips, ReiserFS List

On Tue, 28 Aug 2001, Dieter Nützel wrote:

> * readahead does not show dramatic differences
> * killall -STOP kupdated DOES
>
> Yes, I know it is dangerous to stop kupdated, but my disk has been thrashing
> heavily (seeking like mad) since 2.4.7-ac4. killall -STOP kupdated makes it
> smooth and fast again.

Interesting.

A while back, I twiddled the flush logic in buffer.c a little and made
kupdated only handle light flushing.. stay out of the way when bdflush
is running.  This and some dynamic adjustment of bdflush flushsize and
not stopping flushing right _at_ (biggie) the trigger level produced
very interesting improvements.  (very marked reduction in system time
for heavy IO jobs, and large improvement in file rewrite throughput)

	-Mike



* Re: [reiserfs-list] Re: [resent PATCH] Re: very slow parallel read  performance
  2001-08-28  5:01 ` Mike Galbraith
@ 2001-08-28  8:46   ` Hans Reiser
  2001-08-28 19:17     ` Mike Galbraith
  0 siblings, 1 reply; 8+ messages in thread
From: Hans Reiser @ 2001-08-28  8:46 UTC (permalink / raw)
  To: Mike Galbraith
  Cc: Dieter Nützel, Linux Kernel List, Daniel Phillips, ReiserFS List,
	Gryaznova E.

Mike Galbraith wrote:
> 
> On Tue, 28 Aug 2001, Dieter Nützel wrote:
> 
> > * readahead does not show dramatic differences
> > * killall -STOP kupdated DOES
> >
> > Yes, I know it is dangerous to stop kupdated, but my disk has been thrashing
> > heavily (seeking like mad) since 2.4.7-ac4. killall -STOP kupdated makes it
> > smooth and fast again.
> 
> Interesting.
> 
> A while back, I twiddled the flush logic in buffer.c a little and made
> kupdated only handle light flushing.. stay out of the way when bdflush
> is running.  This and some dynamic adjustment of bdflush flushsize and
> not stopping flushing right _at_ (biggie) the trigger level produced
> very interesting improvements.  (very marked reduction in system time
> for heavy IO jobs, and large improvement in file rewrite throughput)
> 
>         -Mike


Can you send us the patch, and Elena will run some tests on it?

Hans


* Re: [resent PATCH] Re: very slow parallel read performance
  2001-08-28  1:08 [resent PATCH] Re: very slow parallel read performance Dieter Nützel
  2001-08-28  0:05 ` Marcelo Tosatti
  2001-08-28  5:01 ` Mike Galbraith
@ 2001-08-28 18:18 ` Andrew Morton
  2001-08-28 18:45   ` Hans Reiser
  2 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2001-08-28 18:18 UTC (permalink / raw)
  To: Dieter Nützel; +Cc: Linux Kernel List, Daniel Phillips, ReiserFS List

Dieter Nützel wrote:
> 
> ...
> (dbench-1.1 32 clients)
> ...
> 640 MB PC100-2-2-2 SDRAM
> ...
> * readahead does not show dramatic differences
> * killall -STOP kupdated DOES

dbench is a poor tool for evaluating VM or filesystem performance.
It deletes its own files.

If you want really good dbench numbers, you can simply install lots
of RAM and tweak the bdflush parameters thusly:

1: set nfract and nfract_sync really high, so you can use all your
   RAM for buffering dirty data.

2: Set the kupdate interval to 1000000000 to prevent kupdate from
   kicking in.

And guess what?  You can perform an entire dbench run without
touching the disk at all!  dbench deletes its own files, and
they never hit disk.
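
As a hedged sketch only (the /proc/sys/vm/bdflush field order differs between
2.4 trees, so the positions of nfract, the kupdate interval, and nfract_sync
below are assumptions to check against Documentation/sysctl/vm.txt), such a
tweak would look roughly like:

  cat /proc/sys/vm/bdflush     # note the current nine values first
  # assumed layout: nfract ndirty nrefill nref_dirt interval age_buffer
  #                 nfract_sync (remaining fields unused); interval in jiffies
  echo "95 500 64 256 1000000000 3000 95 0 0" > /proc/sys/vm/bdflush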


It gets more complex - if you leave the bdflush parameters at
default, and increase the number of dbench clients you'll reach
a point where bdflush starts kicking in to reduce the amount
of buffercache memory.  This slows the dbench clients down,
so they have less opportunity to delete data before kupdate and
bdflush write them out.  So the net effect is that the slower
you go, the more I/O you end up doing - a *lot* more.  This slows
you down further, which causes more I/O, etc.

dbench is not a benchmark.  It is really complex and really
misleading.  It is a fine stress-tester, though.

The original netbench test which dbench emulates has three phases:
startup, run and cleanup.  Throughput numbers are only quoted for
the "run" phase.  dbench is incomplete in that it reports on throughput
for all three phases.  Apparently Tridge and friends are working on
changing this, but it will still be the case that the entire test
can be optimised away, and that it is subject to the regenerative
feedback phenomenon described above.

For tuning and measuring fs and VM efficiency we need to use
simpler, more specific tools.
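
One possible example of a simpler, more targeted read test (a sketch, not
necessarily what is meant above): time a streaming read of a file larger than
RAM, so the page cache cannot hide the I/O. The file name and size here are
placeholders:

  # create a 2 GB test file once (larger than the 640 MB of RAM above)
  dd if=/dev/zero of=/tmp/bigfile bs=1024k count=2048
  # time a sequential read of it
  time dd if=/tmp/bigfile of=/dev/null bs=1024k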

-


* Re: [resent PATCH] Re: very slow parallel read performance
  2001-08-28 18:18 ` Andrew Morton
@ 2001-08-28 18:45   ` Hans Reiser
  0 siblings, 0 replies; 8+ messages in thread
From: Hans Reiser @ 2001-08-28 18:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Dieter Nützel, Linux Kernel List, Daniel Phillips, ReiserFS List

Andrew Morton wrote:
> 
> Dieter Nützel wrote:
> >
> > ...
> > (dbench-1.1 32 clients)
> > ...
> > 640 MB PC100-2-2-2 SDRAM
> > ...
> > * readahead does not show dramatic differences
> > * killall -STOP kupdated DOES
> 
> dbench is a poor tool for evaluating VM or filesystem performance.
> It deletes its own files.
> 
> If you want really good dbench numbers, you can simply install lots
> of RAM and tweak the bdflush parameters thusly:
> 
> 1: set nfract and nfract_sync really high, so you can use all your
>    RAM for buffering dirty data.
> 
> 2: Set the kupdate interval to 1000000000 to prevent kupdate from
>    kicking in.
> 
> And guess what?  You can perform an entire dbench run without
> touching the disk at all!  dbench deletes its own files, and
> they never hit disk.
> 
> It gets more complex - if you leave the bdflush parameters at
> default, and increase the number of dbench clients you'll reach
> a point where bdflush starts kicking in to reduce the amount
> of buffercache memory.  This slows the dbench clients down,
> so they have less opportunity to delete data before kupdate and
> bdflush write them out.  So the net effect is that the slower
> you go, the more I/O you end up doing - a *lot* more.  This slows
> you down further, which causes more I/O, etc.
> 
> dbench is not a benchmark.  It is really complex and really
> misleading.  It is a fine stress-tester, though.
> 
> The original netbench test which dbench emulates has three phases:
> startup, run and cleanup.  Throughput numbers are only quoted for
> the "run" phase.  dbench is incomplete in that it reports on throughput
> for all three phases.  Apparently Tridge and friends are working on
> changing this, but it will still be the case that the entire test
> can be optimised away, and that it is subject to the regenerative
> feedback phenomenon described above.
> 
> For tuning and measuring fs and VM efficiency we need to use
> simpler, more specific tools.
> 
> -

I would encourage you to check out the reiser_fract_tree program, which is at
the heart of mongo.pl.  It generates lots of files using a fractal algorithm for
size and number per directory, and I think it reflects real-world statistics
decently.  You can specify mean file size, max file size, mean number of files
per directory, and max number of files per directory; check it out:
www.namesys.com/benchmarks.html

It just generates file sets, which is fine for write performance testing like
you are doing.  Mongo.pl can do reads and copies and stuff using those file
sets.

Hans


* Re: [reiserfs-list] Re: [resent PATCH] Re: very slow parallel read performance
  2001-08-28  8:46   ` [reiserfs-list] " Hans Reiser
@ 2001-08-28 19:17     ` Mike Galbraith
  0 siblings, 0 replies; 8+ messages in thread
From: Mike Galbraith @ 2001-08-28 19:17 UTC (permalink / raw)
  To: Hans Reiser
  Cc: Dieter Nützel, Linux Kernel List, Daniel Phillips, ReiserFS List,
	Gryaznova E.

On Tue, 28 Aug 2001, Hans Reiser wrote:

> Mike Galbraith wrote:
> >
> > On Tue, 28 Aug 2001, Dieter Nützel wrote:
> >
> > > * readahead does not show dramatic differences
> > > * killall -STOP kupdated DOES
> > >
> > > Yes, I know it is dangerous to stop kupdated, but my disk has been thrashing
> > > heavily (seeking like mad) since 2.4.7-ac4. killall -STOP kupdated makes it
> > > smooth and fast again.
> >
> > Interesting.
> >
> > A while back, I twiddled the flush logic in buffer.c a little and made
> > kupdated only handle light flushing.. stay out of the way when bdflush
> > is running.  This and some dynamic adjustment of bdflush flushsize and
> > not stopping flushing right _at_ (biggie) the trigger level produced
> > very interesting improvements.  (very marked reduction in system time
> > for heavy IO jobs, and large improvement in file rewrite throughput)
> >
> >         -Mike
>
>
> Can you send us the patch, and Elena will run some tests on it?

I think I posted the patch once (including a dumb typo), and I know I sent
it to a couple of folks to try if they wanted, but I didn't keep it.

The specific patch is no longer germane.. there was a large (more sensible)
change to the flush logic recently.  The interesting part is the kupdated/VM
interaction.. I saw it getting in the way here (so I whittled it down to
size.. made it small), and some posts I've seen seem to indicate the same.

(The "biggie" thing is what leads to the rewrite throughput increase.
Whacking kupdated only removes a noise source.)

	-Mike


