* Terrible VM in 2.4.11+?
@ 2002-07-08 22:11 Lukas Hejtmanek
  2002-07-08 22:37 ` Austin Gonyou
  2002-07-08 23:27 ` khromy
  0 siblings, 2 replies; 16+ messages in thread
From: Lukas Hejtmanek @ 2002-07-08 22:11 UTC (permalink / raw)
  To: linux-kernel


Hello,

as of the latest stable version, 2.4.18, VM management does not work properly
for me. I have an Athlon system with 512MB of RAM and a 2.4.18 kernel without
any additional patches.

I run the following sequence of commands:

dd if=/dev/zero of=/tmp bs=1M count=512 &
find / -print &
 { wait a few seconds }
sync

At this point find stops completely, or at least almost stops.

The same happens if I copy from /dev/hdf to /dev/hda. xosview shows only reading
or only writing (while bdflush is flushing buffers); it never shows reading and
writing in parallel. /proc/sys/* has default settings. I do not understand why
the I/O system stalls while bdflush is flushing buffers, so that no reading can
be done.
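
For reference, a simple way to watch whether reads and writes really overlap
during such a test is to keep an I/O monitor running while the workload is
generated; this is only a generic sketch (the file name /tmp/bigfile is just an
example, not the path from the report):

vmstat 1            # in one terminal: watch the bi/bo (blocks in/out) columns
# in another terminal:
dd if=/dev/zero of=/tmp/bigfile bs=1M count=512 &
find / -print > /dev/null &
sleep 5; sync       # reads should keep progressing while dirty data is flushed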

-- 
Lukáš Hejtmánek

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+?
  2002-07-08 22:11 Terrible VM in 2.4.11+? Lukas Hejtmanek
@ 2002-07-08 22:37 ` Austin Gonyou
  2002-07-08 22:50   ` Lukas Hejtmanek
  2002-07-08 23:27 ` khromy
  1 sibling, 1 reply; 16+ messages in thread
From: Austin Gonyou @ 2002-07-08 22:37 UTC (permalink / raw)
  To: Lukas Hejtmanek; +Cc: linux-kernel

I do things like this regularly and have been using 2.4.10+ kernels on many
types of boxes, but I have yet to see this behavior. I've done the same type of
test with block sizes from 16k up to 10M and not had this problem. I usually
test I/O on SCSI, but I have also tested on IDE, since we use many IDE systems
for developers. I did find, though, that using something like LVM and
overwhelming it causes bdflush to go crazy; then I can hit the wall you refer
to. When bdflush is too busy, it does in fact seem to *lock* the system, but of
course it's just bdflush doing its thing. If I modify the bdflush parameters,
things work just fine, or at least become usable.



On Mon, 2002-07-08 at 17:11, Lukas Hejtmanek wrote:
> Hello,
> 
> as of the latest stable version, 2.4.18, VM management does not work properly
> for me. I have an Athlon system with 512MB of RAM and a 2.4.18 kernel without
> any additional patches.
> 
> I run the following sequence of commands:
> 
> dd if=/dev/zero of=/tmp bs=1M count=512 &
> find / -print &
>  { wait a few seconds }
> sync
> 
> At this point find stops completely, or at least almost stops.
> 
> The same happens if I copy from /dev/hdf to /dev/hda. xosview shows only reading
> or only writing (while bdflush is flushing buffers); it never shows reading and
> writing in parallel. /proc/sys/* has default settings. I do not understand why
> the I/O system stalls while bdflush is flushing buffers, so that no reading can be done.
> 
> -- 
> Lukáš Hejtmánek
-- 
Austin Gonyou <austin@digitalroadkill.net>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+?
  2002-07-08 22:37 ` Austin Gonyou
@ 2002-07-08 22:50   ` Lukas Hejtmanek
  2002-07-08 22:58     ` J.A. Magallon
  2002-07-08 23:04     ` Austin Gonyou
  0 siblings, 2 replies; 16+ messages in thread
From: Lukas Hejtmanek @ 2002-07-08 22:50 UTC (permalink / raw)
  To: Austin Gonyou; +Cc: linux-kernel


Yes, I know a few people who report that it works well for them. However, for
me and some others it does not. The system is Red Hat 7.2 on an ASUS A7V
motherboard, with /dev/hda on a Promise controller. The following helps a lot:

while true; do sync; sleep 3; done

How did you modify the bdflush parameters? I do not want to disable I/O
buffering or the disk cache.

Another thing to note: the X server almost always has some pages swapped out to
the swap space on /dev/hda. When bdflush is flushing buffers, the X server stops
because it has no access to the swap area during the I/O stall.
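
For reference, where swap lives and how much of it is in use can be checked
quickly; the partition names below are only examples, not the actual layout of
this machine:

cat /proc/swaps     # lists active swap areas and their current usage
swapon -s           # same information via util-linux
# Moving swap off the busy disk would look roughly like this (example devices):
#   swapoff /dev/hda2 && mkswap /dev/hdf1 && swapon /dev/hdf1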

On Mon, Jul 08, 2002 at 05:37:02PM -0500, Austin Gonyou wrote:
> I do things like this regularly and have been using 2.4.10+ kernels on many
> types of boxes, but I have yet to see this behavior. I've done the same type of
> test with block sizes from 16k up to 10M and not had this problem. I usually
> test I/O on SCSI, but I have also tested on IDE, since we use many IDE systems
> for developers. I did find, though, that using something like LVM and
> overwhelming it causes bdflush to go crazy; then I can hit the wall you refer
> to. When bdflush is too busy, it does in fact seem to *lock* the system, but of
> course it's just bdflush doing its thing. If I modify the bdflush parameters,
> things work just fine, or at least become usable.

-- 
Lukáš Hejtmánek

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+?
  2002-07-08 22:50   ` Lukas Hejtmanek
@ 2002-07-08 22:58     ` J.A. Magallon
  2002-07-08 23:58       ` Lukas Hejtmanek
                         ` (2 more replies)
  2002-07-08 23:04     ` Austin Gonyou
  1 sibling, 3 replies; 16+ messages in thread
From: J.A. Magallon @ 2002-07-08 22:58 UTC (permalink / raw)
  To: Lukas Hejtmanek; +Cc: Austin Gonyou, linux-kernel


On 2002.07.09 Lukas Hejtmanek wrote:
>
>Yes, I know a few people who report that it works well for them. However, for
>me and some others it does not. The system is Red Hat 7.2 on an ASUS A7V
>motherboard, with /dev/hda on a Promise controller. The following helps a lot:
>
>while true; do sync; sleep 3; done
>
>How did you modify the bdflush parameters? I do not want to disable I/O
>buffering or the disk cache.
>
>Another thing to note: the X server almost always has some pages swapped out to
>the swap space on /dev/hda. When bdflush is flushing buffers, the X server stops
>because it has no access to the swap area during the I/O stall.
>
>On Mon, Jul 08, 2002 at 05:37:02PM -0500, Austin Gonyou wrote:
>> I do things like this regularly and have been using 2.4.10+ kernels on many
>> types of boxes, but I have yet to see this behavior. I've done the same type of
>> test with block sizes from 16k up to 10M and not had this problem. I usually
>> test I/O on SCSI, but I have also tested on IDE, since we use many IDE systems
>> for developers. I did find, though, that using something like LVM and
>> overwhelming it causes bdflush to go crazy; then I can hit the wall you refer
>> to. When bdflush is too busy, it does in fact seem to *lock* the system, but of
>> course it's just bdflush doing its thing. If I modify the bdflush parameters,
>> things work just fine, or at least become usable.
>

Seriously, if you have that kind of problem, take the -aa kernel and use it.
I use it regularly and it behaves as one would expect, and fast.
And please report your results...
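
Getting and applying an -aa patch is the usual kernel-patch routine; a rough
sketch, assuming the patch is applied to the base tree it was generated against
(the file and directory names below are examples only):

cd /usr/src/linux-2.4.19-pre9              # base tree the -aa patch targets
bzcat ../2.4.19pre9aa2.bz2 | patch -p1     # or: zcat patch.gz | patch -p1
make oldconfig && make dep && make bzImage modules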

-- 
J.A. Magallon             \   Software is like sex: It's better when it's free
mailto:jamagallon@able.es  \                    -- Linus Torvalds, FSF T-shirt
Linux werewolf 2.4.19-rc1-jam1, Mandrake Linux 8.3 (Cooker) for i586
gcc (GCC) 3.1.1 (Mandrake Linux 8.3 3.1.1-0.7mdk)

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+?
  2002-07-08 22:50   ` Lukas Hejtmanek
  2002-07-08 22:58     ` J.A. Magallon
@ 2002-07-08 23:04     ` Austin Gonyou
  1 sibling, 0 replies; 16+ messages in thread
From: Austin Gonyou @ 2002-07-08 23:04 UTC (permalink / raw)
  To: Lukas Hejtmanek; +Cc: linux-kernel

Here are the params I'm running, but this is with an -aa tree, just FYI.

vm.bdflush = 60 1000 0 0 1000 800 60 50 0
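
For anyone who wants to try the same values: on 2.4 kernels the bdflush
parameters can be applied either through sysctl or by writing the nine fields
straight into /proc (a sketch; the values are just the ones quoted above, tune
to taste):

sysctl -w vm.bdflush="60 1000 0 0 1000 800 60 50 0"
echo "60 1000 0 0 1000 800 60 50 0" > /proc/sys/vm/bdflush   # equivalent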


On Mon, 2002-07-08 at 17:50, Lukas Hejtmanek wrote:
> Yes, I know a few people who report that it works well for them. However, for
> me and some others it does not. The system is Red Hat 7.2 on an ASUS A7V
> motherboard, with /dev/hda on a Promise controller. The following helps a lot:
> 
> while true; do sync; sleep 3; done
> 
> How did you modify the bdflush parameters? I do not want to disable I/O
> buffering or the disk cache.
> 
> Another thing to note: the X server almost always has some pages swapped out to
> the swap space on /dev/hda. When bdflush is flushing buffers, the X server stops
> because it has no access to the swap area during the I/O stall.
> 
> On Mon, Jul 08, 2002 at 05:37:02PM -0500, Austin Gonyou wrote:
> > I do things like this regularly and have been using 2.4.10+ kernels on many
> > types of boxes, but I have yet to see this behavior. I've done the same type of
> > test with block sizes from 16k up to 10M and not had this problem. I usually
> > test I/O on SCSI, but I have also tested on IDE, since we use many IDE systems
> > for developers. I did find, though, that using something like LVM and
> > overwhelming it causes bdflush to go crazy; then I can hit the wall you refer
> > to. When bdflush is too busy, it does in fact seem to *lock* the system, but of
> > course it's just bdflush doing its thing. If I modify the bdflush parameters,
> > things work just fine, or at least become usable.
> 
> -- 
> Lukáš Hejtmánek
-- 
Austin Gonyou <austin@digitalroadkill.net>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+?
  2002-07-08 22:11 Terrible VM in 2.4.11+? Lukas Hejtmanek
  2002-07-08 22:37 ` Austin Gonyou
@ 2002-07-08 23:27 ` khromy
  1 sibling, 0 replies; 16+ messages in thread
From: khromy @ 2002-07-08 23:27 UTC (permalink / raw)
  To: Lukas Hejtmanek; +Cc: linux-kernel

On Tue, Jul 09, 2002 at 12:11:37AM +0200, Lukas Hejtmanek wrote:
> 
> Hello,
> 
> as of the latest stable version, 2.4.18, VM management does not work properly
> for me. I have an Athlon system with 512MB of RAM and a 2.4.18 kernel without
> any additional patches.
> 
> I run the following sequence of commands:
> 
> dd if=/dev/zero of=/tmp bs=1M count=512 &
> find / -print &
>  { wait a few seconds }
> sync
> 
> At this point find stops completely, or at least almost stops.
> 
> The same happens if I copy from /dev/hdf to /dev/hda. xosview shows only reading or only

Wow, this is the same problem I was having!  Check out the thread 'sync
slowness. ext3 on VIA vt82c686b'.  Some said it was my hard drive, but
this morning I noticed the problem is gone!

After I copy the file, sync returns right away.  I'm running
2.4.19-rc1aa1 now.

-- 
L1:	khromy		;khromy(at)lnuxlab.ath.cx

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+?
  2002-07-08 22:58     ` J.A. Magallon
@ 2002-07-08 23:58       ` Lukas Hejtmanek
  2002-07-09 10:48       ` Terrible VM in 2.4.11+ again? Lukas Hejtmanek
  2002-07-10  8:43       ` Terrible VM in 2.4.11+? Thomas Tonino
  2 siblings, 0 replies; 16+ messages in thread
From: Lukas Hejtmanek @ 2002-07-08 23:58 UTC (permalink / raw)
  To: J.A. Magallon; +Cc: Austin Gonyou, linux-kernel

On Tue, Jul 09, 2002 at 12:58:16AM +0200, J.A. Magallon wrote:
> 
> Seriously, if you have that kind of problem, take the -aa kernel and use it.
> I use it regularly and it behaves as one would expect, and fast.
> And please report your results...

Great, the -aa tree works perfectly for me. It was a little bit tricky to get
hold of that tree, but now I think it works fine.

-- 
Lukáš Hejtmánek

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+ again?
  2002-07-08 22:58     ` J.A. Magallon
  2002-07-08 23:58       ` Lukas Hejtmanek
@ 2002-07-09 10:48       ` Lukas Hejtmanek
  2002-07-10 16:34         ` Andrea Arcangeli
  2002-07-10  8:43       ` Terrible VM in 2.4.11+? Thomas Tonino
  2 siblings, 1 reply; 16+ messages in thread
From: Lukas Hejtmanek @ 2002-07-09 10:48 UTC (permalink / raw)
  To: J.A. Magallon; +Cc: Austin Gonyou, linux-kernel

On Tue, Jul 09, 2002 at 12:58:16AM +0200, J.A. Magallon wrote:
> Seriously, if you have that kind of problem, take the -aa kernel and use it.
> I use it regularly and it behaves as one would expect, and fast.
> And please report your results...

I've tried 2.4.19rc1aa2; it swaps even though I have 512MB of RAM, and xcdroast
with a SCSI-IDE emulation CD writer reports this to syslog:
Jul  9 12:45:02 hell kernel: __alloc_pages: 3-order allocation failed
(gfp=0x20/0)
Jul  9 12:45:02 hell kernel: __alloc_pages: 3-order allocation failed
(gfp=0x20/0)
Jul  9 12:45:02 hell kernel: __alloc_pages: 2-order allocation failed
(gfp=0x20/0)
Jul  9 12:45:02 hell kernel: __alloc_pages: 1-order allocation failed
(gfp=0x20/0)
Jul  9 12:45:02 hell kernel: __alloc_pages: 0-order allocation failed
(gfp=0x20/0)

Am I missing something?

-- 
Lukáš Hejtmánek

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+?
  2002-07-08 22:58     ` J.A. Magallon
  2002-07-08 23:58       ` Lukas Hejtmanek
  2002-07-09 10:48       ` Terrible VM in 2.4.11+ again? Lukas Hejtmanek
@ 2002-07-10  8:43       ` Thomas Tonino
  2002-07-10  8:49         ` Jens Axboe
  2002-07-10 12:34         ` Adrian Bunk
  2 siblings, 2 replies; 16+ messages in thread
From: Thomas Tonino @ 2002-07-10  8:43 UTC (permalink / raw)
  To: linux-kernel; +Cc: J.A. Magallon

J.A. Magallon wrote:

> Seriously, if you have that kind of problem, take the -aa kernel and use it.
> I use it regularly and it behaves as one would expect, and fast.
> And please report your results...

I run a 2-CPU server with 16 disks and around 5 megabytes of writes per
second. With plain 2.4.18 (using the feral.com qlogic driver) and 2GB of RAM,
this seemed okay. Upgrading to 4GB of RAM slowed the system down, and normal
shell commands became quite unresponsive with 4GB.

So we built a second server with 2.4.19-pre9-aa2, using the qlogic driver in
the kernel. That driver needs patching, as it will otherwise get stuck in a
'no handle slots' condition. I used a patch that I posted to linux-scsi a
while ago.

This combination works great so far. In the meantime, the 2.4.18 box has been
left running, but its load sometimes shoots up to 75 for no apparent reason
(the -aa2 box stays below a load of 3).

Once, the 2.4.18 box got really wedged: load at 70, server process stuck.
I logged in and the system was very responsive, but in response to a reboot
the system just sat there.

So we're going with 2.4.19-pre9-aa2 for now. I don't yet understand the 
-aa series, for example how 2.4.19-rc1-aa1 would relate to 
2.4.19-pre9-aa2, so I'm a bit wary of just upgrading in the -aa series 
right now.


Thomas



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+?
  2002-07-10  8:43       ` Terrible VM in 2.4.11+? Thomas Tonino
@ 2002-07-10  8:49         ` Jens Axboe
  2002-07-10 13:52           ` Thomas Tonino
  2002-07-10 12:34         ` Adrian Bunk
  1 sibling, 1 reply; 16+ messages in thread
From: Jens Axboe @ 2002-07-10  8:49 UTC (permalink / raw)
  To: Thomas Tonino; +Cc: linux-kernel, J.A. Magallon

On Wed, Jul 10 2002, Thomas Tonino wrote:
> J.A. Magallon wrote:
> 
> >Seriously, if you have that kind of problem, take the -aa kernel and use it.
> >I use it regularly and it behaves as one would expect, and fast.
> >And please report your results...
> 
> I run a 2-CPU server with 16 disks and around 5 megabytes of writes per
> second. With plain 2.4.18 (using the feral.com qlogic driver) and 2GB of RAM,
> this seemed okay. Upgrading to 4GB of RAM slowed the system down, and normal
> shell commands became quite unresponsive with 4GB.
> 
> So we built a second server with 2.4.19-pre9-aa2, using the qlogic driver in
> the kernel. That driver needs patching, as it will otherwise get stuck in a
> 'no handle slots' condition. I used a patch that I posted to linux-scsi a
> while ago.

That's probably not just an mm issue: if you use stock 2.4.18 with 4GB of RAM
you will spend oodles of time bounce buffering I/O. 2.4.19-pre9-aa2 includes
the block-highmem stuff, which enables direct-to-highmem I/O if you enable the
CONFIG_HIGHIO option.

In short, not an apples-to-apples comparison :-)
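
For reference, whether a given 2.4 build has this enabled can be checked in its
configuration; a sketch (the path to the build tree is just an example):

grep -E 'CONFIG_HIGHMEM|CONFIG_HIGHIO' /usr/src/linux/.config
# direct-to-highmem I/O needs highmem support (e.g. CONFIG_HIGHMEM4G=y or
# CONFIG_HIGHMEM64G=y) together with CONFIG_HIGHIO=y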

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+?
  2002-07-10  8:43       ` Terrible VM in 2.4.11+? Thomas Tonino
  2002-07-10  8:49         ` Jens Axboe
@ 2002-07-10 12:34         ` Adrian Bunk
  1 sibling, 0 replies; 16+ messages in thread
From: Adrian Bunk @ 2002-07-10 12:34 UTC (permalink / raw)
  To: Thomas Tonino; +Cc: linux-kernel

On Wed, 10 Jul 2002, Thomas Tonino wrote:

>...
> So we're going with 2.4.19-pre9-aa2 for now. I don't yet understand the
> -aa series, for example how 2.4.19-rc1-aa1 would relate to
> 2.4.19-pre9-aa2, so I'm a bit wary of just upgrading in the -aa series
> right now.

The -aa patches are usually made against the most recent 2.4 kernel (they are
usually only available against one specific kernel); IOW, the following are
increasing version numbers:

2.4.18-pre8-aa1
2.4.18-aa1
2.4.19-pre9-aa1
2.4.19-pre9-aa2
2.4.19-rc1-aa1  (rc = "release candidate")
2.4.19-aa1
2.4.20-pre1-aa1

> Thomas

cu
Adrian

-- 

You only think this is a free country. Like the US the UK spends a lot of
time explaining its a free country because its a police state.
								Alan Cox



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+?
  2002-07-10  8:49         ` Jens Axboe
@ 2002-07-10 13:52           ` Thomas Tonino
  2002-07-10 16:41             ` Andrea Arcangeli
  0 siblings, 1 reply; 16+ messages in thread
From: Thomas Tonino @ 2002-07-10 13:52 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Thomas Tonino, linux-kernel, J.A. Magallon

Jens Axboe wrote:

> That's probably not just an mm issue: if you use stock 2.4.18 with 4GB of RAM
> you will spend oodles of time bounce buffering I/O. 2.4.19-pre9-aa2 includes
> the block-highmem stuff, which enables direct-to-highmem I/O if you enable the
> CONFIG_HIGHIO option.

Indeed, highio seemed like a feature I wanted, so I enabled it. But in the
'stuck' state on the 2 GB 2.4.18 machine, the load is 75 while there is no disk
activity according to iostat, yet shells perform slowly anyway and the CPU is
idle. A reboot command doesn't work, but logging in over ssh is still possible.

> In short, not an apples-to-apples comparison :-)

I agree a lot has changed in that kernel. And I wanted the O(1) 
scheduler as well, as I expect a lot of processes on the server.

The 2.4.18 behaviour stays strange: the server has a fairly constant 
workload, but the cpu load, normally averaging around 2, sometimes rises 
to 75 in about an hour, and usually the load also winds down again.

None of the strange effects above have been noticed on 2.4.19-pre9-aa2.
BTW, the qlogic patch is great at preventing the handle-slots issue.


Thomas



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+ again?
  2002-07-09 10:48       ` Terrible VM in 2.4.11+ again? Lukas Hejtmanek
@ 2002-07-10 16:34         ` Andrea Arcangeli
       [not found]           ` <20020801113124.GA755@mail.muni.cz>
  0 siblings, 1 reply; 16+ messages in thread
From: Andrea Arcangeli @ 2002-07-10 16:34 UTC (permalink / raw)
  To: Lukas Hejtmanek; +Cc: J.A. Magallon, Austin Gonyou, linux-kernel, linux-scsi

On Tue, Jul 09, 2002 at 12:48:08PM +0200, Lukas Hejtmanek wrote:
> On Tue, Jul 09, 2002 at 12:58:16AM +0200, J.A. Magallon wrote:
> > Seriously, if you have that kind of problem, take the -aa kernel and use it.
> > I use it regularly and it behaves as one would expect, and fast.
> > And please report your results...
> 
> I've tried 2.4.19rc1aa2; it swaps even though I have 512MB of RAM, and xcdroast
> with a SCSI-IDE emulation CD writer reports this to syslog:
> Jul  9 12:45:02 hell kernel: __alloc_pages: 3-order allocation failed
> (gfp=0x20/0)
> Jul  9 12:45:02 hell kernel: __alloc_pages: 3-order allocation failed
> (gfp=0x20/0)
> Jul  9 12:45:02 hell kernel: __alloc_pages: 2-order allocation failed
> (gfp=0x20/0)
> Jul  9 12:45:02 hell kernel: __alloc_pages: 1-order allocation failed
> (gfp=0x20/0)
> Jul  9 12:45:02 hell kernel: __alloc_pages: 0-order allocation failed
> (gfp=0x20/0)
> 
> Am I missing something?

You may want to reproduce this with vm_debug set to 1, but I'm pretty sure it
is a SCSI generic issue: they are allocating RAM with GFP_ATOMIC, and that may
eventually fail if kswapd cannot keep up with the other GFP_ATOMIC allocations.
They should use GFP_NOIO; with -aa it won't even try to unmap pages, it will
just try to shrink clean cache, and that should work fine for the above purpose
where the allocation needs low latency (the per-task local_pages ensure their
work won't be stolen by the GFP_ATOMIC users). I asked for that change some
time ago, but apparently it never happened. However, I assume the sr layer
tried some more after failing; sg has a quite large queue, so some delay isn't
fatal, and you can probably safely ignore the above messages, they're just
warnings for you. Nevertheless, GFP_NOIO would make the allocations more
reliable.

Andrea

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.11+?
  2002-07-10 13:52           ` Thomas Tonino
@ 2002-07-10 16:41             ` Andrea Arcangeli
  0 siblings, 0 replies; 16+ messages in thread
From: Andrea Arcangeli @ 2002-07-10 16:41 UTC (permalink / raw)
  To: Thomas Tonino; +Cc: Jens Axboe, Thomas Tonino, linux-kernel, J.A. Magallon

On Wed, Jul 10, 2002 at 03:52:36PM +0200, Thomas Tonino wrote:
> Jens Axboe wrote:
> 
> >That's probably not just a mm issue, if you use stock 2.4.18 with 4GB
> >ram you will spend oodles of time bounce buffering i/o. 2.4.19-pre9-aa2
> >includes the block-highmem stuff, which enables direct-to-highmem i/o,
> >if you enabled the CONFIG_HIGHIO option.
> 
> Indeed, highio seemed a feature I wanted, so I enabled it. But in the 
> 'stuck' state on the 2 GB 2.4.18 machine, the load is 75 while there is 
> no disk activity according to iostat, but shells perform slowly anyway 

I doubt the issue is highio here; the load of 75 is probably because of 75
tasks deadlocked in D state, and it's probably one of the many fixes in my tree
that avoided the deadlock for you.

If you provide a SysRq-T dump I would be more comfortable though, so I can tell
you which of the fixes in my tree you need applied to mainline and, more
importantly, so I'm sure the problem is really fixed in my tree. I have no
pending bug reports at the moment for -aa (the last emails for rc1aa1 were all
about ACPI not compiling for SMP, and I dropped it in rc1aa2 until I get my
poor broken laptop replaced).
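
For reference, the task dump and the D-state tasks behind such a load spike can
be captured from a shell; a sketch, assuming the kernel was built with
CONFIG_MAGIC_SYSRQ (otherwise Alt+SysRq+T on the console does the same):

echo 1 > /proc/sys/kernel/sysrq      # make sure sysrq is enabled
echo t > /proc/sysrq-trigger         # dump all task states to the kernel log
dmesg | less                         # the SysRq-T output ends up here
ps axo pid,stat,wchan:20,cmd | awk '$2 ~ /^D/'   # list uninterruptible (D) tasks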

thanks,

Andrea

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.19rc3aa4 once again?
       [not found]               ` <20020801141940.GB755@mail.muni.cz>
@ 2002-08-01 15:39                 ` Andrea Arcangeli
       [not found]                   ` <20020801220143.GC755@mail.muni.cz>
  0 siblings, 1 reply; 16+ messages in thread
From: Andrea Arcangeli @ 2002-08-01 15:39 UTC (permalink / raw)
  To: Lukas Hejtmanek; +Cc: linux-kernel

On Thu, Aug 01, 2002 at 04:19:40PM +0200, Lukas Hejtmanek wrote:
> On Thu, Aug 01, 2002 at 04:03:48PM +0200, Andrea Arcangeli wrote:
> > you can use elvtune; in my more recent trees I went back in sync with the
> > mainline parameters to avoid being penalized in the benchmarks, but if you
> > need lower latency you can run something like this yourself:
> > 
> > 	elvtune -r 10 -w 20 /dev/hd[abcd] /dev/sd[abcd]
> > 
> > etc... (hda or hda[1234] will be the same, it only cares about disks)
> > 
> > the smaller the values, the lower the latency you will get. In particular you
> > care about read latency, so the -r parameter is the one that has to be small
> > for you; writes can be as big as you like.
> 
> Hmm, however, I think the I/O subsystem should allow parallel reading and
> writing, don't you think?

Of course it does. What you're tuning is "how many requests can delay a read
request" and "how many requests can delay a write request".

It's not putting in synchronous barriers; it only controls the ordering.

If a read request can be passed by 10 MB of data, you will potentially read one
block for every 10 MB written to disk. Of course there will be fewer seeks and
the global workload will be faster (at least in most cases), but your read
latency will be very, very bad.

You can see the default values by not passing arguments to elvtune, IIRC.
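
A concrete sketch of the tuning described above (the device name is only an
example, and the exact values are a matter of taste):

elvtune /dev/hda                # print the current read/write latency settings
elvtune -r 8 -w 16 /dev/hda     # allow fewer requests to pass a pending read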

Andrea

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: Terrible VM in 2.4.19rc3aa4 once again?
       [not found]                   ` <20020801220143.GC755@mail.muni.cz>
@ 2002-08-01 22:14                     ` Andrea Arcangeli
  0 siblings, 0 replies; 16+ messages in thread
From: Andrea Arcangeli @ 2002-08-01 22:14 UTC (permalink / raw)
  To: Lukas Hejtmanek; +Cc: linux-kernel

On Fri, Aug 02, 2002 at 12:01:43AM +0200, Lukas Hejtmanek wrote:
> 
> One more thing: this version seems to swap a lot more than 2.4.18rc4aa2 did.
> 
> The old one swapped only if SCSI-IDE emulation was used, and I think it was
> only about 10 MB. Now it swaps more; it's common to have 100 MB swapped out.
> I have 512 MB of RAM...

It may be an internal VM change, but could you first check the difference
between ps xav, /proc/meminfo and /proc/slabinfo to see whether some of the VM
users changed significantly? (Feel free to post me that information too so I
can double-check.) Thanks,

Andrea

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread

Thread overview: 16+ messages
2002-07-08 22:11 Terrible VM in 2.4.11+? Lukas Hejtmanek
2002-07-08 22:37 ` Austin Gonyou
2002-07-08 22:50   ` Lukas Hejtmanek
2002-07-08 22:58     ` J.A. Magallon
2002-07-08 23:58       ` Lukas Hejtmanek
2002-07-09 10:48       ` Terrible VM in 2.4.11+ again? Lukas Hejtmanek
2002-07-10 16:34         ` Andrea Arcangeli
     [not found]           ` <20020801113124.GA755@mail.muni.cz>
     [not found]             ` <20020801140348.GM1132@dualathlon.random>
     [not found]               ` <20020801141940.GB755@mail.muni.cz>
2002-08-01 15:39                 ` Terrible VM in 2.4.19rc3aa4 once again? Andrea Arcangeli
     [not found]                   ` <20020801220143.GC755@mail.muni.cz>
2002-08-01 22:14                     ` Andrea Arcangeli
2002-07-10  8:43       ` Terrible VM in 2.4.11+? Thomas Tonino
2002-07-10  8:49         ` Jens Axboe
2002-07-10 13:52           ` Thomas Tonino
2002-07-10 16:41             ` Andrea Arcangeli
2002-07-10 12:34         ` Adrian Bunk
2002-07-08 23:04     ` Austin Gonyou
2002-07-08 23:27 ` khromy
