linux-kernel.vger.kernel.org archive mirror
* 2.4.8/2.4.9 VM problems
@ 2001-08-17 13:10 Frank Dekervel
  2001-08-19 21:00 ` Daniel Phillips
  0 siblings, 1 reply; 35+ messages in thread
From: Frank Dekervel @ 2001-08-17 13:10 UTC (permalink / raw)
  To: linux-kernel

Hello,

Since I upgraded to kernel 2.4.8/2.4.9, I noticed everything became noticeably
slower, and the number of swapins/swapouts increased significantly. When I run
'vmstat 1' I see constant heavy swap activity while I am reading my mail in
kmail. After a fresh bootup in the evening, I can get everything I normally
need swapped out just by running updatedb or ht://dig. When I do that, my
music stops playing for several seconds, and it takes about 3 seconds before
my applications repaint when I switch back to X after an updatedb run.
The last time that happened (and the last time I had problems with the VM at
all) was in 2.4.0-testXX, so I think something is wrong.
Is it possible the new used_once logic does not work for me (drop_behind used
to work fine)?

My system configuration: Athlon 750, 384 MB RAM, 128 MB swap, XFree86 4.1 and
KDE 2.2.

Greetings,
Frank

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-17 13:10 2.4.8/2.4.9 VM problems Frank Dekervel
@ 2001-08-19 21:00 ` Daniel Phillips
  2001-08-19 21:13   ` 2.4.8/2.4.9 problem Andrey Nekrasov
  2001-08-20 15:40   ` 2.4.8/2.4.9 VM problems Mike Galbraith
  0 siblings, 2 replies; 35+ messages in thread
From: Daniel Phillips @ 2001-08-19 21:00 UTC (permalink / raw)
  To: Frank Dekervel, linux-kernel

On August 17, 2001 03:10 pm, Frank Dekervel wrote:
> Hello,
> 
> Since I upgraded to kernel 2.4.8/2.4.9, I noticed everything became noticeably
> slower, and the number of swapins/swapouts increased significantly. When I run
> 'vmstat 1' I see constant heavy swap activity while I am reading my mail in
> kmail. After a fresh bootup in the evening, I can get everything I normally
> need swapped out just by running updatedb or ht://dig. When I do that, my
> music stops playing for several seconds, and it takes about 3 seconds before
> my applications repaint when I switch back to X after an updatedb run.
> The last time that happened (and the last time I had problems with the VM at
> all) was in 2.4.0-testXX, so I think something is wrong.
> Is it possible the new used_once logic does not work for me (drop_behind used
> to work fine)?
> 
> My system configuration: Athlon 750, 384 MB RAM, 128 MB swap, XFree86 4.1 and
> KDE 2.2.

Could you please try this patch against 2.4.9 (patch -p0):

--- ../2.4.9.clean/mm/memory.c	Mon Aug 13 19:16:41 2001
+++ ./mm/memory.c	Sun Aug 19 21:35:26 2001
@@ -1119,6 +1119,7 @@
 			 */
 			return pte_same(*page_table, orig_pte) ? -1 : 1;
 		}
+		SetPageReferenced(page);
 	}
 
 	/*


* Re: 2.4.8/2.4.9 problem
  2001-08-19 21:00 ` Daniel Phillips
@ 2001-08-19 21:13   ` Andrey Nekrasov
  2001-08-20 21:20     ` Daniel Phillips
  2001-08-20 15:40   ` 2.4.8/2.4.9 VM problems Mike Galbraith
  1 sibling, 1 reply; 35+ messages in thread
From: Andrey Nekrasov @ 2001-08-19 21:13 UTC (permalink / raw)
  To: linux-kernel

Hello.

I have a problem with "kernel: __alloc_pages: 0-order allocation failed."

1. syslog kern.*

   ...
   Aug 19 12:28:16 sol kernel: __alloc_pages: 0-order allocation failed.
   Aug 19 12:28:37 sol last message repeated 364 times
   Aug 19 12:29:17 sol last message repeated 47 times
   Aug 19 12:29:25 sol kernel: s: 0-order allocation failed.
   Aug 19 12:29:25 sol kernel: __alloc_pages: 0-order allocation failed.
   Aug 19 12:29:25 sol last message repeated 291 times
   Aug 19 12:29:25 sol kernel: eth0: can't fill rx buffer (force 0)!
   Aug 19 12:29:25 sol kernel: __alloc_pages: 0-order allocation failed.
   Aug 19 12:29:25 sol kernel: eth0: Tx ring dump,  Tx queue 2928321 / 2928321:
   Aug 19 12:29:25 sol kernel: eth0:     0 600ca000.
   Aug 19 12:29:25 sol kernel: eth0:  *= 1 000ca000.
   Aug 19 12:29:25 sol kernel: eth0:     2 000ca000.
   ...
   Aug 19 12:29:25 sol kernel: eth0:     8 200ca000.
   Aug 19 12:29:25 sol kernel: __alloc_pages: 0-order allocation failed.
   Aug 19 12:29:25 sol kernel: eth0:     9 000ca000.
   ...
   Aug 19 12:29:25 sol kernel: eth0:  * 31 00000000.
   Aug 19 12:29:25 sol kernel: __alloc_pages: 0-order allocation failed.
   Aug 19 12:29:59 sol last message repeated 75 times
   Aug 19 12:31:10 sol last message repeated 32 times
   Aug 19 12:32:07 sol last message repeated 153 times
   Aug 19 12:32:35 sol last message repeated 131 times

2. my configuration:

	2 CPUs / 1.5 GB RAM / Mylex AcceleRAID 250 / Intel PRO/100 / Linux kernel 2.4.8/2.4.9-xfs; file system is XFS or ext2.

3. Kernel NFS/NFSD v3, using an NFS-root file system.

4. Test 1: simple copy of a _big_ file (up to 4 GB) from/to another NFS machine.

   Test 2: tiobench-0.3.1 on the _local_ disk (Mylex RAID5) with
           "LARGEFILES" support (>2 GB).


Can you help me?


-- 
bye.
Andrey Nekrasov, SpyLOG.


* Re: 2.4.8/2.4.9 VM problems
  2001-08-19 21:00 ` Daniel Phillips
  2001-08-19 21:13   ` 2.4.8/2.4.9 problem Andrey Nekrasov
@ 2001-08-20 15:40   ` Mike Galbraith
  2001-08-20 17:10     ` Daniel Phillips
  1 sibling, 1 reply; 35+ messages in thread
From: Mike Galbraith @ 2001-08-20 15:40 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: Frank Dekervel, linux-kernel

On Sun, 19 Aug 2001, Daniel Phillips wrote:

> On August 17, 2001 03:10 pm, Frank Dekervel wrote:
> > Hello,
> >
> > Since I upgraded to kernel 2.4.8/2.4.9, I noticed everything became noticeably
> > slower, and the number of swapins/swapouts increased significantly. When I run
> > 'vmstat 1' I see constant heavy swap activity while I am reading my mail in
> > kmail. After a fresh bootup in the evening, I can get everything I normally
> > need swapped out just by running updatedb or ht://dig. When I do that, my
> > music stops playing for several seconds, and it takes about 3 seconds before
> > my applications repaint when I switch back to X after an updatedb run.
> > The last time that happened (and the last time I had problems with the VM at
> > all) was in 2.4.0-testXX, so I think something is wrong.
> > Is it possible the new used_once logic does not work for me (drop_behind used
> > to work fine)?
> >
> > My system configuration: Athlon 750, 384 MB RAM, 128 MB swap, XFree86 4.1 and
> > KDE 2.2.
>
> Could you please try this patch against 2.4.9 (patch -p0):

Hi Daniel,

I've been having some troubles which also seem to be use_once related.
(The bonnie rewrite test induces a large inactive shortage, and some nasty
IO seizures during the "write intelligently" test. [Grab a window, wave it,
and watch it not move for a couple of seconds.])

I'll give your patch a shot.  In the meantime, below is what I did
to it here.  I might have busted use_once all to pieces ;-) but it
cured my problem, so I'll show it anyway.

	-Mike


--- mm/filemap.c.org	Mon Aug 20 17:25:20 2001
+++ mm/filemap.c	Mon Aug 20 17:25:50 2001
@@ -980,7 +980,7 @@
 static inline void check_used_once (struct page *page)
 {
 	if (!PageActive(page)) {
-		if (page->age)
+		if (page->age > PAGE_AGE_START)
 			activate_page(page);
 		else {
 			page->age = PAGE_AGE_START;



* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 15:40   ` 2.4.8/2.4.9 VM problems Mike Galbraith
@ 2001-08-20 17:10     ` Daniel Phillips
  2001-08-20 19:14       ` Mike Galbraith
  0 siblings, 1 reply; 35+ messages in thread
From: Daniel Phillips @ 2001-08-20 17:10 UTC (permalink / raw)
  To: Mike Galbraith; +Cc: Frank Dekervel, linux-kernel

On August 20, 2001 05:40 pm, Mike Galbraith wrote:
> On Sun, 19 Aug 2001, Daniel Phillips wrote:
> > On August 17, 2001 03:10 pm, Frank Dekervel wrote:
> > > Hello,
> > >
> > > Since I upgraded to kernel 2.4.8/2.4.9, I noticed everything became noticeably
> > > slower, and the number of swapins/swapouts increased significantly. When I run
> > > 'vmstat 1' I see constant heavy swap activity while I am reading my mail in
> > > kmail. After a fresh bootup in the evening, I can get everything I normally
> > > need swapped out just by running updatedb or ht://dig. When I do that, my
> > > music stops playing for several seconds, and it takes about 3 seconds before
> > > my applications repaint when I switch back to X after an updatedb run.
> > > The last time that happened (and the last time I had problems with the VM at
> > > all) was in 2.4.0-testXX, so I think something is wrong.
> > > Is it possible the new used_once logic does not work for me (drop_behind used
> > > to work fine)?
> > >
> > > My system configuration: Athlon 750, 384 MB RAM, 128 MB swap, XFree86 4.1 and
> > > KDE 2.2.
> >
> > Could you please try this patch against 2.4.9 (patch -p0):
> 
> Hi Daniel,
> 
> I've been having some troubles which also seem to be use_once related.
> (bonnie rewrite test induces large inactive shortage, and some nasty
> IO seizures during write intelligently test. [grab window/wave it and
> watch it not move for couple seconds])
> 
> I'll give your patch a shot.  In the meantime, below is what I did
> to it here.  I might have busted use_once all to pieces ;-) but it
> cured my problem, so I'll show it anyway.

Hi.

No, this doesn't break it at all; what it does is require the IO page
to be touched more times before it's considered truly active.  This
partly takes care of the theory that an initial burst of activity on
the page should be considered as only one use.

We can expose this activation threshold through proc so you can adjust it
without recompiling.  I'll prepare a patch for that.

Another thing you might try is just reversing the unlazy activation patch
I posted previously (and Linus put into 2.4.9) because that will achieve
the effect of treating all touches of the page while it's on the inactive
list as a single reference.  But that has the disadvantage of making the
system think it has more inactive pages than it really does, and since the
scanning logic is a little fragile it doesn't sound like such a good idea
right now.

I intend to try a separate queue for newly activated pages so that the
time spent on the queue can be decoupled from the number of aged-to-zero
inactive pages, and we can get finer control over the period during which
all touches on the page are grouped together into a single reference.
This is 2.5 material.

> --- mm/filemap.c.org	Mon Aug 20 17:25:20 2001
> +++ mm/filemap.c	Mon Aug 20 17:25:50 2001
> @@ -980,7 +980,7 @@
>  static inline void check_used_once (struct page *page)
>  {
>  	if (!PageActive(page)) {
> -		if (page->age)
> +		if (page->age > PAGE_AGE_START)
>  			activate_page(page);
>  		else {
>  			page->age = PAGE_AGE_START;
> 
> 


* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 20:34         ` Daniel Phillips
@ 2001-08-20 19:12           ` Marcelo Tosatti
  2001-08-20 21:40             ` Daniel Phillips
  2001-08-21  4:52           ` Mike Galbraith
  1 sibling, 1 reply; 35+ messages in thread
From: Marcelo Tosatti @ 2001-08-20 19:12 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: Mike Galbraith, Frank Dekervel, linux-kernel



On Mon, 20 Aug 2001, Daniel Phillips wrote:

> On August 20, 2001 09:14 pm, Mike Galbraith wrote:
> > On Mon, 20 Aug 2001, Daniel Phillips wrote:
> > > On August 20, 2001 05:40 pm, Mike Galbraith wrote:
> > > > I'll give your patch a shot.  In the meantime, below is what I did
> > > > to it here.  I might have busted use_once all to pieces ;-) but it
> > > > cured my problem, so I'll show it anyway.
> > >
> > > No, this doesn't break it at all, what it does is require the IO page
> > > to be touched more times before it's considered truly active.  This
> > > partly takes care of the theory that an initial burst of activity on
> > > the page should be considered as only one use.
> > 
> > (It turns it into sort of a used-twice-ish thing in my specific case, I think..
> 
> Actually, used-thriceish.
> 
> > the aging must happen to make it work right though.. very very tricky.
> 
> I doubt the aging has much to do with it, what's more important is the length 
> of the inactive_dirty queue.  Of course, aging affects that and so does 
> scanning policy, both a little "uncalibrated" at the moment.
> 
> > Nope, I don't have anything other than a 'rough visual' to work with..
> > might be totally out there ;-)
> 
> What made you think of trying the higher activation threshold? ;-)
> 
> > > We can expose this activation threshold through proc so you can adjust it
> > > without recompiling.  I'll prepare a patch for that.
> > >
> > > Another thing you might try is just reversing the unlazy activation patch
> > > I posted previously (and Linus put into 2.4.9) because that will achieve
> > > the effect of treating all touches of the page while it's on the inactive
> > > list as a single reference.  But that has the disadvantage of making the
> > > system think it has more inactive pages than it really does, and since the
> > > scanning logic is a little fragile it doesn't sound like such a good idea
> > > right now.
> > 
> > I don't think this is a big issue.  I do inactive list scanning to improve
> > the informational content of the lists, but it only has a _minor_ effect.
> > For maximum performance, it matters, but really we are not to the point
> > that it matters in the general case at all.
> 
> OK, but people were seeing the inactive_dirty list getting longer than normal 
> and getting worried about it.  Before the fixes to zone scanning it likely 
> would have been a problem, now most probably not.
> 
> > > I intend to try a separate queue for newly activated pages so that the
> > > time spent on the queue can be decoupled from the number of aged-to-zero
> > > inactive pages, and we can get finer control over the period during which
> > > all touches on the page are grouped together into a single reference.
> > > This is 2.5 material.
> > 
> > We need to get the pages 'actioned' (the only thing that really matters)
> > off of the dirty list so that they are out of the equation.. that I'm
> > sure of.
> 
> Well, except when the page is only going to be used once, or not at all (in 
> the case of an unused readahead page).  Otherwise, no, we don't want to have 
> frequently used pages or pages we know nothing about dropping off the inactive 
> queue into the bit-bucket.  There's more work to do to make that come true.

Find riel's message with topic "VM tuning" to linux-mm, then take a look
at the 4th aging option.

That one _should_ be able to make us remove all kinds of "hacks" to do
drop behind, and also it should keep hot/warm active memory _in cache_
for more time. 

> 
> > How is the right way, I don't have a clue ;-)  One thing I
> > feel strongly about:  the only thing that matters is getting the right
> > number of pages moving in the right direction.  (since we are not able
> > to predict the future accurately.. we approximate, and we don't _ever_
> > want to tie that to real time [sync IO is utterly evil] because that
> > then impacts our ability to react to new input to correct our fsckups:)
> 
> True, true and true.  Personally, I'm training myself to think of everything 
> that happens inside the mm on a timescale of allocation events (one page 
> alloced = one tick) not real time.  Sometimes this happens to correspond 
> linearly to real time, but more often not.




* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 17:10     ` Daniel Phillips
@ 2001-08-20 19:14       ` Mike Galbraith
  2001-08-20 20:34         ` Daniel Phillips
  0 siblings, 1 reply; 35+ messages in thread
From: Mike Galbraith @ 2001-08-20 19:14 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: Frank Dekervel, linux-kernel

On Mon, 20 Aug 2001, Daniel Phillips wrote:

> On August 20, 2001 05:40 pm, Mike Galbraith wrote:
> > On Sun, 19 Aug 2001, Daniel Phillips wrote:
> > > On August 17, 2001 03:10 pm, Frank Dekervel wrote:
> > > > Hello,
> > > >
> > > > Since I upgraded to kernel 2.4.8/2.4.9, I noticed everything became noticeably
> > > > slower, and the number of swapins/swapouts increased significantly. When I run
> > > > 'vmstat 1' I see constant heavy swap activity while I am reading my mail in
> > > > kmail. After a fresh bootup in the evening, I can get everything I normally
> > > > need swapped out just by running updatedb or ht://dig. When I do that, my
> > > > music stops playing for several seconds, and it takes about 3 seconds before
> > > > my applications repaint when I switch back to X after an updatedb run.
> > > > The last time that happened (and the last time I had problems with the VM at
> > > > all) was in 2.4.0-testXX, so I think something is wrong.
> > > > Is it possible the new used_once logic does not work for me (drop_behind used
> > > > to work fine)?
> > > >
> > > > My system configuration: Athlon 750, 384 MB RAM, 128 MB swap, XFree86 4.1 and
> > > > KDE 2.2.
> > >
> > > Could you please try this patch against 2.4.9 (patch -p0):
> >
> > Hi Daniel,
> >
> > I've been having some troubles which also seem to be use_once related.
> > (bonnie rewrite test induces large inactive shortage, and some nasty
> > IO seizures during write intelligently test. [grab window/wave it and
> > watch it not move for couple seconds])
> >
> > I'll give your patch a shot.  In the meantime, below is what I did
> > to it here.  I might have busted use_once all to pieces ;-) but it
> > cured my problem, so I'll show it anyway.
>
> Hi.
>
> No, this doesn't break it at all, what it does is require the IO page
> to be touched more times before it's considered truly active.  This
> partly takes care of the theory that an initial burst of activity on
> the page should be considered as only one use.

(It turns it into sort of a used-twice-ish thing in my specific case, I think..
the aging must happen to make it work right though.. very very tricky.
Nope, I don't have anything other than a 'rough visual' to work with..
might be totally out there ;-)

> We can expose this activation threshold through proc so you can adjust it
> without recompiling.  I'll prepare a patch for that.
>
> Another thing you might try is just reversing the unlazy activation patch
> I posted previously (and Linus put into 2.4.9) because that will achieve
> the effect of treating all touches of the page while it's on the inactive
> list as a single reference.  But that has the disadvantage of making the
> system think it has more inactive pages than it really does, and since the
> scanning logic is a little fragile it doesn't sound like such a good idea
> right now.

I don't think this is a big issue.  I do inactive list scanning to improve
the informational content of the lists, but it only has a _minor_ effect.
For maximum performance it matters, but really we are not at the point
where it matters in the general case at all.

> I intend to try a separate queue for newly activated pages so that the
> time spent on the queue can be decoupled from the number of aged-to-zero
> inactive pages, and we can get finer control over the period during which
> all touches on the page are grouped together into a single reference.
> This is 2.5 material.

We need to get the pages 'actioned' (the only thing that really matters)
off of the dirty list so that they are out of the equation.. that I'm
sure of.  What the right way is, I don't have a clue ;-)  One thing I
feel strongly about: the only thing that matters is getting the right
number of pages moving in the right direction.  (Since we are not able
to predict the future accurately, we approximate, and we don't _ever_
want to tie that to real time [sync IO is utterly evil], because that
then impacts our ability to react to new input to correct our fsckups :)

	-Mike



* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 21:40             ` Daniel Phillips
@ 2001-08-20 20:08               ` Marcelo Tosatti
  2001-08-20 20:16                 ` Marcelo Tosatti
  2001-08-20 21:44               ` Rik van Riel
  1 sibling, 1 reply; 35+ messages in thread
From: Marcelo Tosatti @ 2001-08-20 20:08 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: Mike Galbraith, Frank Dekervel, linux-kernel


On Mon, 20 Aug 2001, Daniel Phillips wrote:

> On August 20, 2001 09:12 pm, Marcelo Tosatti wrote:
> > On Mon, 20 Aug 2001, Daniel Phillips wrote:
> > > On August 20, 2001 09:14 pm, Mike Galbraith wrote:
> > > > We need to get the pages 'actioned' (the only thing that really matters)
> > > > off of the dirty list so that they are out of the equation.. that I'm
> > > > sure of.
> > > 
> > > Well, except when the page is only going to be used once, or not at all (in 
> > > the case of an unused readahead page).  Otherwise, no, we don't want to have 
> > > frequently used pages or pages we know nothing about dropping off the inactive 
> > > queue into the bit-bucket.  There's more work to do to make that come true.
> > 
> > Find riel's message with topic "VM tuning" to linux-mm, then take a look
> > at the 4th aging option.
> > 
> > That one _should_ be able to make us remove all kinds of "hacks" to do
> > drop behind, and also it should keep hot/warm active memory _in cache_
> > for more time. 
> 
> I looked at it yesterday.  The problem is, it loses the information about *how*
> a page is used: pagecache lookup via readahead has different implications than
> actual usage.  The other thing that looks a little problematic, which Rik also
> pointed out, is the potential long lag before the inactive page is detected.
> A lot of IO can take place in this time, filling up the active list with pages
> that we could have evicted much earlier.

We're using unlazy page activation on -ac so that is not an issue.




* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 20:08               ` Marcelo Tosatti
@ 2001-08-20 20:16                 ` Marcelo Tosatti
  2001-08-20 22:54                   ` Daniel Phillips
  0 siblings, 1 reply; 35+ messages in thread
From: Marcelo Tosatti @ 2001-08-20 20:16 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: Mike Galbraith, Frank Dekervel, linux-kernel



On Mon, 20 Aug 2001, Marcelo Tosatti wrote:

> 
> On Mon, 20 Aug 2001, Daniel Phillips wrote:
> 
> > On August 20, 2001 09:12 pm, Marcelo Tosatti wrote:
> > > On Mon, 20 Aug 2001, Daniel Phillips wrote:
> > > > On August 20, 2001 09:14 pm, Mike Galbraith wrote:
> > > > > We need to get the pages 'actioned' (the only thing that really matters)
> > > > > off of the dirty list so that they are out of the equation.. that I'm
> > > > > sure of.
> > > > 
> > > > Well, except when the page is only going to be used once, or not at all (in 
> > > > the case of an unused readahead page).  Otherwise, no, we don't want to have 
> > > > frequently used pages or pages we know nothing about dropping off the inactive 
> > > > queue into the bit-bucket.  There's more work to do to make that come true.
> > > 
> > > Find riel's message with topic "VM tuning" to linux-mm, then take a look
> > > at the 4th aging option.
> > > 
> > > That one _should_ be able to make us remove all kinds of "hacks" to do
> > > drop behind, and also it should keep hot/warm active memory _in cache_
> > > for more time. 
> > 
> > I looked at it yesterday.  The problem is, it loses the information about *how*
> > a page is used: pagecache lookup via readahead has different implications than
> > actual usage.

Ah, and I forgot something here.

Your statement that "pagecache lookup via readahead has different
implications than actual usage" is not really correct.

If you only consider as "hot" the pages which have been touched,
you're going to (potentially) fuck heavy streaming IO workloads.

> The other thing that looks a little problematic, which Rik also
> pointed out, is the potential long lag before the inactive page is
> detected.  A lot of IO can take place in this time, filling up the
> active list with pages that we could have evicted much earlier.
> 
> We're using unlazy page activation on -ac so that is not an issue.
> 
> 



* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 19:14       ` Mike Galbraith
@ 2001-08-20 20:34         ` Daniel Phillips
  2001-08-20 19:12           ` Marcelo Tosatti
  2001-08-21  4:52           ` Mike Galbraith
  0 siblings, 2 replies; 35+ messages in thread
From: Daniel Phillips @ 2001-08-20 20:34 UTC (permalink / raw)
  To: Mike Galbraith; +Cc: Frank Dekervel, linux-kernel

On August 20, 2001 09:14 pm, Mike Galbraith wrote:
> On Mon, 20 Aug 2001, Daniel Phillips wrote:
> > On August 20, 2001 05:40 pm, Mike Galbraith wrote:
> > > I'll give your patch a shot.  In the meantime, below is what I did
> > > to it here.  I might have busted use_once all to pieces ;-) but it
> > > cured my problem, so I'll show it anyway.
> >
> > No, this doesn't break it at all, what it does is require the IO page
> > to be touched more times before it's considered truly active.  This
> > partly takes care of the theory that an initial burst of activity on
> > the page should be considered as only one use.
> 
> (It turns it into sort of a used-twice-ish thing in my specific case, I think..

Actually, used-thriceish.

> the aging must happen to make it work right though.. very very tricky.

I doubt the aging has much to do with it; what's more important is the length 
of the inactive_dirty queue.  Of course, aging affects that, and so does 
scanning policy, both a little "uncalibrated" at the moment.

> Nope, I don't have anything other than a 'rough visual' to work with..
> might be totally out there ;-)

What made you think of trying the higher activation threshold? ;-)

> > We can expose this activation threshold through proc so you can adjust it
> > without recompiling.  I'll prepare a patch for that.
> >
> > Another thing you might try is just reversing the unlazy activation patch
> > I posted previously (and Linus put into 2.4.9) because that will achieve
> > the effect of treating all touches of the page while it's on the inactive
> > list as a single reference.  But that has the disadvantage of making the
> > system think it has more inactive pages than it really does, and since the
> > scanning logic is a little fragile it doesn't sound like such a good idea
> > right now.
> 
> I don't think this is a big issue.  I do inactive list scanning to improve
> the informational content of the lists, but it only has a _minor_ effect.
> For maximum performance, it matters, but really we are not to the point
> that it matters in the general case at all.

OK, but people were seeing the inactive_dirty list getting longer than normal 
and getting worried about it.  Before the fixes to zone scanning it likely 
would have been a problem; now, most probably not.

> > I intend to try a separate queue for newly activated pages so that the
> > time spent on the queue can be decoupled from the number of aged-to-zero
> > inactive pages, and we can get finer control over the period during which
> > all touches on the page are grouped together into a single reference.
> > This is 2.5 material.
> 
> We need to get the pages 'actioned' (the only thing that really matters)
> off of the dirty list so that they are out of the equation.. that I'm
> sure of.

Well, except when the page is only going to be used once, or not at all (in 
the case of an unused readahead page).  Otherwise, no, we don't want to have 
frequently used pages or pages we know nothing about dropping off the inactive 
queue into the bit-bucket.  There's more work to do to make that come true.

> How is the right way, I don't have a clue ;-)  One thing I
> feel strongly about:  the only thing that matters is getting the right
> number of pages moving in the right direction.  (since we are not able
> to predict the future accurately.. we approximate, and we don't _ever_
> want to tie that to real time [sync IO is utterly evil] because that
> then impacts our ability to react to new input to correct our fsckups:)

True, true and true.  Personally, I'm training myself to think of everything 
that happens inside the mm on a timescale of allocation events (one page 
allocated = one tick), not real time.  Sometimes this happens to correspond 
linearly to real time, but more often not.

--
Daniel


* Re: 2.4.8/2.4.9 problem
  2001-08-19 21:13   ` 2.4.8/2.4.9 problem Andrey Nekrasov
@ 2001-08-20 21:20     ` Daniel Phillips
  2001-08-23 10:04       ` Andrey Nekrasov
  0 siblings, 1 reply; 35+ messages in thread
From: Daniel Phillips @ 2001-08-20 21:20 UTC (permalink / raw)
  To: Andrey Nekrasov, linux-kernel

On August 19, 2001 11:13 pm, Andrey Nekrasov wrote:
> Hello.
> 
> I have a problem with "kernel: __alloc_pages: 0-order allocation failed."
> 
> 1. syslog kern.*
> 
>    ...
> 	 Aug 19 12:28:16 sol kernel: __alloc_pages: 0-order allocation failed.
> 	 Aug 19 12:28:37 sol last message repeated 364 times
> 	 Aug 19 12:29:17 sol last message repeated 47 times
> [etc]

Could you please try it with this patch, which will tell us a little more 
about what's happening (patch -p0):

--- ../2.4.9.clean/mm/page_alloc.c	Thu Aug 16 12:43:02 2001
+++ ./mm/page_alloc.c	Mon Aug 20 22:05:40 2001
@@ -502,7 +502,7 @@
 	}
 
 	/* No luck.. */
-	printk(KERN_ERR "__alloc_pages: %lu-order allocation failed.\n", order);
+	printk(KERN_ERR "__alloc_pages: %lu-order allocation failed (gfp=0x%x/%i).\n", order, gfp_mask, !!(current->flags & PF_MEMALLOC));
 	return NULL;
 }
 


* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 19:12           ` Marcelo Tosatti
@ 2001-08-20 21:40             ` Daniel Phillips
  2001-08-20 20:08               ` Marcelo Tosatti
  2001-08-20 21:44               ` Rik van Riel
  0 siblings, 2 replies; 35+ messages in thread
From: Daniel Phillips @ 2001-08-20 21:40 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: Mike Galbraith, Frank Dekervel, linux-kernel

On August 20, 2001 09:12 pm, Marcelo Tosatti wrote:
> On Mon, 20 Aug 2001, Daniel Phillips wrote:
> > On August 20, 2001 09:14 pm, Mike Galbraith wrote:
> > > We need to get the pages 'actioned' (the only thing that really matters)
> > > off of the dirty list so that they are out of the equation.. that I'm
> > > sure of.
> > 
> > Well, except when the page is only going to be used once, or not at all (in 
> > the case of an unused readahead page).  Otherwise, no, we don't want to have 
> > frequently used pages or pages we know nothing about dropping off the inactive 
> > queue into the bit-bucket.  There's more work to do to make that come true.
> 
> Find riel's message with topic "VM tuning" to linux-mm, then take a look
> at the 4th aging option.
> 
> That one _should_ be able to make us remove all kinds of "hacks" to do
> drop behind, and also it should keep hot/warm active memory _in cache_
> for more time. 

I looked at it yesterday.  The problem is, it loses the information about *how*
a page is used: pagecache lookup via readahead has different implications than
actual usage.  The other thing that looks a little problematic, which Rik also
pointed out, is the potential long lag before the inactive page is detected.
A lot of IO can take place in this time, filling up the active list with pages
that we could have evicted much earlier.

--
Daniel


* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 21:40             ` Daniel Phillips
  2001-08-20 20:08               ` Marcelo Tosatti
@ 2001-08-20 21:44               ` Rik van Riel
  2001-08-20 22:47                 ` Daniel Phillips
  1 sibling, 1 reply; 35+ messages in thread
From: Rik van Riel @ 2001-08-20 21:44 UTC (permalink / raw)
  To: Daniel Phillips
  Cc: Marcelo Tosatti, Mike Galbraith, Frank Dekervel, linux-kernel

On Mon, 20 Aug 2001, Daniel Phillips wrote:
> On August 20, 2001 09:12 pm, Marcelo Tosatti wrote:

> > Find riel's message with topic "VM tuning" to linux-mm, then take a look
> > at the 4th aging option.
> >
> > That one _should_ be able to make us remove all kinds of "hacks" to do
> > drop behind, and also it should keep hot/warm active memory _in cache_
> > for more time.
>
> I looked at it yesterday.  The problem is, it loses the
> information about *how* a page is used: pagecache lookup via
> readahead has different implications than actual usage.

- How is that different from your use-once thing ?
- Where do we do "pagecache lookup via readahead"
  without "actual usage" of the page ?

> The other thing that looks a little problematic, which Rik also
> pointed out, is the potential long lag before the inactive page
> is detected. A lot of IO can take place in this time, filling up
> the active list with pages that we could have evicted much
> earlier.

The lag I described to you had to do with the different
kinds of page aging used and with the time it takes for
previously "hot" pages to cool down and become inactive
pages.

I think you have things mixed up here ;)

regards,

Rik
--
IA64: a worthy successor to the i860.

		http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com/


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 22:54                   ` Daniel Phillips
@ 2001-08-20 21:50                     ` Marcelo Tosatti
  2001-08-20 23:29                       ` Daniel Phillips
  0 siblings, 1 reply; 35+ messages in thread
From: Marcelo Tosatti @ 2001-08-20 21:50 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: Mike Galbraith, Frank Dekervel, linux-kernel



On Tue, 21 Aug 2001, Daniel Phillips wrote:

> On August 20, 2001 10:16 pm, Marcelo Tosatti wrote:
> > On Mon, 20 Aug 2001, Marcelo Tosatti wrote:
> > > On Mon, 20 Aug 2001, Daniel Phillips wrote:
> > > > On Mon, 20 Aug 2001, Marcelo Tosatti wrote:
> > > > > Find riel's message with topic "VM tuning" to linux-mm, then take a look
> > > > > at the 4th aging option.
> > > > > 
> > > > > That one _should_ be able to make us remove all kinds of "hacks" to do
> > > > > drop behind, and also it should keep hot/warm active memory _in cache_
> > > > > for more time. 
> > > > 
> > > > I looked at it yesterday.  The problem is, it loses the information about *how*
> > > > a page is used: pagecache lookup via readahead has different implications than
> > > > actual usage.
> > 
> > And ah, I forgot something here. 
> > 
> > Your statement which says "pagecache lookup via readahead has different
> > implications than actual usage" is not really correct.
> > 
> > If you only consider "hot" pages as "pages which have been touched",
> > you're going to (potentially) fuck heavy streaming IO workloads.
> 
> "Hot" pages are pages that have been touched more than once.
>
> The idea of use-once (on the read side) is to retain the readahead
> pages just long enough to use them, and not a lot longer.
>
> If you've seen streaming IO pages getting evicted before being used,
> I'd like to know about it because something is broken in that case.

I've seen the first page read by "swapin_readahead()" (which is the actual
page we want to swapin) be evicted _before_ we could actually use it (so
the read_swap_cache_async() call had to read the same page _again_ from
disk).



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 23:29                       ` Daniel Phillips
@ 2001-08-20 22:05                         ` Marcelo Tosatti
  2001-08-20 23:54                           ` Daniel Phillips
  0 siblings, 1 reply; 35+ messages in thread
From: Marcelo Tosatti @ 2001-08-20 22:05 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: Mike Galbraith, Frank Dekervel, linux-kernel


On Tue, 21 Aug 2001, Daniel Phillips wrote:

> On August 20, 2001 11:50 pm, Marcelo Tosatti wrote:
> > On Tue, 21 Aug 2001, Daniel Phillips wrote:
> 
> > > If you've seen streaming IO pages getting evicted before being used,
> > > I'd like to know about it because something is broken in that case.
> > 
> > I've seen the first page read by "swapin_readahead()" (which is the actual
> > page we want to swapin) be evicted _before_ we could actually use it (so
> > the read_swap_cache_async() call had to read the same page _again_ from
> > disk).
> 
> It's not streaming IO, but whoops, 

It does not matter. It just tells you that you're dropping pages too
early. That is even more valid for streaming IO.

I understand that having readahead pages apply too much pressure on
really-used pages is bad.

However, considering readahead pages as a "special case" (and dropping them
early, or whatever) will _always_ potentially fuck up streaming IO (so
yes, I think the old drop-behind code is bad too).

> is that even with yesterday's SetPageReferenced patch to do_swap_page?

No. It will not help: the call to read_swap_cache_async() is before the
SetPageReferenced call.



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 21:44               ` Rik van Riel
@ 2001-08-20 22:47                 ` Daniel Phillips
  0 siblings, 0 replies; 35+ messages in thread
From: Daniel Phillips @ 2001-08-20 22:47 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Marcelo Tosatti, Mike Galbraith, Frank Dekervel, linux-kernel

On August 20, 2001 11:44 pm, Rik van Riel wrote:
> On Mon, 20 Aug 2001, Daniel Phillips wrote:
> > On August 20, 2001 09:12 pm, Marcelo Tosatti wrote:
> > > Find riel's message with topic "VM tuning" to linux-mm, then take a look
> > > at the 4th aging option.
> > >
> > > That one _should_ be able to make us remove all kinds of "hacks" to do
> > > drop behind, and also it should keep hot/warm active memory _in cache_
> > > for more time.
> >
> > I looked at it yesterday.  The problem is, it loses the
> > information about *how* a page is used: pagecache lookup via
> > readahead has different implications than actual usage.
> 
> - How is that different from your use-once thing ?

I presume that new pages start on the active list with age=2.  So they will
survive two complete scans before being deactivated.  This by itself is a
big difference.  The other difference is, you don't distinguish between page 
references caused by readahead and page references caused by actual use.  
This is not to say that your strategy is bad, only that it is different.  In 
the end, measured performance is the only important difference.  

> - Where do we do "pagecache lookup via readahead"
>   without "actual usage" of the page ?

Both in do_generic_file_read and do_swap_page.  Usually, we use all the 
readahead pages, yes, but not always, especially in the case of swap or 
random IO.

> > The other thing that looks a little problematic, which Rik also
> > pointed out, is the potential long lag before the inactive page
> > is detected. A lot of IO can take place in this time, filling up
> > the active list with pages that we could have evicted much
> > earlier.
> 
> The lag I described to you had to do with the different
> kinds of page aging used and with the time it takes for
> previously "hot" pages to cool down and become inactive
> pages.
> 
> I think you have things mixed up here ;)

OK, just strike the "which Rik pointed out".  It's still quite a lot more lag 
than we get when the page just moves from one end of the inactive queue to 
the other.  The lag you mentioned had more to do with replacing an entire 
working set, an orthogonal problem.

--
Daniel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 20:16                 ` Marcelo Tosatti
@ 2001-08-20 22:54                   ` Daniel Phillips
  2001-08-20 21:50                     ` Marcelo Tosatti
  0 siblings, 1 reply; 35+ messages in thread
From: Daniel Phillips @ 2001-08-20 22:54 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: Mike Galbraith, Frank Dekervel, linux-kernel

On August 20, 2001 10:16 pm, Marcelo Tosatti wrote:
> On Mon, 20 Aug 2001, Marcelo Tosatti wrote:
> > On Mon, 20 Aug 2001, Daniel Phillips wrote:
> > > On Mon, 20 Aug 2001, Marcelo Tosatti wrote:
> > > > Find riel's message with topic "VM tuning" to linux-mm, then take a look
> > > > at the 4th aging option.
> > > > 
> > > > That one _should_ be able to make us remove all kinds of "hacks" to do
> > > > drop behind, and also it should keep hot/warm active memory _in cache_
> > > > for more time. 
> > > 
> > > I looked at it yesterday.  The problem is, it loses the information about *how*
> > > a page is used: pagecache lookup via readahead has different implications than
> > > actual usage.
> 
> And ah, I forgot something here. 
> 
> Your statement which says "pagecache lookup via readahead has different
> implications than actual usage" is not really correct.
> 
> If you only consider "hot" pages as "pages which have been touched",
> you're going to (potentially) fuck heavy streaming IO workloads.

"Hot" pages are pages that have been touched more than once.  The idea of
use-once (on the read side) is to retain the readahead pages just long enough
to use them, and not a lot longer.  If you've seen streaming IO pages getting
evicted before being used, I'd like to know about it because something is
broken in that case.

--
Daniel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 21:50                     ` Marcelo Tosatti
@ 2001-08-20 23:29                       ` Daniel Phillips
  2001-08-20 22:05                         ` Marcelo Tosatti
  0 siblings, 1 reply; 35+ messages in thread
From: Daniel Phillips @ 2001-08-20 23:29 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: Mike Galbraith, Frank Dekervel, linux-kernel

On August 20, 2001 11:50 pm, Marcelo Tosatti wrote:
> On Tue, 21 Aug 2001, Daniel Phillips wrote:

> > If you've seen streaming IO pages getting evicted before being used,
> > I'd like to know about it because something is broken in that case.
> 
> I've seen the first page read by "swapin_readahead()" (which is the actual
> page we want to swapin) be evicted _before_ we could actually use it (so
> the read_swap_cache_async() call had to read the same page _again_ from
> disk).

It's not streaming IO, but whoops, is that even with yesterday's 
SetPageReferenced patch to do_swap_page?

--
Daniel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 22:05                         ` Marcelo Tosatti
@ 2001-08-20 23:54                           ` Daniel Phillips
  2001-08-21  1:55                             ` Rik van Riel
  0 siblings, 1 reply; 35+ messages in thread
From: Daniel Phillips @ 2001-08-20 23:54 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: Mike Galbraith, Frank Dekervel, linux-kernel

On August 21, 2001 12:05 am, Marcelo Tosatti wrote:
> On Tue, 21 Aug 2001, Daniel Phillips wrote:
> > On August 20, 2001 11:50 pm, Marcelo Tosatti wrote:
> > > On Tue, 21 Aug 2001, Daniel Phillips wrote:
> > 
> > > > If you've seen streaming IO pages getting evicted before being used,
> > > > I'd like to know about it because something is broken in that case.
> > > 
> > > I've seen the first page read by "swapin_readahead()" (which is the
> > > actual
> > > page we want to swapin) be evicted _before_ we could actually use it (so
> > > the read_swap_cache_async() call had to read the same page _again_ from
> > > disk).
> > 
> > is that even with yesterday's SetPageReferenced patch to do_swap_page?
> 
> No. It will not help: the call to read_swap_cache_async() is before the
> SetPageReferenced call.

Sure it will.  The readahead page will have to go all the way from one end of 
the inactive_dirty list to the other, then all the way down the 
inactive_clean list.  That should be plenty of time for the SetPageReferenced 
to catch it.  The main possibility to screw up is if we scan the inactive 
lists too fast, which probably happens sometimes because it's all grossly 
uncalibrated right now.

That's another issue, it needs fixing.  We'll never have really consistent, 
predictable aging or any other vm behaviour until the list scanning is 
operating in a rock-solid way.

As long as it isn't happening frequently we will be ok for now.

--
Daniel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 23:54                           ` Daniel Phillips
@ 2001-08-21  1:55                             ` Rik van Riel
  2001-08-21  3:51                               ` Daniel Phillips
  0 siblings, 1 reply; 35+ messages in thread
From: Rik van Riel @ 2001-08-21  1:55 UTC (permalink / raw)
  To: Daniel Phillips
  Cc: Marcelo Tosatti, Mike Galbraith, Frank Dekervel, linux-kernel

On Tue, 21 Aug 2001, Daniel Phillips wrote:

> The main possibility to screw up is if we scan the inactive lists too
> fast, which probably happens sometimes because it's all grossly
> uncalibrated right now.

I've explained this to you about 5 times now and I'll
say it one last time.

The TARGET SIZE for the inactive_dirty list is 1 second
of pageout activity. This means that with the use-once
scheme read-ahead pages have at most 1 second to be used
while the system is under pressure.

This is not enough.  With disks being able to do at most
100 reads a second (7ms seek, 3ms rotational) you'll have
limited the system to 100 threads of streaming IO at
maximum, assuming that the readahead window is limited to
1 second worth of data.

This may seem like a lot, but don't worry, because the readahead
window ISN'T limited to 1 second worth of data. Think an
FTP server serving 10kB/second to each client with readahead
expanding to the standard 128kB maximum.

This means that at any point we'll have evicted 90% of the
still unused readahead pages, leading to heavy thrashing of
the readahead window and reducing the maximum load supported
by the system to a full _10_ FTP clients!

> That's another issue, it needs fixing.  We'll never have really
> consistent, predictable aging or any other vm behaviour until the
> list scanning is operating in a rock-solid way.

The issue is that you completely ignore the fact that your
use-once scheme has to interact with the rest of the VM.

You also ignore the fact that you haven't yet made any
proposal on how to make the rest of the VM interact nicely
with the use-once idea, preventing things like the thrashing
of the readahead window, etc...

regards,

Rik
--
IA64: a worthy successor to i860.

http://www.surriel.com/		http://distro.conectiva.com/

Send all your spam to aardvark@nl.linux.org (spam digging piggy)


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-21  1:55                             ` Rik van Riel
@ 2001-08-21  3:51                               ` Daniel Phillips
  2001-08-21  3:58                                 ` Rik van Riel
  0 siblings, 1 reply; 35+ messages in thread
From: Daniel Phillips @ 2001-08-21  3:51 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Marcelo Tosatti, Mike Galbraith, Frank Dekervel, linux-kernel

On August 21, 2001 03:55 am, Rik van Riel wrote:
> The TARGET SIZE for the inactive_dirty list is 1 second
> of pageout activity. This means that with the use-once
> scheme read-ahead pages have at most 1 second to be used
> while the system is under pressure.
> 
> This is not enough.  With disks being able to do at most
> 100 reads a second (7ms seek, 3ms rotational) you'll have
> limited the system to 100 threads of streaming IO at
> maximum, assuming that the readahead window is limited to
> 1 second worth of data.
> 
> This may seem a lot, but don't worry because the readahead
> window ISN'T limited to 1 second worth of data. Think an
> FTP server serving 10kB/second to each client with readahead
> expanding to the standard 128kB maximum.
> 
> This means that at any point we'll have evicted 90% of the
> still unused readahead pages, leading to heavy thrashing of
> the readahead window and reducing the maximum load supported
> by the system to a full _10_ FTP clients!

I have to admit, the 100 FTP clients case wasn't on the top of my mind.  Even 
so, think about what is really happening.  Nothing is getting activated and 
nothing is competing with these allocations so you can just let the inactive 
list grow until it holds a large fraction of the physical pages.  FIFO isn't 
such a bad model for this situation, and that's exactly what will fall out.

Assuming that some of the files are more popular than others, these file 
pages will be touched more than once and will go onto the active ring, also 
exactly what you want.  As they get old they get fed into the inactive queue 
at a rate that's tunable.  I don't see what the problem is.

There are a couple of simple improvements that can be made.  We could mark 
all new pages referenced, age=1 (to distinguish from aged-to-zero pages).  We 
would not do unlazy activation but just allow age to increment with each 
touch.  Then, in addition to the Referenced test, we would test the age 
against a tunable threshold to decide which pages to rescue.  You can see 
that this would take care of your 100 streaming clients case nicely, while 
not negatively affecting the cases that are already working well.

A second simple improvement is to have separate activation and deactivation 
queues.  This allows you to tune the rate at which pages are pulled from the 
activation queue (these would be the streaming IO pages) against pages culled 
from the active list.  I can't think of any downside at all for doing this, 
except that it's not something I'd consider appropriate for the 2.4 series.

> [...] you haven't yet made any
> proposal on how to make the rest of the VM interact nicely
> with the use-once idea, preventing things like the thrashing
> of the readahead window, etc...

This is hypothetical thrashing so far; have you seen it in the wild?

--
Daniel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-21  3:51                               ` Daniel Phillips
@ 2001-08-21  3:58                                 ` Rik van Riel
  2001-08-21  4:11                                   ` Daniel Phillips
  0 siblings, 1 reply; 35+ messages in thread
From: Rik van Riel @ 2001-08-21  3:58 UTC (permalink / raw)
  To: Daniel Phillips
  Cc: Marcelo Tosatti, Mike Galbraith, Frank Dekervel, linux-kernel

On Tue, 21 Aug 2001, Daniel Phillips wrote:

> I have to admit, the 100 FTP clients case wasn't on the top of my
> mind.  Even so, think about what is really happening.  Nothing is
> getting activated and nothing is competing with these allocations so

> Assuming that some of the files are more popular than others, these
> file pages will be touched more than once and will go onto the active
> ring, also exactly what you want.

... and so you contradict yourself in consecutive
paragraphs...

> As they get old they get fed into the inactive queue at a rate that's
> tunable.  I don't see what the problem is.

You just pointed it out.  The old pages get fed into the
inactive queue at a rate which isn't influenced by how
much pressure the new pages put on the VM, but only by
some "tunable" rate.

> There are a couple of simple improvements that can be made.  We could mark
> all new pages referenced, age=1 (to distinguish from aged-to-zero pages).  We
> would not do unlazy activation but just allow age to increment with each
> touch.  Then, in addition to the Referenced test, we would test the age
> against a tunable threshold to decide which pages to rescue.  You can see
> that this would take care of your 100 streaming clients case nicely, while
> not negatively affecting the cases that are already working well.

Nice way to fuck up the 100 streaming client case even more, you mean.

> A second simple improvement is to have separate activation and deactivation
> queues.  This allows you to tune the rate at which pages are pulled from the
> activation queue (these would be the streaming IO pages) against pages culled
> from the active list.  I can't think of any downside at all for doing this,
> except that it's not something I'd consider appropriate for the 2.4 series.

This is called "better page aging".

> > [...] you haven't yet made any
> > proposal on how to make the rest of the VM interact nicely
> > with the use-once idea, preventing things like the thrashing
> > of the readahead window, etc...
>
> This is hypothetical thrashing so far; have you seen it in the wild?

Yes.

Rik
--
IA64: a worthy successor to i860.

http://www.surriel.com/		http://distro.conectiva.com/

Send all your spam to aardvark@nl.linux.org (spam digging piggy)


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-21  4:11                                   ` Daniel Phillips
@ 2001-08-21  4:08                                     ` Rik van Riel
  0 siblings, 0 replies; 35+ messages in thread
From: Rik van Riel @ 2001-08-21  4:08 UTC (permalink / raw)
  To: Daniel Phillips
  Cc: Marcelo Tosatti, Mike Galbraith, Frank Dekervel, linux-kernel

On Tue, 21 Aug 2001, Daniel Phillips wrote:
> On August 21, 2001 05:58 am, Rik van Riel wrote:
> > On Tue, 21 Aug 2001, Daniel Phillips wrote:
> > > This is hypothetical thrashing so far; have you seen it in the wild?
> >
> > Yes.
>
> Could you supply details please?

No hard measurements since the kernel doesn't export the
statistics for all of this yet, but I've seen the behaviour
and after thinking for a few seconds I came up with the
maths I gave you before to explain the situation.

Now it's your turn to come up with the maths to back up
your assumptions.

Rik
--
IA64: a worthy successor to i860.

http://www.surriel.com/		http://distro.conectiva.com/

Send all your spam to aardvark@nl.linux.org (spam digging piggy)


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-21  3:58                                 ` Rik van Riel
@ 2001-08-21  4:11                                   ` Daniel Phillips
  2001-08-21  4:08                                     ` Rik van Riel
  0 siblings, 1 reply; 35+ messages in thread
From: Daniel Phillips @ 2001-08-21  4:11 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Marcelo Tosatti, Mike Galbraith, Frank Dekervel, linux-kernel

On August 21, 2001 05:58 am, Rik van Riel wrote:
> On Tue, 21 Aug 2001, Daniel Phillips wrote:
> > This is hypothetical thrashing so far; have you seen it in the wild?
> 
> Yes.

Could you supply details please?

--
Daniel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 20:34         ` Daniel Phillips
  2001-08-20 19:12           ` Marcelo Tosatti
@ 2001-08-21  4:52           ` Mike Galbraith
  2001-08-21  5:14             ` Rik van Riel
  1 sibling, 1 reply; 35+ messages in thread
From: Mike Galbraith @ 2001-08-21  4:52 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: Frank Dekervel, linux-kernel

On Mon, 20 Aug 2001, Daniel Phillips wrote:

> On August 20, 2001 09:14 pm, Mike Galbraith wrote:
> > On Mon, 20 Aug 2001, Daniel Phillips wrote:
> > > On August 20, 2001 05:40 pm, Mike Galbraith wrote:
> > > > I'll give your patch a shot.  In the meantime, below is what I did
> > > > to it here.  I might have busted use_once all to pieces ;-) but it
> > > > cured my problem, so I'll show it anyway.
> > >
> > > No, this doesn't break it at all, what it does is require the IO page
> > > to be touched more times before it's considered truly active.  This
> > > partly takes care of the theory that an intial burst of activity on
> > > the page should be considered as only one use.
> >
> > (it turns it into a ~sortof used twiceish in my specific case I think..
>
> Actually, used-thriceish.
>
> > the aging must happen to make it work right though.. very very tricky.
>
> I doubt the aging has much to do with it, what's more important is the length
> of the inactive_dirty queue.  Of course, aging affects that and so does
> scanning policy, both a little "uncalibrated" at the moment.
>
> > Nope, I don't have anything other than a 'rough visual' to work with..
> > might be totally out there ;-)
>
> What made you think of trying the higher activation threshold? ;-)

Well :)) there I sat daydreaming, imagining myself as a bonnie page
running around queues, got dizzy and finally just changed the little
spot that kept attracting my eyeballs.. a hunch.

	-Mike


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-21  4:52           ` Mike Galbraith
@ 2001-08-21  5:14             ` Rik van Riel
  0 siblings, 0 replies; 35+ messages in thread
From: Rik van Riel @ 2001-08-21  5:14 UTC (permalink / raw)
  To: Mike Galbraith; +Cc: Daniel Phillips, Frank Dekervel, linux-kernel

On Tue, 21 Aug 2001, Mike Galbraith wrote:
> On Mon, 20 Aug 2001, Daniel Phillips wrote:

> > What made you think of trying the higher activation threshold? ;-)
>
> Well :)) there I sat daydreaming, imagining myself as a bonnie page
> running around queues, got dizzy and finally just changed the little
> spot that kept attracting my eyeballs.. a hunch.

And a good hunch.  There is NO fundamental difference between
used-once vs. used-twice or used-twice vs. used-thrice.  It's
one big gray area of pages further or closer to eviction.

The solution to making a system which is resistant to scanning,
yet allows the streaming IO to evict the least used part of the
currently active pages (to replace old data) is to use a better
page aging tactic.

If you don't believe me, try streaming IO on Linux 2.0, or grab my
patch to introduce tunable page aging on 2.4.8-ac7+ and try it. ;)

regards,

Rik
--
IA64: a worthy successor to i860.

http://www.surriel.com/		http://distro.conectiva.com/

Send all your spam to aardvark@nl.linux.org (spam digging piggy)


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 problem
  2001-08-20 21:20     ` Daniel Phillips
@ 2001-08-23 10:04       ` Andrey Nekrasov
  2001-08-23 14:46         ` Daniel Phillips
  2001-08-23 15:39         ` Stephan von Krawczynski
  0 siblings, 2 replies; 35+ messages in thread
From: Andrey Nekrasov @ 2001-08-23 10:04 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: linux-kernel

Hello Daniel Phillips,

> > I have a problem with "kernel: __alloc_pages: 0-order allocation failed."
> 
> Could you please try it with this patch, which will tell us a little more 
> about what's happening (patch -p0):
> 
> --- ../2.4.9.clean/mm/page_alloc.c	Thu Aug 16 12:43:02 2001
> +++ ./mm/page_alloc.c	Mon Aug 20 22:05:40 2001
> @@ -502,7 +502,7 @@
>  	}
>  
>  	/* No luck.. */
> -	printk(KERN_ERR "__alloc_pages: %lu-order allocation failed.\n", order);
> +	printk(KERN_ERR "__alloc_pages: %lu-order allocation failed (gfp=0x%x/%i).\n", order, gfp_mask, !!(current->flags & PF_MEMALLOC));
>  	return NULL;
>  }

I applied patch, this is result:

...
Aug 23 14:00:29 sol kernel: __alloc_pages: 0-order allocation failed (gfp=0x30/1).
Aug 23 14:00:29 sol last message repeated 12 times
Aug 23 14:00:29 sol kernel: cation failed (gfp=0x30/1).
Aug 23 14:00:29 sol kernel: __alloc_pages: 0-order allocation failed (gfp=0x30/1).
Aug 23 14:00:53 sol last message repeated 334 times
Aug 23 14:00:53 sol kernel: cation failed (gfp=0x30/1).
Aug 23 14:00:53 sol kernel: __alloc_pages: 0-order allocation failed (gfp=0x30/1).
Aug 23 14:00:55 sol last message repeated 300 times
Aug 23 14:00:55 sol kernel: cation failed (gfp=0x30/1).
Aug 23 14:00:55 sol kernel: __alloc_pages: 0-order allocation failed (gfp=0x30/1).
Aug 23 14:00:55 sol last message repeated 281 times
...


Hm. Is that it?

-- 
bye.
Andrey Nekrasov, SpyLOG.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 problem
  2001-08-23 10:04       ` Andrey Nekrasov
@ 2001-08-23 14:46         ` Daniel Phillips
  2001-08-23 15:39         ` Stephan von Krawczynski
  1 sibling, 0 replies; 35+ messages in thread
From: Daniel Phillips @ 2001-08-23 14:46 UTC (permalink / raw)
  To: Andrey Nekrasov; +Cc: linux-kernel

On August 23, 2001 12:04 pm, Andrey Nekrasov wrote:
> Hello Daniel Phillips,
> 
> > > I have a problem with "kernel: __alloc_pages: 0-order allocation failed."
> > 
> > Could you please try it with this patch, which will tell us a little more 
> > about what's happening (patch -p0):
> > 
> > --- ../2.4.9.clean/mm/page_alloc.c	Thu Aug 16 12:43:02 2001
> > +++ ./mm/page_alloc.c	Mon Aug 20 22:05:40 2001
> > @@ -502,7 +502,7 @@
> >  	}
> >  
> >  	/* No luck.. */
> > -	printk(KERN_ERR "__alloc_pages: %lu-order allocation failed.\n", order);
> > +	printk(KERN_ERR "__alloc_pages: %lu-order allocation failed (gfp=0x%x/%i).\n", order, gfp_mask, !!(current->flags & PF_MEMALLOC));
> >  	return NULL;
> >  }
> 
> I applied patch, this is result:
> 
> ...
> Aug 23 14:00:29 sol kernel: __alloc_pages: 0-order allocation failed (gfp=0x30/1).
> Aug 23 14:00:29 sol last message repeated 12 times
> Aug 23 14:00:29 sol kernel: cation failed (gfp=0x30/1).
> Aug 23 14:00:29 sol kernel: __alloc_pages: 0-order allocation failed (gfp=0x30/1).
> Aug 23 14:00:53 sol last message repeated 334 times
> Aug 23 14:00:53 sol kernel: cation failed (gfp=0x30/1).
> Aug 23 14:00:53 sol kernel: __alloc_pages: 0-order allocation failed (gfp=0x30/1).
> Aug 23 14:00:55 sol last message repeated 300 times
> Aug 23 14:00:55 sol kernel: cation failed (gfp=0x30/1).
> Aug 23 14:00:55 sol kernel: __alloc_pages: 0-order allocation failed (gfp=0x30/1).
> Aug 23 14:00:55 sol last message repeated 281 times
> ...
> 
> 
> Hm. Is that it?

Marcelo already posted a patch to fix this problem (bounce buffer allocation). 
Look under subject "Re: With Daniel Phillips Patch (was: aic7xxx with 2.4.9 on
7899P)" with a correction in his next post.

--
Daniel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 problem
  2001-08-23 10:04       ` Andrey Nekrasov
  2001-08-23 14:46         ` Daniel Phillips
@ 2001-08-23 15:39         ` Stephan von Krawczynski
  2001-08-23 18:05           ` Daniel Phillips
  1 sibling, 1 reply; 35+ messages in thread
From: Stephan von Krawczynski @ 2001-08-23 15:39 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: linux-kernel, riel

On Thu, 23 Aug 2001 16:46:54 +0200
Daniel Phillips <phillips@bonn-fries.net> wrote:

> Marcelo already posted a patch to fix this problem (bounce buffer allocation). 
> Look under subject "Re: With Daniel Phillips Patch (was: aic7xxx with 2.4.9 on
> 7899P)" with a correction in his next post.

Aehm, Daniel, just to inform you: Marcelo's patch does not solve the
problem.  I just proved it here.  It is completely the same with or without
the patch.

I tried another thing which might be interesting.  I think your opinion is
that page_launder gives you free memory, if available, when the system runs
short.  But it does not.  I tried the following: DEF_PRIORITY in vmscan.c
set to 0.  This should come out as page_launder doing the complete page list
over in search of free pages.  And guess what: it does not find enough to
keep the system running.  In other words: at least the search strategy in
page_launder is broken, too.  I can see 500 Megs of Inact_dirty mem, but
page_launder cannot find enough clean ones to keep a simple file copy
running.

Any ideas left?

Regards,
Stephan


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 problem
  2001-08-23 15:39         ` Stephan von Krawczynski
@ 2001-08-23 18:05           ` Daniel Phillips
  0 siblings, 0 replies; 35+ messages in thread
From: Daniel Phillips @ 2001-08-23 18:05 UTC (permalink / raw)
  To: Stephan von Krawczynski; +Cc: linux-kernel, riel

On August 23, 2001 05:39 pm, Stephan von Krawczynski wrote:
> On Thu, 23 Aug 2001 16:46:54 +0200
> Daniel Phillips <phillips@bonn-fries.net> wrote:
> 
> > Marcelo already posted a patch to fix this problem (bounce buffer allocation). 
> > Look under subject "Re: With Daniel Phillips Patch (was: aic7xxx with 2.4.9 on
> > 7899P)" with a correction in his next post.
> 
> Ahem, Daniel, just to inform you: Marcelo's patch does not solve the
> problem. I just verified it here; it is exactly the same with or without
> the patch.

That's because you have a different problem.  Marcelo's patch solves a
problem with bounce buffer allocation.  Your problem is a higher-order atomic 
allocation, very different.  Now let's take a close look at it and try to kill 
it.

> I tried another thing which might be interesting. I think your view is 
> that page_launder gives you free memory, if available, when the system 
> runs short. But it does not. I tried the following:
> I set DEF_PRIORITY in vmscan.c to 0. This should make page_launder work 
> over the complete page list in search of free pages. And guess what: 
> it does not find enough to keep the system running. In other words, at 
> least the search strategy in page_launder is broken, too. I can see 500 
> MB of Inact_dirty memory, but page_launder cannot find enough clean 
> pages to keep a simple file copy running.
> Any ideas left?

It's not a simple file copy; it involves NFS, if I recall correctly.  Or maybe 
you have a different test case now?  Could you please (re)summarize the 
conditions that cause the allocation failures, and supply the rest of the 
information you consider relevant?

Could you please also list the problems that remain if you remove the 
"allocation failed" printk completely.

--
Daniel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 19:32 ` Daniel Phillips
@ 2001-08-20 21:38   ` Rik van Riel
  0 siblings, 0 replies; 35+ messages in thread
From: Rik van Riel @ 2001-08-20 21:38 UTC (permalink / raw)
  To: Daniel Phillips; +Cc: Benjamin Redelings I, linux-kernel, linux-mm

On Mon, 20 Aug 2001, Daniel Phillips wrote:

> A similar thing has to be done in filemap_nopage (which will
> take care of mmap pages) and also for any filesystems whose page
> accesses bypass generic_read/write,

Either that, or you fix page_launder() like I explained
to you on IRC yesterday ;)

Rik
--
IA64: a worthy successor to the i860.

		http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com/


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 17:02 ` Rik van Riel
@ 2001-08-20 20:13   ` Daniel Phillips
  0 siblings, 0 replies; 35+ messages in thread
From: Daniel Phillips @ 2001-08-20 20:13 UTC (permalink / raw)
  To: Rik van Riel, Benjamin Redelings I; +Cc: linux-kernel, linux-mm

On August 20, 2001 07:02 pm, Rik van Riel wrote:
> On Mon, 20 Aug 2001, Benjamin Redelings I wrote:
> 
> > Was it really true, that swapped in pages didn't get marked as
> > referenced before?
> 
> That's just an artifact of the use-once patch, which
> only sets the referenced bit on the _second_ access
> to a page.

It was an artifact of the change in lru_cache_add where all new pages start 
on the inactive queue instead of the active queue.

--
Daniel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 16:13 Benjamin Redelings I
  2001-08-20 17:02 ` Rik van Riel
@ 2001-08-20 19:32 ` Daniel Phillips
  2001-08-20 21:38   ` Rik van Riel
  1 sibling, 1 reply; 35+ messages in thread
From: Daniel Phillips @ 2001-08-20 19:32 UTC (permalink / raw)
  To: Benjamin Redelings I, linux-kernel, linux-mm

On August 20, 2001 06:13 pm, Benjamin Redelings I wrote:
> Daniel Phillips wrote:
> > Could you please try this patch against 2.4.9 (patch -p0):
> > 
> > --- ../2.4.9.clean/mm/memory.c	Mon Aug 13 19:16:41 2001
> > +++ ./mm/memory.c	Sun Aug 19 21:35:26 2001
> > @@ -1119,6 +1119,7 @@
> >  			 */
> >  			return pte_same(*page_table, orig_pte) ? -1 : 1;
> >  		}
> > +		SetPageReferenced(page);
> >  	}
> >  
> >  	/*
> > 
> 
> 
> Well, I tried this, and.... WOW!  Much better  [:)]
> Was it really true, that swapped in pages didn't get marked as 
> referenced before?  It almost felt that bad, but that seems kind of 
> crazy - I don't completely understand what this fix is doing...

With the use-once optimization, all pages start on the inactive queue 
instead of the active ring.  If the page doesn't get referenced before it 
gets to the other end of the inactive queue then it will be evicted and 
freed.  This means that somebody has to "rescue" each page that is actually 
used, before it gets to the end of the inactive queue.  This is implemented 
explicitly for generic_file_read and generic_file_write via the 
check_used_once function (which implements the use-once logic) and implicitly 
for buffers via the existing touch_buffer function.

There was no such rescue implemented for swap pages because, when I originally 
submitted the patch as an [RFC], I was doing all my testing without using any 
swap space, just file IO.  At the time there were also other things wrong with 
the swap cache, so it was hard to see the bad effects of this omission.

Just setting the page referenced means that page_launder or reclaim_page will 
see the referenced bit and move the page to the active list, so it can live 
out its normal life cycle.

A similar thing has to be done in filemap_nopage (which will take care of 
mmap pages) and also for any filesystems whose page accesses bypass  
generic_read/write, for example, the new directory-in-pagecache code in ext2.
I'm thinking now about whether it's best to take an approach that plugs all 
the holes in a generic way, or instead just hunt them down one by one.  Once 
you know such holes are there it's not particularly hard to find and fill 
them.  It's tempting to try and move this logic to a more central place - the 
problem with that is, in the central place it's hard to filter out accesses 
that aren't real uses, such as readahead.

A final note: though the swap cache is not able to take full advantage of the 
use-once logic (because we don't have a good way of checking the state of the 
hardware page referenced bit - yet) it still does get a small benefit from 
the machinery: when we optimistically read ahead from swap, those pages that 
are not actually used will not be faulted in, thus will not have their 
referenced bit set, thus will be discarded quickly.

--
Daniel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
  2001-08-20 16:13 Benjamin Redelings I
@ 2001-08-20 17:02 ` Rik van Riel
  2001-08-20 20:13   ` Daniel Phillips
  2001-08-20 19:32 ` Daniel Phillips
  1 sibling, 1 reply; 35+ messages in thread
From: Rik van Riel @ 2001-08-20 17:02 UTC (permalink / raw)
  To: Benjamin Redelings I; +Cc: linux-kernel, linux-mm

On Mon, 20 Aug 2001, Benjamin Redelings I wrote:

> Was it really true, that swapped in pages didn't get marked as
> referenced before?

That's just an artifact of the use-once patch, which
only sets the referenced bit on the _second_ access
to a page.

Rik
--
IA64: a worthy successor to the i860.

		http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com/


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: 2.4.8/2.4.9 VM problems
@ 2001-08-20 16:13 Benjamin Redelings I
  2001-08-20 17:02 ` Rik van Riel
  2001-08-20 19:32 ` Daniel Phillips
  0 siblings, 2 replies; 35+ messages in thread
From: Benjamin Redelings I @ 2001-08-20 16:13 UTC (permalink / raw)
  To: linux-kernel, linux-mm

Daniel Phillips wrote:
> Could you please try this patch against 2.4.9 (patch -p0):
> 
> --- ../2.4.9.clean/mm/memory.c	Mon Aug 13 19:16:41 2001
> +++ ./mm/memory.c	Sun Aug 19 21:35:26 2001
> @@ -1119,6 +1119,7 @@
>  			 */
>  			return pte_same(*page_table, orig_pte) ? -1 : 1;
>  		}
> +		SetPageReferenced(page);
>  	}
>  
>  	/*
> 


Well, I tried this, and.... WOW!  Much better  [:)]
Was it really true, that swapped in pages didn't get marked as 
referenced before?  It almost felt that bad, but that seems kind of 
crazy - I don't completely understand what this fix is doing...

-BenRI
P.S. I tried this on my 64Mb PPro and a 128Mb PIII, and both felt like 
they had a lot more memory - e.g. less swapping and stuff.
-- 
"I will begin again" - U2, 'New Year's Day'
Benjamin Redelings I      <><     http://www.bol.ucla.edu/~bredelin/


^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2001-08-23 17:59 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2001-08-17 13:10 2.4.8/2.4.9 VM problems Frank Dekervel
2001-08-19 21:00 ` Daniel Phillips
2001-08-19 21:13   ` 2.4.8/2.4.9 problem Andrey Nekrasov
2001-08-20 21:20     ` Daniel Phillips
2001-08-23 10:04       ` Andrey Nekrasov
2001-08-23 14:46         ` Daniel Phillips
2001-08-23 15:39         ` Stephan von Krawczynski
2001-08-23 18:05           ` Daniel Phillips
2001-08-20 15:40   ` 2.4.8/2.4.9 VM problems Mike Galbraith
2001-08-20 17:10     ` Daniel Phillips
2001-08-20 19:14       ` Mike Galbraith
2001-08-20 20:34         ` Daniel Phillips
2001-08-20 19:12           ` Marcelo Tosatti
2001-08-20 21:40             ` Daniel Phillips
2001-08-20 20:08               ` Marcelo Tosatti
2001-08-20 20:16                 ` Marcelo Tosatti
2001-08-20 22:54                   ` Daniel Phillips
2001-08-20 21:50                     ` Marcelo Tosatti
2001-08-20 23:29                       ` Daniel Phillips
2001-08-20 22:05                         ` Marcelo Tosatti
2001-08-20 23:54                           ` Daniel Phillips
2001-08-21  1:55                             ` Rik van Riel
2001-08-21  3:51                               ` Daniel Phillips
2001-08-21  3:58                                 ` Rik van Riel
2001-08-21  4:11                                   ` Daniel Phillips
2001-08-21  4:08                                     ` Rik van Riel
2001-08-20 21:44               ` Rik van Riel
2001-08-20 22:47                 ` Daniel Phillips
2001-08-21  4:52           ` Mike Galbraith
2001-08-21  5:14             ` Rik van Riel
2001-08-20 16:13 Benjamin Redelings I
2001-08-20 17:02 ` Rik van Riel
2001-08-20 20:13   ` Daniel Phillips
2001-08-20 19:32 ` Daniel Phillips
2001-08-20 21:38   ` Rik van Riel

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).