linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* VM nuisance
@ 2001-08-11  1:30 ` David Ford
  2001-08-11  2:48   ` Rik van Riel
  2001-08-14 14:00   ` "VM watchdog"? [was Re: VM nuisance] Pavel Machek
  0 siblings, 2 replies; 32+ messages in thread
From: David Ford @ 2001-08-11  1:30 UTC (permalink / raw)
  To: linux-kernel

Is there anything measurably useful in any -ac or -pre patches after 
2.4.7 that helps or fixes the blasted out-of-memory-but-let's-go-fsck 
-ourselves-for-a-few-hours?

I was very close to hitting the reset button and losing a lot of 
important information because of this.  I accidently got too close to 
the edge of memory (~6megs free) and the kernel went into FMM (fsck 
myself mode)...i.e. spin mightily looking for memory and going noplace 
whilst ignoring it's little buddy the OOM handler.

Again, it doesn't matter if I have swap or not, if I get within ~6 megs 
of the end of memory, the kernel goes FMM.  I've tested with and without 
swap.  And _please_  don't tell me "just add more swap".  That's 
ludicruous and isn't solving the problem, it's covering up a symptom.

</rant>

So, is there anything useful or any personal/private patches I can try?

David



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11  1:30 ` VM nuisance David Ford
@ 2001-08-11  2:48   ` Rik van Riel
  2001-08-11  2:59     ` H. Peter Anvin
                       ` (4 more replies)
  2001-08-14 14:00   ` "VM watchdog"? [was Re: VM nuisance] Pavel Machek
  1 sibling, 5 replies; 32+ messages in thread
From: Rik van Riel @ 2001-08-11  2:48 UTC (permalink / raw)
  To: David Ford; +Cc: linux-kernel

On Fri, 10 Aug 2001, David Ford wrote:

> Is there anything measurably useful in any -ac or -pre patches after
> 2.4.7 that helps or fixes the blasted out-of-memory-but-let's-go-fsck
> -ourselves-for-a-few-hours?

No.  The problem is that whenever I change something to
the OOM killer I get flamed.

Both by the people for whom the OOM killer kicks in too
early and by the people for whom the OOM killer now doesn't
kick in.

I haven't got the faintest idea how to come up with an OOM
killer which does the right thing for everybody.

regards,

Rik
--
IA64: a worthy successor to i860.

http://www.surriel.com/		http://distro.conectiva.com/

Send all your spam to aardvark@nl.linux.org (spam digging piggy)


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11  2:48   ` Rik van Riel
@ 2001-08-11  2:59     ` H. Peter Anvin
  2001-08-11  4:17       ` Rik van Riel
  2001-08-11 13:13       ` Alan Cox
  2001-08-11  3:50     ` safemode
                       ` (3 subsequent siblings)
  4 siblings, 2 replies; 32+ messages in thread
From: H. Peter Anvin @ 2001-08-11  2:59 UTC (permalink / raw)
  To: linux-kernel

Followup to:  <Pine.LNX.4.33L.0108102347050.3530-100000@imladris.rielhome.conectiva>
By author:    Rik van Riel <riel@conectiva.com.br>
In newsgroup: linux.dev.kernel
>
> I haven't got the faintest idea how to come up with an OOM
> killer which does the right thing for everybody.
> 

Basically because there is no such thing?

	-hpa
-- 
<hpa@transmeta.com> at work, <hpa@zytor.com> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt	<amsp@zytor.com>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11  2:48   ` Rik van Riel
  2001-08-11  2:59     ` H. Peter Anvin
@ 2001-08-11  3:50     ` safemode
  2001-08-12 13:09     ` Maciej Zenczykowski
                       ` (2 subsequent siblings)
  4 siblings, 0 replies; 32+ messages in thread
From: safemode @ 2001-08-11  3:50 UTC (permalink / raw)
  To: Rik van Riel, David Ford; +Cc: linux-kernel

maybe these are signs that the OOM killer just wasn't ready for 2.4 with the 
VM that it has.   For people hoping to use 2.4 as a server platform, they 
should have the confidence to know what's going to happen when the OOM 
situation occurs.   If the current OOM handler cant give that confidence then 
perhaps it should be removed and slated for 2.5.     Simple way to beat the 
flames anyway.  


On Friday 10 August 2001 22:48, Rik van Riel wrote:
> On Fri, 10 Aug 2001, David Ford wrote:
> > Is there anything measurably useful in any -ac or -pre patches after
> > 2.4.7 that helps or fixes the blasted out-of-memory-but-let's-go-fsck
> > -ourselves-for-a-few-hours?
>
> No.  The problem is that whenever I change something to
> the OOM killer I get flamed.
>
> Both by the people for whom the OOM killer kicks in too
> early and by the people for whom the OOM killer now doesn't
> kick in.
>
> I haven't got the faintest idea how to come up with an OOM
> killer which does the right thing for everybody.
>
> regards,
>
> Rik
> --
> IA64: a worthy successor to i860.
>
> http://www.surriel.com/		http://distro.conectiva.com/
>
> Send all your spam to aardvark@nl.linux.org (spam digging piggy)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11  2:59     ` H. Peter Anvin
@ 2001-08-11  4:17       ` Rik van Riel
  2001-08-11  4:40         ` David Ford
                           ` (2 more replies)
  2001-08-11 13:13       ` Alan Cox
  1 sibling, 3 replies; 32+ messages in thread
From: Rik van Riel @ 2001-08-11  4:17 UTC (permalink / raw)
  To: H. Peter Anvin; +Cc: linux-kernel

On 10 Aug 2001, H. Peter Anvin wrote:

> > I haven't got the faintest idea how to come up with an OOM
> > killer which does the right thing for everybody.
>
> Basically because there is no such thing?

Actually the killer itself isn't the problem.

It's deciding when to let it kick in.

Rik
--
IA64: a worthy successor to i860.

http://www.surriel.com/		http://distro.conectiva.com/

Send all your spam to aardvark@nl.linux.org (spam digging piggy)


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11  4:17       ` Rik van Riel
@ 2001-08-11  4:40         ` David Ford
  2001-08-11  4:46           ` Rik van Riel
  2001-08-11  4:41         ` safemode
  2001-08-11  5:00         ` H. Peter Anvin
  2 siblings, 1 reply; 32+ messages in thread
From: David Ford @ 2001-08-11  4:40 UTC (permalink / raw)
  To: Rik van Riel; +Cc: H. Peter Anvin, linux-kernel

Perhaps a tunable load value w/ kswapd?  If you're trying to accomplish 
more than N iterations of kswapd's particular function...take your pick, 
then make the OOM killer more trigger happy, perhaps on a sliding scale. 
 At least then -something- will get killed the harder we try to get 
pages.  As it is now, it's very likely the kernel get's stuck on itself 
for hours on end...sometimes never recovering.  I suspect the only 
reason why I recovered it was because I happened to have about 8 ssh 
sessions to other machines that I was able to kill them on.

David

Rik van Riel wrote:

>Actually the killer itself isn't the problem.
>
>It's deciding when to let it kick in.
>



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11  4:17       ` Rik van Riel
  2001-08-11  4:40         ` David Ford
@ 2001-08-11  4:41         ` safemode
  2001-08-11  5:00         ` H. Peter Anvin
  2 siblings, 0 replies; 32+ messages in thread
From: safemode @ 2001-08-11  4:41 UTC (permalink / raw)
  To: Rik van Riel, H. Peter Anvin; +Cc: linux-kernel

On Saturday 11 August 2001 00:17, Rik van Riel wrote:
> On 10 Aug 2001, H. Peter Anvin wrote:
> > > I haven't got the faintest idea how to come up with an OOM
> > > killer which does the right thing for everybody.
> >
> > Basically because there is no such thing?
>
> Actually the killer itself isn't the problem.
>
> It's deciding when to let it kick in.

I was under the presumption from what David and others have said that the OOM 
sometimes works the way it was meant to and kills the offending program, or 
it put the box into this super sluggish state for a very long time regardless 
if it was early or late.   If it only happens when it's late then what was 
David talking about?   



> Rik
> --
> IA64: a worthy successor to i860.
>
> http://www.surriel.com/		http://distro.conectiva.com/
>
> Send all your spam to aardvark@nl.linux.org (spam digging piggy)
>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11  4:40         ` David Ford
@ 2001-08-11  4:46           ` Rik van Riel
  0 siblings, 0 replies; 32+ messages in thread
From: Rik van Riel @ 2001-08-11  4:46 UTC (permalink / raw)
  To: David Ford; +Cc: linux-kernel, safemode

On Sat, 11 Aug 2001, David Ford wrote:

> Perhaps a tunable load value w/ kswapd?  If you're trying to accomplish
> more than N iterations of kswapd's particular function...take your pick,

David, safemode,

your patches are appreciated. OOM is usually quite a rare
condition so I won't be participating in any handwaving
discussions.

I'm willing to discuss tested patches any time, however.

regards,

Rik
--
IA64: a worthy successor to i860.

http://www.surriel.com/		http://distro.conectiva.com/

Send all your spam to aardvark@nl.linux.org (spam digging piggy)


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11  4:17       ` Rik van Riel
  2001-08-11  4:40         ` David Ford
  2001-08-11  4:41         ` safemode
@ 2001-08-11  5:00         ` H. Peter Anvin
  2 siblings, 0 replies; 32+ messages in thread
From: H. Peter Anvin @ 2001-08-11  5:00 UTC (permalink / raw)
  To: Rik van Riel; +Cc: linux-kernel

Rik van Riel wrote:
> On 10 Aug 2001, H. Peter Anvin wrote:
> 
> 
>>>I haven't got the faintest idea how to come up with an OOM
>>>killer which does the right thing for everybody.
>>>
>>Basically because there is no such thing?
>>
> 
> Actually the killer itself isn't the problem.
> 
> It's deciding when to let it kick in.
> 

Well... yeah...

	-hpa


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11  2:59     ` H. Peter Anvin
  2001-08-11  4:17       ` Rik van Riel
@ 2001-08-11 13:13       ` Alan Cox
  2001-08-11 15:39         ` David Ford
  1 sibling, 1 reply; 32+ messages in thread
From: Alan Cox @ 2001-08-11 13:13 UTC (permalink / raw)
  To: H. Peter Anvin; +Cc: linux-kernel

> Followup to:  <Pine.LNX.4.33L.0108102347050.3530-100000@imladris.rielhome.conectiva>
> By author:    Rik van Riel <riel@conectiva.com.br>
> In newsgroup: linux.dev.kernel
> >
> > I haven't got the faintest idea how to come up with an OOM
> > killer which does the right thing for everybody.
> 
> Basically because there is no such thing?

And also because 

-	people mix OOM and thrashing handling up - when they are logically
	seperated questions.

-	The 2.4 VM goes completely gaga under high load. Its beautiful under
	light loads, there nobody can touch it, but when you actually really
	need it - splat. 


So people either need to get an OOM when they are not but in fact might
thrash, or the box thrashes so hard it makes insufficient progress to
actually get out of memory.

OOM is also very hard to get right without reservations tracking in kernel
for the journalling file systems and other similar stuff. To an extent
thrash handling also wants RSS limits.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11 13:13       ` Alan Cox
@ 2001-08-11 15:39         ` David Ford
  0 siblings, 0 replies; 32+ messages in thread
From: David Ford @ 2001-08-11 15:39 UTC (permalink / raw)
  To: Alan Cox; +Cc: H. Peter Anvin, linux-kernel



Alan Cox wrote:

>>Followup to:  <Pine.LNX.4.33L.0108102347050.3530-100000@imladris.rielhome.conectiva>
>>By author:    Rik van Riel <riel@conectiva.com.br>
>>In newsgroup: linux.dev.kernel
>>
>>>I haven't got the faintest idea how to come up with an OOM
>>>killer which does the right thing for everybody.
>>>
>>Basically because there is no such thing?
>>
>
>And also because 
>
>-	people mix OOM and thrashing handling up - when they are logically
>	seperated questions.
>

I understand this, I use reiserfs and because of this there is an 
increased baseline for "no more memory".  This leads to your last 
paragraph below.  The problem is that the OOM handler hasn't been made 
aware of this.

Don't we already measure VM pressure?  It may be a hack but we could 
adjust the triggerpoint of the OOM handler on a sliding scale 
proportionate to the VM pressure, could we not?  I think that is the 
most simple hack until we have a decent resource baseline in place..each 
subsystem would keep a tally of how much required memory it had/wanted 
and that way, the OOM handle would be much more intelligent about when 
to fire.

So this posits two questions.

- How hard/how much time.. is a sliding pressure scale hack to implement

- How hard/how much time.. to implement resource baselines (or similar 
concept)

David

>-	The 2.4 VM goes completely gaga under high load. Its beautiful under
>	light loads, there nobody can touch it, but when you actually really
>	need it - splat. 
>
>
>So people either need to get an OOM when they are not but in fact might
>thrash, or the box thrashes so hard it makes insufficient progress to
>actually get out of memory.
>
>OOM is also very hard to get right without reservations tracking in kernel
>for the journalling file systems and other similar stuff. To an extent
>thrash handling also wants RSS limits.
>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11  2:48   ` Rik van Riel
  2001-08-11  2:59     ` H. Peter Anvin
  2001-08-11  3:50     ` safemode
@ 2001-08-12 13:09     ` Maciej Zenczykowski
  2001-08-12 13:45       ` Rik van Riel
  2001-08-12 23:05       ` Dan Mann
  2001-08-13  0:01     ` Colonel
  2001-08-13 14:32     ` dean gaudet
  4 siblings, 2 replies; 32+ messages in thread
From: Maciej Zenczykowski @ 2001-08-12 13:09 UTC (permalink / raw)
  To: Rik van Riel; +Cc: David Ford, linux-kernel

> No.  The problem is that whenever I change something to
> the OOM killer I get flamed.
>
> Both by the people for whom the OOM killer kicks in too
> early and by the people for whom the OOM killer now doesn't
> kick in.
>
> I haven't got the faintest idea how to come up with an OOM
> killer which does the right thing for everybody.

How about adding some sort of per-process priority (i.e. a la nice) which
would determine the order in which they would be OOMed? Then we could
safely run X with a kigh KillMe and Netscape with an even higher KillMe
and we would probably avoid the something useing too much memory let's
kill root's shell...

[i.e. if a lower KillMe proc runs out of memory we kill off the process
with the highest KillMe using most mem and can safely give this mem to the
proc which just ran out]

MaZe.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-12 13:09     ` Maciej Zenczykowski
@ 2001-08-12 13:45       ` Rik van Riel
  2001-08-16 23:29         ` Justin A
  2001-08-12 23:05       ` Dan Mann
  1 sibling, 1 reply; 32+ messages in thread
From: Rik van Riel @ 2001-08-12 13:45 UTC (permalink / raw)
  To: Maciej Zenczykowski; +Cc: David Ford, linux-kernel

On Sun, 12 Aug 2001, Maciej Zenczykowski wrote:

> How about adding some sort of per-process priority (i.e. a la nice) which

That's not the problem. The killing itself is going
pretty well.

The problem is to decide WHEN to kill a process.

regards,

Rik
--
IA64: a worthy successor to i860.

http://www.surriel.com/		http://distro.conectiva.com/

Send all your spam to aardvark@nl.linux.org (spam digging piggy)


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-12 13:09     ` Maciej Zenczykowski
  2001-08-12 13:45       ` Rik van Riel
@ 2001-08-12 23:05       ` Dan Mann
  1 sibling, 0 replies; 32+ messages in thread
From: Dan Mann @ 2001-08-12 23:05 UTC (permalink / raw)
  To: linux-kernel

Rik Said:

> > No.  The problem is that whenever I change something to
> > the OOM killer I get flamed.
> >
> > Both by the people for whom the OOM killer kicks in too
> > early and by the people for whom the OOM killer now doesn't
> > kick in.
> >
> > I haven't got the faintest idea how to come up with an OOM
> > killer which does the right thing for everybody.

Would there be a way that you could have a proc interface to adjust the
sensitivity of the OOM killer so that users could raise or lower the
threshold that causes OOM killer activation?  Hopefully you wouldn't get any
flak for that unless users start blaming you for their own settings :-)

Most people would just use the standard setting that you provide...but
others that felt the need could change it on their system.

Dan

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11  2:48   ` Rik van Riel
                       ` (2 preceding siblings ...)
  2001-08-12 13:09     ` Maciej Zenczykowski
@ 2001-08-13  0:01     ` Colonel
  2001-08-13 14:32     ` dean gaudet
  4 siblings, 0 replies; 32+ messages in thread
From: Colonel @ 2001-08-13  0:01 UTC (permalink / raw)
  To: linux-kernel

In clouddancer.list.kernel, you wrote:
>
>> No.  The problem is that whenever I change something to
>> the OOM killer I get flamed.
>>
>> Both by the people for whom the OOM killer kicks in too
>> early and by the people for whom the OOM killer now doesn't
>> kick in.
>>
>> I haven't got the faintest idea how to come up with an OOM
>> killer which does the right thing for everybody.
>
>How about adding some sort of per-process priority (i.e. a la nice) which
>would determine the order in which they would be OOMed? Then we could
>safely run X with a kigh KillMe and Netscape with an even higher KillMe
>and we would probably avoid the something useing too much memory let's
>kill root's shell...
>
>[i.e. if a lower KillMe proc runs out of memory we kill off the process
>with the highest KillMe using most mem and can safely give this mem to the
>proc which just ran out]


Ah, so you kill X because of the high KillMe you've assigned?  I can
see that this idea quickly leads to dependecies and their problems (if
I kill X, I therefore have killed all Netscapes too...  but if I kill
a dhcp, without killing Netscape first, netscape is useless ... blah
blah blah).


If there is insufficient memory for a process, tell it to sit on it
and spin, especially since : "I haven't got the faintest idea..."
Stop trying to make up for the sysadmin bozo.


-- 
Windows 2001: "I'm sorry Dave ...  I'm afraid I can't do that."


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-11  2:48   ` Rik van Riel
                       ` (3 preceding siblings ...)
  2001-08-13  0:01     ` Colonel
@ 2001-08-13 14:32     ` dean gaudet
  2001-08-13 19:47       ` Brian
  2001-08-14  8:27       ` Helge Hafting
  4 siblings, 2 replies; 32+ messages in thread
From: dean gaudet @ 2001-08-13 14:32 UTC (permalink / raw)
  To: Rik van Riel; +Cc: David Ford, linux-kernel

On Fri, 10 Aug 2001, Rik van Riel wrote:

> I haven't got the faintest idea how to come up with an OOM
> killer which does the right thing for everybody.

maybe the concept is flawed?  not saying that it is flawed necessarily,
but i've often wondered why linux vm issues never seem to be solved in the
years i've been reading linux-kernel.

if i understand the OOM killer correctly it's intended to make some sort
of intelligent choice as to which application to shoot when the system is
out of memory.

honestly, if applications can't stomach being shot then the bug is in the
application, not in the kernel.  vi can handle being shot because it does
the right thing:  it checkpoints state periodically.  it's a simple model
which any sane application could follow, and many do actually follow.

getting shot for OOM or getting shot because of power failure, or 2-bit
RAM failure, or backhoe fade, or ... what's really the difference?

so why not just use the most simple OOM around:  shoot the first app which
can't get its page.  app writers won't like it, and users won't like it
until the app writers fix their bugs, but then nobody likes the current
situation, so what's the difference?

maybe kernel support for making checkpointing easier would be a good way
to advance the science?  there certainly are tools which exist that do
part of the problem already -- except for sockets and pipes and such
interprocess elements it's pretty trivial to checkpoint.  interprocess
elements probably require some extra kernel support.  network elements are
where it really becomes challenging.

i would happily give up 10 to 20% system resources for checkpoint overhead
if it meant that i'd be that much closer to a crashproof system.  after a
year the performance deficit would be made up by hardware improvements.

i know it's a big pill to swallow, but i've been impressed time and time
again by the will of linux kernel hackers to just say "this is how it will
be, because it is the only known way to perfection, deal."

-dean


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-13 14:32     ` dean gaudet
@ 2001-08-13 19:47       ` Brian
  2001-08-14  8:27       ` Helge Hafting
  1 sibling, 0 replies; 32+ messages in thread
From: Brian @ 2001-08-13 19:47 UTC (permalink / raw)
  To: dean gaudet; +Cc: linux-kernel

On Monday 13 August 2001 10:32 am, dean gaudet wrote:
> On Fri, 10 Aug 2001, Rik van Riel wrote:
> > I haven't got the faintest idea how to come up with an OOM
> > killer which does the right thing for everybody.
>
> maybe the concept is flawed?  not saying that it is flawed necessarily,
> but i've often wondered why linux vm issues never seem to be solved in
> the years i've been reading linux-kernel.
>
> if i understand the OOM killer correctly it's intended to make some sort
> of intelligent choice as to which application to shoot when the system
> is out of memory.

As I understand it, the problem is not what to do when the system is out 
of memory.  The problem is deciding what actually consititutes out of 
memory.  IOW, if I have a 32 MB system with 512MB of swap, I could have 
200MB of swap free and still have a system so far gone that it swaps more 
than it works.  OTOH, I could have 400MB worth of idle daemons and other 
residents taking up swap with a perfectly usable system in RAM.

It would be nice if we could sack a few tasks in swap for awhile so stuff 
in RAM could get things done.  Something like "if a page gets swapped out, 
it'll stay there for at least a minute" should work fine in normal 
conditions (since the page has probably been idle for awhile, anyway).  
Under memory pressure, the scheduler would pass by any tasks waiting on a 
page that was just swapped out.

Now that I've shown off how little I know about the current OOM design, 
I'll go sit and be quiet...
	-- Brian

> honestly, if applications can't stomach being shot then the bug is in
> the application, not in the kernel.  vi can handle being shot because it
> does the right thing:  it checkpoints state periodically.  it's a simple
> model which any sane application could follow, and many do actually
> follow.
>
> getting shot for OOM or getting shot because of power failure, or 2-bit
> RAM failure, or backhoe fade, or ... what's really the difference?
>
> so why not just use the most simple OOM around:  shoot the first app
> which can't get its page.  app writers won't like it, and users won't
> like it until the app writers fix their bugs, but then nobody likes the
> current situation, so what's the difference?
>
> maybe kernel support for making checkpointing easier would be a good way
> to advance the science?  there certainly are tools which exist that do
> part of the problem already -- except for sockets and pipes and such
> interprocess elements it's pretty trivial to checkpoint.  interprocess
> elements probably require some extra kernel support.  network elements
> are where it really becomes challenging.
>
> i would happily give up 10 to 20% system resources for checkpoint
> overhead if it meant that i'd be that much closer to a crashproof
> system.  after a year the performance deficit would be made up by
> hardware improvements.
>
> i know it's a big pill to swallow, but i've been impressed time and time
> again by the will of linux kernel hackers to just say "this is how it
> will be, because it is the only known way to perfection, deal."
>
> -dean
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel"
> in the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-13 14:32     ` dean gaudet
  2001-08-13 19:47       ` Brian
@ 2001-08-14  8:27       ` Helge Hafting
  2001-08-17 13:34         ` Szabolcs Szakacsits
  1 sibling, 1 reply; 32+ messages in thread
From: Helge Hafting @ 2001-08-14  8:27 UTC (permalink / raw)
  To: dean gaudet, linux-kernel

dean gaudet wrote:
[...]
> so why not just use the most simple OOM around:  shoot the first app which
> can't get its page.  app writers won't like it, and users won't like it
> until the app writers fix their bugs, but then nobody likes the current
> situation, so what's the difference?

It used to be like that.  Unfortunately, the first app unable to
get its page might very well be init, and then the entire system goes
down in flames.  You might as well kill the kernel at that point.

Fix that, and people start complaining that the X server goes, taking
all X apps with it when killing one would suffice.  Fix that,
and you almost have today's OOM killer.  

The real solution is to have enough memory for the task at hand. 
Failing
that, get so much swap space that people will be happy when the OOM
killer kicks in and limits the trashing.

Helge Hafting

^ permalink raw reply	[flat|nested] 32+ messages in thread

* "VM watchdog"? [was Re: VM nuisance]
  2001-08-11  1:30 ` VM nuisance David Ford
  2001-08-11  2:48   ` Rik van Riel
@ 2001-08-14 14:00   ` Pavel Machek
  2001-08-16 22:24     ` Jakob Østergaard
  1 sibling, 1 reply; 32+ messages in thread
From: Pavel Machek @ 2001-08-14 14:00 UTC (permalink / raw)
  To: David Ford; +Cc: linux-kernel

Hi!

> Is there anything measurably useful in any -ac or -pre patches after 
> 2.4.7 that helps or fixes the blasted out-of-memory-but-let's-go-fsck 
> -ourselves-for-a-few-hours?
> 
> I was very close to hitting the reset button and losing a lot of 
> important information because of this.  I accidently got too close to 
> the edge of memory (~6megs free) and the kernel went into FMM (fsck 
> myself mode)...i.e. spin mightily looking for memory and going noplace 
> whilst ignoring it's little buddy the OOM handler.
> 
> Again, it doesn't matter if I have swap or not, if I get within ~6 megs 
> of the end of memory, the kernel goes FMM.  I've tested with and without 
> swap.  And _please_  don't tell me "just add more swap".  That's 
> ludicruous and isn't solving the problem, it's covering up a symptom.

Maybe creating userland program that
*) allocates few megs
*) while 1 sleep 1m, gettimeofday. If more tha two minutes elapsed,
	tell OOM handler to kick in.

Or maybe kernel could have some "VM watchdog", which would trigger OOM if
it is not polled once a minute...
								Pavel
-- 
Philips Velo 1: 1"x4"x8", 300gram, 60, 12MB, 40bogomips, linux, mutt,
details at http://atrey.karlin.mff.cuni.cz/~pavel/velo/index.html.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: "VM watchdog"? [was Re: VM nuisance]
  2001-08-14 14:00   ` "VM watchdog"? [was Re: VM nuisance] Pavel Machek
@ 2001-08-16 22:24     ` Jakob Østergaard
  2001-08-17  1:26       ` David Ford
  0 siblings, 1 reply; 32+ messages in thread
From: Jakob Østergaard @ 2001-08-16 22:24 UTC (permalink / raw)
  To: Pavel Machek; +Cc: David Ford, linux-kernel

On Tue, Aug 14, 2001 at 02:00:11PM +0000, Pavel Machek wrote:
> Hi!
> 
...
> > Again, it doesn't matter if I have swap or not, if I get within ~6 megs 
> > of the end of memory, the kernel goes FMM.  I've tested with and without 
> > swap.  And _please_  don't tell me "just add more swap".  That's 
> > ludicruous and isn't solving the problem, it's covering up a symptom.
> 
> Maybe creating userland program that
> *) allocates few megs
> *) while 1 sleep 1m, gettimeofday. If more tha two minutes elapsed,
> 	tell OOM handler to kick in.

On my compute-server in the basement this is completely unacceptable because it
*may* just be working hard on something big.  The excessive swapping may just
be 10-30 minutes where some app is using way more memory than the box has RAM,
in this case it's no problem at all, and all your suggestion would give me is
randomly dying compute jobs.

On my desktop this is unacceptable as well. You want the system to be frozen
for more than two minutes before doing anything about it ?

The problem with using such vague heuristics against fixed (arbitrary) limits
is that the effect will almost always be completely unacceptable.  Either your
arbitrary limit is way too high, or way too low.

I can't tell you how to do it - but I think your suggestion is an excellent way
to *not* do it     :)

> 
> Or maybe kernel could have some "VM watchdog", which would trigger OOM if
> it is not polled once a minute...

Didn't everyone pretty much agree that if we could turn off overcommit
completely and reliably, that would be the preferred solution ?   Simply sig11
the app that's unlucky enough to want more memory than there's in the system
(or, horror, have malloc() fail)

Now, I don't remember the entire thread, but IIRC it was difficult to kill
overcommit completely.

-- 
................................................................
:   jakob@unthought.net   : And I see the elder races,         :
:.........................: putrid forms of man                :
:   Jakob Østergaard      : See him rise and claim the earth,  :
:        OZ9ABN           : his downfall is at hand.           :
:.........................:............{Konkhra}...............:

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-12 13:45       ` Rik van Riel
@ 2001-08-16 23:29         ` Justin A
  2001-08-17  0:06           ` Dr. Kelsey Hudson
  0 siblings, 1 reply; 32+ messages in thread
From: Justin A @ 2001-08-16 23:29 UTC (permalink / raw)
  To: linux-kernel

On Sun, Aug 12, 2001 at 10:45:40AM -0300, Rik van Riel wrote:
> On Sun, 12 Aug 2001, Maciej Zenczykowski wrote:
> 
> > How about adding some sort of per-process priority (i.e. a la nice) which
> 
> That's not the problem. The killing itself is going
> pretty well.
> 
> The problem is to decide WHEN to kill a process.
> 
> regards,
> 
> Rik

Though it is not a complete solution for all cases(servers etc),
could a SysReq combination be added that triggers OOM?  I'm sure many
people use SysReq-k on occasion to get a system out of endless
swapping,  I think having a SysReq key for OOM would be a great
improvement.

Comments?
-Justin

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-16 23:29         ` Justin A
@ 2001-08-17  0:06           ` Dr. Kelsey Hudson
  2001-08-17  0:24             ` Justin A
  0 siblings, 1 reply; 32+ messages in thread
From: Dr. Kelsey Hudson @ 2001-08-17  0:06 UTC (permalink / raw)
  To: Justin A; +Cc: linux-kernel

On Thu, 16 Aug 2001, Justin A wrote:

> Though it is not a complete solution for all cases(servers etc),
> could a SysReq combination be added that triggers OOM?  I'm sure many
> people use SysReq-k on occasion to get a system out of endless
> swapping,  I think having a SysReq key for OOM would be a great
> improvement.
>
> Comments?

I think it's a damn good idea. I'd actually begin coding that now, if I
knew where to start. SysRQ has saved my life more than once -- i'm sure it
would help all the other people who are having problems with randomized
thrashing and stuff.

 Kelsey Hudson                                           khudson@ctica.com
 Software Engineer
 Compendium Technologies, Inc                               (619) 725-0771
---------------------------------------------------------------------------


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-17  0:06           ` Dr. Kelsey Hudson
@ 2001-08-17  0:24             ` Justin A
  2001-08-17  2:54               ` Dr. Kelsey Hudson
  0 siblings, 1 reply; 32+ messages in thread
From: Justin A @ 2001-08-17  0:24 UTC (permalink / raw)
  To: Dr. Kelsey Hudson; +Cc: linux-kernel

Wouldn't it be just a matter of


case 'f':    /* F -- oom handlder _f_ree memory */
	printk("Run OOM Handler\n");
	oom_kill();
	break;

in handle_sysrq in sysrq.c?

-Justin

On Thu, Aug 16, 2001 at 05:06:40PM -0700, Dr. Kelsey Hudson wrote:
> On Thu, 16 Aug 2001, Justin A wrote:
> 
> > Though it is not a complete solution for all cases(servers etc),
> > could a SysReq combination be added that triggers OOM?  I'm sure many
> > people use SysReq-k on occasion to get a system out of endless
> > swapping,  I think having a SysReq key for OOM would be a great
> > improvement.
> >
> > Comments?
> 
> I think it's a damn good idea. I'd actually begin coding that now, if I
> knew where to start. SysRQ has saved my life more than once -- i'm sure it
> would help all the other people who are having problems with randomized
> thrashing and stuff.
> 
>  Kelsey Hudson                                           khudson@ctica.com
>  Software Engineer
>  Compendium Technologies, Inc                               (619) 725-0771
> ---------------------------------------------------------------------------
> 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: "VM watchdog"? [was Re: VM nuisance]
  2001-08-16 22:24     ` Jakob Østergaard
@ 2001-08-17  1:26       ` David Ford
  2001-08-17  1:41         ` Jakob Østergaard
  2001-08-17  9:04         ` Colonel
  0 siblings, 2 replies; 32+ messages in thread
From: David Ford @ 2001-08-17  1:26 UTC (permalink / raw)
  To: Jakob Østergaard; +Cc: Pavel Machek, linux-kernel

I think it is an excellent way to do it.  Nobody said you have to run 
the program and nobody forces you to use a particular program with a 
particular policy.  It puts the OOM policy in userland where -you- 
decide when and how things happen.

David

Jakob Østergaard wrote:

>>Maybe creating userland program that
>>*) allocates few megs
>>*) while 1 sleep 1m, gettimeofday. If more tha two minutes elapsed,
>>	tell OOM handler to kick in.
>>
>
>On my compute-server in the basement this is completely unacceptable because it
>*may* just be working hard on something big.  The excessive swapping may just
>be 10-30 minutes where some app is using way more memory than the box has RAM,
>in this case it's no problem at all, and all your suggestion would give me is
>randomly dying compute jobs.
>
>On my desktop this is unacceptable as well. You want the system to be frozen
>for more than two minutes before doing anything about it ?
>
>The problem with using such vague heuristics against fixed (arbitrary) limits
>is that the effect will almost always be completely unacceptable.  Either your
>arbitrary limit is way too high, or way too low.
>
>I can't tell you how to do it - but I think your suggestion is an excellent way
>to *not* do it     :)
>
>>Or maybe kernel could have some "VM watchdog", which would trigger OOM if
>>it is not polled once a minute...
>>
>
>Didn't everyone pretty much agree that if we could turn off overcommit
>completely and reliably, that would be the preferred solution ?   Simply sig11
>the app that's unlucky enough to want more memory than there's in the system
>(or, horror, have malloc() fail)
>
>Now, I don't remember the entire thread, but IIRC it was difficult to kill
>overcommit completely.
>

The kernel allocates memory within itself.  We will still reach OOM 
conditions.  It can't be avoided.

David



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: "VM watchdog"? [was Re: VM nuisance]
  2001-08-17  1:26       ` David Ford
@ 2001-08-17  1:41         ` Jakob Østergaard
  2001-08-17  9:04         ` Colonel
  1 sibling, 0 replies; 32+ messages in thread
From: Jakob Østergaard @ 2001-08-17  1:41 UTC (permalink / raw)
  To: David Ford; +Cc: Pavel Machek, linux-kernel

On Thu, Aug 16, 2001 at 09:26:38PM -0400, David Ford wrote:
> I think it is an excellent way to do it.  Nobody said you have to run 
> the program and nobody forces you to use a particular program with a 
> particular policy.  It puts the OOM policy in userland where -you- 
> decide when and how things happen.
> 

Sure - what I was trying to say was, that I don't think the solution
will work very well.

...
> 
> The kernel allocates memory within itself.  We will still reach OOM 
> conditions.  It can't be avoided.

Good point.

-- 
................................................................
:   jakob@unthought.net   : And I see the elder races,         :
:.........................: putrid forms of man                :
:   Jakob Østergaard      : See him rise and claim the earth,  :
:        OZ9ABN           : his downfall is at hand.           :
:.........................:............{Konkhra}...............:

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-17  0:24             ` Justin A
@ 2001-08-17  2:54               ` Dr. Kelsey Hudson
  0 siblings, 0 replies; 32+ messages in thread
From: Dr. Kelsey Hudson @ 2001-08-17  2:54 UTC (permalink / raw)
  To: Justin A; +Cc: linux-kernel

On Thu, 16 Aug 2001, Justin A wrote:

> case 'f':    /* F -- oom handlder _f_ree memory */
> 	printk("Run OOM Handler\n");
> 	oom_kill();
> 	break;

This causes a nasty ass BUG() to get asserted. I can trigger the OOM
killer normally, but if I trigger it using SysRq, it'll complain rather
loudly and kill the interrupt handler. It appears to kill the proper
process first, however.

If I get some time later on, I'll boot up my devel machine at home and get
this thing all squared away. It looks as though some sort of lock or
something is being held when it shouldnt be, and it's causing Bad
Things(tm) to happen. Either way, I'll figure it out at home (or when I
get back from vacation on Monday).

LMK what you think.

 Kelsey Hudson                                           khudson@ctica.com
 Software Engineer
 Compendium Technologies, Inc                               (619) 725-0771
---------------------------------------------------------------------------


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: "VM watchdog"? [was Re: VM nuisance]
  2001-08-17  1:26       ` David Ford
  2001-08-17  1:41         ` Jakob Østergaard
@ 2001-08-17  9:04         ` Colonel
  2001-08-17 20:38           ` David Ford
  1 sibling, 1 reply; 32+ messages in thread
From: Colonel @ 2001-08-17  9:04 UTC (permalink / raw)
  To: linux-kernel

In clouddancer.list.kernel, David wrote:

>>Didn't everyone pretty much agree that if we could turn off overcommit
>>completely and reliably, that would be the preferred solution ?   Simply sig11
>>the app that's unlucky enough to want more memory than there's in the system
>>(or, horror, have malloc() fail)
>>
>>Now, I don't remember the entire thread, but IIRC it was difficult to kill
>>overcommit completely.
>>
>
>The kernel allocates memory within itself.  We will still reach OOM 
>conditions.  It can't be avoided.

That doesn't sound good.

What bugs me about this statement was that until 2.4, I never had
lockups.  I sometimes had a LOT of swapping and slow response, but I
also knew that running a complex numeric simulation when RAM <
'program needs' does that.  I accepted it and tended to arrange such
runs in my absence.  Now I find that I get some process nuked (or
worse - partially nuked) even after increasing to 4x swap and
eliminating lazy habits that would leave some idle process up for a
few days in case I needed it again (worked fine in 2.0.36).  There are
_alot_ of good things in 2.4, but sometimes....


Does your statement imply that a machine left "alone" must eventually
OOM given enough runtime??  It seems that it must.


-- 
2.4 VM: "I'm sorry Dave ...  I'm afraid I can't do that."


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-14  8:27       ` Helge Hafting
@ 2001-08-17 13:34         ` Szabolcs Szakacsits
  2001-08-17 17:29           ` Rik van Riel
  0 siblings, 1 reply; 32+ messages in thread
From: Szabolcs Szakacsits @ 2001-08-17 13:34 UTC (permalink / raw)
  To: Helge Hafting; +Cc: dean gaudet, linux-kernel


On Tue, 14 Aug 2001, Helge Hafting wrote:
> dean gaudet wrote:
> > i would happily give up 10 to 20% system resources for checkpoint
> > overhead if it meant that i'd be that much closer to a crashproof
> > system.

It's not necessarily needed so much and not even checkpointing using
(optional) non-overcommiting. See e.g. the Solaris numbers for yourself,
http://www.google.com/search?q=cache:http://lists.openresources.com/NetBSD/tech-userlevel/msg00722.html
All big guys support this optionally, sometimes even in a finer graded
level, like per process settable by developer overjudgable (sp?) by
admin (AIX) or using virtual swap space where the degree of the memory
overcommitment controllable (IRIX), etc. Linux had the "quasi"
non-overcommiting [it didn't reserved the demanded memory, just checked
if there is enough free(able) in user space allocation time]
controllable by /proc/sys/vm/overcommit_memory in 2.2 but in 2.4
vm_enough_memory() overestimates freeable memory (in contrast to 2.2
kernels) so basically it's useless - this is one of the reasons why OOM
so trendy topic recently.

Although non-overcommit prevents running out of VM but when VM is full
then system can either livelock or start arbitrary process killing so
non-overcommit becomes useless. How others solved this? They reserve
some VM for root so he can act whatever he wants. Well written apps (I
could mention e.g. apache) don't really care about system is in an OOM
situation - they happily do their jobs just as before (proved in
practice ;)

When root reserved VM is also used up, welcome OOM killer - however with
the above two protection bar the chance for this is pretty around 0 if
admin don't prefer running stuffs as root.

Note, these are optional for those who are willing to sacrifice a couple
of percent system resources.

> > so why not just use the most simple OOM around:  shoot the first app which
> > can't get its page.  app writers won't like it, and users won't like it
> > until the app writers fix their bugs, but then nobody likes the current
> > situation, so what's the difference?
> It used to be like that.  Unfortunately, the first app unable to
> get its page might very well be init, and then the entire system goes
> down in flames.

Well, I ported the 2.4 OOM killer to 2.2.19 and added reserved root VM
	http://mlf.linux.rulez.org/mlf/ezaz/reserved_root_memory.html

It works like it kills the process chosen by OOM killer when the first
app can't get its page in page fault. OOM killing in 2.4 works
differently. Apps loop forever in __alloc_pages() until they can get a
page or out_of_memory() decides its time to kill somebody. So whenever
there is some VM tuning, out_of_memory() should be tuned. Of course it's
usually missed. Futhermore it's also a heuristic, not an "exact"
decision made by e.g. 2.2 kernels. So in short, IMHO it will never work.
I asked explanation for a couple of times for this 2.4 behavior but
nobody bothered. I think partly it's because of the aggressive caching.
One easy solution could be to drop out_of_memory() completely, put back
oom_kill() to page fault from kswapd() and make tunable the number of
looping in __alloc_pages().

> The real solution is to have enough memory for the task at hand.

Define to the user the "enough memory" when he wants to open different
kind of documents, run scientific applications with different size data
sets or whatever. What would you expect from your computer either "Hey,
your resources is not enough for this task" or just crash the
application?

> Failing that, get so much swap space that people will be happy when

The "buy more disk, buy more RAM!" kind of answers were one of the
reasons Linux user base growth so big as it is today escaping from
the old advisers ...

2.4 is killer if expertise is given [not to install Linux but to
carefully setup the box for its job] but it fails otherwise because of
its OOM handling.

	Szaka


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
  2001-08-17 13:34         ` Szabolcs Szakacsits
@ 2001-08-17 17:29           ` Rik van Riel
  0 siblings, 0 replies; 32+ messages in thread
From: Rik van Riel @ 2001-08-17 17:29 UTC (permalink / raw)
  To: Szabolcs Szakacsits; +Cc: Helge Hafting, dean gaudet, linux-kernel

On Fri, 17 Aug 2001, Szabolcs Szakacsits wrote:

> Although non-overcommit prevents running out of VM but when VM
> is full then system can either livelock or start arbitrary

You people all have nice ideas.  I guess we may want
to preserve them for the future and don't let them
get lost in a mailing list archive ;)

If you have the time, please create a page for these
things in the Linux-MM wiki:

	http://linux-mm.org/wiki/

cheers,

Rik
--
IA64: a worthy successor to the i860.

		http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com/


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: "VM watchdog"? [was Re: VM nuisance]
  2001-08-17  9:04         ` Colonel
@ 2001-08-17 20:38           ` David Ford
  0 siblings, 0 replies; 32+ messages in thread
From: David Ford @ 2001-08-17 20:38 UTC (permalink / raw)
  To: klink; +Cc: linux-kernel

>
>
>>The kernel allocates memory within itself.  We will still reach OOM 
>>conditions.  It can't be avoided.
>>
>
>That doesn't sound good.
>
>What bugs me about this statement was that until 2.4, I never had
>lockups.  I sometimes had a LOT of swapping and slow response, but I
>also knew that running a complex numeric simulation when RAM <
>'program needs' does that.  I accepted it and tended to arrange such
>runs in my absence.  Now I find that I get some process nuked (or
>worse - partially nuked) even after increasing to 4x swap and
>eliminating lazy habits that would leave some idle process up for a
>few days in case I needed it again (worked fine in 2.0.36).  There are
>_alot_ of good things in 2.4, but sometimes....
>
>
>Does your statement imply that a machine left "alone" must eventually
>OOM given enough runtime??  It seems that it must.
>

Nope, not at all.  The kernel acquires and releases memory as it goes. 
 My statement is to the fact that we can reach a point where we have 
exhausted all the memory resources by the time we start a particular 
code path but in order to complete that code path we need more memory. 
 I.e. journaled filesystems.  We have reached 0 memory but we need to 
start down a code path to update data on the disk.  That means we may 
have to allocate memory to read and update the journal as we start 
writing regular file data to disk.

This only occurs when we ride right on the very edge of OOM.  This will 
also be fixed when we implement all the desired bean counting.

David



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
       [not found] <no.id>
@ 2001-08-11 16:45 ` Alan Cox
  0 siblings, 0 replies; 32+ messages in thread
From: Alan Cox @ 2001-08-11 16:45 UTC (permalink / raw)
  To: David Ford; +Cc: Alan Cox, H. Peter Anvin, linux-kernel

> - How hard/how much time.. to implement resource baselines (or similar 
> concept)

Ben La Haise has been working on that part of things. Its non trivial. I've
been discussing changes with Rik for VM behaviour shifts when thrashing but
not OOM. We have some ideas but this is research

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: VM nuisance
       [not found] <20010811035112.59EC438D01@perninha.conectiva.com.br>
@ 2001-08-11  3:52 ` Rik van Riel
  0 siblings, 0 replies; 32+ messages in thread
From: Rik van Riel @ 2001-08-11  3:52 UTC (permalink / raw)
  To: safemode; +Cc: David Ford, linux-kernel

On Fri, 10 Aug 2001, safemode wrote:

> maybe these are signs that the OOM killer just wasn't ready for 2.4
> with the VM that it has.  For people hoping to use 2.4 as a server
> platform, they should have the confidence to know what's going to
> happen when the OOM situation occurs.  If the current OOM handler cant
> give that confidence then perhaps it should be removed and slated for
> 2.5.  Simple way to beat the flames anyway.

It was better than the alternative of having random programs
die.

But don't let that stop you from sending patches.

regards,

Rik
--
IA64: a worthy successor to i860.

http://www.surriel.com/		http://distro.conectiva.com/

Send all your spam to aardvark@nl.linux.org (spam digging piggy)


^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2001-08-17 20:39 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <9li6sf$h5$1@ns1.clouddancer.com>
2001-08-11  1:30 ` VM nuisance David Ford
2001-08-11  2:48   ` Rik van Riel
2001-08-11  2:59     ` H. Peter Anvin
2001-08-11  4:17       ` Rik van Riel
2001-08-11  4:40         ` David Ford
2001-08-11  4:46           ` Rik van Riel
2001-08-11  4:41         ` safemode
2001-08-11  5:00         ` H. Peter Anvin
2001-08-11 13:13       ` Alan Cox
2001-08-11 15:39         ` David Ford
2001-08-11  3:50     ` safemode
2001-08-12 13:09     ` Maciej Zenczykowski
2001-08-12 13:45       ` Rik van Riel
2001-08-16 23:29         ` Justin A
2001-08-17  0:06           ` Dr. Kelsey Hudson
2001-08-17  0:24             ` Justin A
2001-08-17  2:54               ` Dr. Kelsey Hudson
2001-08-12 23:05       ` Dan Mann
2001-08-13  0:01     ` Colonel
2001-08-13 14:32     ` dean gaudet
2001-08-13 19:47       ` Brian
2001-08-14  8:27       ` Helge Hafting
2001-08-17 13:34         ` Szabolcs Szakacsits
2001-08-17 17:29           ` Rik van Riel
2001-08-14 14:00   ` "VM watchdog"? [was Re: VM nuisance] Pavel Machek
2001-08-16 22:24     ` Jakob Østergaard
2001-08-17  1:26       ` David Ford
2001-08-17  1:41         ` Jakob Østergaard
2001-08-17  9:04         ` Colonel
2001-08-17 20:38           ` David Ford
     [not found] <20010811035112.59EC438D01@perninha.conectiva.com.br>
2001-08-11  3:52 ` VM nuisance Rik van Riel
     [not found] <no.id>
2001-08-11 16:45 ` Alan Cox

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).