linux-kernel.vger.kernel.org archive mirror
* Break 2.4 VM in five easy steps
@ 2001-06-05 22:19 Derek Glidden
  2001-06-05 23:38 ` Jeffrey W. Baker
                   ` (8 more replies)
  0 siblings, 9 replies; 142+ messages in thread
From: Derek Glidden @ 2001-06-05 22:19 UTC (permalink / raw)
  To: linux-kernel


After reading the messages to this list for the last couple of weeks and
playing around on my machine, I'm convinced that the VM system in 2.4 is
still severely broken.  

This isn't trying to test extreme low-memory pressure, just how the
system handles recovering from going somewhat into swap, which is a real
day-to-day problem for me, because I often run a couple of apps that
most of the time live in RAM, but during heavy computation runs, can go
a couple hundred megs into swap for a few minutes at a time.  Whenever
that happens, my machine always starts acting up afterwards, so I
started investigating and found some really strange stuff going on.

To demonstrate this to a co-worker, I cooked up this really simple,
really stupid, very effective test.  (Note that this all is probably
specific to IA32, which is the platform on which I'm running.)

-- How to Break your 2.4 kernel VM in 5 easy steps

1) compile the following code:

#include <stdlib.h>
#include <unistd.h>

int main(void) {
   /* allocate a buttload of memory and try to touch it all */
   void *ptr = calloc(100000000, sizeof(int));

   /* sleep for a bit to let the system quiesce */
   sleep(20);

   /* let it all go away now */
   free(ptr);
   return 0;
}

2) depending on the amount of RAM/swap available in your machine, you
might need to adjust the calloc() to allocate a different amount.  As
written it asks for about 400MB (100 million 4-byte ints).  (If calloc()
in your libc doesn't actually touch the pages it hands back, see the
variant sketched just after these steps.)

3) Run the program, or more than one copy at once.  You want to put your
machine somewhat into swap, but not totally overwhelmed.  On the system
I'm using to write this, with 512MB of RAM and 512MB of swap, I run two
copies of this program simultaneously and it puts me a couple hundred
megs into swap.

4) Let the program exit, then run "free" or cat /proc/meminfo to make
sure your machine has paged a bunch of stuff out to swap.

5) try to "swapoff" your swap partition and watch the machine become
completely and entirely unresponsive for several minutes.

--
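
A note on step 1: depending on your libc, calloc() may hand back fresh
mmap()ed zero pages that are only faulted in when written, so the
allocation may not actually become resident.  If the test doesn't push
your machine into swap, the following variant, which explicitly dirties
every page, should.  This is only an illustrative sketch; the 400MB size
and the 4096-byte IA32 page size are assumptions you may need to adjust.

#include <stdlib.h>
#include <unistd.h>

#define NBYTES (400UL * 1024 * 1024)  /* roughly the same ~400MB as step 1 */
#define PAGE   4096                   /* assumed IA32 page size */

int main(void)
{
	size_t i;
	char *buf = malloc(NBYTES);

	if (buf == NULL)
		return 1;

	/* write one byte per page so every page really becomes resident,
	 * instead of relying on calloc() to have faulted them in */
	for (i = 0; i < NBYTES; i += PAGE)
		buf[i] = 1;

	/* sleep for a bit to let the system quiesce, as in the original */
	sleep(20);

	free(buf);
	return 0;
}

Scale NBYTES the same way you would scale the calloc() in step 2.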

If I do this on my machine, which is a K7-700 on an ASUS K7M motherboard
with 512MB each of RAM and swap (though I can make any machine running
2.4 behave the same way, with any version I've tried from 2.4.2 up
through most of the -ac kernels), the machine will become _entirely_
unresponsive for several minutes.  The HD comes on for a few seconds at
the very start of the "swapoff", CPU utilization immediately pegs at
100% system time, and then for a few minutes afterward, as far as anyone
can tell, the machine is TOTALLY locked up.  No console response, no
response from anything on the machine.  However, after a few minutes of
TOTAL catatonia, it will mysteriously come back to life, having finally
released all its swap.
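
If you want to put a number on that freeze instead of eyeballing the
wall clock, a tiny wrapper around the swapoff(2) syscall can time it.
This is only a rough sketch: it needs root, and the default partition
path below is just a placeholder for your real swap device.

#include <stdio.h>
#include <time.h>
#include <sys/swap.h>

int main(int argc, char **argv)
{
	/* placeholder swap partition; pass yours as the first argument */
	const char *dev = (argc > 1) ? argv[1] : "/dev/hda2";
	time_t start = time(NULL);

	if (swapoff(dev) != 0) {
		perror("swapoff");
		return 1;
	}
	printf("swapoff(%s) took about %ld seconds\n",
	       dev, (long)(time(NULL) - start));
	return 0;
}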

Now, this is a VERY contrived test, but there are a couple of things
about doing this against 2.4 compared with 2.2 that seem VERY BROKEN to
me.

1) Running this against a machine running a 2.2-series kernel does
nothing out of the ordinary.  You hit a bunch of swap, exit the
"allocate" program, swapoff, and everything is fine after a few seconds
of disk activity as it pages everything back into RAM.  Least surprise. 
Under 2.4, when you "swapoff" it appears as far as anyone can tell that
the machine has locked up completely.  Very surprising.  In fact, the
first time it happened to me, I hit the Big Red Switch thinking the
machine _had_ locked up.  It wasn't until I started playing around with
memory allocation a bit more and read some of the problems on LKML that
I started to realize it wasn't locked up - just spinning.

2) Under 2.2, when the "allocate" programs exit, the amounts of mem and
swap that show up in the "used" column are quite small - about what
you'd expect from the apps that are actually running.  No surprise
there.  Under 2.4, after running the "allocate" program, "free" shows
about 200MB each of mem and swap as "used".  A lot of memory shows up
in the "cached" column, which explains the mem usage (although not
what's being cached, unless it's caching swap activity, which is odd),
but what the heck is in that swap space?  Very surprising.

Now, I'm sure some of the responses will be "Don't run 2.4.  If you want
to run a stable kernel, run 2.2."  That may be reasonable, but there
are a couple of features and a couple of drivers that make 2.4 very
appealing, and somewhat necessary, to me.  Also, I want to help FIX
these problems.  I don't know if my hokey test is an indication of
something real, but hopefully it's simple enough that a lot of people
can run it and see if they experience similar things.

And, AFAIC, a truly stable kernel (like 2.2) should be able to go deep
into swap, and once the applications taking up the memory have exited,
be able to turn off that swap and not have something utterly surprising,
like the machine becoming comatose for several minutes, happen.  If it
does, that's an indication to me that there is something severely wrong.

Now, with that being said, is there anything I can do to help?  Run
experimental patches?  Try things on different machines?  I have access
to a number of different computers (all IA32) with widely varying memory
configurations and am willing to try test patches to try to get this
working correctly.

Or am I completely smoking crack and the fact that my machine hoses up
for several minutes after this very contrived test is only an indication
that the test is very contrived and in fact the kernel VM is perfectly
fine and this is totally expected behaviour and I just should never try
to "swapoff" a swap partition under 2.4 if I want my machine to behave
itself?

Please respond to me directly, as I'm not subscribed to the list.  I
have tried to keep current via archives in the last couple of weeks, but
with the PSI/C&W disconnect going down, it seems like I'm unable to
reach some of the online archives.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

* Re: Break 2.4 VM in five easy steps
@ 2001-06-06 15:31 Derek Glidden
  2001-06-06 15:46 ` John Alvord
  2001-06-06 21:30 ` Alan Cox
  0 siblings, 2 replies; 142+ messages in thread
From: Derek Glidden @ 2001-06-06 15:31 UTC (permalink / raw)
  To: Alexander Viro, linux-kernel


> Funny. I can count many ways in which 4.3BSD, SunOS{3,4} and post-4.4 BSD
> systems I've used were broken, but I've never thought that swap==2*RAM rule
> was one of them.

Yes, but Linux isn't 4.3BSD, SunOS or post-4.4 BSD.  Not to mention that
all the other OSes I've used *don't* break severely if you don't follow
the "swap==2*RAM" rule.  Except Linux 2.4.

> Not that being more kind on swap would be a bad thing, but that rule for
> amount of swap is pretty common. ISTR similar for (very old) SCO, so it's
> not just BSD world. How are modern Missed'em'V variants in that respect, BTW?

Yes, but not needing such a rule has traditionally been one of the big
BENEFITS of Linux and other UNIXes.  As Sean Hunter said, "Virtual
memory is one of the killer features of unix."  Linux has *never* in the
past REQUIRED me to follow that rule, which is a big reason I use it in
so many places.

Take an example mentioned by someone on the list already: a laptop.  I
have two laptops that run Linux.  One has a 4GB disk, one has a 12GB
disk.  Both disks are VERY full of data and both machines get pretty
heavy use.  In fact, I just bumped one laptop (with 256MB of swap
configured) from 128MB to 256MB of RAM.  Does this mean that if I want
to upgrade that machine to the 2.4 kernel I now have to back up all that
data, repartition the drive, and restore everything, just so I can
fastidiously follow the "swap == 2*RAM" rule, or else the 2.4 VM
subsystem will break?  Bollocks, to quote yet another participant in
this silly discussion.

I'm beginning to be amazed at the Linux VM hackers' attitudes regarding
this problem.  I expect this sort of behaviour from academics - ignoring
real, actual problems reported by real, actual people who are really and
actually experiencing them, because "technically" or "theoretically"
they "shouldn't be an issue" or because "the literature [documentation]
says otherwise" - but not from this group.

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

* Re: Break 2.4 VM in five easy steps
@ 2001-06-07 10:46 Bernd Jendrissek
       [not found] ` <20010607153835.T14203@jessica>
  2001-06-08 19:32 ` Pavel Machek
  0 siblings, 2 replies; 142+ messages in thread
From: Bernd Jendrissek @ 2001-06-07 10:46 UTC (permalink / raw)
  To: linux-kernel


First things first: 1) Please Cc: me when responding, 2) apologies for
dropping any References: headers, 3) sorry for bad formatting

"Jeffrey W. Baker" wrote:
> On Tue, 5 Jun 2001, Derek Glidden wrote: 
> > This isn't trying to test extreme low-memory pressure, just how the 
> > system handles recovering from going somewhat into swap, which is
> > a real 
> > day-to-day problem for me, because I often run a couple of apps
> > that 
> > most of the time live in RAM, but during heavy computation runs,
> > can go 
> > a couple hundred megs into swap for a few minutes at a time.
> > Whenever 
> > that happens, my machine always starts acting up afterwards, so I 
> > started investigating and found some really strange stuff going on. 

Has anyone else noticed the difference between
 dd if=/dev/zero of=bigfile bs=16384k count=1
and
 dd if=/dev/zero of=bigfile bs=8k count=2048
deleting 'bigfile' each time before use?  (You with lots of memory may
(or may not!) want to try bs=262144k)

Once, a few months ago, I thought I traced this to the loop at line ~2597
in linux/mm/filemap.c:generic_file_write
  2593          remove_suid(inode);
  2594          inode->i_ctime = inode->i_mtime = CURRENT_TIME;
  2595          mark_inode_dirty_sync(inode);
  2596  
  2597          while (count) {
  2598                  unsigned long index, offset;
  2599                  char *kaddr;
  2600                  int deactivate = 1;
...
  2659  
  2660                  if (status < 0)
  2661                          break;
  2662          }
  2663          *ppos = pos;
  2664  
  2665          if (cached_page)

It appears to me that generic_file_write pseudo-spins in this loop (it
*does* do useful work) for as long as there are pages available.

BTW while the big-bs dd is running, the disk is active.  I assume that
writes are indeed scheduled and start happening even while we're still
dirtying pages?

Does this freezing effect occur on SMP machines too?  Oops, had access
to one until this morning :(  Would an SMP box still have a 'spare'
cpu which isn't dirtying pages like crazy, and can therefore do things
like updating mouse cursors, etc.?

Bernd Jendrissek

P.S. Here's my patch that cures this one symptom; it smells and looks
ugly, I know, but at least my mouse cursor doesn't jump across the whole
screen when I do the dd torture.

I have no idea if this is right or not, whether I'm allowed to call
schedule() inside generic_file_write or not, etc.  And the '256' is
just a random pick - small enough to let the cursor move, but large
enough to do real work between schedule()s.

If this solves your problem, use it; if your name is Linus or Alan,
ignore or do it right please.

diff -u -r1.1 -r1.2
--- linux-hack/mm/filemap.c     2001/06/06 21:16:28     1.1
+++ linux-hack/mm/filemap.c     2001/06/07 08:57:52     1.2
@@ -2599,6 +2599,11 @@
                char *kaddr;
                int deactivate = 1;
 
+               /* bernd-hack: give other processes a chance to run */
+               if (count % 256 == 0) {
+                       schedule();
+               }
+
                /*
                 * Try to find the page in the cache. If it isn't there,
                 * allocate a free page.
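
An alternative I haven't tried (just a sketch, not part of the patch
above) would be to check current->need_resched instead of picking an
arbitrary byte-count boundary - the usual 2.4 idiom for this kind of
voluntary yield - something like:

--- linux-hack/mm/filemap.c
+++ linux-hack/mm/filemap.c
@@ -2599,6 +2599,10 @@
                char *kaddr;
                int deactivate = 1;
 
+               /* yield only when the scheduler actually wants the CPU back */
+               if (current->need_resched)
+                       schedule();
+
                /*
                 * Try to find the page in the cache. If it isn't there,
                 * allocate a free page.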

* Re: Break 2.4 VM in five easy steps
@ 2001-06-07 14:22 Bulent Abali
  2001-06-07 15:38 ` Mike Galbraith
  0 siblings, 1 reply; 142+ messages in thread
From: Bulent Abali @ 2001-06-07 14:22 UTC (permalink / raw)
  To: Mike Galbraith; +Cc: Eric W. Biederman, Derek Glidden, linux-kernel, linux-mm



>> O.k.  I think I'm ready to nominate the dead swap pages for the big
>> 2.4.x VM bug award.  So we are burning cpu cycles in sys_swapoff
>> instead of being IO bound?  Just wanting to understand this the
>> cheap way :)
>
>There's no IO being done whatsoever (that I can see with only a blinky).
>I can fire up ktrace and find out exactly what's going on if that would
>be helpful.  Eating the dead swap pages from the active page list prior
>to swapoff cures all but a short freeze.  Eating the rest (few of those)
>might cure the rest, but I doubt it.
>
>    -Mike

1)  I second Mike's observation.  swapoff, either from the command line
or during shutdown, just hangs there.  No disk I/O is being done, as far
as I could see from the blinkers.  This is not an I/O-boundness issue.
It is more like a deadlock.

I happened to see this one with a debugger attached to the serial port.
The system was alive.  I think I was watching the free page count, and
it was decreasing very slowly, maybe a couple of pages per second.  The
bigger the swap usage, the longer swapoff takes.  For example, if I had
1GB in the swap space, it would take maybe half an hour to shut down...


2)  Now, why I would have 1GB in the swap space is another problem.
Here is what I observe, and it doesn't make much sense to me.
Let's say I have 1GB of memory and plenty of swap, and let's say there
is a process a little less than 1GB in size.  Suppose the system starts
swapping because it is short a few megabytes of memory.
Within *seconds* of swapping, I see the swap disk usage balloon to
nearly 1GB, and nearly the entire memory moves into the page cache.  If
you run xosview you will know what I mean: memory usage suddenly turns
from green to red :-).  And I know for a fact that my disk cannot do
1GB per second :-).  The SHARE column of the big process in "top" goes
up by hundreds of megabytes.
So it appears to me that the MM is marking the whole process's memory
to be swapped out, probably reserving nearly 1GB in the swap space, and
furthermore moving the process's pages, apparently, into the page cache.
You would think that if you are short a few MB of memory, the MM would
put a few MB worth of pages into swap.  But it wants to move entire
processes into swap.

When the 1GB process exits, the swap usage doesn't change (dead swap
pages?).  And shutdown or swapoff will take forever due to #1 above.

Bulent

* Re: Break 2.4 VM in five easy steps
@ 2001-06-10 22:04 Rob Landley
  0 siblings, 0 replies; 142+ messages in thread
From: Rob Landley @ 2001-06-10 22:04 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux

>I realize that assembly is platform-specific. Being 
>that I use the IA32 class machine, that's what I 
>would write for. Others who use other platforms could
>do the deed for their native language.

Meaning we'd still need a good C implementation anyway
for the 75% of platforms nobody's going to get around
to writing an assembly implementation for this year,
so we might as well do that first, eh?

As for IA32 being everywhere, 16 bit 8086 was
everywhere until 1990 or so.  And 64 bitness is right
around the corner (iTanic is a pointless way of
de-optimizing for memory bus bandwidth, which is your
real bottleneck and not whatever happens inside a chip
you've clock multiplied by a factor of 12 or more. 
But x86-64 looks seriously cool if AMD would get off
their rear and actually implement sledgehammer in
silicon within our lifetimes.  And that's probably
transmeta's way of going 64 bit eventually too.  (And
that was obvious even BEFORE the cross licensing
agreement was announced.))

And interestingly, an assembly routine hand-optimized for the 386 just
might get beaten by C code compiled with Athlon optimizations.  It's
not JUST "IA32".  Memory management code probably has to know about the
PAE addressing extensions, different translation lookaside buffer
versions, and interacting with the wonderful wide world of DMA.
Luckily, in the kernel we just don't do floating point
(MMX/3DNow/whatever it was they're so proud of in Pentium 4 whose
acronym I've forgotten at the moment.  Not SLS, that was a Linux
distribution...)

If you're a dyed-in-the-wool assembly hacker, go help the GCC/EGCS
folks make a better compiler.  They could use you.  The kernel isn't
the place for assembly optimization.

>Being that most users are on the IA32 platform, I'm 
>sure they wouldn't reject an assembly solution to 
>this problem.

If it's unreadable to C hackers, so that nobody
understands it, so that it's black magic that
positively invites subtle bugs from other code that
has to interface with it...

Yes, they darn well WOULD reject it.  Simplicity and clarity are
actually slightly MORE important than raw performance, since if you
just wait six months the midrange hardware gets 30% faster.

The ONLY assembly that's left in the kernel is the
stuff that's unavoidable, like boot sectors and the
setup code that bootstraps the first kernel init
function in C, or perhaps the occasional driver that's
so amazingly timing dependent it's effectively
real-time programming at the nanosecond level.  (And
for most of those, they've either faked a C solution
or restricted the assembly to 5 lines in the middle of
a bunch of C code.  Memo: this is the kind of thing
where profanity gets into kernel comments.)  And of
course there are a few assembly macros for half-dozen
line things like spinlocks that either can't be done
any other way or are real bottleneck cases where the
cost of the extra opacity (which is a major cost, that
is definitely taken into consideration) honestly is
worth it.

> As for kernel acceptance, that's an
>issue for the political eggheads. Not my forte. :-)

The problem in this case is that an O(n^2) or worse algorithm is being
used.  Converting it to assembly isn't going to fix something that gets
quadratically (or worse) slower as the data grows; it just means that
instead of blowing up at 2 gigs it blows up at 6 gigs.  That's not a
long-term solution.

If eliminating 5 lines of assembly is a good thing,
rewriting an entire subsystem in assembly isn't going
to happen.  Trust us on this one.

Rob




Thread overview: 142+ messages
2001-06-05 22:19 Break 2.4 VM in five easy steps Derek Glidden
2001-06-05 23:38 ` Jeffrey W. Baker
2001-06-06  1:42   ` Russell Leighton
2001-06-06  7:14     ` Sean Hunter
2001-06-06  2:16   ` Andrew Morton
2001-06-06  3:19     ` Derek Glidden
2001-06-06 14:16       ` Disconnect
     [not found]       ` <3B1DEAC7.43DEFA1C@idb.hist.no>
2001-06-06 14:51         ` Derek Glidden
2001-06-06 21:34           ` Alan Cox
2001-06-09  8:07             ` Rik van Riel
2001-06-07  7:23           ` Helge Hafting
2001-06-07 16:56             ` Eric W. Biederman
2001-06-07 20:24             ` José Luis Domingo López
2001-06-06  4:03     ` Jeffrey W. Baker
2001-06-06  8:19     ` Xavier Bestel
2001-06-06  8:54       ` Sean Hunter
2001-06-06  9:57         ` Dr S.M. Huen
2001-06-06 10:06           ` DBs (ML)
2001-06-06 10:08           ` Vivek Dasmohapatra
2001-06-06 10:19             ` Lauri Tischler
2001-06-06 10:22           ` Sean Hunter
2001-06-06 10:48             ` Alexander Viro
2001-06-06 16:58               ` dean gaudet
2001-06-06 17:10               ` Remi Turk
2001-06-06 22:44             ` Kai Henningsen
2001-06-09  7:17             ` Rik van Riel
2001-06-06 16:47           ` dean gaudet
2001-06-06 17:17           ` Kurt Roeckx
2001-06-06 18:35             ` Dr S.M. Huen
2001-06-06 18:40               ` Mark Salisbury
2001-06-07  0:20           ` Mike A. Harris
2001-06-09  8:16             ` Rik van Riel
2001-06-09  8:57               ` Mike A. Harris
2001-06-07 21:31           ` Shane Nay
2001-06-07 20:00             ` Marcelo Tosatti
2001-06-07 21:55               ` Shane Nay
2001-06-07 20:29                 ` Marcelo Tosatti
2001-06-07 23:29                   ` VM Report was:Re: " Shane Nay
2001-06-08  1:18                   ` Jonathan Morton
2001-06-08 12:50                     ` Mike Galbraith
2001-06-08 14:19                       ` Tobias Ringstrom
2001-06-08 16:51                         ` Mike Galbraith
2001-06-08 19:09                           ` Tobias Ringstrom
2001-06-09  4:36                             ` Mike Galbraith
2001-06-08 15:51                       ` John Stoffel
2001-06-08 17:01                         ` Mike Galbraith
2001-06-09  3:34                           ` Rik van Riel
2001-06-08 17:43                         ` John Stoffel
2001-06-08 17:35                           ` Marcelo Tosatti
2001-06-09  5:07                             ` Mike Galbraith
2001-06-08 18:30                           ` Mike Galbraith
2001-06-09 12:31                             ` Zlatko Calusic
2001-06-08 20:58                           ` John Stoffel
2001-06-08 20:04                             ` Marcelo Tosatti
2001-06-08 23:44                             ` Jonathan Morton
2001-06-09  2:36                               ` Andrew Morton
2001-06-09  6:33                                 ` Mark Hahn
2001-06-09  3:43                               ` Mike Galbraith
2001-06-09  4:05                               ` Jonathan Morton
2001-06-09  5:09                                 ` Mike Galbraith
2001-06-06 10:04         ` Jonathan Morton
2001-06-06 11:16         ` Daniel Phillips
2001-06-06 13:58         ` Gerhard Mack
2001-06-08  4:56           ` C. Martins
2001-06-06 15:28         ` Richard Gooch
2001-06-06 15:42           ` Christian Bornträger
2001-06-06 15:57             ` Requirement: swap = RAM x 2.5 ?? Jeff Garzik
2001-06-06 18:42               ` Eric W. Biederman
2001-06-07  1:29                 ` Jan Harkes
2001-06-06 16:12             ` Richard Gooch
2001-06-06 16:15               ` Jeff Garzik
2001-06-06 16:19               ` Richard Gooch
2001-06-06 16:53                 ` Mike Galbraith
2001-06-06 17:05               ` Greg Hennessy
2001-06-06 17:14           ` Break 2.4 VM in five easy steps Ben Greear
2001-06-06 19:11         ` android
2001-06-07  0:27           ` Mike A. Harris
2001-06-06  9:16       ` Xavier Bestel
2001-06-06  9:25         ` Sean Hunter
2001-06-06 12:07       ` Jonathan Morton
2001-06-06 14:41       ` Derek Glidden
2001-06-06 20:29       ` José Luis Domingo López
2001-06-06 13:32     ` Eric W. Biederman
2001-06-06 14:41     ` Marc Heckmann
2001-06-06 14:51     ` Hugh Dickins
2001-06-06  7:47   ` Jonathan Morton
2001-06-06 13:08   ` Eric W. Biederman
2001-06-06 16:48     ` Jeffrey W. Baker
     [not found] ` <m2lmn61ceb.fsf@sympatico.ca>
2001-06-06 14:37   ` Derek Glidden
2001-06-07  0:34     ` Mike A. Harris
2001-06-07  3:13       ` Miles Lane
2001-06-07 15:49         ` Derek Glidden
2001-06-07 19:06         ` Miles Lane
2001-06-09  5:57         ` Mike A. Harris
2001-06-06 18:59 ` Mike Galbraith
2001-06-06 19:39   ` Derek Glidden
2001-06-06 20:47 ` Linus Torvalds
2001-06-07  7:42   ` Eric W. Biederman
2001-06-07  8:11     ` Linus Torvalds
2001-06-07  8:54       ` Eric W. Biederman
2001-06-06 21:39 ` android
2001-06-06 22:08 ` Jonathan Morton
2001-06-06 22:27 ` android
2001-06-06 22:33   ` Antoine
2001-06-06 22:38 ` Robert Love
2001-06-06 22:40 ` Jonathan Morton
2001-06-06 15:31 Derek Glidden
2001-06-06 15:46 ` John Alvord
2001-06-06 15:58   ` Derek Glidden
2001-06-06 18:27     ` Eric W. Biederman
2001-06-06 18:47       ` Derek Glidden
2001-06-06 18:52         ` Eric W. Biederman
2001-06-06 19:06           ` Mike Galbraith
2001-06-06 19:28             ` Eric W. Biederman
2001-06-07  4:32               ` Mike Galbraith
2001-06-07  6:38                 ` Eric W. Biederman
2001-06-07  7:28                   ` Mike Galbraith
2001-06-07  7:59                     ` Eric W. Biederman
2001-06-07  8:15                       ` Mike Galbraith
2001-06-07 17:10                 ` Marcelo Tosatti
2001-06-06 19:28           ` Derek Glidden
2001-06-09  7:55           ` Rik van Riel
2001-06-06 20:43       ` Daniel Phillips
2001-06-06 21:57       ` LA Walsh
2001-06-07  6:35         ` Eric W. Biederman
2001-06-07 15:25           ` LA Walsh
2001-06-07 16:42             ` Eric W. Biederman
2001-06-07 20:47               ` LA Walsh
2001-06-08 19:38                 ` Pavel Machek
2001-06-09  7:34     ` Rik van Riel
2001-06-06 21:30 ` Alan Cox
2001-06-06 21:57   ` Derek Glidden
2001-06-09  8:09     ` Rik van Riel
2001-06-07 10:46 Bernd Jendrissek
     [not found] ` <20010607153835.T14203@jessica>
2001-06-08  7:37   ` Bernd Jendrissek
2001-06-08 19:32 ` Pavel Machek
2001-06-11 12:06   ` Maciej Zenczykowski
2001-06-11 19:04     ` Rik van Riel
2001-06-12  7:46       ` Bernd Jendrissek
2001-06-07 14:22 Bulent Abali
2001-06-07 15:38 ` Mike Galbraith
2001-06-10 22:04 Rob Landley
